HISTORICAL BACKGROUND

Though the word "Science" came to be used as late as the beginning of the 19th century, the inquiry which we call "Science" today is very old. Hence it is not surprising that the questions "What is the aim of Science?" and "What is the method of Science?" have a long history. Aristotle worked out detailed answers to these questions. His philosophy of science, which was constituted by his answers to these questions, exercised, like his scientific theories, tremendous influence till the end of the 16th century. In fact, till the end of the 16th century Aristotle's philosophy of science was the philosophy of science. In a nutshell, Aristotle's theory of science is the following:

In science we start with particular observations regarding what is the case. Using the method of induction, we arrive at definitions which are statements about the essential nature of things. We then, using the method of deduction, on the basis of definitions, arrive at demonstrations which show why things must be what they are. Thus, the aim of science is twofold: Definition and Demonstration, and the method of science is twofold: Induction and Deduction. All science must proceed from "what is" as given to us in particular observations to "what must be" as shown by the demonstrations. The path of science is an arch whose two end points are observation and demonstration.

However, the questions "What is the aim of Science?" and "What is the method of Science?" were raised afresh in the 17th century, which saw the emergence of both modern philosophy and modern science. By seeking to provide new answers to these questions, the 17th century thinkers tried to work out a new philosophy of science which would replace the traditional philosophy of science that was provided by Aristotle and developed by his followers. Thus the decisive break with the Aristotelian theory of science resulted in the birth of modern philosophy of science in the 17th century.

In the whole span of three centuries, from the beginning of the 17th century till the end of the 19th century, two views stand out prominently as answers to the question regarding the aim and method of science. The first view is called Inductivism. According to Inductivism, the method of science is the method of induction. The second view is called Hypothesisism, according to which the method of science is the method of hypothesis. Francis Bacon is the father of Inductivism and Rene Descartes is the father of Hypothesisism. The two views provide two distinct models of scientific practice, which we may call the Baconian and Cartesian models of scientific practice.

According to Inductivism, the hallmarks of scientific knowledge (i.e. the new scientific knowledge as distinct from the old one) are certainty and breadth. This means that from now on science must aim at that kind of knowledge which on the one hand is definite and on the other is more encompassing. The search for definite knowledge led Inductivists to legislate that science must confine itself to observations, since we can be certain only about our observations. In other words, science, according to the Inductivists, must involve no reference to anything unobservable. The means of realizing breadth the Inductivists found in using the principle of Induction. This principle is central to the Inductivist view of scientific method. It allows us to go from particular observations to a generalization. Thus, according to the Inductivists, scientific knowledge achieves breadth by arriving at generalizations on the basis of particular observations using the principle of Induction. Breadth is achieved because each generalization thus arrived at cryptically contains an indefinite number of as yet unmade observations apart from the observations already made. Suppose we make observations "A1 is b", "A2 is b" and "A3 is b" and then arrive at the generalization "All As are b"; the latter cryptically contains an indefinite number of observations because "All As are b" is about both observations made as well as those yet to be made.
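
The inductive step just described can be put schematically. The following rendering is our own minimal sketch in LaTeX notation, not the text's; the particular observations stand above the line and the generalization licensed by the Principle of Induction below it:

\[
\frac{A_1 \text{ is } b, \quad A_2 \text{ is } b, \quad \ldots, \quad A_n \text{ is } b}{\text{All } A\text{s are } b} \qquad \text{(Principle of Induction)}
\]

Note that the step is ampliative: the conclusion says more than the premises, which is exactly why it yields breadth.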

The aim of science is to arrive at laws i.e. established inductive generalizations. By accumulating such established generalizations, which we call laws, we will have an enormous amount of observations at our disposal, and the world is nothing but the totality of such observations cryptically contained in the complete list of laws. Science, thus, begins with observations but also ends with observations, since the laws (i.e. the established inductive generalizations) contain, i.e. are about, only observations and do not make any reference to any unobservables.

It must be noted that both Aristotle and the Inductivists used the word "Induction", but there is a difference between their uses. For Aristotle, induction is the method of arriving at what he calls 'Definitions' which, according to him, are descriptions of the essential nature of things. But in the hands of the Inductivists, induction has nothing to do with definitions or the essential nature of things. According to them, science has nothing to do with the so-called Definitions and there is no such thing as the essential nature of things. Induction, according to them, is only a way of arriving at generalizations which, when established by verification, become laws. Science is concerned with laws and not definitions, with regularities in nature and not the essential nature of things.

According to the Inductivists, as we have seen, the hallmarks of scientific knowledge are certainty and breadth. As against this, Hypothesisism considers novelty and depth to be the hallmarks of science. That is to say, science must aim at procuring a type of knowledge which is novel in the sense that it is not given to us and cannot be given to us in observations; such knowledge must have depth in the sense that it must describe a deeper reality i.e. a reality that lies behind what we observe. In other words, so long as we merely make observations and arrive at generalizations, we are not doing science. Real science, according to hypothesists (or hypothesisists), begins when we put forth hypotheses and establish them. The word "Hypothesis" in the 17th century meant a statement which describes unobservable entities which lie behind observations. Science must aim at explaining what we observe in terms of unobservables. Hypotheses have no place in science, according to the Inductivists, because according to them hypotheses are supposedly descriptions of unobservables and the Inductivists maintain that science has no place for unobservables. But according to hypothesists, hypotheses constitute the soul of science as they provide explanations of observations or facts in terms of unobservable entities. Hence, according to the hypothesists, the main task of science is to generate hypotheses which are descriptions of unobservable entities. In short, according to the Inductivists to do science is to observe and generalize; according to the hypothesists to do science is to generate hypotheses and explain what you observe in terms of unobservable entities.

Let us call unobservable entities (like electrons, protons or, in the social sciences, classes) theoretical entities. According to Inductivists, theoretical entities are unreal entities and the theoretical terms which designate them stand for fictitious entities which we conjure up for the purposes of prediction. But, according to Hypothesists, theoretical entities are as real as observable/measurable entities and theoretical terms designate them. Thus, Inductivists are Anti-realists and Hypothesists are Realists in connection with the status of theoretical/unobservable entities. Some scientists and philosophers of science adopt the Anti-realist position whereas many others adopt the Realist position. In his celebrated essay "The Methodology of Positive Economics", Milton Friedman supports the Anti-realist position.

Inductivism and Hypothesisism were thus rival methodologies advocating antagonistic views regarding the method of science. The two methodologies competed with each other for acceptance. Both had illustrious followers among scientists and philosophers. Hypothesisism had an upper hand in the beginning. But Inductivism emerged as the dominant view, especially because of the support it received from Newton, whose slogan "Hypotheses non fingo" (we do not need hypotheses) became a watchword.

In fact, the popular formulation of the Inductivist position came from Newton himself. Epitomizing his Inductivist position in the General Scholium of his Principia, Newton says, "What is not deduced from phenomena [i.e. observations] is to be called a hypothesis, and hypotheses, whether metaphysical or physical, whether of occult qualities or mechanical, have no place in experimental philosophy. In this philosophy, particular propositions are inferred from phenomena and afterwards rendered general by induction".

Though Newton's own scientific practice squared more with Hypothesisism than with Inductivism, his support for Inductivism enhanced its prestige and increased its credibility as a theory of scientific method vis-a-vis its rival.

However, Inductivism faced a formidable challenge at the hands of David Hume, an 18th century philosopher who was himself an Inductivist. As we know, the principle of Induction is used by us, knowingly or unknowingly, when we go from particular observations to a generalization.

According to him, the Principle of Induction needs justification because nothing in experience tells us that the principle is valid. Nor is it self-evidently true. However, Hume asserts, it cannot be justified on rational grounds. This is because, as he convincingly showed, any attempt to justify it on rational grounds would lead to logical fallacies like circularity and infinite regress. Hence he concluded that our belief in the Principle of Induction is not based on rational grounds, but on an animal faith. The belief in it is irrational though pragmatically necessary for carrying on with science.

"Hume's problem" or "the problem of Induction" has devastating consequences for Inductivism. The Inductivist is forced to admit that in his scheme of science, science, which is taken to be the embodiment of rationality itself, is based upon an irrational faith viz. faith in the Principle of Induction.

Every Inductivist after Hume tried to ward off the ghost of Hume by solving the problem of Induction i.e. by showing that our belief in the Principle of Induction was a rational one. However, no one succeeded. The problem has remained, in the words of C. D. Broad, "a skeleton in the cupboard of philosophy".


The intention behind providing all these historical details is to clear the ground for a discussion of the twentieth century developments in Philosophy of Science. The first half of the 20th century Philosophy of Science was dominated by a view called Positivist Philosophy of Science. The second half of the twentieth century saw various reactions against the Positivist Philosophy of Science, which together may be brought under the rubric "Post-Positivist Philosophy of Science".

POSITIVIST PHILOSOPHY OF SCIENCE

Positivism (which is also called "Logical Positivism") is a movement in Philosophy in the first half of the 20th century. Positivists debunked the whole of traditional Philosophy by attacking Metaphysics, which was the most important branch of Philosophy. Metaphysics was so central to Philosophy that Philosophy was virtually identified with Metaphysics. Metaphysics sought to answer most general and basic questions like "What is Ultimate Reality?", "Does God exist? And if so, what is God's relation with the world?", "Is there a soul?", "What is the relation between the mind and the body?", "Is man free or are human actions determined?" etc. Positivists maintained that Metaphysics was a spurious discipline because metaphysical statements (such as "Matter is Ultimate Reality" or "Ultimate Reality is Spiritual" etc.) are meaningless since they are not verifiable in experience. "A statement," they claimed, "is meaningful if and only if it is verifiable." Apart from being anti-metaphysical, they were empiricists i.e. according to them, experience is the source of knowledge. They called themselves "Neo-Empiricists" to distinguish themselves from the Traditional Empiricists of the 17th and 18th centuries like Locke, Berkeley and Hume.

Positivists worked out a well-knit philosophy of science. Here are some of the central
tenets of the Positivist Philosophy of Science:

1. Science is qualitatively distinct from, superior to and ideal for all other areas of human endeavour (Scientism).

2. The distinction, superiority and idealhood that Science enjoys is traceable to its possession of a method (Methodologism).

3. There is only one method common to all sciences, irrespective of their subject matter (Methodological Monism).

4. That method which is common to all sciences, natural or human, is the method of Induction (Inductivism).

5. The hallmark of science consists in the fact that its statements are systematically verifiable.


6. Scientific observations are or can be shown to be "pure" in the sense that they are theory-free.

7. Theories are winnowed from facts or observations.

8. The relation between observation and theory is unilateral in the sense that theories are dependent on observations whereas observations are theory-independent.

9. To a given set of observation statements, there corresponds uniquely only one theory (just as from a given set of premises in an argument, only one conclusion follows).

10. Our factual judgments are value-neutral and our value judgments have no factual content (Fact-Value Dichotomy thesis); hence science, being the foremost instance of factual inquiry, does not have value commitments.

11. All scientific explanation must have the following pattern:

L1, L2, ..., Ln
I1, I2, ..., In
Therefore, E.

where L1, ..., Ln is a set of laws, I1, ..., In is a set of statements describing initial conditions and E is the statement describing the phenomenon to be explained. That is to say, to explain a phenomenon scientifically is to deduce its description from a set of laws (which are called "Covering Laws") via a set of statements describing initial conditions. In sum, all explanation worthy to be called 'scientific' must contain laws and involve deduction. (Hence this is called Deductive-Nomologism, where 'nomological' means 'concerning laws'.) A worked instance of this pattern is given after this list.

12. The aim of science is either the economical description of phenomena or the precise prediction of facts, and not providing an account of observations in terms of unobservables. Hence, scientific theories are not putative descriptions of the unobservable world. The aim of science has nothing to do with the alleged reality of such a world (Anti-Realism).

13. Unlike other areas of activity, science is progressive in the sense that scientific change is always change for the better, whereas other areas exhibit just change: the progress of science consists in the accumulation of observations, on the one hand, and the cumulative growth of theories, on the other hand. The latter means that any new theory includes the old theory (plus something). Thus the growth of science essentially exhibits continuity.

14. Science is objective in the sense that its theories are based on 'pure' observations or facts which are theory-free i.e. interpretation-independent. Interpretations may be subjective but observations/facts are objective because they are free from interpretation/theory.

15. Science is rational because the principle of Induction, which is central to the method of science, is rationally defensible, in spite of Hume's skepticism regarding its rational defensibility.

Positivists tried to justify the Principle of Induction by invoking the concept of pure observation. According to them, theories are arrived at on the basis of the Principle of Induction. If we can show that theories are very closely related to pure observations, the Principle of Induction stands rationally justified. They tried to work out a whole project to demonstrate the rational justification of the Principle of Induction on these lines.
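
As promised under thesis 11, here is a minimal worked instance of the Deductive-Nomological pattern. The example (boiling water) is ours and merely illustrative, not the text's:

\[
\begin{array}{l}
L_1: \text{Water at normal atmospheric pressure boils when heated to } 100^{\circ}\mathrm{C}. \\
I_1: \text{This sample of water was at normal atmospheric pressure and was heated to } 100^{\circ}\mathrm{C}. \\
\hline
E: \text{This sample of water boiled.}
\end{array}
\]

Here the description of the phenomenon E is deduced from the covering law L1 together with the statement of initial conditions I1, exactly as the schema requires.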

ATTACK ON THE POSITIVIST PHILOSOPHY OF SCIENCE
It must be noted that the concept of pure observation is necessary for the positivist philosophy of science for showing that science is objective and that science is rational. Hence the centrality of this concept to the positivist philosophy of science. Therefore, the collapse of this concept led to the collapse of the positivist philosophy of science, though the other theses of the positivist philosophy of science listed above were also demolished by its opponents. Let us briefly look at the arguments which demolished the positivist thesis of pure observations i.e. thesis 6 in the list above.

Firstly, observations presuppose some principle of selection. We need relevant observations. In science it is the problem that decides what is a relevant observation and thus provides the principle of relevance. Hence, there cannot be observations without a prior problem.
As Popper says, "Before we can collect data, our interest in data of a certain kind must be aroused; the problem always comes first".1 It may be objected that we become aware of problems because of observations and hence observations come first and therefore the positivists are right. But this objection does not hold.

Two persons might make the same observations but one may come out with a problem and the other may not. Therefore, mere observations would not generate problems in science. Usually problems are generated when there is a clash between what we observe and what we expect. Of the two persons making the same observations, one comes out with a problem because he sees a conflict between what he observes and what he expects, whereas the other observer may have no expectations which conflict with what he observes. The former believes in a theory which produces certain expectations which conflict with his observations and hence he comes out with a problem. In other words, a prior belief in a theory is necessary for a problem to be generated and a prior awareness of the problem is necessary for making relevant observations. Thus theory precedes observations.

Secondly, in science observations are taken into account only if they are described in a language that is currently used in a particular science. An observation, however genuine, is no observation unless it is expressed in a recognized idiom. It is the theory which provides the idiom or language to be used to describe facts or observations. It is relevant to quote the words of Pierre Duhem, a distinguished physicist and philosopher:

"Enter a laboratory; approach the table crowded with an assortment of apparatus: an electric cell, silk-covered copper wire, small cups of mercury, spools, a mirror mounted on an iron bar; the experimenter is inserting into small openings the metal ends of ebony-headed pins: the iron bar oscillates and the mirror attached to it throws a luminous band upon a celluloid scale: the forward and backward motion of this spot enables the physicist to observe the minute oscillations of the iron bar. But ask him what he is doing. Will he answer "I am studying the oscillations of an iron bar which carries a mirror"? No, he will say that he is measuring the electrical resistance of the spools. If you are astonished, if you ask him what his words mean, what relation they have with the phenomenon he has been observing and which you have noted at the same time as he, he will answer that your question requires a long explanation and that you should take a course in electricity."2

Thirdly, most of the observations in science are made with the help of instruments. These instruments are constructed or designed in accordance with the specifications provided by some theories. These theories, one may say, form the software of these instruments. Belief in the reliability of these instruments implies the acceptance of the theories which have gone into the making of these instruments. Thus, observations presuppose prior acceptance of theories.

Fourthly, observations in science need to be legitimized i.e. ratified by theory. An example makes the point clear. We all know that Galileo used some telescopic observations to support the helio-centric theory against the geo-centric theory of Ptolemy and his followers. His opponents did not consider the telescopic observations adequate. Why did they not? No doubt, they believed in the reliability of the telescope; they had no problem in using the telescope for terrestrial purposes i.e. making observations of earthly objects. They opposed the extension of telescopic observations to the celestial sphere i.e. regarding heavenly bodies. Their argument was that the normal factors, like background and neighbourhood, which help our normal perceptions are absent in the sky. Further, it is impossible to directly verify whether telescopic observations of heavenly bodies are accurate. They rightly demanded from Galileo a theory of light which would justify the extension of the telescope from the terrestrial to the celestial sphere. Galileo had no such theory. But he rightly believed that such a theory could be provided in the future so that telescopic observations would get justification. Thus, while the opponents of Galileo insisted that the telescopic observations be justified by an optical theory at the same time as their acceptance, Galileo maintained that the justification could be provided subsequent to their acceptance. It may be noted that both sides accepted that the telescopic observations needed justification in terms of a theory of light.

All this does not mean that observations are theory-dependent whereas theories are observation-independent. Observations and theory are interdependent, though it is not easy to clarify what the nature of this interdependence is. However, positivists were wrong in claiming that observations are theory-independent. To say that observations are theory-dependent is to say that observation is not a passive reception but an active participation of our cognitive faculties equipped with prior knowledge which we call theory. After all, observations are not 'given' but 'made'.

NOTES

1. Popper, Karl. The Poverty of Historicism. London: Routledge & Kegan Paul, 1957, p. 121.
2. Quoted in Hanson, N. R. Observation and Explanation: A Guide to Philosophy of Science. London: George Allen & Unwin, 1972, p. 4.

KARL POPPER'S PHILOSOPHY OF SCIENCE
Karl Popper was the first to react against the positivist philosophy of science. In fact he started attacking it quite early. But his attack on, and his alternative to, the positivist philosophy of science came to be widely known at the beginning of the second half of the 20th century. Popper's theory of science, particularly his theory of scientific method, has won a lot of admirers among scientists and philosophers. As we know, positivists tried to work out a sophisticated version of Inductivism. Popper worked out a sophisticated version of Hypothesisism. In what follows, we shall briefly consider his views on the nature of science.

According to Popper, the central task of philosophy is not to solve Hume's problem or the problem of Induction as thought by the Positivists. This is because (1) the problem of Induction cannot be solved and (2) it need not be solved because the method of science is not the method of Induction. The central task of the philosophy of science, Popper maintains, is to solve what he calls the problem of demarcation or Kant's problem i.e. the problem of identifying the line of demarcation between science and non-science. Popper maintains that what distinguishes science from the rest of our knowledge is the systematic falsifiability of scientific theories. Thus falsifiability is the line of demarcation between science and non-science. Falsifiability is the criterion of scientificity. A statement is scientific if and only if it is falsifiable.

Scientific theories are falsifiable in the sense that they transparently state under what conditions they would be rejected as false. Whenever scientific theories are advanced, it is also apparent under what conditions they turn out to be false, so that we try to bring about those conditions in order to falsify our claims. In other words, a model scientific theory or statement should readily yield test implications and thus lend itself to falsification. It should not seek to survive by not yielding test implications i.e. by not stating under what conditions it becomes false. It is in this connection that Popper attacks Marxism as a pseudo-scientific theory. When Marx propounded his theory of the dynamics of capitalist society, his theory was scientific because it was falsifiable, since it yielded test implications such as the disappearance of the middle classes, revolution in industrially advanced societies, reduction in the value of wages etc. However, the test implications were not borne out i.e. the predictions failed. Hence the theory which was scientific proved to be a false theory. But the followers of Marx tried to explain away the failure of Marx's predictions by taking recourse to ad hoc explanations and thus insisted that there was nothing wrong with the theory. In the process they went on building safety valves for the theory, with the result that the theory became unfalsifiable. A religious theory about the world is, of course, also unfalsifiable. But the propounders of religious theories about the world never claim scientificity for their views, whereas Marxists do so very vehemently. Hence, Marxist theory is not only unfalsifiable and therefore non-scientific, but also pseudo-scientific. It is this pretension to be scientific while being unfalsifiable that makes the theory pseudo-scientific.

In accordance with what he considers to be the hallmark of scientific theories, Popper puts forward what he considers to be an adequate model of scientific method. He characterizes his model of scientific method as the Hypothetico-Deductive model. According to him, the method of science is not the method of Induction but the method of Hypothetico-Deduction. What are the fundamental differences between these methodological models? Firstly, the inductivist model maintains that our observations are theory-independent and therefore are indubitable. That is to say, since observations are theory-independent, they have probability value 1. It also says that our theories are only winnowed from observations and therefore our scientific theories have the initial probability value 1 in principle. Of course, inductivists admitted that in actual practice the theories may contain something more than what observation statements say, with the result that our actual theories may not have been winnowed from observations.

Hence, the need for verification arises. Popper rejects the inductivist view that our observations are theory-free and hence rejects the idea that our observation statements have probability equal to 1. More importantly, he maintains that theories are not winnowed from observations or facts, but are free creations of the human mind. Our scientific ideas, in other words, are not extracted from our observations; they are pure inventions. Since our theories are our own constructions, not the functions of anything like pure observations, which according to Popper are anyway myths, the initial probability of our scientific theories is zero.

From this it follows that whereas according to the inductivists what scientific tests do is merely find out whether our scientific theories are true, according to Popper scientific tests cannot establish the truth of scientific theories even when the tests give positive results. If a test gives a positive result, the inductivists claim that the scientific theory is established as true, whereas according to Popper all that we can claim is that our theory has not yet been falsified. In Popper's scheme no amount of positive results of scientific testing can prove our theories. Whereas the inductivists speak of confirmation of our theories in the face of positive results of the tests, Popper only speaks of corroboration. In other words, in the inductivist scheme we can speak of scientific theories as established truths, whereas in the Popperian scheme a scientific theory, however well supported by evidence, remains permanently tentative. We can bring out the fundamental difference between verificationism (inductivism) and falsificationism (Hypothetico-Deductivism) by drawing on the analogy between two systems of criminal law. According to one system, the judge has to start with the assumption that the accused is innocent and consequently, unless one finds evidence against him, he should be declared innocent. According to the other, the judge has to start with the assumption that the accused is a culprit and consequently, unless evidence goes in his favour, he should be declared to be a culprit. Obviously the latter system of criminal law is harsher than the former. The inductivist scheme is analogous to the former kind of criminal law, whereas the Hypothetico-Deductive scheme is akin to the latter one.

In the inductivist scheme, observation, tentative generalization, verification and confirmation constitute the steps of scientific procedure. In the Popperian scheme we begin with a problem, suggest a hypothesis as a tentative solution, try to falsify our solution by deducing the test implications of our solution and trying to show that the implications are not borne out, and consider our solution to be corroborated if repeated attempts to falsify it fail.

Thus problem, tentative solution, falsification and corroboration constitute the steps of scientific procedure. Popper's theory of scientific method is called Hypothetico-Deductivism because, according to him, the essence of scientific practice consists in deducing the test implications of our hypothesis and attempting to falsify the latter by showing that the former do not obtain, whereas according to Inductivism the essence of scientific practice consists in searching for instances supporting the generalization arrived at on the basis of some observations and with the principle of induction.
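
Popper himself later condensed this cycle into a well-known tetradic schema (in his Objective Knowledge); we reproduce it here as a compact summary of the steps just listed:

\[
P_1 \rightarrow TT \rightarrow EE \rightarrow P_2
\]

where P1 is the initial problem, TT the tentative theory proposed as a solution, EE error elimination through attempted falsification, and P2 the new problem situation that emerges from the process.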

Popper claims that the Hypothetico-Deductive model of scientific method is superior to the inductivist model for the following reasons. Firstly, it does justice to the critical spirit of science by maintaining that the aim of scientific testing is to falsify our theories and that our scientific theories, however corroborated, permanently remain tentative. In other words, the Hypothetico-Deductivist view presents scientific theories as permanently vulnerable, with the sword of possible falsification always hanging over their heads. The inductivist view of scientific method makes science a safe and defensive activity by portraying scientific testing as a search for confirming instances and by characterizing scientific theories as established truths. According to Popper, the special status accorded to science is due to the fact that science embodies an attitude which is essentially open-minded and anti-dogmatic. Hypothetico-Deductivism is an adequate model of scientific practice because it gives a central place to such an attitude.

Secondly, Popper thinks that if science had followed the inductivist path, it would not have made the progress it has. Suppose a scientist has arrived at a generalization. If he follows the inductivist message, he will go in search of instances which establish it as a truth. If he finds an instance which conflicts with his generalization, what he does is to qualify his generalization, saying that the generalization is true except in the cases where it is unsupported. Such qualifications impose heavy restrictions on the scope of the generalization. This results in scientific theories becoming extremely narrow in their range of applicability. But if a scientist follows the Hypothetico-Deductivist view, he will throw away his theory once he comes across a negative instance instead of pruning it and fitting it with the known positive facts. Instead of being satisfied with a theory tailored to suit the supporting observations, he will look for an alternative which will encompass not only the observations which supported the old theory but also the observations which went against the old theory and, more importantly, which will yield fresh test implications. The theoretical progress science has made can be explained only by the fact that science seeks to come out with bolder and bolder explanations rather than taking recourse to the defensive method of reducing the scope of theories to make them consistent with facts. Hence, Popper claims that the Hypothetico-Deductive model gives an adequate account of scientific progress. According to him, if one accepts the inductivist account of science one fails to give any explanation of scientific progress.

Thirdly, the Hypothetico-Deductive view, according to Popper, avoids the predicament encountered by the inductivist theory in the face of Hume's challenge. As we have seen, Hume conclusively showed that the principle of induction could not be justified on logical grounds. If Hume is right, then science is based upon an irrational faith. According to the Hypothetico-Deductivist view, science does not use the principle of induction at all. Hence, even if Hume is right it does not matter, since science follows the Hypothetico-Deductivist lines of procedure. Also, Popper seeks to establish that Inductivism and Hypothetico-Deductivism are so radically different that the latter in no way faces any threat akin to the one faced by the former. In this connection, he draws our attention to the logical asymmetry between verification, the central component of the inductivist scheme, and falsification, the central component of the Hypothetico-Deductivist scheme. They are logically asymmetrical in the sense that one negative instance is sufficient for conclusively falsifying a theory, whereas no amount of positive instances is sufficient to conclusively verify a theory. It may be recalled that Hume was able to come out with the problem of induction precisely because a generalization (all theories, according to Inductivism, are generalizations) cannot be conclusively verified.
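
The asymmetry can be displayed schematically. The notation below is ours, with T standing for a theory and O for one of its test implications:

\[
\begin{aligned}
&\text{Falsification:} \quad T \rightarrow O,\; \neg O \;\therefore\; \neg T \qquad \text{(modus tollens: valid)} \\
&\text{Verification:} \quad T \rightarrow O,\; O \;\therefore\; T \qquad \text{(affirming the consequent: invalid)}
\end{aligned}
\]

However many test implications are borne out, the second inference remains invalid; this is why a single counter-instance can refute a generalization while no number of positive instances can prove it.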

How does Popper characterize scientific progress? According to him, one finds in the history of science invariable transitions from theories to better theories. What does the word 'better' stand for? It may be recalled that, according to Popper, no scientific theory, however corroborated, can be said to be 'true'. Hence, Popper drops the very concept of truth and replaces it by the concept of Verisimilitude (truth-likeness or truth-nearness) in his characterization of the goal of science. In other words, though science cannot attain truth, i.e. though our theories can never be said to be true, science can set for itself the goal of achieving higher and higher degrees of Verisimilitude i.e. successive scientific theories can progressively approximate to truth. So, in science we go from theory to better theory and the criterion for betterness is Verisimilitude. But what is the criterion for Verisimilitude? The totality of the test implications of a hypothesis constitutes what he calls the 'empirical content' of the hypothesis. The totality of the test implications which are borne out constitutes the truth content of the hypothesis and the totality of the test implications which are not borne out is called the falsity content of the hypothesis. The criterion of the Verisimilitude of a theory is nothing but the truth content minus the falsity content of the theory. In the actual history of science we always find, according to Popper, theories being replaced by better theories, that is, theories with a higher degree of Verisimilitude. In other words, of two successive theories, at any time in the history of science, we find the successor theory possessing greater Verisimilitude and being therefore better than its predecessor. In fact, according to him, a theory is rejected as false only if we have an alternative which is better than the one at hand in the sense that it has more test implications and a greater number of its test implications are already borne out. The growth of science is convergent in the sense that the successful part of the old theory is retained in the successor theory, with the result that the old theory becomes a limiting case of the new one. The growth of science thus shows continuity. In other words, it is the convergence of the old theory into the new one that provides continuity in the growth of science. It must also be noted in this connection that, unlike the Inductivists or Positivists, Popper is a Realist in the sense that, according to him, scientific theories are about an unobservable world. This means that the real world of unobservables, though it can never be captured by our theories entirely, is becoming more and more available to us. Popper contends that the greater and greater Verisimilitude attained by our theories evidences that though the gap between Truth and our theories can never be completely filled, it can be progressively reduced, with the result that the real world of unobservables will be more and more like what our theories say, though not completely so.
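
The criterion just stated can be written compactly. The notation is ours (Vs for Verisimilitude, Ct_T and Ct_F for the truth content and falsity content defined above):

\[
Vs(T) = Ct_T(T) - Ct_F(T)
\]

On this criterion, a successor theory T' is better than its predecessor T just in case Vs(T') > Vs(T).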
I
How does Popper establish the objectivity of scientific knowledge? Inductivists sought to establish the objectivity of science by showing that scientific theories are based upon pure observations. The so-called pure observations were supposed to be absolutely theory-free. They are only 'given' and hence free from subjective interpretations. Popper, as we have seen, rightly rejects the idea of pure observations. Consequently, he cannot accept the inductivist account of the objectivity of science. What engenders scientific objectivity, according to Popper, is not the possibility of pure observation, but the possibility of inter-subjective testing. In short, science is objective because it is public, and it is public because its claims are intersubjectively testable.

To the question, "Which comes first, observation or theory?" the inductivist answers 'observation'; Popper answers 'earlier observation or earlier theory'. To him the question is as illegitimate as the question, "Which comes first, egg or hen?" which can only be answered by saying 'earlier egg or earlier hen'.

It will be convenient if we list the main theses of Popper's philosophy of science arranged in a manner isomorphic with our list of the theses of the positivist philosophy of science:

1. Science is qualitatively distinct from, superior to and ideal for all other areas of human endeavour (Scientism).

2. The distinction, superiority and idealhood that science enjoys is traceable to its possession of a method (Methodologism).

3. There is only one method common to all sciences, irrespective of their subject matter (Methodological Monism).

4. That method which is common to all sciences, natural and human, is the method of Hypothetico-Deduction (Hypothetico-Deductivism).

5. The hallmark of science (i.e. the distinguishing mark of science) consists in the fact that its statements are systematically falsifiable (Falsifiability).

6. Scientific observations are not and cannot be shown to be pure; that is, they are theory-dependent.

7. Theories are not winnowed from observations or facts; they are pure inventions of the human mind i.e. only conjectures and not generalizations based on 'pure observations'.

8. The relation between observation and theory is one of interdependence.

9. To a given set of observation-statements there might correspond more than one theory.

10. Our factual judgements may have value commitments and our value judgements may have cognitive content (hence the fact-value dichotomy is unacceptable); science is not value-neutral but the value commitments can be critically discussed and therefore they are not subjective.

11. All scientific explanations must have the deductive-nomological pattern and thus the thesis of Deductive-Nomologism is acceptable.

12. The aim of science is to provide an account of the observable world in terms of unobservable entities and to provide accounts of those unobservable entities in terms of further unobservable entities. Unobservable entities are, therefore, real and our theories are putative descriptions of such real entities (Realism).

13. Unlike other areas of human activity, there is progress in science which consists in going from one theory to a better theory. Here 'better' means 'more true'. 'More true' means 'greater correspondence between theory and reality' and 'reality' means 'the world of unobservables'. In short, science is progressive in the sense that our successive theories in any domain of science exhibit greater and greater verisimilitude or truth-nearness i.e. the match between our theories and reality. Unlike the positivists, Popper rejects the idea that the progress of science is characterized by the cumulative growth of theories. According to him, a new theory is entirely new and not an old theory plus an epsilon, as Positivists thought. Thus, in Popper's scheme, the growth of science is essentially discontinuous. Of course, Popper makes some room for continuity also when he says that the old theory (at least the true part of it) is a limiting case of the new theory.

14. Science is not objective in the sense that scientific theories are based on pure observations, as positivists thought, because there are no pure observations. Science is objective in the sense that its theories are inter-subjectively testable.

15. Lastly, science is not rational in the sense that the principle of Induction can be rationally justified, as Positivists thought. The principle of Induction cannot be rationally justified; nor is it used by science. Science is rational in the sense that it embodies critical thinking. Apart from insisting that our theories be falsifiable, science has institutional mechanisms for practising and promoting critical thinking. What is rationality other than critical thinking?

It may be noted where Positivists and Popper agree and where they differ. The theses (1), (2), (3) and (11) are common to both Positivists and Popperians. Popper rejects most of the other theses of the Positivists, especially their central thesis which concerns the idea of pure observation. Finally, he agrees with the Positivists that science is uniquely progressive, objective and rational; but his notions of the progressiveness, objectivity and rationality of science are entirely different from those of the Positivists.

Due to so much agreement between Popper and the Positivists, it is usually said that Popper is a semi-positivist. We can at least say that his departure from the Positivist view of science is not radical. Let us now look at a more radical departure from positivism and thus a more radical version of the post-positivist philosophy of science put forth by Thomas S. Kuhn. Before we do so, an objection against Popper's position can be mentioned, though many other objections can be raised.

CRITICISM AGAINST KARL POPPER'S PHILOSOPHY OF SCIENCE

A serious lacuna in Popper's position concerns his idea of scientific progress. First of all, according to Popper, the growth of science is essentially discontinuous in the sense that a new theory which displaces an old theory is not the old theory plus an epsilon, because it is entirely new. Yet he seeks to make room for continuity in the growth of science by insisting that the old theory is a limiting case of the new theory. In this connection he cites the example of Newtonian mechanics and Relativistic mechanics. The former is the limiting case of the latter in the sense that in a certain domain both give the same results. Thus the former is contained in the latter. Hence there is some continuity in the growth of science. But Popper overlooks the fact that such examples of an old theory being a limiting case of the new one are rare. For example, it is absurd to say that the Phlogiston theory is a limiting case of the oxygen theory or that Ptolemy's theory is a limiting case of the Copernican theory. Secondly, Popper says that successive theories in any domain exhibit increasing verisimilitude i.e. truth-nearness. That is, the reality constituted by unobservable entities is more like what a new theory says than what its immediate predecessor says. This means that, following Popper, we have to say that the ultimate constituents of matter are more like fields, as present physical theory says, than like particles (atoms), as claimed by Newtonian theory. This is unintelligible. What does it mean to say that the ultimate constituents of matter are more like fields than particles called atoms? Either they are like fields or like particles. Thirdly, when Popper says a new theory is better than the old one (in the sense that it is more true), he assumes that the two theories can be compared. This means that they have something in common which makes them comparable. But this has been ably questioned by Thomas Kuhn, who sought to show that when one fundamental theory replaces another, the two theories are so radically different as to make any talk of comparison between them highly questionable. It is to his views we shall now turn.

THOMAS KUHN'S PHILOSOPHY OF SCIENCE

Thomas Kuhn's work The Structure of Scientific Revolutions is a milestone in the history of the 20th century philosophy of science. A brief exposition of his basic ideas is in order.

According to Kuhn, in the life of every major science there are two stages: (1) the pre-paradigmatic stage and (2) the paradigmatic stage. In the pre-paradigmatic stage one finds more than one mode of practising that science. That is, there was a time in the history of Astronomy when different schools of Astronomy practised Astronomy differently. So is the case with Physics, Chemistry and Biology. In that stage their situation was similar to that which obtains today in areas like art, philosophy and even medicine, wherein divergent modes of practising these disciplines co-exist. Today we speak of schools of Art (e.g. painting), schools of Philosophy and systems/schools of medicine. But today we do not speak of schools of astronomy or physics or chemistry or biology.

This is, according to Kuhn, because areas like art, philosophy and medicine did not, and cannot, make a transition from the pre-paradigmatic stage to the paradigmatic stage which marks the disappearance of plurality, that is, the disappearance of schools. In other words, the transition means the replacement of plurality by a monolith i.e. a uniform mode of practice. Such a transition is made possible, Kuhn claims, by the acquisition of a paradigm. When a science makes such a transition, we may say, it has become 'mature' or 'science' in the proper sense of the term. Astronomy was the first to make such a transition, followed by Physics, Chemistry and Biology in that order. The social sciences are still, according to him, in the pre-paradigmatic stage, though Economics is showing signs of such a transition. This is evident from the fact that in the Social Sciences there is no consensus on fundamentals, as we can see from the prevalence of distinct schools in every Social Science.

So, the transition to maturity is effected by a science acquiring a paradigm. The question is: "What is a paradigm?"

We all know that Ptolemy's Almagest, Newton's Principia and Darwin's Origin of Species are path-breaking works in the areas of Astronomy, Physics and Biology respectively. According to Kuhn, these works provided paradigms for these disciplines. They did so by specifying the exact manner in which these disciplines ought to proceed. They laid the ground rules regarding what problems these disciplines must tackle and how to tackle them. Hence, paradigms are "universally recognized achievements that for a time provide model problems and solutions to a community of practitioners." A paradigm specifies what the ultimate constituents of that sphere of reality which a particular science is inquiring into are. Secondly, it identifies the model problems. Thirdly, it specifies the possible range of solutions. Fourthly, it provides the necessary strategies and techniques for solving the problems. Lastly, it provides examples which show how to solve certain problems. In other words, a paradigm is a disciplinary matrix of a professional group. Once a science possesses a paradigm, it develops what Kuhn calls a 'normal science tradition'. Normal science is the day-to-day research activity purporting to force nature into the conceptual boxes provided by the paradigm. The practitioner of normal science, that is, a scientist who engages in day-to-day research, internalizes the paradigm through professional education. This explains the prevalence of textbook culture in science education.

Of course, scientific practice is not exhausted in terms of day-to-day research or 'normal science'. When a paradigm fails to promote fruitful, interesting and smooth normal science, it is considered to be in a crisis. The deepening of the crisis leads to the replacement of the existing paradigm by a new one. This process of replacement is called 'scientific revolution'. Therefore, scientific revolutions are "the tradition-shattering complements to the tradition-bound activity of normal science." Thus, once a science enters the paradigmatic stage, it is characterized by (1) normal science and (2) revolutions. In sheer temporal terms normal science occupies a much larger span than revolutionary science. That is to say, science is revolutionary once in a while and mostly it is non-revolutionary or normal. Also the scientific activity engaged in by most of the practitioners can be characterized aptly in terms of normal science. Because of this temporal and numerical magnitude we can say that much of the scientific activity as we ordinarily encounter it is normal, though this normal course is occasionally interrupted by revolutions which change the form, content and direction of the process of the scientific activity, which is basically normal, by which we mean a non-revolutionary, committed and tradition-bound activity. Normal science demands a thoroughgoing convergent thinking and hence is preceded by an education that involves "a dogmatic initiation in a pre-established tradition that the student is not equipped to evaluate". Normal science is an activity that purports not to question the existing paradigm but to (1) "increase the precision ... of the existing theory by attempting to adjust the existing theory or existing observation in order to bring the two into closer and closer agreement" and (2) "to extend the existing theory to areas that it is expected to cover but in which it has never before been tried." In other words, normal science consists in solving puzzles that are encountered in forcing nature into the conceptual boxes supplied by the reigning paradigm.

It is in this way that Kuhn attempts to account for the smooth, defined and directional character of day-to-day scientific research, in terms of the features of what he calls "Normal Science". Normal science has no room for any radical thinking. It is limited to the enterprise of solving certain puzzles in accordance with the rules specified by the paradigm. These rules are never questioned but only accepted and followed. The aim of scientific education is to ensure that the paradigm is internalized by the student. In other words, professional training in science consists in accepting the paradigm as given and equipping oneself to promote the cause of the paradigm by giving it greater precision and further elaboration. The day-to-day scientific research does not aim at anything fundamentally new but only at the application of what has already been given, namely the theoretical ideas and the practical guidelines for solving certain puzzles. It is in this sense that normal science is a highly tradition-bound activity.

But it is this tradition-bound activity which makes science a successful enterprise. Kuhn says, "Normal science, the puzzle solving activity, is a highly cumulative enterprise, eminently successful in its aim: the steady extension of the scope and precision of scientific knowledge. In all these respects it fits with great precision the most usual image of scientific work. Yet one standard product of the scientific enterprise is missing. Normal science does not aim at novelties of fact or theory and when successful finds none." In order to reconcile the undeniable fact of novelty that science exhibits by making new discoveries with the somewhat hackneyed phenomenon of normal science, it is necessary to show that "research under a paradigm must be a particularly effective way of inducing paradigm change." But, how?

As pointed out earlier, normal science purports to force nature into the conceptual boxes provided by the reigning paradigm by solving puzzles in accordance with the guidelines provided by the paradigm, whose validity is accepted without question. During this process of puzzle solving, certain hurdles may be encountered. We then speak of "anomalies". That is, an anomaly arises when a puzzle remains a puzzle, defying every attempt to resolve it within the framework of the paradigm. But the appearance of one or two anomalies is not sufficient to overthrow a paradigm. The ushering in of the era of a new paradigm has to be preceded by the appearance of not one or two anomalies but many, and not minor anomalies but major ones. In order to declare a paradigm to be crisis-ridden, what is needed is an accumulation of major anomalies. But there is no clear-cut and objective criterion to decide which anomalies are major and how many anomalies must accumulate to declare a paradigm to be crisis-ridden. In other words, there is no criterion which decides whether a perceived anomaly is only a puzzle or the symptom of a deep crisis. The issue will be decided by the community of the practitioners of the discipline through the judgment of its peers. Once the scientific community declares the existing paradigm to be crisis-ridden, the search for the alternative begins. Of course the crisis-ridden paradigm will not be given up until and unless a new theory is accepted in its place. It is only during this transitional period of search for the new paradigm that the scientific debates become radical.

During the process of the search for an alternative, the scientific community has to make a choice between competing theories. In this choice, the evaluation procedures of normal science are of no help, "for these depend in part upon a particular paradigm, and that paradigm is at issue."8 The issue concerning the paradigm choice cannot be settled by logic and experiment alone. What ultimately matters is the consensus of the relevant scientific community. In other words, the choice of a theory as the new paradigm has to be understood in terms of the value judgments which a community of scientific practitioners exercises in the context in which they find themselves. While choosing a particular theory for the status of the new paradigm, the scientific community might advance arguments that seek to show that the chosen theory solves "important" problems, is more 'simple' than the rest etc. But these are all value judgments since there is no objective criterion to decide which problem is important and what is simple etc. In other words, that theory is chosen which fits the value commitments of a scientific community. Hence, the question of choice becomes the question of value. Kuhn points out "that question of value can be answered only in terms of criteria that lie outside of normal science altogether, and it is that recourse to external criteria that most obviously makes paradigm debates revolutionary."9 Thus a paradigm choice cannot be explicated in the neutral language of mathematical equations and experimental procedures, but in terms of specific perceptions which a scientific community as a social entity entertains about what it considers to be the basic values of its professional enterprise. In other words, the ultimate explanation of a theory choice is not methodological but sociological.

Hence in Kuhn's scheme, the idea of the scientific community as a social entity is axiomatic. That is to say, according to him, "If the term 'paradigm' is to be successfully explicated, scientific communities must first be recognized as having an independent existence."10 This means that one must explain scientific practice in terms of paradigms and paradigmatic changes, and the latter are to be explicated in terms of a particular scientific community which shares the paradigms and brings about paradigmatic changes. Thus, the concept of scientific community is basic to the concept of paradigm. The concept of scientific community can be explicated in sociological terms. Hence, the ultimate terms of explication of scientific activity are sociological.

What is the relation between the old paradigm, which is overthrown, and the new paradigm which succeeds it? Kuhn's answer to this question is extremely radical. According to him, in no obvious sense can one say that the new paradigm is better or truer than the old one. Kuhn maintains that two successive paradigms cut the world differently. They speak different languages. In fact, when a paradigm changes, to put it metaphorically, the world changes. With his characteristic lucidity he says, "the transition from a paradigm in crisis to a new one from which a new tradition of normal science can emerge is far from a cumulative process, one that is achieved by an articulation or extension of the old paradigm. Rather it is a reconstruction of the field from new fundamentals, a reconstruction that changes some of the field's most elementary theoretical generalizations as well as many of its ... methods and applications."11 This apart, Kuhn contends that the two paradigms talk different languages. Even if the same terms are used in two paradigms, the terms have different meanings. What can be said in the language of one paradigm cannot be translated into the other language. On the basis of these considerations, Kuhn claims that the relation between two successive paradigms is one of incommensurability. No wonder Kuhn compares a paradigm shift to a gestalt shift. With this, the idea of scientific progress as a continuous process and the idea of truth as the absolute standard stand totally repudiated. Kuhn advances what might appear to be an undiluted relativism, according to which truth is intra-paradigmatic and not inter-paradigmatic. That is to say, what is true is relative to a paradigm and there is no truth lying outside all paradigms.

POPPER VERSUS KUHN

Some of the radical implications of Kuhn's position can be brought out by juxtaposing his views with those of Popper. The hallmark of science according to Popper is critical thinking. In fact, science exemplifies critical thinking at its best. Since critical thinking considers nothing to be settled and lying beyond all doubt, fundamental disagreements and divergent thinking must, and in fact do, characterize science.
As we have seen, according to Kuhn, what constitutes the essence of scientific practice is normal science, and we have also seen why normal science is a highly tradition-bound activity, an activity made possible by a consensus among the practitioners who share a paradigm. Thus, if Popper sees the essence of science in divergent thinking and fundamental disagreements, Kuhn sees the essence of science in convergent thinking and consensus. In other words, the hallmark of science according to Kuhn is tradition-bound thinking. In fact, according to Kuhn, what distinguishes science from other areas of creative thinking is that whereas in science one finds institutional mechanisms of enforcing consensus, the other areas suffer from perpetual disagreements even on fundamentals.

Secondly, if Popper considers the individual to be the locus of scientific activity, Kuhn bestows that status upon the scientific community. Both the positivists and Popper looked upon science as the sum total of the work of individual scientists working in accordance with a method, though the positivists and Popper fundamentally differed on the characterization of that method. As opposed to this individualistic account of the scientific enterprise, Kuhn propounds a collectivistic view of scientific activity. In Kuhn's scheme, it is the scientific community which constitutes the pillar of stability and the locomotive of change. This is borne out by the fact that, according to Kuhn, the scientific community has institutional mechanisms, like peer review, by which it can settle all the issues: whether an anomaly is a symptom of crisis, how many anomalies suffice to warrant the search for an alternative paradigm, what factors are to be considered in choosing a new theory for the status of the paradigm, etc.

Thirdly, Popper and Kuhn differ fundamentally in their attitude towards the transition from one theory to another in science. According to Popper, we can explain every case of theory change in terms of certain norms which science always adopts and follows meticulously. In fact, scientific rationality consists in following these norms. But Kuhn contends that an adequate explanation of theory change must be in terms of the value judgments made by a community while making the choice. According to Kuhn, recourse to the so-called methodological norms explains nothing. From the point of view of Popper, Kuhn is an irrationalist, because he sets aside methodological norms and seeks to explain theory change exclusively in terms of non-rational or sociological factors like the value commitments of a professional group. Whatever be the merit of Popper's attack on Kuhn as an irrationalist, we can say that Kuhn's construal of scientific practice is sociological. That is to say, according to him, scientific activity cannot be understood by trying to find out the absolute standards which have guided scientific activity in all ages. It can only be understood in terms of the specific judgments which a community makes at a particular juncture regarding what it considers to be its value commitments as a professional group.

The above juxtaposition between Popper and Kuhn brings out the radical implications of Kuhn's views regarding the nature of scientific practice. However, in one respect Kuhn is very close to Popper. He, like the positivists, contends that there is something unique to science, though they differ in their explanation of what that uniqueness consists in. Positivists maintain that the hallmark of science is the systematic verifiability of its claims. According to Popper, the uniqueness of science consists in the systematic falsifiability of theories. According to Kuhn, it is consensus which marks out science from other areas of human endeavor. That is to say, Kuhn, like the positivists and Popper, does not question whether science is really unique. That is to say, instead of raising critical questions about the status science has acquired in contemporary culture, Kuhn only seeks to provide an alternative account of how it has acquired that status. In that sense Kuhn's position is quite conservative.

In this lesson and the preceding one, we had a brief look at the 20th century thinking on the nature of science. It is very difficult to decide which view is the correct one, though the positivist view has been shown to be highly inadequate. The question is: "How should we practise social sciences so as to make them scientific?"

Some social scientists take up the positivist recommendation: "collect data, extract a generalization, verify the generalization and formulate a law". Those social scientists who are inspired by the Popperian view take seriously the Popperian advice: "Formulate a problem, provide a tentative solution, try to falsify it; if the solution survives, treat it as a corroborated theory but not as a confirmed one". Still others go by Kuhn's view of science and think that the task of the social sciences today is to arrive at paradigms in the different social scientific disciplines. This will enable the social sciences to overcome ideological commitments which generate differences even at a fundamental level. According to them, the consensus so generated will bring the social sciences close to the natural sciences.

NOTES

1. Kuhn T S The Structure of Scientific Revolutions (first published in 1962) Chicago: Chicago University Press 1970 p viii
2. Ibid p 6
3. Kuhn T S The Essential Tension: Selected Studies in Scientific Tradition and Change Chicago: Chicago University Press 1977 p 229
4. Ibid p 223
5. Ibid
6. The Structure of Scientific Revolutions p 21
7. Ibid p 53
8. Ibid p 94
9. Ibid p 110
10. The Essential Tension p 295
11. The Structure of Scientific Revolutions pp 84-85
Paul Feyerabend's Philosophy of Science:

Till now we have briefly looked at some of the milestones in the history of twentieth century philosophy of science. We now add to our discussion the views of Paul K. Feyerabend. His book Against Method (1975) marks a radical departure from the mainstream philosophy of science. It does so by questioning the doctrine that science is qualitatively distinct from, superior to, and an ideal for non-scientific approaches to the world, natural or human, since it alone possesses a method, a view Feyerabend very ably seeks to demolish. Consequently, he calls into question the received image of science as the embodiment of progressiveness, objectivity and rationality.

It is necessary to note the experience that drove Feyerabend to this conclusion, which he subsequently substantiated with brilliant arguments. The University of California, Berkeley initiated a programme in which men and women belonging to marginalized sections of the population in the United States were invited to the University. They were made to listen to lectures by eminent scientists and scholars of the University about the achievements of science, the virtues of rational thinking, etc. Feyerabend was also invited to address them. When Feyerabend entered the venue and saw his audience, he began to wonder whether he had any right to address them and impart lessons to them when he did not know anything about the conditions of their life, their struggles for existence, the problems they faced, the oppression and alienation they suffered. He felt that his belief that he had the right to speak down to those whose life he did not share was due to the intellectual arrogance he had imbibed because of his education, including education in science, philosophy etc. The book Against Method was an attempt to demolish some of the dogmas that have generated the myth of the hegemony of the modern West over the rest of the world.

As said earlier, the butt of Feyerabend's attack is the claim of almost all philosophers of science that there is something called the method of science, though they may differ about what that method is. Feyerabend has given two important arguments against such a claim, the first one being historical and the second one being logical.

The first argument is this: the method is a set of canons which, according to philosophers, science follows and must follow to live up to its image and pursue its goal. Various theories of scientific method, such as inductivism and hypothetico-deductivism, construe these canons differently. Feyerabend's challenge is the following: give me any method, i.e., a set of canons, and I will show that they are violated some time or the other, that they are violated deliberately and, more importantly, that such violations led to fundamental breakthroughs which would not have been possible without setting aside every possible methodological canon. In this connection, Feyerabend provides a large number of detailed case studies, particularly that of Galileo, to substantiate his position.

We now come to the logical argument against methodologism. According to Feyerabend, what is common to all the theories of scientific method is the claim that a new scientific theory must satisfy the following two conditions in order to gain initial acceptance, viz. (a) the correspondence condition, and (b) the consistency condition. According to (a), the new theory must correspond to, i.e., square with, well established facts; according to (b), the new theory must be consistent with, i.e., square with, well established old theories in the domain for which the new theory is proposed. Now, Feyerabend rightly contends that (b) can be reduced to (a), because we insist on the consistency condition since the old theories correspond to well established facts. So, to put methodologism on the defensive it is sufficient to show that condition (a) is unreasonable, and this is what he seeks to show.

Suppose in place of an old theory T (e.g., the geocentric theory) a new theory T' (say, the heliocentric theory) is proposed. The new theory T' will not be supported by the established observations/facts. But can we conclude from this that T' is unworthy of being taken up seriously? According to Feyerabend, it is absolutely unreasonable to do so. This is because the so-called well established observations may not be really well established. They may appear to be so because they are understood by us in terms of the old theory which we have hitherto accepted. To realize that the old 'facts' are not well established we need to go beyond the old theory and adopt a vantage point, and it is this vantage point that the new theory T' could provide. But to insist that the new theory T' must correspond to the old facts is to eliminate all possibility of re-evaluating the so-called well-established facts. Thus, the correspondence condition, which is the apple of the eye of all methodologists, is counter-productive because it sets at naught the possibility of critically evaluating the old facts and therefore the old theory. Such a condition is status-quoist and hence regressive.

On the basis of these arguments against methodologism, Feyerabend works out his own conception of the observation-theory relation. This conception is radically different from the positivist notion of the unilateral dependence of theory on observation, and even from the Popperian notion of the linear interdependence of observation and theory. We may call Feyerabend's conception the "dialectical interdependence of theory and observation." To elucidate his view, Feyerabend gives the analogy of the Marxist theory of social dynamics.

According to Marx, any society has to be understood in terms of a two-tier structure: the substructure or base, and the superstructure or edifice. The substructure is constituted by what Marx calls 'production relations' or 'property relations', which correspond to a certain mode of production. The superstructure is constituted by the ideology of a society, which justifies the production relations as the most natural ones.

Take, for example, feudal society. It has production relations, or property relations, i.e., the economic relations among classes such as the feudal lords, serfs, middle class etc. The mode of production is the feudal mode of production, as the means of production are owned by the feudal class. The feudal ideology justifies the feudal class structure, which serves the interest of the feudal class.

Now, a new mode of production comes into existence. It is the capitalist mode of production, since the new means of production that correspond to it are owned by a new class, the capitalist class. However, the new mode of production cannot develop beyond a point, as the old (i.e., feudal) production relations, which squared with the old (feudal) mode of production, will not allow it to grow. The old production relations act as fetters on the new mode of production.

Hence, the new mode of production needs new production relations. That is, the new mode of production needs the replacement of the old production relations by new ones. A social revolution is nothing more than the replacement of old production relations by new ones, brought about by the capture of political power by the new class, the capitalist class, with or without violence. Once the social revolution is completed, a new ideology, the new superstructure, emerges that justifies the new social or production relations as the most natural.

Analogously, when a new theory emerges it is not allowed to grow by the existing facts, which square with the old theory. So, the new theory needs its own observations/facts. That is, the empirical basis of the old theory does not support the new theory. The new theory has to build its own empirical basis. Once such an empirical basis is established, the new facts/observations are taken to be well established. As Marx says regarding the nature of theorizing in science, unlike in architecture, in science we first build castles in the air and then build the (empirical) foundations for them (in the form of facts), rather than the other way round. To enable a new theory to build its own empirical basis, the new theory must be given initial ratification with minimum factual support. While some facts may be completely new, many facts are new in the sense that they are interpreted in terms of the new theory. Feyerabend establishes his position on the observation/fact-theory relation by taking up, among other things, the replacement of the geocentric theory by the heliocentric theory.

We now briefly mention some of the important consequences of Feyerabend's critique of what he calls the 'law and order' philosophy of science, i.e., the philosophical approach to science whose axiomatic claim is that there is something called the method of science:

a) As we have seen, Feyerabend's rejection of the idea of the method of science involves the repudiation of anything like a condition which a new theory has to satisfy in order to be worthy of initial acceptance. If this is so, there will be a proliferation of theories, as no theory can be rejected at the initial stage. Feyerabend welcomes such a development. Even if the proliferation of theories leads to some sort of anarchy, anarchy is better than dictatorship. According to him, every new theory, even if it conflicts with well established facts or well established theories, must be allowed to be worked upon so that it will develop its own empirical basis. If some or most or even all the new theories fail to develop their own empirical basis, they will die on their own. Let us not kill them in the name of the conditions specified by the champions of methodologism. Let us allow all babies to survive. If some babies die on their own, let them die. But let us not kill any baby just because it does not live up to the so-called minimum standard of health at birth.

. ".
In this connection, he attacks Kuhrr's position also. No doubt, Feyerabcnd
acknowledges, Kuhn does not share the dogma of methodologism. Kuhn maintains
that neither the stability (theory/paradigm retention) nor the dynamics
(theory /paradigm change) is determined by canons of scientific method, ind ucti re
or hypothetico-deductive. The canons of scientific method underdetermine 0
decision regarding the retention of a theory and our the choice among the theories
competing for acceptance.

But even then, Kuhn allows for only one paradigm at a time and hence, like the methodologists, valorizes monolithic thinking. According to Feyerabend, science hardly exhibits, except during times of stagnation, such a monolithic orientation. Even if Kuhn is right in saying that the hallmark of science is its consensus-oriented monolithic mode of thinking, it is high time we gave up this mode of thinking. The pluralistic way of thinking in science must not be viewed as a necessary evil but as something desirable in itself. So, Feyerabend looks forward to a post-paradigmatic phase of science in which science exhibits a pluralistic mode of thinking akin to what Kuhn characterizes as the 'pre-paradigmatic phase', which according to him was superseded by the 'paradigmatic phase', by entering into which a science becomes mature, acquiring a paradigm that brings into effect only one mode of practicing that science all over the world.

b) One important question in the philosophy of the human or social sciences is whether the human/social sciences, in order to be genuinely scientific, should follow the natural sciences by adopting the method of the natural sciences, which is the method of science. Methodological dualists claim that the method of the human/social sciences is one of interpretation and that of the natural sciences is one of induction or hypothetico-deduction. This is because the aim of the human/social sciences is understanding, whereas the aim of the natural sciences is explanation. Against the methodological dualists, methodological monists claim that there is only one method for all sciences, natural and human/social, irrespective of the difference in their subject matter. Of course, the methodological monists differ about what that one common method is. Some say it is the method of induction and others claim it is one of hypothetico-deduction.

Feyerabend rejects both views. This is because there is no such thing as the method even for the natural sciences, and hence the question of the human/social sciences following the method of the natural sciences simply does not arise. Human/social scientists should not entertain the very talk of following the methodological guidelines of the natural sciences and, equally, they should not entertain the idea of one method for the human/social sciences.

Kuhn identifies consensus-oriented thinking as the hallmark of the natural sciences and directly or indirectly advises the practitioners of the human sciences to follow the natural sciences by bringing into existence a paradigm in their respective fields and putting an end to the non-monolithic thinking in their disciplines. Feyerabend rejects such advice. On the contrary, he says, if the human sciences have a plurality of approaches whereas, as Kuhn says, the natural sciences do not, it is the natural sciences which have to follow the human/social sciences by making room for a plurality of approaches as their permanent feature.


c) One of the implications of Feyerabend's anti-methodologism concerns the very nature of modern education. Our education is science-centered because we believe that science is unique, and it is unique because it has a method. Once the idea of the method is realized to be unsustainable, there is no reason to allow science to establish a monopoly over education. We rightly removed religion from the centre of our educational practices. But we put science on the citadel vacated by religion. A democratic education must not allow any one way of looking at the world to have a monopoly on education. Education in a true democracy must expose a student to different ways of looking at the world: science, religion, myths, art, literature etc. When a student becomes an adult, she or he can thus make an informed choice of her/his own. Today we expose a student to only one way of looking at the world, viz. that of science. The choice of the scientific way of looking at the world is then not an informed choice, and in fact it is not a choice at all. He/she therefore is not even convinced of the scientific view of the world. This is clear from the fact that even supposedly scientific-minded individuals, professional scientists included, fall at the feet of the first "ideological street singer" they come across as a "holy" man.

d) A more important consequence concerns the myth of experts. Ordinary people's lives are sought to be shaped in the name of 'development'. It is assumed by policy makers that what ordinary people know is no knowledge at all and hence that they should not have any say regarding matters concerning their own lives. The delegitimation of people's knowledge is justified on the ground that the knowledge of experts has a scientific basis, as their views are arrived at on the basis of scientific method. Since the clout which experts have is based upon the myth that they possess the secrets of scientific method, we should not allow any privilege to experts regarding the issues pertaining to the life of the people, who should be the sole judges regarding what kind of society is desirable and how people's knowledge can play a legitimate role in shaping a society. It is high time we rejected the western model of development, which is thrust on non-western societies on the basis of the spurious claim that such a model of development is worked out on the basis of scientific method.

e) Finally, Feyerabend calls into question, or rather rejects, the idea that there is a hard and fast line between science and non-science. Positivists draw such a line between them on the basis of the systematic verifiability of scientific theories. Hypothetico-deductivists do so on the basis of the systematic falsifiability of scientific theories. Kuhn draws such a line by identifying consensus-oriented thinking, or the possession of a paradigm, as what separates science from non-science. Feyerabend rejects the idea of such a line of demarcation. Does it mean that he does not distinguish between science and, say, magic? No, he does distinguish. What he means by rejecting such a line of demarcation is that the line is not absolute or unchanging. It is relative, shifting and contextual, in the sense that each society at different times draws the line in its own way. The line is there, but it is context-bound. It is historical and not fixed.

Minds & Machines (2007) 17:135–167
DOI 10.1007/s11023-007-9060-8

Three Paradigms of Computer Science

Amnon H. Eden

Published online: 31 July 2007


© Springer Science+Business Media B.V. 2007

Abstract We examine the philosophical disputes among computer scientists concerning methodological, ontological, and epistemological questions: Is computer
science a branch of mathematics, an engineering discipline, or a natural science?
Should knowledge about the behaviour of programs proceed deductively or
empirically? Are computer programs on a par with mathematical objects, with mere
data, or with mental processes? We conclude that distinct positions taken in regard
to these questions emanate from distinct sets of received beliefs or paradigms within
the discipline:
– The rationalist paradigm, which was common among theoretical computer
scientists, defines computer science as a branch of mathematics, treats programs
on a par with mathematical objects, and seeks certain, a priori knowledge about
their ‘correctness’ by means of deductive reasoning.
– The technocratic paradigm, promulgated mainly by software engineers, which has come to dominate much of the discipline, defines computer science as an
engineering discipline, treats programs as mere data, and seeks probable,
a posteriori knowledge about their reliability empirically using testing suites.
– The scientific paradigm, prevalent in the branches of artificial intelligence,
defines computer science as a natural (empirical) science, takes programs to be
entities on a par with mental processes, and seeks a priori and a posteriori
knowledge about them by combining formal deduction and scientific
experimentation.

A. H. Eden (&)
Department of Computer Science, University of Essex, Colchester, Essex, UK

A. H. Eden
Center for Inquiry, Amherst, NY, USA


We demonstrate evidence corroborating the tenets of the scientific paradigm, in particular the claim that program-processes are on a par with mental processes. We conclude with a discussion of the influence that the technocratic paradigm has been having over computer science.

Keywords Philosophy of computer science · Ontology and epistemology of computer programs · Scientific paradigms

1 Introduction

In his seminal work on scientific revolutions, Thomas Kuhn (1992) defines scientific
paradigms as ‘‘some accepted examples of actual scientific practice... [that] provide
models from which spring particular coherent traditions of scientific research’’. The
purpose of this paper is to investigate the paradigms of computer science and to
expose their philosophical origins.
Peter Wegner (1976) examines three definitions of computer science: as a branch
of mathematics (e.g., Knuth 1968), as an engineering (‘technological’) discipline,
and as a natural (‘empirical’) science. He concludes that the practices of computer
scientists are effectively committed not to one but to either one of three ‘research
paradigms’1. Taking a historical perspective, Wegner argues that each paradigm
dominated a different decade during the 20th century: the scientific paradigm
dominated the 1950s, the mathematical paradigm dominated the 1960s, and the
technocratic paradigm dominated the 1970s—the decade in which Wegner wrote his
paper2. We take Wegner’s historical account to hold and postulate (§5) that to this
day computer science is largely dominated by the tenets of the technocratic
paradigm. We shall also go beyond Wegner and explore the philosophical roots of
the dispute on the definition of the discipline.
Timothy Colburn (2000, p. 154) suggests that the different definitions of the
discipline merely emanate from complementary interpretations (or ‘views’) of the
activity of writing computer programs, and therefore they can be reconciled as such.
Jim Fetzer (1993) however argues that the dispute is not restricted to definitions,
methods, or reconcilable views of the same activities. Rather, Fetzer contends that
disagreements extend to philosophical positions concerning a broad range of issues
which go beyond the traditional confines of the discipline: ‘‘The ramifications of this
dispute extend beyond the boundaries of the discipline itself. The deeper question
that lies beneath this controversy concerns the paradigm most appropriate to
computer science’’. Not unlike Kuhn, Fetzer takes ‘paradigm’ to be that set of
coherent research practices that a community of computer scientists share amongst
them. By calling the disagreements ‘paradigmatic’ Fetzer claims that their roots

1
To which Wegner also refers as ‘cultures’ or ‘disciplines’ interchangeably.
2
The ‘‘Denning report’’ (Denning et al. 1989) authored by the task force which was commissioned to
investigate ‘‘the core of computer science’’ also lists three ‘‘paradigms’’ of the discipline: theory/
mathematics, abstraction/science, and design/engineering. According to this report, these paradigms are
‘‘cultural styles by which we approach our work’’. They conclude however that ‘‘in computing the three
processes are so intricately intertwined that it is irrational to say that any one is fundamental’’.


extend into philosophical questions of existence (ontological) and knowledge


(epistemological) about computers and programs:
Some of the most important philosophical issues that arise within this context
concern questions of a philosophical character. These involve ‘‘ontic’’ (or
ontological) questions about the kind of things computers and programs are, as
well as ‘‘epistemic’’ (or epistemological) questions about the kind of
knowledge we can possess about things of this kind. (Fetzer 1993)
Like Fetzer and Wegner, we contend that computer scientists generally subscribe
to distinct paradigms, which emanate from distinct, inconsistent, and mutually
exclusive methodological positions concerning the choice of methods for investigating programs (MET, §1.1), ontic positions concerning the nature of programs
(ONT, §1.2) and epistemic positions concerning the nature of knowledge about
them (EPI, §1.3). In the remainder of this section we examine the philosophical
disputes among computer scientists. Seeking to spell out the philosophical position
underlying each paradigm of computer science, we proceed in the following
sections to examine the tenets of each, contending that—

(§2) The rationalist paradigm, which was common among theoretical computer
scientists, defines the discipline as a branch of mathematics (MET-RAT),
treats programs on a par with mathematical objects (ONT-RAT), and seeks
certain, a priori knowledge about their ‘correctness’ by means of deductive
reasoning (EPI-RAT).
(§3) The technocratic paradigm, promulgated mainly by software engineers,
defines computer science as an engineering discipline (MET-TEC), treats
programs as mere data (ONT-TEC), and seeks probable, a posteriori
knowledge about their reliability empirically using testing suites (EPI-TEC).
(§4) The scientific paradigm, prevalent in artificial intelligence, defines computer
science as a natural (empirical) science (MET-SCI), takes programs to be on
a par with mental processes (ONT-SCI), and seeks a priori and a posteriori
knowledge about them by combining formal deduction and scientific
experimentation (EPI-SCI).
Since arguments supporting the tenets of the rationalist and technocratic
epistemological positions have already been examined elsewhere (e.g., Colburn’s
(2000) detailed account of the ‘verification wars’), their treatment in §2 and §3 is
brief. Instead, we expand on the arguments of complexity, non-linearity, and self-
modifiability for the unpredictability of programs and conclude that knowledge
concerning certain properties of all but the most trivial programs can only be
established by conducting scientific experiments.
In §4 we proceed to examine seven properties of program-processes (temporal,
non-physical, causal, metabolic, contingent upon a physical manifestation, and non-
linear) and conclude that program-processes are, in terms of category of existence,
on a par with mental processes. This discussion shall lead us to concur with Colburn
and conclude that the tenets of the scientific paradigm are the most appropriate for
computer science. Nonetheless, in §5 we demonstrate evidence for the dominance of


the technocratic paradigm which has prevailed since Wegner (1976) described the
1970s as the decade of the ‘technological paradigm’ and examine its consequences.
Our discussion will lead us to conclude that this domination has not benefited
software engineering, and that for the discipline to become as effective as its sister,
established engineering disciplines it must abandon the technocratic paradigm.

1.1 The Methodological Dispute

Computer science textbooks, classics, research articles, conferences, and curricula of undergraduate programs are dominated by radically different methods of conducting
research and teaching about computer programs. Mathematical methods of
investigation guide the research in computability, automata theory, computational
complexity, and the semantics of programming languages; design rules of thumb,
extensive testing suites, and regimented development methods dominate the
branches of software engineering, design, architecture, evolution, and testing; and
the methods of natural sciences, which combine mathematical theories with scientific
experiments, govern the research in artificial intelligence, machine learning,
evolutionary programming, artificial neural networks, artificial life, robotics, and
modern formal methods. This methodological incongruity manifests itself in many
ways. For example, in some research institutes computer science is a department in
the school of mathematics, in others it is a part of the engineering faculty, while other
computer science departments are grouped with the natural sciences.
The dispute concerning the definition of the discipline and its most appropriate
methods of investigation can thus be paraphrased as follows:
MET Is computer science a branch of mathematics, on a par with logic, geometry,
and algebra; is it an engineering discipline, on a par with chemical or aeronautical
engineering; or is it indeed a natural, experimental (empirical) science, on a par
with astronomy and geology? Should computer scientists rely primarily on
deductive reasoning, on test suites and regimented software development process,
or should they employ scientific practices which combine theoretical analysis with
empirical investigation? How is the notion of a scientific experiment different
from a test suite, if at all? What is the relation between theoretical computer
science and computer science?

We shall demonstrate that the methods employed by each paradigm of computer science emanate from the stance that each paradigm takes in the ontological (ONT,
§1.2) and the epistemological disputes (EPI, §1.3), examined below.

1.2 The Ontological Dispute

We take the notion of a computer program to be central to computer science. In this paper we focus our discussion on the ontological dispute concerning the nature of programs.

In his discussion of questions that arise from Artificial Life ('A-Life'), Eric Olson poses the following ontological question:


What ontological category would computer [programs] belong to? Are they
supposed to be material objects? ... If so, what matter; and if not, what are they
made of? ... Events or processes? Platonic complexes of pure information? ...
If not, where are they? ... Are they located in space and time at all? ... Or are
the traditional ontological categories of the philosophers adequate to account
for this new phenomenon? (Olson 1997)
We take into consideration all sorts of entities that computer scientists
conventionally take to be ‘computer programs’, such as numerical analysis
programs, database and World Wide Web applications, operating systems,
compilers/interpreters, device drivers, computer viruses, genetic algorithms,
network routers, and Internet search engines. We shall thus restrict most of our
discussion to such conventional notions of computer programs, and generally
assume that each is encoded for and executed by silicon-based von-Neumann
computers. We therefore refrain from extending our discussion to the kind of
programs that DNA computing and quantum computing are concerned with.
The ontological dispute in computer science may be recast in the terminology we
shall introduce below as follows:
ONT Are program-scripts mathematical expressions? Are programs mathemat-
ical objects? Alternatively, should program-scripts be taken to be just ‘a bunch of
data’ and the existence of program-processes dismissed? Or should program-
scripts be taken to be on a par with DNA sequences (such as the genomic
information representing a human), the interpretation of which is on a par with
mental processes?
Below we clarify some of the technical terms mentioned in ONT and in the
remainder of this paper.

Terminology

We seek to distinguish between two fundamentally distinct senses of the term 'program' in conventional usage: The first is that of a static script, namely a well-
formed sequence of symbols in a programming language, to which we shall refer as
a program-script. The second sense is that of a process of computation generated
by ‘executing’ a particular program-script, to which we shall refer as a program-
process. Any mention of the term ‘program’ shall henceforth apply to both senses3.
Rather than attempting to define these terms formally we shall illustrate them
with an example. Each program-script is associated with a programming language.
What distinguishes a program-script from a mere sequence of symbols is the
requirement that program-scripts are expressions that are well formed according to
the syntactical and semantic rules of a specific programming language.
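For instance (our illustration, not the paper's), in the syntax of Scheme:

(+ 1 2)    ; a well-formed expression, hence a (minimal) program-script
)+ 1 2(    ; not a program-script: no rule of the grammar generates it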
The programming languages we are concerned with are generally divided into machine and high-order programming languages. By machine programming language we shall refer to programming languages which restrict themselves to
3
For example, the statement ‘programs are abstract’ shall be taken to assert that ‘program-scripts and
program-processes are abstract’.


Table 1 Program-script encoded in a machine programming language5


75 E0 73 75 E1 73 FA D3 75 52 D5 75 74 F9 A2 21 F0 73
58 71 F9 A2 21 F0 73 30 73 E4 73 31 73 E5 73 32 73 E6
73 33 73 E7 44 70 34 F6 43 73 E6 43 73 E7 44 70 35 F6
73 34 73 E0 73 35 73 E1 75 60 5E D5 75 31 D3 75 30 F6

primitive machine instructions for a given von-Neumann, silicon-based, mass-produced class of microprocessors. For example, the Intel 8086 class of
microprocessors effectively defines a specific machine programming language,
such that a program-script encoded therein4 is decipherable by any computer based
on the 808X microprocessor family. In such machines, each program-script is
represented as a configuration of electrical charges of the machine’s memory,
normally transcribed in binary or hexadecimal code, as demonstrated in Table 1.
Not surprisingly, programs in machine programming languages proved to be
exceptionally difficult for humans to understand, reason about, and adapt.6 Worse
still, rapid developments in computing and microprocessor technology make
programs encoded for older generation microprocessors obsolete along with the
class of machines for which they were specifically tailored.
Improvements in the processing power of computers during the 1950s have enabled the introduction of high-order programming languages (IEEE 1990), also called compiled or interpreted programming languages. High-order programming languages allow programmers to harness the power of other powerful programs, such as compilers (interpreters) and operating systems, which interpret and execute program-scripts encoded in these languages. As a result of their prevalence, claims about the 'text of the program' (e.g., Hoare's, §2) most commonly refer to programs encoded in high-order programming languages, such as the program-script depicted in Table 2.
The second sense of the word ‘program’ is that of a process (also thread, task, or
bot). The term program-process is thus reserved to that entity which is generated
from executing a program-script in the appropriate operational environment. Once
generated, a copy of the program-script is loaded into the computer’s memory (the
program’s ‘image’), followed by executing of the first instruction copied to the
image. For example, executing the program-script in Table 2 with the numbers 4
and 5 as input using Intel Pentium IV PC equipped with the Linux operating system
and the COMLISP compiler (Georick et al. 1997) shall generate to a program-process
which calculates the value of the expression 3+(4·5), the proceedings of which are
depicted in Table 3.
Confusion concerning the notion of a computer program can often be traced to
the two senses of the term. But program-scripts must not be confused with program-

4
Also known as machine code or object code.
5
The program adds 3 to the product of two numbers, encoded in the 8086 microprocessor assembly
language (Adapted from Georick et al. 1997).
6
For example, consider the difficulty of spotting and correcting errors in the program in Table 1.


Table 2 Program-script encoded in Lisp7

(define example
  (lambda (x y)
    (+ (* x y) 3)))

Table 3 Steps in a sample program-process generated from executing the program in Table 2

> (example 4 5)
(+ (* 4 5) 3)
(+ 20 3)
23

processes: the first is an inert sequence of symbols; the second is a causal and a temporal entity. Any number of program-processes can potentially be generated from each program-script. Furthermore, certain operating systems allow the simultaneous generation of a large number of program-processes from a single program-script, executed concurrently by a single microprocessor. For example, my Personal Computer can generate and concurrently execute large numbers of program-processes from the program-script in Table 2.
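To make the distinction concrete, here is a minimal sketch of our own (not the paper's), in the same Scheme notation as Table 2: the script below is a single inert sequence of symbols, while each application of example generates a distinct, temporal program-process.

;; One program-script: an inert, well-formed sequence of symbols.
(define example
  (lambda (x y)
    (+ (* x y) 3)))

;; Each application below generates a distinct program-process from
;; the same script; the script itself is unchanged by either run.
(example 4 5)   ; first program-process: evaluates to 23
(example 6 7)   ; second program-process: evaluates to 45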

1.3 The Epistemological Dispute

Program specifications are statements that assert our expectations from a program.
If specifications are defined before the program-script is encoded they can be used to
articulate the objectives of the encoding enterprise and drive the software
development process, which is often complex and arduous. For example, a
specification asserting that the program-script in Table 2 indeed calculates the sum
of the product of two numbers and the number 3 can be formally specified as a
lambda expression:

λxy. x·y + 3                    (1)
In more conventional notation, (1) can also be represented as a two-place function:

example(x, y) = x·y + 3                    (2)

Having formally articulated the specification, the correctness of a program can be taken to mean the extent to which it meets its specifications. The question of correctness can thus be recast as the question whether any or all of the program-processes that can, shall, and have been generated from program-script sp meet or 'satisfy' specification s. The hypothesis 'program p is correct with relation to s' therefore asserts that p satisfies s. For example, the correctness of example

7
The program adds 3 to the product of two numbers, encoded here in the syntax of Scheme (Abelson and
Sussman 1996), a dialect of Lisp


Table 4 Sample informal specifications


• Program x does not cause the space shuttle to explode
• Program x translates French into English
• Program x is a computer virus
• Program x never lets unauthorized persons access sensitive data
• Program x never terminates unexpectedly
• Program x takes a regular expression (a string of text) and returns a list of World Wide Web
documents sorted by their ‘relevance’ to this expression
• Program x detects whether the face of person y appears in any given picture
• Program x executes with visibly identical outcome regardless of the operating system used

(Table 2) can be defined by the extent to which it satisfies specification (2). If the
specification is articulated in a mathematical language, as in (2), it is referred to as a
formal specification, in which case the question of ‘correctness’ is well-defined.
Most specifications however are not quite as simple as (2). Specifications may
assert not only the outcome of executing a particular program-script (e.g., adding a
record to a database or moving a robotic arm) but also how efficient are the
program-processes generated therefrom (e.g., how long it takes to carry out a
particular calculation) and how reliable they are (e.g., do they terminate
unexpectedly?). For this reason, fully formulated specifications are not always
feasible, as demonstrated by the specifications in Table 4.
Indeed, although an incorrect program can be a source of considerable damage, or even a matter of life and death, correctness may be very difficult (or, as Fetzer and Cohn claimed, altogether impossible) to establish formally. And while executing a program-script in various circumstances ('program testing') can discover certain errors, no number of tests can establish their absence8. For these
reasons, the problem of program correctness has become central to computer
science. If correctness cannot be formally specified and the problem of establishing
it is not even well-defined then is it at all meaningful to ask whether a program is
correct, and if so then what should ‘correctness’ be taken to mean and how can it be
established effectively? These questions are at the heart of the epistemological
dispute:
EPI Is warranted knowledge about programs a priori or a posteriori?9 In other
words, does knowledge about programs emanate from empirical evidence or from
pure reason? What does it mean for a program to be correct, and how can this
property be effectively established? Must we consider correctness to be a well-
defined property—should we insist on formal specifications under all circumstances and seek to prove it deductively—or should we adopt a probabilistic
notion of correctness (‘probably correct’) and seek to establish it a posteriori by
statistical means?
8
A statement most widely attributed to Dijkstra.
9
We follow Colburn (2000) in taking a priori knowledge about a program to be knowledge that is prior
to experience with it, namely knowledge emanating from analyzing the program-script, and a posteriori
knowledge to be knowledge following from experience with observed phenomena, namely knowledge
concerning a given set of specific program-processes generated from a given script.


Of all the philosophical questions we shall examine, computer scientists have been most explicit in the position they take concerning the epistemological dispute.

2 The Rationalist Paradigm

By the rationalist paradigm we refer to that paradigm of computer science which takes the discipline to be a branch of mathematics, the tenets of which have been
common among scientists investigating various branches of theoretical computer
science, such as computability and the semantics of programming languages. Tony
Hoare summarized the tenets of the rationalist paradigm as follows:
(1) Computers are mathematical machines. Every aspect of their behaviour
can be defined with mathematical precision, and every detail can be deduced
from this definition with mathematical certainty by the laws of pure logic. (2)
Computer programs are mathematical expressions. They describe with
unprecedented precision and in every minutest detail the behaviour, intended
or unintended, of the computer on which they are executed. ... (4)
Programming is a mathematical activity... its successful practice requires
determined and meticulous application of traditional methods of mathematical
understanding, calculation and proof10.

2.1 The Rationalist Methods

Concerned primarily with what is today taken to be the foundations of the discipline, theoretical computer science is the oldest and the most rigorously
established branch of computer science. In the first decades following the work of
Turing (1936; Turing and Copeland 2004), who is widely considered to be the father
of the discipline, computer science has largely been identified with what von-
Neumann described as the mathematical investigation of ‘‘the extent and limitations
of mechanistic explanation’’. During the 1930s, influential mathematicians such as
Turing, Church, and Kleene developed mathematically potent theories which sought
(and succeeded) to lend precision to intuitive notions of mechanistic computation
(also effective computation). One of the earliest triumphs of theoretical computer
science has been the mathematical proof according to which all the different
mathematical notions of mechanistic computation on offer—turing machines,
lambda expressions, and recursive functions11—are computationally equivalent.
This important result lent considerable support to what came to be known as
Turing’s Thesis (also Church–Turing Thesis, Copeland 2002), according to which
any ‘mechanistic’ process of computation can indeed be represented as the process
of computation by a turing machine, and by extension, as an algorithm, a recursive
function, etc.
10
Delivered, according to Mahoney (2002), in 1985 during his Inaugural Lecture as Professor of
Computation at Oxford.
11
Which were later accompanied by algorithms and abstract state machines.


During the 1940s the first electronic computers appeared, and with them emerged
the contemporary notions of computer programs (§1.2). A mathematical proof
demonstrating that programs encoded in machine programming languages are
computationally equivalent to the mathematical notions of mechanistic computation
on offer has established the relevance of deductive reasoning to modern computer
science. In particular, computational equivalence implied that any problem which
can be solved (or efficiently solved) by a turing machine can be solved by executing
a program-script encoded in a machine programming language (§1.2), and vice
versa, namely, that any problem which cannot be (efficiently) solved by a turing
machine also cannot be (effectively) solved by executing a program-script encoded
in a machine programming language. For this reason machine programming
languages are described as ‘turing-complete’ languages. High-order programming
languages have thus appeared in a rich mathematical context, the design of which
was heavily influenced by the mathematical notions of mechanistic computation on
offer. For example, the striking resemblance between the Lisp program in Table 2
and the lambda expression specifying it (1) emanates directly from the commitment
of the designer of the Lisp programming language (McCarthy 1960) to lambda
calculus.
The fundamental theorems of the theories of computation have remained relevant
notwithstanding generations of exponential growth in computing power. Time has
thus secured the primacy of deductive methods of investigation as a source of
certain knowledge about programs and led many to concur with Hoare. For
example, Knuth justifies his definition of computer science as a branch of
mathematics (Knuth 1968) as follows:
Like mathematics, computer science will be somewhat different from other
sciences in that it deals with man-made laws which can be [deductively]
proved, instead of natural laws which are never known with certainty. (Knuth
1974)
The rationalist stance in the methodological dispute can thus be summarized as
follows:
MET-RAT Computer science is a branch of mathematics, writing programs is a
mathematical activity, and deductive reasoning is the only accepted method of investigating programs.
MET-RAT is justified by the rationalist ontological and epistemological positions
examined below.

2.2 The Rationalist Ontology

Discovering the Turing-completeness of programming languages has established that every program-process can be adequately represented by some turing
machine, and by extension, by an algorithm, a recursive function, and by any
other computationally equivalent mathematical notion of mechanistic computation. The powerful insights that these mathematical notions of programs offer
have led Hoare (1986), Dijkstra (1988), and Lamport (1977) to claim that


program-scripts are mathematical expressions.12 This premise motivates the rationalist position in the ontological dispute (ONT), which can be recast and
justified as follows:
ONT-RAT Program-scripts are mathematical expressions. Mathematical expres-
sions represent mathematical objects. A program p is that which is fully and
precisely represented by sp. Therefore p is a mathematical object.

Functional and logic programming languages lend considerable support to ONT-RAT. Indeed, the striking similarity between the Lisp program in Table 2 and the recursive function defined in expression (1) can be taken to demonstrate that what is (fully and precisely) represented by a Lisp program is indeed a recursive function, a position which can be argued as follows:
ONT-RATfunction A program-script sp encoded in any turing-complete programming language (e.g., Lisp) is a mathematical expression representing a recursive
function fp. A program p is that which is fully and precisely represented by sp.
Therefore p is the mathematical function fp.
A possible objection to the rationalist ontology stems from the proliferation of
the kinds of mathematical objects on offer. Why, indeed, should programs be taken
to be recursive functions or turing machines rather than algorithms or any other
(computationally equivalent) class of mathematical objects? But the proliferation of
mathematical explanations can be taken to corroborate rather than weaken ONT-
RAT. Indeed, computational equivalence precisely means that any deduction that
can be made from choosing one mathematical explanation of mechanistic
computation can be made from the other. Stronger objections to ONT-RAT are
examined in §4.3.
ONT-RAT raises metaphysical questions concerning the nature of mathematical
objects. Prima facie, ONT-RAT may be taken to commit the rationalist to a platonist
position (e.g., Balaguer 2004). Plato's sphere of perfect existence consists of ideal universals (or Forms), such as mathematical objects, which are abstract (intangible, non-physical), eternal, observer-independent, non-mental, and immutable entities, that can only be perceived through our intellects (which Plato takes to be yet another sensory organ). Universals are taken to exist regardless of whether humans are aware of their existence, and are unaffected by the creation or destruction of any number of particulars. A platonist justification of ONT-RATfunction can be recast in these
terms as follows:
ONT-RATplatonism A program-script sp is a mathematical expression. A program
p is that which is fully and precisely represented by sp. Hence, p is a mathematical
object. Mathematical objects are platonic universals. Therefore, programs are
universals.

12
Dijkstra (1988) offered an explanation as to how this ‘fact’ escaped mathematicians and programmers
alike: ‘‘Programs were so much longer formulae than [mathematics] was used to that [many] did not even
recognize them as such.’’


ONT-RATplatonism has some interesting consequences. It implies that the lambda
calculus, abstract automata, as well as every build and every version of Windows
XP (and of every operating system) were discovered rather than invented (Turner
2007). It also implies that every program that has ever been written, will ever be
written, or can be written, exists eternally in the sphere of perfect existence,
regardless of whether it is ‘discovered’, encoded, or executed.13
However, most theoretical computer scientists have refrained from explicitly
committing computer programs to any particular category of existence. Despite
its appeal, we found no reference to ONT-RATplatonism or to any particular branch
of metaphysics. Indeed, ONT-RAT is also in line with other positions in
metaphysics, such as conventionalism and intuitionism. The objections to ONT-RAT
we examine in §4.3 shall therefore focus on the inadequacy of mathematical
objects as an account of the apparent properties of programs.

2.3 The Rationalist Epistemology

Hoare (1986) is explicit in his commitment to the primacy of a priori, certain
knowledge about programs and to the role of mathematical deduction in
establishing it. This position was shared by those who, like Hoare, sought to
establish mathematically the (formal) semantics of programming languages, most
notably Dana Scott and Christopher Strachey (1973). Hoare (1969) himself offers
an axiomatic theory formulated in classical mathematical logic. In Hoare
Logic, each stage in the computation process is represented by a state that can be
captured by a set of axioms in mathematical logic {P}. The consequence of
executing a particular statement s is represented as that state {Q} which results
from applying the rule of inference s′ associated with the statement s to
{P}. The Hoare triple {P}s{Q} can therefore be taken to represent the semantics
of statement s. For example, the intended behaviour of program example
(Table 2) can be represented by the following Hoare triple:

{x, y ∈ ℕ}  (example x y)  {output = x · y + 3}    (3)

The proof of correctness of the script in Table 2 would proceed with the attempt
to prove (3) by employing the rules of inference of Hoare Logic. Once
established, such a mathematical proof would secure the correctness of the
program-script in Table 2 with a certainty otherwise reserved for mathematical
theorems.
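To illustrate the flavour of such a derivation (a standard textbook instance of
Hoare Logic, not the actual proof of (3)): the assignment axiom
{P[e/x]} x := e {P} immediately yields

{x + 1 = 2}  x := x + 1  {x = 2}

and, since x = 1 implies x + 1 = 2, the rule of consequence strengthens the
precondition to give {x = 1} x := x + 1 {x = 2}. A proof of (3) composes many
such steps, one for each statement of the script in Table 2.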
Other efforts in delivering formal semantics have followed Hoare’s example in
the attempt to prove program correctness using other axiomatic theories. In
particular, Scott’s denotational semantics (Stoy 1977) harnessed the axioms of
Zermelo–Fraenkel set theory to prove program correctness.

13
Bill Rapaport (2007) notes that such a position has interesting consequences for the question of whether
programs can be copyrighted or patented.


The mathematical investigation of the semantics of programming languages has been
at least partially successful. If certain simplifying assumptions about the programming
language are made, then some of the properties of the program-script and some of
the consequences of executing it can indeed, at least in principle, be formally
deduced. However, such a notion of program correctness requires not only that
specifications (§1.3) be fully and formally defined—a potentially unfeasible task
(Table 4)—and that program-scripts be fully and precisely represented in the
same formal language, but also that these mathematical expressions lend
themselves to the deductive process of formal verification. In 1962 John McCarthy
suggested not only that program correctness can be deductively proven,
but also that it should be possible to mechanize the process of checking such
proofs:
It should be possible to eliminate debugging. ... Instead of debugging a
program one should prove that it meets its specifications, and this proof should
be checked by a computer program. (McCarthy 1962)
Indeed, for the rationalist, correctness is a well-defined, a priori notion which must
be proven mathematically. Hoare dismisses any pragmatic arguments against
this epistemological position and claims that a posteriori knowledge emanating
from experience (e.g. ‘debugging’) must be dismissed as ineffective, anecdotal, and
unscientific:
I find digital computers of the present day to be very complicated and rather
poorly defined. As a result, it is usually impractical to reason logically about
their behaviour. Sometimes, the only way of finding out what they will do is
by experiment. Such experiments are certainly not mathematics. Unfortu-
nately, they are not even science, because it is impossible to generalize from
their results or to publish them for the benefit of other scientists. (Hoare, in
Fetzer (1993))
The rationalist epistemological position can thus be recast as follows:
EPI-RAT Programs can be fully and formally specified, and their ‘correctness’ is
a well-defined problem. Certain, a priori knowledge about programs emanates
from pure reason, proceeding from self-evident axioms to the demonstration of
theorems by means of formal deduction. A posteriori knowledge is to be
dismissed as anecdotal and unreliable.

EPI-RAT is in line with rationalism in traditional epistemology (Markie 2004),
which holds that pure reason alone, as opposed to sense experience, plays a role in
our attempt to gain knowledge, and that a priori knowledge is superior to
a posteriori knowledge. This motivated our choice to refer to the rationalist
paradigm of computer science as such.
EPI-RAT is intimately tied to the rationalist’s ontological commitment to
mathematical objects (ONT-RAT). While empirical evidence can give us some
intuition about the nature of mathematical objects such as numbers, triangles, and
sets, e.g., by adding up apples or by drawing triangles on paper, such
evidence only offers anecdotal knowledge. If programs are taken to be mathematical
objects (ONT-RAT) and the methods of computer science are the methods of
mathematical disciplines, then knowledge about programs can only proceed
deductively. Indeed, a rationalist position towards knowledge in branches of pure
mathematics such as geometry, logic, arithmetic, topology, and set theory largely
dismisses a posteriori knowledge as unreliable, ineffective, and not sufficiently
general.
Objections to EPI-RAT are examined in the following sections.

3 The Technocratic Paradigm

By the ‘technocratic paradigm’14 we refer to that paradigm of computer science
which defines the discipline as a branch of engineering, proponents of which
dominate the various branches of software engineering, including software design,
software architecture, software maintenance and evolution, and software testing. In
line with the empiricist position in traditional philosophy, the technocratic paradigm
holds that reliable, a posteriori knowledge about programs emanates only from
experience, whereas certain, a priori ‘knowledge’ emanating from the deductive
methods of theoretical computer science is either impractical or impossible in
principle.

3.1 The Technocratic Methods

Wegner describes the background to the emergence of the technocratic paradigm,
echoing what we shall refer to as the argument of complexity (§3.2):
During the 1970s emphasis shifted away from ‘‘pure research’’ towards
practical management of the environment, not only in computer science but
also in other scientific areas. Decreasing hardware costs and increasingly
complex software projects created a ‘‘complexity barrier’’ in software
development which caused the management of software-hardware complexity
to become the primary practical problem in computer science. Research was
directed away from the development of powerful new programming languages
and general theories of programming language structure towards the
development of tools and methodologies for controlling the complexity, cost
and reliability of large programs (Wegner 1976)15.
14
tech·noc·ra·cy n: A government or social system controlled by technicians, especially scientists
and technical experts. (The American Heritage Dictionary of the English Language, 4th ed., 2000.)
15
These events led to the seminal NATO conference held in the fall of 1968 (Naur and Randell
1969) concerning the trouble that the software industry had been experiencing in producing reliable
computing systems. In the introduction to the conference’s report, Robert McClure (2001) argues that
although the term ‘software engineering’ was not in general use at that time, its adoption for the titles of
these conferences was deliberately provocative and played a major role in gaining general acceptance for
the term.


The technocratic turn away from the methods of theoretical computer science,
indeed away from all scientific practices, was most explicitly articulated by John
Pierce:
I don’t really understand the title, Computer Science. I guess I don’t
understand science very well; I’m an engineer. ... Computers are worth
thinking about and talking about and doing about only because they are useful
devices, which do something for somebody. If you are just interested in
contemplating the abstract, I would strongly recommend the belly button.
(Pierce 1968)
Indeed, the technocratic doctrine contends that there is room neither for theory nor for
science in computer science. During the 1970s this position, promoted primarily by
software engineers and programming practitioners, came to dominate the various
branches of software engineering. Today, the principles of scientific experimentation
are rarely employed in software engineering research. An analysis of all
5,453 papers published during 1993–2002 in nine major software engineering
journals and the proceedings of three leading conferences revealed that less than 2% of
the papers (!) report the results of controlled experiments. Even when conducted,
the statistical power of such experiments falls substantially below accepted norms
as well as the levels found in related disciplines (Dybå et al. 2006).
Instead of conducting experiments, software engineers use testing suites, the
purpose of which is to establish statistically the reliability of specific products of the
process of manufacturing software. For example, to establish the reliability of a
program designed for operating a microwave oven, software engineering educators
speak of a regimented process of software design (a precise specification
of which is hardly ever offered), followed by an ‘implementation’ phase during
which the program-script is encoded (about which little can be said), concluding
with the construction of a testing suite and the execution of (say) 10,000 program-processes
generated from the given program-script. If executed in a range of actual
(rather than hypothetical) microwave ovens, such a comprehensive test suite
furnishes the programmer with statistical data which can be used to quantitatively
establish the reliability of the computing system in question, e.g., using metrics
such as probability of failure on demand and mean time to failure (Sommerville
2006).
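The two metrics named above admit a simple reading; the following Common Lisp
sketch (hypothetical function names and data, loosely following the definitions in
Sommerville 2006) shows how the raw results of such a test suite yield them:

;; POFOD: the fraction of demands on the system that result in failure.
(defun probability-of-failure-on-demand (failures demands)
  (/ failures demands))

;; MTTF: the average observed running time between start-up and failure.
(defun mean-time-to-failure (observed-uptimes)
  (/ (reduce #'+ observed-uptimes)
     (length observed-uptimes)))

;; E.g., 3 failures observed in 10,000 runs of the oven controller:
;; (probability-of-failure-on-demand 3 10000)  => 3/10000
;; (mean-time-to-failure '(120 95 210))        => 425/3 (hours)

Note that both are statistical summaries of observed behaviour; neither says
anything about the executions that were not observed.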
Evidence of the decline of scientific methods is found in textbooks on software
engineering (e.g., Sommerville 2006). Rarely dedicating any space to deductive
reasoning16, and never to the principles of scientific experimentation in the empirical
sciences, such textbooks cover the subjects of software design, software evolution,
and software testing, focusing on manufacturing and testing methods borrowed from
traditional engineering trades. Much-discussed topics include models of software
development lifecycles, methods of designing testing suites, reliability metrics, and
statistical modelling.
The position of the technocratic paradigm concerning the methodological dispute
can thus be recast as follows:
16
At most, lip-service is paid to the role of verification in ‘safety-critical software systems’.


MET-TEC Computer science is a branch of engineering which is concerned
primarily with manufacturing reliable computing systems, a quality determined
by methods of established engineering such as reliability testing and obtained by
means of a regimented development and testing process. For all practical
purposes, the methods of theoretical computer science are dismissed as ‘navel
gazing’.
The technocratic methods of investigation are primarily motivated by the
technocratic epistemological position.

3.2 The Technocratic Epistemology

So far, there has been little philosophical discussion of making software
reliable rather than verifiable. ... If another view of software could arise ..., the
interests of real-life programming and theoretical computer science might both
be better served. (DeMillo et al. 1979)
The technocratic rejection of the premises of the rationalist epistemology (EPI-RAT)
relies on the argument of complexity for the inadequacy of deductive
reasoning, articulated by Richard DeMillo, Richard Lipton, and Alan Perlis as
follows:
Back in the real world ... the specifications for any reasonable compiler or
operating system fill volumes—and no one believes that they are complete. ...
The input assertions for these algorithms are not even formulable, let alone
formalizable. (DeMillo et al. 1979)
Indeed, whether a particular program-process meets our expectations depends on
the idiosyncrasies of the compiler, the operating system, and the particular computer
executing it, which are determined by the commercial concerns of their respective
vendors. These factors place specifications such as those listed in Table 4, as well as
the programs implementing them, at a level of complexity which does not lend itself
to formal deduction. By this argument, the inevitable conclusion is that formal
deduction is ineffective in establishing the correctness of all but the most trivial
computer programs.
The argument of complexity has received further corroboration from the technological
progress of the three decades since it was first articulated. Since 1979,
the average size of programs and operating systems has grown by at least four orders of
magnitude. More importantly, the complexity of compilers, operating systems,
microprocessors, and input is today compounded by component-based software
engineering technologies (Szyperski 2002), such as JavaBeans, .NET, and CORBA.
These technologies gave rise to gigantic programs such as Internet search engines
and electronic commerce applications which consist of hundreds of software
components (e.g., dynamically linked libraries, server-side and client-side threads),
whose construction is often ‘outsourced’ or otherwise delegated to a range of
independent commercial bodies or individual volunteers17, and which execute on
any one of a wide range of microprocessors (i.e., in a ‘heterogeneous environment’).
The notion of ‘input’ with regard to these programs has also been much further
complicated, as signals and data arrive at these programs from innumerable other
interacting programs, many of which can be as complex as autonomous software
agents (Fasli 2007), and which communicate via vast and very complex
communication networks. Any form of deductive reasoning about such programs
requires the representation of petabytes18 of instructions and data in every one of the
components of the program and of every computer, operating system, and network
router that is involved (directly or indirectly) in their execution. Since these often
change during the lifespan of a program-process, the very notion of a program-script
is not well-defined, specifications are not well-defined, and deductive
reasoning about their de facto representations is an idealization that is as unrealistic
and ineffective as, say, deductive reasoning about the individual atomic particles of
airplanes and power stations.
From the analogy to airplanes and power stations, DeMillo et al. conclude that
only probabilistic methods such as those employed by statistical mechanics and
thermodynamics can effectively establish any knowledge about such gargantuan
engineering feats:
How then do engineers manage to create reliable structures? ... They have a
mature and realistic view of what ‘‘reliable’’ means; in particular, the one
thing it never means is ‘‘perfect’’. There is no way to deduce logically that
bridges stand, or that airplanes fly, or that power stations deliver electricity.
(DeMillo et al. 1979)
According to DeMillo et al., the argument of complexity is so compelling that any
resistance thereto amounts to ‘symbol chauvinism’:
It is nothing but symbol chauvinism that makes computer scientists think that
our structures are so much more important than material structures that (a)
they should be perfect, and (b) the energy necessary to make them perfect
should be expended. We argue rather that (a) they cannot be perfect, and (b)
energy should not be wasted in the futile attempt to make them perfect.
(DeMillo et al. 1979)
Rather than taking ‘correctness’ to be a certain, formally defined property,
computer scientists must learn from the established branches of engineering that
more realistic notions of correctness are called for, namely probabilistic notions of
reliability:
It is no accident that the probabilistic view of mathematical truth is closely
allied to the engineering notion of reliability. Perhaps we should make a sharp

17
For example, the Debian GNU/Linux 3.1 version of the Linux operating system (Debian 2007) is the
product of contributions made by thousands of individuals that are entirely unrelated except in their
attempt to improve it.
18
One petabyte (1PB) is 1,024 terabytes or 2^50 bytes.


distinction between program reliability and program perfection—and concentrate
our efforts on reliability. (DeMillo et al. 1979)

The technocratic position concerning the epistemological dispute may be recast
in terms of the argument of complexity as follows:
EPI-TEC It is impractical to specify formally or to prove deductively the
‘correctness’ of a complete program. A priori, certain knowledge about the
behaviour of actual programs is therefore unattainable. If at all meaningful,
‘correctness’ must be taken to mean tested and proven ‘reliability’, a posteriori
knowledge about which is measured in probabilistic terms and established using
extensive testing suites.

Fetzer (1993) and Avra Cohn (1989) offer what is essentially an ontological
argument for an even stronger epistemological position, to which we shall refer as
the argument of category mistake. According to this argument, a priori knowledge
about the behaviour of machines is impossible in principle:
A proof that one specification implements another—despite being completely
rigorous, expressed in an explicit and well understood logic, and even checked
by another system—should still be viewed in context of many extra-logical
factors which affect the correct functioning of hardware systems. (Cohn 1989)
The technocratic position concerning the nature of knowledge can be justified by
the argument of category mistake as follows:
EPI-TECOnt It is impossible to prove deductively the correctness of any physical
object. A priori, certain knowledge about the behaviour of actual programs is
unachievable. If at all meaningful, ‘correctness’ must be taken to mean tested and
proven ‘reliability’, a posteriori knowledge about which is measured in
probabilistic terms and established using extensive testing suites.

Peter Markie (2004) defines empiricism as that school of thought which holds
that sense experience is the ultimate source of all our concepts and knowledge.
Empiricism rejects pure reason as a source of knowledge, indeed any notion of
a priori, certain knowledge, claiming that warranted beliefs are gained from
experience. Thus, EPI-TEC and EPI-TECOnt are in line with the empiricist
philosophical position.
The argument of complexity won the hearts of many computer scientists. As a
result, the technocratic doctrine has come to dominate software engineering journals
(IEEE TSE) and conferences (ICSE), contributions to which are traditionally judged
by experience gained from actual implementations—‘‘concrete, practical applications’’—which
must be employed to demonstrate any thesis put forth, be it
theoretical or practical. Software engineering classics such as the 1969 NATO
report (Naur and Randell 1969) and the grand ‘‘Software Engineering Body of
Knowledge’’ project (Abran and Moore 2004) hold a posteriori knowledge to be
superior to all other knowledge about programs and dismiss or neglect the role of
formal deduction. The same position is widely embraced in all branches of software
design. For example, the merits of design patterns (Gamma et al. 1995) and
architectural styles (Perry and Wolf 1992) are measured almost exclusively in terms
of the number of successful applications thereof.

3.3 The Technocratic Ontology

The records of the NATO conference on software engineering (Naur and Randell
1969) quote van der Poel in suggesting that program-scripts are themselves just
‘‘bunches of data’’:
A program [script] is a piece of information only when it is executed. Before
it’s really executed as a program in the machine it is handled, carried to the
machine in the form of a stack of punch cards, or it is transcribed, whatever is
the case, and in all these stages, it is handled not as a program but just as a
bunch of data. (Van der Poel, in Naur and Randell 1969)
If mere ‘‘bunches of data’’, representing a configuration of the electronic charge
of a particular printed circuit, program-scripts are on a par with (the manuscript of)
Shakespeare’s Hamlet and (the pixelized representation of) Botticelli’s The Birth of
Venus. Therefore ‘that which can be represented by data’ can be just about anything,
including non-existent entities such as Hamlet and Venus. The existence of those
putative abstract (intangible, non-physical) entities must therefore be rejected.
This objection can be attributed to a nominalist position in traditional
metaphysics. Nominalism (Loux 1998) seeks to show that discourse about abstract
entities is analysable in terms of discourse about familiar concrete particulars.
Motivated by an underlying concern for ontological parsimony, and in particular by the
proliferation of universals in the platonist’s putative sphere of abstract existence, the
nominalist principle commonly referred to as Occam’s Razor (‘‘don’t multiply
entities beyond necessity’’) denies the existence of abstract entities. By this
ontological principle, nothing exists outside of concrete particulars, not even
entities that are ‘that which is fully and precisely defined by the program-script’
(ONT-RAT). Positing the existence of a program is therefore unnecessary.
The technocratic ontology can thus be summarized as follows:
ONT-TEC ‘That which is fully and precisely represented by a script sp’ is a
putative abstract (intangible, non-physical) entity whose existence is not
supported by direct sensory evidence. The existence of such entities must be
rejected. Therefore, ‘programs’ do not exist.
Indeed, the recurring analogies to airplanes, power stations, chemical
analyzers, and other engineered artefacts for which no ontologically independent
notion of a program is meaningful seem to support ONT-TEC. But while ONT-TEC
is corroborated by a nominalist position, it is not committed thereto. In the
absence of an explicit commitment to any particular school of thought in
metaphysics, it is impossible to determine whether ONT-TEC is indeed motivated
by nominalism.

4 The Scientific Paradigm

The scientific paradigm contends that computer science is a branch of the natural
(empirical) sciences, on a par with ‘‘astronomy, economics, and geology’’ (Newell
and Simon 1976), the tenets of which are prevalent in various branches of AI,
evolutionary programming, artificial neural networks, artificial life (Bedau 2004),
robotics (Nemzow 2006), and modern formal methods (Hall 1990). Since many
programs are unpredictable, or even ‘chaotic’, the scientific paradigm holds that
a priori knowledge emanating from deductive reasoning must be supplemented with
a posteriori knowledge emanating from empirical evidence gathered by conducting
scientific experiments. Since program-processes are temporal, non-physical, causal,
metabolic, contingent upon a physical manifestation, and nonlinear entities, the
scientific paradigm holds them to be on a par with mental processes.

4.1 The Scientific Methods

Allen Newell and Herbert Simon, prominent pioneers of AI, define computer
science as follows:
Computer science is the study of the phenomena surrounding computers ... it
is an empirical discipline ... an experimental science ... like astronomy,
economics, and geology. (Newell & Simon 1976)
Scientific experiments are traditionally concerned with ‘natural’ objects, such as
chemical compounds, DNA sequences, stellar bodies (e.g., Eddington’s 1919 solar
eclipse experiment), atomic particles, or human subjects (e.g., experiments
concerning cognitive phenomena). It can be argued that the notion of a scientific
experiment is only meaningful when applied to ‘natural’ entities but not to
‘artificial’ objects such as programs and computers; namely, that programs and
computers cannot be the subject of scientific experiments:
There is nothing natural about software or any science of software. Programs
exist only because we write them, we write them only because we have built
computers on which to run them, and the programs we write ultimately reflect
the structures of those computers. Computers are artifacts, programs are
artifacts, and models of the world created by programs are artifacts. Hence,
any science about any of these must be a science of a world of our own making
rather than of a world presented to us by nature. (Mahoney 2002)
As a reply, Newell and Simon contend that, even if they are indeed contingent
artefacts, programs are nonetheless appropriate subjects for scientific experiments,
albeit of a novel sort (‘‘nonetheless, they are experiments’’; Newell and Simon
1976). Their justification for this position is simple: if programs and computers are
taken to be some part of reality, in particular if the scientific ontology (ONT-SCI) is
accepted, then there is no particular difficulty in employing scientific methods to
investigate them. Even Turing acknowledged the role of experiments in
investigating the behaviour of artificial artefacts:
We also wish to allow the possibility that an engineer or team of engineers
may construct a machine which works, but whose manner of operation cannot
be satisfactorily described by its constructors because they have applied a
method which is largely experimental. (Turing 1950)
Additional arguments supporting the relevance of scientific experimentation
concern the limits of analytical methods. In §4.2, we shall examine the argument of
non-linearity and the argument of self-modifiability and conclude that, indeed,
knowledge about even some of the simplest programs can only be gained via
experiments.
The scientific notion of an experiment must be clearly distinguished from the
technocratic notion of a reliability test (§3.1). The purpose of a reliability test is to
establish the extent to which a program meets the needs of its users, whereas a
scientific experiment is designed to corroborate (or refute) a particular hypothesis. If
a test suite fails, the subject of the experiment (the program) must be revised (or
discarded); if an experiment ‘fails’, the theory must be revised (or discarded), or
else the integrity of the experiment is in doubt. For example, an appropriate test
suite may have prevented the programming error (in the conversion of a 64-bit
floating-point number to a 16-bit signed integer) which caused the Ariane 5 Flight
501 to disintegrate forty seconds after launch. The purpose of such a test suite is to
prevent the launcher from exploding; had a test suite discovered this error, the
program would have been revised. In contrast, Eddington’s experiment of
measuring the bending of light at a total solar eclipse in 1919 was specifically
tailored to test Einstein’s 1915 general theory of relativity. Had this experiment
failed to corroborate this theory, General Relativity or the integrity of Eddington’s
experiment would have been questioned.
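A Common Lisp sketch of the kind of conversion that failed on Flight 501 may be
helpful (illustrative only: the flight software was written in Ada, and the function
below is hypothetical). A 64-bit float whose value exceeds the 16-bit signed range
cannot be converted safely, and a test suite exercising large operands would have
exposed the fault:

;; Convert a double-float to a 16-bit signed integer, signalling an
;; error on overflow -- the unhandled condition that the Ariane 5
;; conversion raised in flight.
(defun to-int16 (x)
  (let ((n (round x)))
    (if (<= -32768 n 32767)
        n
        (error "Operand error: ~a exceeds the 16-bit signed range" x))))

;; (to-int16 123.4d0)   => 123
;; (to-int16 65536.0d0) => signals an error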
For this reason, experiments with programs go beyond establishing the usability
of a particular manufactured artefact, even beyond the ‘extent and limitations of
mechanistic explanation’. Computer programs can also be used as tools in
discovering and empirically establishing the laws of nature. In particular, program
simulations can be used to examine the veracity of models of non-linear phenomena
(such as the ones we shall examine in §4.2) in other natural sciences. For example,
in cognitive psychology, artificial intelligence programs can be taken to be tools
for the empirical examination of models of memory and learning; in bioinformatics,
genetic algorithms are used to test the extent to which models of the reproduction of
DNA molecules are corroborated by the laws of Darwinian natural selection; and in
astronomy, the predictions of models of the creation of the universe can be tested
by means of computer simulations. If computer science is concerned with the
‘phenomena surrounding computers’—such as the behaviour of computer simulations—then
its subject matter is distinct from any given class of natural phenomena
at most in the extent to which scientific theories deviate from reality. In other words,
our programs are only ‘incorrect’ to the extent to which the scientific theories they
implement deviate from the phenomena they seek to explain. In Popper’s (1963)
terms, the difference between programs and the (naturalistic view of) reality is at
most limited by the verisimilitude (or truthfulness) of our most advanced scientific
theory. The progress of science is manifest in the increase in this verisimilitude.
Since any distinction between the subject matter of computer science and the natural
sciences is taken to be at most the product of the (diminishing) inaccuracy of
scientific theories, the methods of computer science are the methods of the natural
sciences.
But the methods of the scientific paradigm are not limited to empirical validation,
as mandated by the technocratic paradigm. Notwithstanding the technocratic
arguments for the unpredictability of programs (as well as the additional arguments
we examine in §4.2), the deductive methods of theoretical computer science have
been effective in modelling, theorizing about, reasoning about, constructing, and even
in predicting—albeit only to a limited extent—innumerable actual programs in
countless practical domains. For example, the theory of context-free languages has been
successfully used to build compilers (Aho et al. 1986); computable notions of
formal specifications (Turner 2005) offer deductive methods of reasoning about
program-scripts without requiring the complete representation of petabytes of
program and data; and classical logic can be used to distinguish effectively between
abstraction classes in software design statements (Eden et al. 2006). If computer
science is indeed a branch of the natural sciences, then its methods must also include
deductive and analytical methods of investigation.
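To make the compiler example concrete, consider a standard textbook context-free
grammar for arithmetic expressions (a generic illustration, not one drawn from
Aho et al. 1986):

E → E + T | T
T → T * F | F
F → ( E ) | id

That a parser derived from such a grammar accepts exactly the language the
grammar generates is established deductively, not by testing; a modest but genuine
instance of a priori knowledge about a program.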
From this, Wegner (1976) concludes that theoretical computer science stands to
computer science as theoretical physics stands to the physical sciences: deductive
analysis therefore plays the same role in computer science as it plays in other
branches of the natural sciences. Analytical investigation is used to formulate
hypotheses concerning the properties of specific programs, and though this may prove to
be a highly complex task (e.g., Table 4), it is nonetheless an indispensable step in any
scientific line of enquiry.
Tim Colburn concurs with this view and concludes that in reality the tenets of the
scientific paradigm offer the most complete description of the methods of computer
science:
Computer science ‘‘in the large’’ can be viewed as an experimental discipline
that holds plenty of room for mathematical methods, including formal
verification, within theoretical limits of the sort emphasized by Fetzer
(Colburn 2000, p. 154)
To summarize, the scientific position concerning the methodological question
(MET) can therefore be distinguished from the rationalist (MET-RAT) and the
technocratic (MET-TEC) positions as follows:
MET-SCI Computer science is a natural science on a par with astronomy,
geology, and economics; any distinction between their respective subject matters
is no greater than the limitations of scientific theories. Seeking to explain, model,
understand, and predict the behaviour of computer programs, the methods of
computer science include both deduction and empirical validation. Theoretical
computer science therefore stands to computer science as theoretical physics
stands to physics.

4.2 The Scientific Epistemology

The argument of complexity (§3.2) demonstrates that deductive reasoning is
impractical for large programs. The following arguments, however, demonstrate that
the outcome of executing even very small programs cannot be determined
analytically.
The argument of self-modifiability for the unpredictability of programs
concerns the fact that certain program-processes modify the very set of their
instructions (the program-script) during the process of computation. For example, in
genetic and evolutionary programming the program-script is treated as a chromosome,
namely as a sequence of symbols that is subjected to mutation and crossover
during the process of computation. Therefore a genetic program-process, even if
entirely deterministic, does not follow a fixed set of instructions. Similarly, the
instructions encoded in the program-scripts of computer viruses are modified by
infected program-processes. For example, in the attempt to defeat anti-virus
scanners, polymorphic viruses randomly change their effect—indeed their very
program-script (the virus ‘signature’)—with each ‘infection’; thus, any
instruction can change arbitrarily into any other instruction. As a result, the behaviour
of ‘infected’ programs is virtually impossible to predict analytically, even when
government secrets or large fortunes are at stake. The behaviour of self-modifying
programs of other kinds is almost equally volatile.
Once one program is contaminated, any other program-process sharing the same
resources is likely to be affected. Since computer viruses and other forms of
malware are likely to infect (at one point or another) almost every networked
computer, almost any program in the Internet era carries the risk of becoming self-modifiable.
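Lisp, in which program-scripts are themselves data, makes the phenomenon easy to
demonstrate. The following Common Lisp sketch (hypothetical, and far cruder than
a polymorphic virus) is a program that rewrites its own script between executions,
so that no fixed set of instructions describes the resulting program-process:

;; The current script, held as ordinary data.
(defparameter *script* '(lambda (x) (+ x 1)))

;; Execute the current script on INPUT, then mutate the script itself:
;; the next call will execute a different program.
(defun run-and-mutate (input)
  (let ((result (funcall (coerce *script* 'function) input)))
    (setf *script* `(lambda (x) (+ x ,(random 10))))
    result))

After each call the ‘signature’ of *script* has changed, so deductions made about
the script before execution no longer describe the script that will run next.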
The argument of non-linearity for the unpredictability of programs relies on the
fact that the vast majority of program-processes belong to the deterministically
chaotic class of phenomena. Dynamic systems theory (also known as complexity theory),
which accounts for a very large class of ‘natural’ phenomena (including weather
systems, traffic jams, ecosystems, and stock markets), states that the outcome of
chaotic and deterministically chaotic systems cannot be determined analytically
because ‘‘tiny deviations of initial data lead to exponentially increasing computational
efforts to analyze future data, limiting long-term predictions, although the
dynamics is in principle uniquely determined’’. (Mainzer 2004)
A phenomenon is classified as ‘deterministic chaos’ if the following conditions
hold:
(1) Arbitrarily close to every state s1 of the system, there is a state s2 whose future
eventually is significantly different from that of s1. That is, the tiniest changes
can cause arbitrarily large changes in the future course of events.
(2) Arbitrarily close to every state s1 of the system, there is a state s2 whose future
behaviour eventually returns exactly to s2.


(3) Given any two states s1 and s2, the futures of some states near s1 eventually
become near s2 (Devaney 1989).
For example, the future state of a program calculating the nth value of formula
(4) for some r > 3 satisfies the conditions of a deterministically chaotic phenomenon,
and therefore cannot be determined analytically:

State(n + 1) = r · State(n) · (1 − State(n))    (4)
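A minimal Common Lisp sketch of formula (4) illustrates the point (the value r = 4
lies in a chaotic regime of this map): two seeds differing only in the ninth decimal
place yield entirely unrelated states within about a hundred iterations, so no
analytical shortcut recovers the nth state from the seed.

;; One step of formula (4).
(defun logistic-step (r state)
  (* r state (- 1.0d0 state)))

;; Iterate the map n times from STATE0 and return the nth state.
(defun nth-state (r state0 n)
  (let ((state state0))
    (dotimes (i n state)
      (setf state (logistic-step r state)))))

;; (nth-state 4.0d0 0.3d0 100) and (nth-state 4.0d0 0.300000001d0 100)
;; diverge completely, despite the fully deterministic dynamics.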

As early as 1946, before the principles of chaos theory had been developed and
evidence of their widespread applicability had been presented, von Neumann observed
that the outcome of programs computing non-linear mathematical functions cannot
be analytically determined:
Our present analytical methods seem unsuitable for the solution of the
important problems arising in connection with non-linear partial differential
equations and, in fact, with virtually all types of non-linear problems in pure
mathematics. (von Neumann, in Mahoney 2002)
In 1979, DeMillo et al. illustrated how ‘chaotic’ computer programs are, using the
example of weather systems, for which an event as minute as the flap of a butterfly’s
wings may potentially have a disproportionate effect, indeed a result as catastrophic
as causing a hurricane:
Every programmer knows that altering a line or sometimes even a bit can
utterly destroy a program or mutilate it in ways that we do not understand and
cannot predict. ... Until we know more about programming, we had better for
all practical purposes think of systems as composed, not of sturdy structures
like algorithms and smaller programs, but of butterflies’ wings. (DeMillo et al.
1979)
In other words, even if a program was not specifically encoded to calculate a non-linear
function, in effect its behaviour amounts to that of such a program, because
one part or another of it is non-linear. DeMillo et al. specifically mention operating
systems and compilers, which in effect take a large part in the behaviour of almost
any program. Therefore, it is very unlikely that any knowledge about all but the
most trivial programs can be established without conducting experiments.
Knuth conceded the weight of the argument of non-linearity, in particular in
relation to the class of programs that are the concern of artificial life:
It is abundantly clear that a programmer can create something and be totally
aware of the laws that are obeyed by the program, and yet be almost totally
unaware of the consequences of those laws; [for example,] running a program
from a slightly different configuration often leads to really surprising new
behaviour. (Knuth Undated)
Berry et al. corroborate the argument of non-linearity by showing that the very
behaviour of microprocessors is chaotic when executing certain program-processes:
As a consequence, the performance of these microprocessors during the
execution of certain programs displays complex non-repetitive variations
that challenge traditional analysis.... Our results show that ... for several
[programs], the complex dynamics observed result from deterministic
chaos. This suggests that a detailed prediction of microprocessor perfor-
mance at long execution times is unlikely with these programs. (Berry
et al. 2005)
Without specifically referring to non-linearity, Turing, in a remark which can
be taken to be an (anticipatory) rebuttal to Hoare (EPI-RAT), acknowledged
already in 1950 that the behaviour of some programs is inevitably a source of
surprises:
The view that machines cannot give rise to surprises is due, I believe, to a
fallacy to which philosophers and mathematicians are particularly subject.
This is the assumption that as soon as a fact is presented to a mind all
consequences of that fact spring into the mind simultaneously with it. It is a
very useful assumption under many circumstances, but one too easily forgets
that it is false. (Turing 1950)
In conclusion, from the compelling arguments of complexity (§3.2), self-modifiability,
and non-linearity for the unpredictability of programs, the behaviour
of some programs is inevitably a source of surprise, and a priori knowledge about
them is severely limited. Therefore, while it may be possible in principle to deduce
some of the properties of a program and all the consequences of executing it (EPI-RAT),
in practice it is very often impossible.
The tenets of the scientific epistemology can therefore be summarized as follows:
EPI-SCI While it may be possible in principle to deduce some of the
properties of a program and all the consequences of executing it, in
practice it is very often impossible. Therefore, while some knowledge about
programs can be established a priori, much of what we know about programs
must necessarily be limited to some probabilistic, a posteriori notion of
knowledge.

4.3 The Scientific Ontology

To him who is a discoverer ... the products of his imagination appear so
necessary and natural that he regards them, and would like them regarded by
others, not as creations of thought but as given realities.
—Albert Einstein (1934)

We postulate that an adequate ontological explanation for program-processes must
offer an account of the following unique set of their apparent properties:


1. Temporal: The existence of program-processes extends in time in the interval
between being created and being destroyed19;
2. Non-physical: Program-processes are non-physical, intangible entities;
3. Causal: Program-processes can interact with and move physical devices;
4. Metabolic: Program-processes ‘consume’ energy20;
5. Contingent upon a physical manifestation: The existence of program-processes
depends on the existence of the physical computer which is said to be
‘executing’ them;
6. Nonlinear: The outcome of a program-process, in the general case, cannot be
analytically determined.
Let us examine briefly the weaknesses of the rationalist and of the technocratic
ontological explanations in relation to the apparent properties of program-processes.
Rationalism (ONT-RAT) asserts that programs are mathematical objects.
But mathematical objects, such as Turing machines, recursive functions, triangles,
and numbers, cannot be meaningfully said to metabolize, nor to have a causal effect on
physical reality in any immediate sense. It would be particularly difficult to justify
the claim that mathematical objects have a specific lifespan or that they are
contingent upon any specific physical manifestation (except possibly as mental
artefacts). In this respect, ONT-RAT is inadequate.
Alternatively, the technocratic paradigm (ONT-TEC) reduces program-scripts to
mere ‘‘bunches of data’’. It is hostile towards assertions of the existence of any abstract,
ontologically independent manifestations of whatever the data is taken to represent.
But program-processes do have a causal effect on physical reality: they control
robotic arms, artificial limbs, machine guns (BBC 8-Apr-2006), ‘smart bombs’, the
navigation of automated and semi-automated vehicles, the sale and purchase of
stocks in stock exchanges, and to some degree almost every single home appliance.
Programs also treat depression (Medical News Today 22-Feb-2006), determine
whether your child shall receive her vaccination (Observer 26-Feb-2006), shortlist
job applications (Int’l Herald Tribune 26-Sep-2006), count votes in national
elections, and spread copies of themselves over the Internet. Program-processes
have come to have a tangible effect on concrete, physical reality, an effect which ONT-TEC
fails to account for.
The inadequacy of both the rationalist and the technocratic ontological accounts
has led Dijkstra to conclude that program-processes are a ‘radical novelty’:
It is the most common way of trying to cope with novelty: by means of
metaphors and analogies we try to link the new to the old, the novel to the
familiar. Under sufficiently slow and gradual change, it works reasonably
well; in the case of a sharp discontinuity, however, the method breaks down:
though we may glorify it with the name ‘‘common sense’’, our past experience
is no longer relevant, the analogies become too shallow, and the metaphors

19
We ignore, for the moment, difficulties arising from concurrency and the possibility of suspending the
execution of program-processes.
20
That is, the computational process carried out by the central processing unit depends on the consumption of
energy; if suspended, program-processes cease to exist.


become more misleading than illuminating. This is the situation that is
characteristic for the ‘‘radical’’ novelty. (Dijkstra 1988)
According to Dijkstra, the ontological question (ONT) remains open.
Others contend that the misleading similarities to mathematical objects and to
engineered artefacts arise because program-processes are on a par with mental
processes. For example, Alan Bundy calls them ‘mental machines’:
The reason that it is possible to have this analogy both with applied
mathematics and pure engineering is that computer programs are strange
beasts; they are both mathematical entities and artifacts. They are formal
abstract objects which can be investigated symbolically as if they were
statements in some branch of mathematics. But they are also artifacts, in that
they can do things, e.g., run a chemical plant. They are machines, but they are
not physical machines, they are mental machines. (Bundy 2005, p. 218)
Indeed, cognitive and other mental processes are non-physical, causal, metabolic,
contingent upon a physical manifestation (e.g., the human brain), and non-linear
processes too. Bundy’s metaphor is therefore adequate, at least according to the
criteria of the apparent properties listed above.
A symmetrical contention is made by computational theories of mind
(McLaughlin 2004), which suggest that the brain is a (programmable) computer
and that the computation of cognitive functions—that is, the exercise of mental
abilities, or simply ‘thinking’—is in effect a program-process. For example, Hilary
Putnam (1975) and more recently Eric Steinhart (2003) argued that the human mind
is in effect a finite-state automaton. Strong AI holds that intelligent mental
processes—artificial ‘‘thinking’’ processes—can be effectively reproduced by
executing programs using existing technology. Strong AI was upheld by the
pioneers of AI, and to this day it remains the working assumption of those computer
scientists who investigate machine learning, evolutionary algorithms, and artificial
life (Bedau 2004). The same stance was in effect taken by Turing as early as 195021:
May not machines carry out something which ought to be described as thinking
but which is very different from what a man does? This objection is a very
strong one, but at least we can say that if, nevertheless, a machine can be
constructed to play the imitation game satisfactorily, we need not be troubled by
this objection. ... I believe that at the end of the century the use of words and
general educated opinion will have altered so much that one will be able to
speak of machines thinking without expecting to be contradicted. (Turing 1950)

The analogy to mental processes receives considerable support from recent
results in computational theories of the DNA. Brent and Bruck suggest that the
DNA molecule can be taken to be a program-script encoded in a Turing-complete
21
Turing’s forecast named the year 2000 as a target. During that year, Jim Moor conducted an experiment
which refuted Turing’s prediction, but he hastens to add: ‘‘Of course, eventually, 50 years from now or
500 years from now, an unrestricted Turing test might be passed routinely by some computers. If so, our
jobs as philosophers would just be beginning’’. (Moor 2000)


(‘procedural’) programming language, and the mechanisms of interpreting it to be
on a par with (Turing-complete) digital computing machines:
It seems reasonable to view the DNA script in the genome as executable code
that could have been specified by a set of commands in a procedural
imperative [programming] language. (Brent and Bruck 2006)
If a monist, materialist (Stack 1998) position is taken, then the human mind is
indeed largely the product of the interpretation of the human genome. If the DNA
representing the human brain is taken to be a program-script, then program-processes
are indeed on a par with mental processes.
The scientific ontology and the arguments in its favour can thus be summarized
as follows:
ONT-SCI Program-scripts are on a par with DNA sequences, in particular with the
genetic representation of human organs such as the brain, the products of whose
execution—program-processes—are on a par with mental processes: temporal,
non-physical, causal, metabolic, contingent upon a physical manifestation, and
non-linear entities.

5 Discussion

We examined the basic tenets of three paradigms of computer science, each of
which holds different positions concerning the definition of the discipline, warranted
notions of program correctness, and whether programs are mathematical objects.
We concluded that the disputes among computer scientists go beyond the
boundaries of the discipline and extend to philosophical positions concerning the
nature of computer programs and the nature of knowledge about them. We
expanded on the arguments that corroborate the scientific position concerning these
questions and concluded that, since almost all programs are non-linear or self-modifiable,
a priori knowledge about them is unattainable. Therefore, the methods
of computer science must combine deductive reasoning with scientific experimentation.
Our analysis of the apparent properties of program-processes has also
demonstrated that the category of mental processes offers the most compelling account
of their existence.
The significant increase in the complexity of software systems has lent much
support to the argument of complexity, leading almost all who upheld the rationalist
paradigm to abandon it22. But while most computer scientists pledge allegiance to
the scientific position, at least in principle, mainstream computer science is yet to
concede the ontological commitments of the scientific paradigm. Rather, since
Wegner observed the prevalence of the technocratic paradigm in 1976, the failure of
the methods of theoretical computer science to deliver effective solutions to the
software crisis and the vested interests of the multi-billion dollar software industry
(Ophir 2006) have

22
Hoare (2006) has recently conceded that ‘‘Because of its effective combination of pure knowledge and
applied invention, Computer Science can reasonably be classified as a branch of Engineering Science.’’


contributed to the dominance of the technocratic doctrine in all but some branches
of AI.
As a result of the increasing influence that the technocratic paradigm has been
having on undergraduate curricula, ‘computer science’ academic programs are
seldom true to their name. Courses teaching computability, complexity, automata
theory, algorithmic theory, and even logic in undergraduate programs have been
dropped in favour of courses focusing on technological trends, teaching software
design methodologies, software modelling notations (e.g., the Unified Modelling
Language 2005)23, programming platforms, and component-based software engineering
technologies. As a result, a growing proportion of academic programs churn
out increasing numbers of graduates in ‘computer science’ with no background in the
theory of computing and no understanding of the theoretical foundations of the
discipline.
In 1988, Dijkstra scathingly attacked the decline of mathematical, conceptual,
and scientific principles, a trend which has turned computer science programmes
into semi-professional schools which train students in commercially driven, short-
lived technology:
So, if I look into my foggy crystal ball at the future of computing science
education, I overwhelmingly see the depressing picture of ‘‘Business as
usual’’. The universities will continue to lack the courage to teach hard
science, they will continue to misguide the students, and each next stage of
infantilization of the curriculum will be hailed as educational progress.
(Dijkstra 1988)
It is difficult to determine precisely the outcome of the domination of the
technocratic doctrine over computer science education, but the anti-scientific attitude
has evidently taken its toll on the software industry. Since it was declared at the
1968 NATO conference (Naur and Randell 1969), the never-ending state of
‘software crisis’ has been renamed ‘software’s chronic crisis’ (Gibbs 1994) and
more recently pronounced ‘software hell’ (Carr 2004). The majority of multimillion-dollar
software development projects, government and commercial, largely
continue to end with huge losses and no gains (Carr 2004). As standard practice,
software manufacturers sign their clients to End-User Licence Agreements
(EULAs) which offer less of a guarantee for their merchandise than any other
commodity, with the possible exception of casinos and used cars. Much of the
professional literature refers to software in a jargon borrowed from mathematics,
melodrama, and witchcraft in almost equal measure (e.g., Raymond 1996). Crimes
involving the bypassing of security bots guarding the most heavily protected electronically
stored secrets and the spreading of a wide spectrum of malware have become part
of daily life.
of daily life. The correct operation of the majority of computing devices has become
largely dependent on daily—even hourly—updates of a host of defence mecha-
nisms: firewalls, anti-virus, anti-spyware, anti-trojans, anti-worms, anti-dialers, anti-
rootkits, etc. Even with the widespread use of these defence mechanisms, virtually
no computer is invulnerable to malicious programs that disable and overtake global
23
To which Bertrand Meyer’s (1997) satirical critique offers valuable insights.


networks of millions of zombie computers (‘botnets’) through the Internet.
Paradoxically, the doctrine preached primarily by software engineers and practitioners
has done little but deepen the disparity between the state of practice in
‘software engineering’ and established engineering disciplines.
David Parnas, who became known for his contributions to software design (e.g.,
Parnas 1972), pointed out that even in software engineering the technocratic stance is
untenable, upholding instead the basic tenets of the scientific paradigm:
There is no engineering profession in which testing and mathematical
validation are viewed as alternatives. It is universally accepted that they are
complementary and that both are required. (David Parnas, in Denning 1989)
Parnas’ argument is upheld by the analogy between software engineering and
established and more successful branches of engineering such as civil engineering,
chemical engineering, and even genetic engineering. These branches of engineering
would not exist if not for the rigour of their scientific and theoretical counterparts, e.g.,
materials science, chemistry, and molecular biology, respectively. Robin Milner
(2007) concurs, and concludes that the failures of software engineering indeed
emanate from the decline in the role of theoretical computer science and its
methods. Therefore, before software engineering can mature to the level of the established
engineering disciplines and stand to computer science as chemical engineering
stands to chemistry, computer scientists must abandon the technocratic paradigm.

Epilogue

If the scientific paradigm comes to dominate, and mainstream computer science is
recognized as a branch of the natural sciences, the question is how computer science
can mature as such. Quine offers the following criterion of maturity for a scientific
discipline:
A branch of science would qualify for recognition and classification at all ...
only when it had matured to the point of clearing up its similarity standards
[between natural kinds]. ... In general we can take it as a very special mark of
the maturity of a branch of science that it no longer needs an irreducible notion
of similarity and kind. It is that final stage where the animal vestige is wholly
absorbed into the theory. (Quine 1969)
An interesting open question is therefore whether computer programs are natural
kinds (Copeland 2006) and, if not, what mature scientific theory of computer
programs can lead us to a better understanding of the technology that civilization has
come to depend upon.

Acknowledgements Special thanks go to Ray Turner for reviewing draft arguments and for his
guidance and continuous support, without which this paper would not have been possible; to Jack
Copeland for his guidance on matters of traditional philosophy; and to Bill Rapaport for his detailed
comments. We also thank Tim Colburn (2000) and Bill Rapaport (2005), without whose extensive
contributions the nascent discipline of philosophy of computer science would not exist; Barry Smith for
his guidance; Susan Stuart for developing the contentions made in this paper; Naomi Draaijer for her
support; and Yehuda Elkana, Saul Eden-Draaijer, and Mary J. Anna for their inspiration. This research
was supported in part by grants from the UK’s Engineering and Physical Sciences Research Council
and the Royal Academy of Engineering.

References

Abran, A., & Moore, J. W. (Eds.) (2004). Guide to the Software Engineering Body of Knowledge—
SWEBOK (2004 ed.) Los Alamitos: IEEE Computer Society.
Abelson, H., Sussman, G. J., & Sussman, J. (1996). Structure and interpretation of computer programs
(2nd ed.). Cambridge: MIT Press.
Aho, A. V., Sethi, R., & Ullman, J. D. (1986). Compilers: Principles, techniques, and tools. Reading:
Addison Wesley.
Balaguer, M. (2004). Platonism in metaphysics. In: E. N. Zalta (Ed.), The Stanford Encyclopedia of
Philosophy (Summer 2004 ed.). Available http://plato.stanford.edu/archives/sum2004/entries/platonism/
(Accessed March 2007).
Bedau, M. A. (2004). Artificial life. In: L. Floridi (Ed.), The Blackwell guide to philosophy of computing
and information. Malden: Blackwell.
Berry, H., Pérez, D. G., & Temam, O. (2005). Chaos in computer performance. Nonlinear Sciences
arXiv:nlin.AO/0506030.
Brent, R., & Bruck, J. (2006). Can computers help to explain biology? Nature, 440, 416–417.
Bundy, A. (2005). What kind of field is AI? In: D. Partridge, & Y. Wilks (Eds.), The foundations of
artificial intelligence. Cambridge: Cambridge University Press.
Carr, N. G. (2004). Does IT matter? Information technology and the corrosion of competitive advantage.
Boston: Harvard Business School Press.
Cohn, A. (1989). The notion of proof in hardware verification. Journal of Automated Reasoning, 5(2),
127–139.
Colburn, T. R. (2000). Philosophy and computer science. Armonk, N.Y.: M.E. Sharpe.
Copeland, B. J. (2002). The Church-Turing thesis. In: E. N. Zalta (Ed.), The Stanford Encyclopedia of
Philosophy (Fall 2002 ed.). Available http://plato.stanford.edu/archives/fall2002/entries/church-turing/
(Accessed Mar. 2007).
Copeland, B.J. (2006). Are computer programs natural kinds? Personal correspondence.
Debian Project, The. Available http://www.debian.org (Accessed March 2007).
DeMillo, R. A., Lipton, R. J., & Perlis, A. J. (1979). Social processes and proofs of theorems and
programs. Communications of the ACM, 22(5), 271–280.
Denning, P. J. (1989). A debate on teaching computing science. Communications of the ACM, 32(12),
1397–1414.
Denning, P. J., Comer, D. E., Gries, D., Mulder, M. C., Tucker, A., Turner, A. J., & Young, P. R. (1989).
Computing as a discipline. Communications of the ACM, 32(1), 9–23.
Devaney, R. L. (1989). An introduction to chaotic dynamical systems (2nd ed.). Redwood: Benjamin-
Cummings Publishing.
Dijkstra, E. W. (1988). On the cruelty of really teaching computing science. Unpublished manuscript
EWD 1036.
Dybå, T., Kampenesa, V. B., & Sjøberg, D. I. K. (2006) A systematic review of statistical power in
software engineering experiments. Information and Software Technology, 48(8), 745–755.
Eden, A. H., Hirshfeld, Y., & Kazman, R. (2006). Abstraction classes in software design. IEE
Proceedings—Software, 153(4), 163–182. London, UK: The Institution of Engineering and Technology.
Einstein, A. (1934). Mein Weltbild. Amsterdam: Querido Verlag.
Fasli, M. (2007). Agent technology for E-commerce. London: Wiley.
Fetzer, J. H. (1993). Program verification. In: J. Belzer, A. G. Holzman, A. Kent, & J. G. Williams (Eds.),
Encyclopedia of computer science and technology (Vol. 28, Supplement 13). New York: Marcel
Dekker Inc.
Gamma, E., Helm, R., Johnson, R., & Vlissides, J. M. (1995). Design patterns: Elements of reusable
object-oriented software. Reading: Addison-Wesley.
Goerigk, W., Hoffmann, U., & Langmaack, H. (1997). Rigorous compiler implementation correctness:
How to prove the real thing correct. Proc. Intl. Workshop Current Trends in Applied Formal
Methods. Lecture Notes in Computer Science, Vol. 1641, pp. 122–136. London, UK: Springer-
Verlag.

Gibbs, W. W. (1994). Software’s chronic crisis. Scientific American, 271(3), 86–95.
Hall, A. (1990). Seven myths of formal methods. IEEE Software, 7(5), 11–19.
Hoare, C. A. R. (1969). An axiomatic basis for computer programming. Communications of the ACM,
12(10), 576–583.
Hoare, C. A. R. (1986). The mathematics of programming: an inaugural lecture delivered before the
Univ. of Oxford on Oct. 17, 1985. New York: Oxford University Press.
Hoare, C. A. R. (2006). The ideal of program correctness. Transcript of lecture, Computer Journal.
London: British Computer Society. Available: http://www.bcs.org/upload/pdf/correctness.pdf
(Accessed Mar. 2007)
IEEE Std 610.12-1990 (1990). IEEE Standard Glossary of Software Engineering Terms. Los Alamitos:
IEEE Computer Society.
Knuth, D. E. (1968). The art of computer programming, Vol. I: Fundamental algorithms. Reading, MA:
Addison Wesley.
Knuth, D. E. (1974). Computer science and its relation to mathematics. The American Mathematical
Monthly, 81(4), 323–343.
Knuth, D. E. (undated). On the game of life, free will and determinism (video). Available:
http://www.meta-library.net/ssq/sj1-body.html (Accessed Mar. 2007).
Kuhn, T. (1962). The structure of scientific revolutions. Chicago: University of Chicago Press.
Loux, M. J. (1998). Nominalism. Routledge Encyclopedia of Philosophy (electronic Ver. 1.0). London
and New York: Routledge.
Mahoney, M. S. (2002). Software as science—Science as software. In: U. Hashagen, R. Keil-Slawik, &
A. Norberg (Eds.), History of computing: software issues. Berlin: Springer Verlag.
Mainzer, K. (2004). System: an introduction to systems science. In: L. Floridi (Ed.), The Blackwell guide
to philosophy of computing and information. Malden: Blackwell.
Markie, P. (2004). Rationalism vs. Empiricism. In: E. N. Zalta (Ed.), The Stanford Encyclopedia of
Philosophy. Available http://plato.stanford.edu/archives/fall2004/entries/rationalism-empiricism
(Accessed Mar. 2007).
McCarthy, J. (1960). Recursive functions of symbolic expressions and their computation by machine, Part
I. Communications of the ACM, 3(4), 184–195.
McCarthy, J. (1962). Towards a mathematical science of computation. Proceedings of IFIP.
McClure, R. M. (2001). Introduction. Addendum to: (Naur & Randell 1969). Available:
http://www.homepages.cs.ncl.ac.uk/brian.randell/NATO/Introduction.html (Accessed Mar. 2007).
McLaughlin, B. (2004). Computationalism, connectionism, and the philosophy of mind. In: L. Floridi
(Ed.), The Blackwell guide to philosophy of computing and information. Malden: Blackwell.
Meyer, B. (1997). UML—The Positive Spin. American Programmer, 10(3). Available:
http://archive.eiffel.com/doc/manuals/technology/bmarticles/uml/page.html (Accessed Mar. 2007).
Milner, R. (2007). Memories of Gilles Kahn, and the informatic future. Transcript of speech before
Colloquium in memory of Gilles Kahn, INRIA Research Institute.
Naur, P., & Randell, B. (Eds.) (1969). Software Engineering: Report of a conference sponsored by the
NATO Science Committee (7–11 Oct. 1968), Garmisch, Germany. Brussels, Scientific Affairs
Division, NATO.
Newell, A., & Simon, H. A. (1976). Computer science as empirical inquiry: Symbols and search.
Communications of the ACM, 19(3), 113–126.
Olson, E. T. (1997). The ontological basis of strong artificial life. Artificial Life, 3(1), 29–39.
OMG (Object Management Group). (2005). Unified Modeling Language (UML), Ver. 2.0. Technical
report (2005). Available http://www.omg.org/technology/documents/formal/uml.htm (Accessed
Mar. 2007)
Ophir, S. (2006). Computer science and commercial forces: Can computer science be considered science?
In Proc. 4th European conf. Computing And Philosophy—ECAP, Trondheim, Norway.
Parnas, D. L. (1972). On the criteria to be used in decomposing systems into modules. Communications of
the ACM, 15(12), 1053–1058.
Perry, D. E., & Wolf, A. L. (1992). Foundation for the study of software architecture. ACM SIGSOFT
Software Engineering Notes, 17(4), 40–52.
Pierce, J. R. (1968). Keynote address. Conference on Academic and Related Research Programs in
Computing Science (5–8 June 1967). Reprinted in: A. Finerman (Ed.), University Education in
Computing Science. New York: Academic Press.
Popper, K. (1963). Conjectures and refutations: The growth of scientific knowledge. London: Routledge.

Putnam, H. (1975). Minds and machines. In: Philosophical papers, Vol. 2: Mind, language and reality
(pp. 362–385). Cambridge: Cambridge University Press.
Quine, W. V. O. (1969). Natural kinds. In: Ontological relativity and other essays. New York: Columbia
University Press.
Rapaport, W. J. (2005). Philosophy of computer science: An introductory course. Teaching Philosophy,
28(4), 319–341.
Rapaport, W. J. (2007). Personal correspondence.
Raymond, E. S. (1996). The New Hacker’s Dictionary (3rd ed.). Cambridge: MIT Press.
Simon, H. A. (1969). The sciences of the artificial (1st ed.). Cambridge: MIT Press.
Sommerville, I. (2006). Software engineering (8th ed.) Reading: Addison Wesley.
Stack, G. S. (1998). Materialism. Routledge Encyclopedia of Philosophy (electronic Ver. 1.0). London
and New York: Routledge.
Steinhart, E. (2003). Supermachines and superminds. Minds and Machines, 13(1), 155–186.
Stoy, J. E. (1977). Denotational semantics: The Scott-Strachey approach to programming language
theory. Cambridge: MIT Press.
Strachey, C. (1973). The varieties of programming language. Tech. Rep. PRG-10 Oxford University
Computing Laboratory.
Szyperski, C. A. (2002). Component software—Beyond object-oriented programming (2nd ed.). Reading:
Addison-Wesley.
Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem.
Proceedings of the London Mathematical Society, Ser. 2, 42, 230–265. Reprinted in Turing &
Copeland (2004).
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460.
Turing, A. M., & Copeland, B. J. (Ed.) (2004). The essential Turing: Seminal writings in computing,
logic, philosophy, artificial intelligence, and artificial life plus the secrets of Enigma. Oxford, USA:
Oxford University Press.
Turner, R. (2005). The foundations of specification. Journal of Logic & Computation, 15(5), 623–663.
Turner, R. (2007). Personal correspondence.
Wegner, P. (1976). Research paradigms in computer science. In Proc. 2nd Int’l Conf. Software
engineering, San Francisco, CA, pp. 322–330.
