
THINKING DIFFERENTLY ABOUT DOING AI: Toward a satisfactory epistemic methodology of the strong hypothesis


II. Hierarchical Planning Is Not Enough:
AI should espouse the origin of common sense
Copyright 1999 Everett A. Johnston.

What is natural intelligence? Is it, as Western tradition suggests, simply a host of neatly compartmentalized
mental mechanisms known as cognitive faculties which operate to categorize, systematize, evaluate, and
generally produce intelligent action? Or is it, as the connectionists prefer, the system states which emerge
upon interaction of large numbers of the brain's simple, insensate processing units? If intelligence is best
described by one model or the other, why is the field of artificial intelligence not more advanced than it is, and
why are its real-world applications still so unsophisticated? What shortcoming could possibly prevent
numerous AI professionals worldwide from making significant inroads into their particular studies?
After all, the problem of modeling intelligence from the vantage point of either analysis seems markedly
clear-cut and categorical. Similarly palpable premises are widely known to have brought extraordinary
progress to fields such as physics, chemistry, biology and neuropsychology. This second article of a series of
three makes the case that, ultimately, AI cannot become a rigorous discipline on an equal footing with other
sciences without a reassessment of the epistemic methodology that is used to develop its emulations of natural
intelligence. The shortcoming, we aver, is one of treating intelligence the same as other objects of scientific
study, as though intelligence possessed no qualitative difference from them. We advocate a renewed effort to reinvent AI from the standpoint of the whole person, rather than from some objectification thereof,
by suggesting a design for AI systems which takes into account the ontology of reason as well as its outward,
phenomenal character.
Artificial intelligence requires an understanding of reason which goes beyond that required of scientific
acumen, in that an AI theorist's purpose is to produce computer programs which generate an emulation of
reasoning. In contrast, a scientist's purpose is to produce theories which later can become the basis for
experiment, thereby eventually generating only data. An AI program does not calculate so much as yield
potential solutions; it takes the entire person of a scientist to accomplish the same thing. In fact, the purposes
of AI and science are so different that, instead of facing difficulties concerning reasoning which resemble
those that beleaguer scientists, the AI community rather faces a more comprehensive host of difficulties
resembling those that beleaguer philosophers. This said, although much has been written concerning the
nature of philosophical inquiry, the West's understanding of reason itself remains essentially the same as Aristotle's, some 2,300 years ago. This neglect is important for both philosophy and AI, because they share
a central problem of methodology: how should reasoning be employed to understand the reasoning process
itself? Furthermore, how can circularity be avoided while doing so?

The answer for both the AI community and philosophers has been to probe far-reaching alternatives to the
West's approach to reason (a tradition which practices denotation of cognition through sentential logic and
canonical form). Both the AI community and philosophers retreat from direct, Western-styled treatment of reason, but each in its own way. We shall address the retreat of the philosophers first. They study reason for what it is, an expression of the human enterprise, rather than as an exercise in logic. They
ponder the meaning of reason as it relates to all things, even to the study of philosophy itself. Their retreat
from the traditional approach to reason is that from canonical form to eclectic metalogic, from sentential logic
to mereology (the theory of part-to-whole relations1). Because the scope is so broad, however, what they write appears
to be of only theoretical interest to AI engineers. Yet, many believe that AI is in actuality the culmination of
Western philosophy. Looking over its long history, Western thought tends to foreshadow the promise which
AI appears about to bring to fruition. Locked within AI is the tremendous potential of the mind which
philosophers have extolled for centuries. The particulars of philosophy are quite pertinent to AI's own retreat
from Western philosophy's approach to reason, and will become even more so as awareness of philosophical
literatures grows among the AI community.
Philosophy involves a tradition steeped in metaphysics and first questions. Philosophical discussion is
couched in terms of thinking and mind, rather than in terms which are more determinate but narrow, like the
cognitive science terms cognitive faculties and mental constructs, because such broader terms more clearly
express a philosopher's insight into the dynamic nature of human awareness. To philosophers, the mind seems
infinitely malleable, unable to be pigeonholed or methodized. Because its configuration is inexhaustible, the
mind does not seem neatly compartmentalizable into mental mechanisms which hang together to fully account
for intelligence. Therefore, many philosophers believe cognition per se is neither sentential nor canonical.
Unlike most of the techno-scientific community, most philosophers are convinced that the mind is much more
than just an inference engine. Both philosophers and the techno-scientific community do agree that sheer
processing power alone is not enough to explain the mind's amazing problem solving capacity, which,
according to calculations based upon information science, exceeds that of a hypothetical computer the size of
the universe with parts as small as an electron.2 It is clear that people resort to other types of cognitive
resources besides the rational to solve certain problems (e.g. intuition, inspiration, common sense).
Philosophers also use the terms thinking and mind for purposes beyond those noted above. These terms serve as
idioms which inherently connote more than the terms cognitive faculties and mental constructs, in that they
communicate a recognition that humanity actually comes into direct contact with the metaphysical dimension
of existence through the mind and its capacity to think ideas. The acumen which purports to delineate this
transcendent aspect of reality has of old been known as ontology, the philosophical study of being. Ontology
becomes pertinent to ratiocination (and thus conceivably pertinent to AI) as one attempts to question the
nature of the being of knowledge. An ancient discipline arose to meet the challenge of such questioning and
became an esoteric field in its own right, known today as epistemology. Although ontology and its offshoot,
epistemology, are both elements of traditional metaphysics, neither ever became sufficiently developed as
to allow philosophers to bring it to bear, in any significant fashion, upon the question of whether or not
cognition is analytic (that is, comprised of neatly compartmentalized mechanisms which hang together to
fully account for intelligence). Although philosophy has always posited that a transcendent aspect lies behind
material reality, understanding as to the basis in transcendence of the being of knowledge has never before
translated into detailed understanding of thinking's phenomenal manifestation, cognition. Modern
philosophers have finally begun to speculate as to exactly how this transcendence subtly affects the gamut of
human consciousness.
Through the vagaries of many centuries, the relevance of ontology and epistemology to prevailing
philosophical opinion waxed and waned until, finally, the two ancient disciplines came down to us as seemingly no more than a curiosity of antiquity, all but moot. But a large community of philosophers and
other academics work today to pinpoint the exact nature of how transcendence subtly influences the
phenomenal world, and, in particular, how it actuates thought. To do so, they back even further away from
idealism, as well as the Western tradition, than did their predecessors. They enter an area that requires
pre-ontological investigation. To be sure, they conduct their investigations in order to develop an ontology,
but they seek one which is true to all authoritative writings about Being, including those which are most
ancient. Their understanding of reason stands well outside the Western tradition because they subscribe to writings from an era long before Aristotle's, the era of pre-Socratic poet-philosophers such as Parmenides and Paracelsus. Abandoning the classical roots of the Western tradition, they have likewise abandoned its logic. Though they declare the end of metaphysics, their investigations nevertheless have much to say to science and to the technology of AI because of the insight which their research opens up into what is usually
called common sense.
We now come to the AI community which, like the philosophers, has also understood the need to probe
alternatives to the traditional approach to reason. At one time, the answer seemed to lie in programming
systems to record the reasoned responses of experts so that, later, the automated know-how might be called
upon and even enriched. But since such expert systems function to simulate reasoning about a relatively small
set of specified tasks, this type of programming (expert systems) mainly allows for a scope of query which is specialized, prescribed, and geared toward implementing only rationale rather than full-blown human intelligence. Every day, human beings successfully overcome real-world problems which even a patchwork of expert systems, each bringing to bear its narrow expertise, could not be robust enough to handle. This is because
natural intelligence features coordinated tacks of reason that work together to achieve goals according to a
hierarchy of master plans. A growing AI literature that deals with planning theory3 is evidence that the AI
community has begun to realize that a more general approach, which addresses the problem of overall
cognition, does seem desirable. Planning theory has come into its own recently, but developers are
encountering age-old epistemological problems that are well-documented in philosophical literature and
rhetoric. These include the frame problem (how to define a focus of activity within a virtual world without
detailing that world's every aspect), the precondition qualification problem (how to maintain the thrust of a
focus of activity despite globalization of the scope of its context), and the ramification problem (how to
establish axioms of invariance about a focus of activity which hold true regardless of changes to its context).4
At root, all these dilemmas revolve around the character of some hypothetical focus of activity which takes
place in a virtual world.
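The engineering response to these dilemmas can be made concrete with a small sketch. The following Python fragment (all names and actions are hypothetical illustrations, not drawn from any particular planning system) shows a STRIPS-style state update: the frame problem is answered only by an assumption, namely that every fact an action does not mention simply persists, and the comments mark where the precondition qualification and ramification problems reappear once that assumption is taken seriously.

    # Minimal STRIPS-style sketch of a virtual world and one action.
    # The frame problem is "answered" here only by an assumption:
    # facts not mentioned by the action are presumed to persist.

    State = frozenset  # a state is just a set of true propositions

    def apply_action(state, action):
        """Apply an action whose listed preconditions hold in the state."""
        if not action["pre"] <= state:
            raise ValueError("preconditions not satisfied")
        # Frame assumption: everything outside add/delete persists unchanged.
        return State((state - action["del"]) | action["add"])

    initial = State({"robot_in_room_a", "door_open", "block_in_room_b"})

    go_to_room_b = {
        # Qualification problem: the true precondition list is open-ended
        # (door not jammed, floor not wet, battery not dead, ...).
        "pre": {"robot_in_room_a", "door_open"},
        "add": {"robot_in_room_b"},
        # Ramification problem: indirect effects (whatever the robot is
        # carrying also moves) must be derived or listed by hand.
        "del": {"robot_in_room_a"},
    }

    print(apply_action(initial, go_to_room_b))
    # e.g. frozenset({'door_open', 'block_in_room_b', 'robot_in_room_b'})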
In a sense, metaphysics, too, might be thought to be a study of the character of focuses of activity (i.e. beings), only metaphysics' focuses of activity take place in the real world. Philosophers have traced basic problems of metaphysics (analogous to those of planning theory) back to a lack of knowledge about cogitation, i.e. "thinking ... which includes both 'understanding' and 'sensibility'...."5 Natural intelligence is characterized by
egoistic reason, that is, reason as reckoned by a sensible being. To assess natural intelligence in all its aspects,
one must consider more than just the phenomenological content of mental images (those associated with
rationality). One must consider the ontological nature of both the reasoner and the reasoner's world. Another
way to say it is that, in metaphysics, too, a frame problem is involved. Once there is a frame problem, related
problems must follow having to do with precondition qualification, ramification, multiple agent interaction,
etc. It is no accident that such dilemmas perplex both AI theorists and philosophers. In fact, problems
associated with planning in a virtual world derive their origin from metaphysical ambiguities in the real
world.
But some AI theorists believe that they have escaped the root of planning theory's problems altogether.
Known collectively as connectionists, these theorists choose a subject matter that avoids the issue of egoistic reason entirely. It is just as devoid of the Aristotelian tradition as is pre-ontological philosophy, in that one finds nothing about rationale, sequitur, or, for that matter, syllogism, inductive-deductive proofs, or the hypothetico-deductive method in connectionism's techniques devised to configure parallel distributed processing (PDP) networks.
Instead, connectionists believe that "Learning ... amounts to a very simple process that can be implemented
locally at each [network] connection without the need for any overall supervision."6 The whole process
depends mainly upon feedback and feedforward from local system states which sweep up or down through the
mesh of connections to influence, in real time, the state of the entire array. Much is written about getting
networks to learn (i.e. " ... to cause them to have a particular global behavior ... "7), about choosing the rules governing the network's activation functions, and about retraining its pattern associators.8 But the fine-tuning of a PDP network is a heuristic process. It cannot be known beforehand with a great deal of accuracy whether or not a
particular network configuration can be trained to perform as intended.
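To illustrate the kind of purely local learning the connectionists have in mind, here is a minimal sketch in Python with NumPy (the patterns, dimensions, and learning rate are invented for the example). A one-layer pattern associator is trained with the delta rule, in which each weight changes using only the activity at its own two ends, with no overall supervision of global behavior; whether a given configuration will in fact converge to the intended associations is generally discovered only by running it.

    import numpy as np

    # One-layer pattern associator: learning is local to each connection.
    rng = np.random.default_rng(0)
    n_in, n_out = 8, 4
    W = np.zeros((n_out, n_in))

    # Hypothetical stimulus/response pairs to be associated.
    stimuli = rng.choice([-1.0, 1.0], size=(3, n_in))
    responses = rng.choice([-1.0, 1.0], size=(3, n_out))

    learning_rate = 0.1
    for epoch in range(50):
        for s, r in zip(stimuli, responses):
            out = W @ s                               # feedforward activation
            error = r - out                           # per-unit error signal
            W += learning_rate * np.outer(error, s)   # delta rule: local update

    # Recall: the trained weights should reproduce each response pattern.
    for s, r in zip(stimuli, responses):
        print(np.allclose(np.sign(W @ s), r))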
For this very reason, connectionism's burden of proof is light: it is framed without the excess baggage of the
requirement for analyticity which encumbers traditional rationalism, a liability which adheres to any
theoretical framework built upon sentential and canonical antecedents. Things do not need to be spelled out
beforehand in detail. PDP networks are strictly stimulus-response oriented devices, and their creators make no
pretense about possessing any special understanding of natural intelligence at all, except to presume that it,
too, is strictly mechanical and derived from state changes attributable to neural arrays in the brain, which
produce associations at the local level rather than globally throughout consciousness. What's more, the
creators of PDP networks do not accept that internal mental activity is conceptual-level semantics; rather, they hold that it is merely " ... psychological states with no referential ties to anything ...."9 PDP networks are designed with pattern associators which search for canonical forms, but not for semantic forms. In fact, the attempt would be absurd, according to the way that connectionists understand their discipline; to search at all for semantic forms would be contrary to the major tenets of connectionism, which argue for the mechanical nature of the
world and everything within it (including us). Although one day it might be possible for perceptrons
(comprised of PDP networks) to be designed which realistically simulate the sense of sight or the sense of
hearing, there will never be a PDP network designed which can sense the connotations that surround the
world's myriad canonical forms, the ambience inherent to natural scenery, or the exaltation that inhabits a
choral masterwork. This "ambience," this "exaltation" is a phantasm of conceptual-level semantics, assert the
connectionists. Even less than phantasm, it is an amorphous fiction that exists external to reality. It is visceral
in origin and tends to destabilize rather than resolve to a steady state.
Connectionism is an outgrowth of logical positivist thought, the doctrine of scientific realism. Rumelhart and McClelland in their Parallel Distributed Processing (Vol. I) of 1988 note that, " ... the simple PDP model
is clearly consistent with a rabidly empiricist world view."10 In fact, its precepts accord with classical
evolution theory in that biological systems are presumed to be the same as any other type of system, animate
or inanimate. The connectionist paradigm is a brand of materialism, therefore, whereas the pre-ontological
investigation of modern philosophy is a brand of transcendentalism. Both camps alike withdraw from direct
treatment of the phenomenological content of mental images, although each in separate ways. Many thorny
problems similar to those detailed in Edmund Husserl's phenomenology11 are avoided by this maneuver.
Now note that all three approaches (the Physical Symbol System Hypothesis, connectionism, and today's pre-ontological philosophies) enjoy the benefit of a supralogical approach with none of the
failings of traditional methods. Of course, the glaring difference is that the Physical Symbol System
Hypothesis and connectionism are already being developed into full-blown AI disciplines. Notwithstanding,
traditional AI is not succeeding for reasons detailed in the first paper of this three-part series, and connectionists have also failed thus far to develop any PDP network that can be fine-tuned so as to respond in a moderately sophisticated, human-like fashion (developing a good pun, for example). Cognitive scientists suspect this is due to inappropriate methodology: an overconfidence among connectionists as to their ability
to "engineer in" desired responses beforehand; their disbelief that, "...sometimes explanations in terms of
mind states do have brain-state counter-parts, but sometimes not..."; plus their overdependence upon contrived fine-tuning techniques to tweak PDP networks.12 Contrary to connectionism's claims, implementation of a PDP network involves more than just heuristics. Revealingly, intuitive guesswork and artful skill at tinkering with complicated interconnecting mechanisms are involved, too.
As for pre-ontological investigation, suffice it to say that it is not even an issue for the AI community. Although
several seminal treatises by Martin Heidegger are available which point the way,13 no writer has successfully
imparted the significance of pre-ontological philosophy to the techno-scientific community as a whole. But a
fresh paradigm of reason based upon pre-ontological philosophy could presage a new era for AI. Traditional
approaches to reason would give way to a deeper understanding of natural intelligence, one which goes
beyond cognition defined in terms of analysis (i.e. assessment, then formulation, and finally corroboration) or
in terms of gross phenomena, like brain wave patterns and overt behavior. Its tenets would adhere neither to the chief characteristics of idealism nor to those of scientific realism. It would dispense with both the notion that the mind has a predilection toward ideals and the notion that it is a disjoint collection of pattern associators. Instead, a new theory of mind could be posited that does not take as a given the West's schema of linear concept formation, the schema typified by the reference to sentential linking in the phrase "a chain of logic." Rather, the premise would be that concepts, at a certain level of abstraction, are grounded in an
ontological dimension which is responsible for the blend of rationality and sensibility that yields
conceptual-level semantics. This level of abstraction encompasses affective components of naturally
intelligent action such as psychologism, other-directed conatus, intuition, portent, etc. Accordingly, because
the origin of concepts is partly physical and involves the physical workings of the brain, concepts are
intrinsically organic and grow one with another in concert as a holistic reasoning milieu. Also, because their
ultimate origin is ontological, concepts are directly or indirectly dependent upon the existential comportment
of the conceiver. In such a framework, the antecedent/consequent dyad represents causal relationship through
contextual linking, rather than through sentential linking. It is connection via multiplanar networking across
strata of transphenomenal as well as phenomenal criteria (as opposed to monoplanar chains of sentential
logic). Embedded in a framework that is preveniently nonlinear, and thus nonhomogeneous, concepts are
characterized by nested interconnectedness. Attempts to analyze them in terms of sentential logic and
canonical form would be unnatural. Aristotelian logic is at best a rough approximation of their actual
operation. Because of this, AI algorithms designed to account for such an expanded view of intelligence
would necessarily have to somehow incorporate routines that could simulate multiplanar networking of
antecedent/consequent dyads across strata of transphenomenal and phenomenal criteria. These algorithms
would also have to account for existential comportment in order to represent the primary emotive influences
that underlie concepts.
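No concrete algorithm is specified here, but one speculative way to picture "multiplanar networking of antecedent/consequent dyads" computationally is sketched below in Python (every class, stratum name, and weight is a hypothetical illustration of the idea, not a claim about any existing system). Concepts are linked on several strata at once, and a consequent is selected by aggregating support across all strata, including an affective stratum standing in for existential comportment, rather than by following a single sentential chain.

    from collections import defaultdict

    # Hypothetical strata on which antecedent/consequent links may exist.
    STRATA = ("phenomenal", "contextual", "affective")

    class ConceptNet:
        def __init__(self):
            # links[stratum][antecedent] -> list of (consequent, strength)
            self.links = {s: defaultdict(list) for s in STRATA}

        def relate(self, antecedent, consequent, stratum, strength):
            self.links[stratum][antecedent].append((consequent, strength))

        def consequents(self, antecedent, comportment):
            """Rank consequents by support summed across all strata, each
            stratum weighted by the reasoner's current comportment (an
            invented stand-in for existential/affective disposition)."""
            support = defaultdict(float)
            for stratum, weight in comportment.items():
                for consequent, strength in self.links[stratum][antecedent]:
                    support[consequent] += weight * strength
            return sorted(support.items(), key=lambda kv: -kv[1])

    net = ConceptNet()
    net.relate("storm", "seek shelter", "phenomenal", 0.9)
    net.relate("storm", "cancel picnic", "contextual", 0.6)
    net.relate("storm", "awe", "affective", 0.8)

    # Two comportments rank the same antecedent's consequents differently.
    print(net.consequents("storm", {"phenomenal": 1.0, "contextual": 1.0, "affective": 0.2}))
    print(net.consequents("storm", {"phenomenal": 0.3, "contextual": 0.3, "affective": 1.0}))

Even so crude a toy shows the design point at issue: the "same" inference changes character as the reasoner's disposition toward the world changes, which no monoplanar chain of sentential logic represents.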
The AI community as a whole has not even heard of such an alternative, much less understood or applied it to
system design. Yet AI based upon traditional paradigms of reasoning is disappointing to its designers, to the techno-scientific community, and to society at large. Neither symbolics nor connectionism suffices to satisfy AI's strong hypothesis, because neither is rooted in an accurate understanding of mind. Therefore the
resulting programs are not as comprehensive, categorical, prototypal and fecund as they need to be. From this
perspective, it is recommended that progress should be sought on some more prevenient level if AI is to fulfill
its promise as the culmination of Western thought.
Historically, first attempts to pursue AI on some more prevenient level began by following an objective bent,
and the result was connectionism ... which seeks to reduce the idea of reason to the status of an end-product of
local connectivity. Recently, an AI theory became available that utilizes connectionist techniques (i.e. arrangements of pattern associators) implemented in a novel way, so as to achieve a more authentic simulation of natural intelligence. The IRIS Group's Fuzzy, Holographic and Parallel Intelligence advances a holographic theory of intelligence premised upon an idea similar to that expressed above. Natural intelligence is nested: multiple pattern associations of various levels of complexity are enfolded into a single neural
region, and the " ... input-output associations enfolded within a given memory 'cell' are expressed directly
[and exclusively] through content of input .... "14 The holographic theory of intelligence engenders an
automatic formalism system which simulates the multiplanar networking of natural intelligence, and,
therefore, it is more faithful to actual reason. However, it does fall short of simulating the dynamic experience
of subjective rationality (described above as a parameterization of logic space in terms of existential
comportment). Because it only goes halfway, the holographic theory of intelligence is predictably fraught with
variations of the frame problem. This is because, at root, perspective upon the world is contingent upon the
standard of rationality to which one adheres. To some extent, that standard can be arbitrary.
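For readers unfamiliar with what "enfolding" several associations into a single memory cell might mean, the following Python/NumPy sketch shows the basic holographic idea in miniature (the phase encoding, dimensions, and patterns are my own illustrative choices, not the IRIS Group's actual formalism): several stimulus-response pairs are superimposed in one complex-valued array, and each response is recovered directly through the content of its stimulus.

    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_out, n_pairs = 64, 16, 5

    def as_phasors(x):
        """Encode a real-valued pattern as unit-magnitude complex phasors."""
        return np.exp(1j * np.pi * x)

    # Hypothetical stimulus/response patterns (responses kept away from the
    # +/-1 phase wrap so that decoded angles compare cleanly).
    stimuli = rng.uniform(-1, 1, size=(n_pairs, n_in))
    responses = rng.uniform(-0.9, 0.9, size=(n_pairs, n_out))

    # Enfold all associations into one complex "memory cell": each pair
    # contributes conj(stimulus) outer response, superimposed by addition.
    X = np.zeros((n_in, n_out), dtype=complex)
    for s, r in zip(stimuli, responses):
        X += np.outer(np.conj(as_phasors(s)), as_phasors(r))

    # Recall: a stored stimulus aligns its own phases and recovers its
    # response, plus a little crosstalk from the other associations.
    s0 = as_phasors(stimuli[0])
    decoded = np.angle(s0 @ X / n_in) / np.pi
    print(np.round(decoded - responses[0], 2))   # small residuals expected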
Reasoning as a linear process, written or not, breaks down as situations become more and more complex, and
strict rationality fails to represent the world as it exists for people. The rational edifice crumbles. Human
comprehension doesn't rely upon such frail empirical structures, however, but operates day to day
upon ulterior means of cognition, such as intuition (though, perhaps, what is known as intuition, in fact, is just
the application of reason which is allowed to address every aspect of the world as it appears in its essence,
rather than being restricted to just those aspects which happen to be objects of knowledge). Just by virtue of
the complexity of human life, everyone daily experiences modes of reasoning firsthand which are
fundamentally different from those of the hypothetico-deductive paradigm of logic. These modes include
judgments and associations inappropriate to traditional logic, but which are nevertheless essential to our day
to day welfare. They may even be resorted to, when couched in a proper framework known as metalogic, to
oversee the sensibility of hypothetico-deductive conclusions. An individual can find so great an advantage in such metalogical thinking that, by far, the more progressive strides of any science are still generally attributable to the efforts of individuals or small groups of individuals, rather than to institutional collaboration. Philosophers claim that our innate experience of such comparatively more cardinal eduction faculties gives us an indication of the profound ontological substrata which lie underneath all
phenomena.15 Their work suggests that if people could come to better understand the nature of being, and in
particular the manner in which the human mind exists, ideas regarding cognitive tools better suited to
reasoning schemas in general would take shape and become more definite. More accurate representations of
natural cognitive processing than traditional logic would come within reach and, consequently, could begin to
be applied.
The modern debate among academics over the pertinence of ontological issues to reason has raged for several decades now, since the middle of the twentieth century. But on the whole, contemporary
philosophers have given up pursuing the essence of reality and have concentrated upon figuring out how
phenomena behave. Because cognition itself may very well involve other kinds of elementals in addition to
mental phenomena, it is possible that by so doing they deprive themselves of understanding their own
essence. Consequently, they miss out on the chance to conceptualize (and then canonize) a greater share of the
mind's natural intelligence than what is instituted in the hypothetico-deductive method. Currently, no
trenchant means more powerful than the hypothetico-deductive method has yet been devised to represent
reasonable cognition, nor does it appear that any more powerful means will ever become possible. But this
impasse, we aver, might just be a by-product of methodologies that are premised only upon phenomenological
aspects of reason's being. Kolakowski's masterful summation of the problem, The Alienation of Reason,
suggests that this dilemma still haunts anyone who might wish to account for cognition by either logical
positivists' rationalism or metaphysical traditions.16 The jury is still out for reasons of insufficient evidence.

Artificial intelligence researchers must understand human reasoning if they are to successfully simulate
human intelligence. Historically, systematic study of reason has been the exclusive province of philosophers.
But AI researchers seldom see the need to survey philosophical literature for insight into the nature of human
intelligence, perhaps because the AI industry's overriding concern has been pragmatic: to develop viable
systems for commercial enterprise. To some extent, reason is essential to every branch of knowledge and,
therefore, gaining a comprehensive understanding of reason must be difficult. This is because human reason
behaves and reacts differently depending upon the circumstances involved and the different people who
reason. As we understand today, no one in history has ever had a complete and rigorous treatment of reason
available to which they could refer, and so researchers in scores of academic fields, unable to wait for one,
have had to fill in the gaps on those occasions when the traditional paradigm of logic failed to provide the
proper underpinning for their theories. They had to become their own philosophers, devising what they
believed were reasonable means to achieve their varied goals; only thereafter were the scope and thoroughness of the methodologies they created assessed. Methodologies from the advanced fields of knowledge can speak volumes about the character of human reason in general. But what AI theorist has time to learn about
them? People have a hard enough time keeping up with their own literature. And besides, still other scholars
in the past believed that heuristics rather than reason was, in fact, the key to finding meaningful patterns in the
world around them, and their voluminous statistical musings have much to say about reason as practiced for
scientific purposes as well.
As it stands today, reason is not well-understood. At the most elementary level, if one is to properly
understand reason, then an idea of how to recognize reason is essential beforehand. But suppose one is a
pragmatist and does not believe that there is a realm of ideals which reason inhabits, or even that there are
conceptual-level semantics. How can one define reason a priori (assuming one knows to look in the first
place)? Reason has no separate existence outside its context. Examples of reason vary widely. Slight changes
of context produce exaggerated aftereffects that distort its tack, and, what's more, we now understand that
orderly patterns (a necessary precondition for reasoning) can exist even in the midst of apparently chaotic contexts. How is one to recognize what is reasonable, and what isn't? More importantly, how is one to program a machine, arguably the ultimate pragmatist, to be reasonable?
Humanity has overcome its ignorance through careful cultivation and use of a unique cognitive artifice
developed to facilitate rationality: an overall conception of what it means to be reasonable. Even if one is a
connectionist and disbelieves such talk of conceptual-level semantics,17 still the fact remains that people en
masse do believe that they know what it is to reason. When asked how they know, they claim to have
developed some kind of mental framework which enables them to determine, often from incomplete data,
what is reasonable and what is not. As yet, mental frameworks with such wide-ranging compass appear to
exist only in the minds of human beings. This seems to be because human beings, in their everyday lives, are
able to garner enough directed experiences from both internal and external sources that they become able to
achieve such mental assemblages. People search out directed experiences that possess salient inclination for
the purpose of augmenting the cache of meaning which forms their personal mental framework. They do so to
assess the behavior of objects in the world, looking for tendencies, and to assess outlooks from within,
seeking corroboration. As they build upon the sensibility so fostered, the mental act of comprehending
becomes easier. Understanding as to what is reasonable and what is not solidifies. A belief is formed that
thinking builds upon conceptual understanding at a certain level of abstraction as a process that has
significance beyond the sum of neural interactions. This notion that thinking is conceptual-level semantics,
which is normally well-developed by the time one reaches adulthood, becomes a modus operandi that, once established, guides individuals in their search for ever more significant directed experiences.
Maturation connotes attainment of this overall conception of what is reasonable. It means that the concept of common sense becomes firmly established as a guiding principle. Even devout pragmatists have found that, in growing up, they must accomplish no less. This fact may seem trivial, but only because we are heirs
to scientific realism. There has not always been a prescribed, context-free definition of what it is to be
reasonable. The ideal of a totally rational man is a fiction that was successfully advanced by Aristotle. For
expediency's sake, the techno-scientific community has adopted a standard for reason ultimately defined in
terms of such an ideal. Courts of justice employ this standard as well, since jurisprudence also descends from
the Aristotelian tradition of the ideal rational man. Neither science nor the law makes claims that duration
alone will make a person wise, yet the tacit implication behind the concept of the ideal rational man is that,
with maturation, most persons over the years become reasonable. Involvement with many aspects of the world
must take place over and over again in order for common sense (i.e. the combination of understanding with
sensibility) to develop. It is association with the world which brings one to the stage of thinking reasonably.
Such association involves objective observation, to be sure, but the process is not entirely one of cataloging
responses to stimuli and reaction to action. Philosophers understand that experience is the vehicle that brings
one into such association or contact with the essence of things, and it is this contact between their essence and that of the individual which imparts knowledge about reason. There is a certain level of abstraction upon which exists the "look and feel" of reason, its tenor of common sense, which becomes all-important to the
rational interpretation of events. The techno-scientific community takes no position on such a notion, but
inadvertently recognizes it nonetheless in such terms as elegant, extraneous, redundant and trivial. Some
ontologists suggest that this notion of the "look and feel" of reason is nothing less than evidence of a dialogue
between the Being of everything and the being of the individual. This dialogue produces a relationship
between the world and human beings that permits enough directed experiences to come their way to promote
in them an overall conception of reason.
According to Martin Heidegger, a prominent philosopher who advocated ontological investigation, ancient
people enjoyed a much more acute understanding of metaphysical issues than we do today, because they were
able to consciously recognize this relationship of essences. Their world-view was profoundly different from
ours and presupposed a closeness to Being which afforded them special insight into transcendence. During
their time, they recognized another plane of consciousness which seemed to impel all that existed, and these
ancient people realized that this other realm of existence had to be a remarkably integrated and causally
related place in order for it to be the source from which emanated mathematics, science and technology. We
can get a feeling for their awe by contemplating the character of reasoning, which often gives one a sense that
its conclusions are somehow perfect, even elegant. This sense of elegance is not just our imagination. It is a
certain level of abstract comprehension which, at times, nearly everyone achieves through Eureka experiences
of sudden insight that reveal core tenets which underpin whatever knowledge they thereby attain. Plato
recognized this phenomenon and, because he interpreted it to be a strong indication that the world exists as a
oneness, wrote often of a Realm of Ideals perfectly suited to a holistic conception of the universe. Aristotle
referred to this oneness in all things and incorporated it into his philosophy of logic, deducing, " ... other
things being equal, that proof is the better which proceeds from the fewer postulates." Eudoxos of Cnidus, the
founder of geometry, was said by his contemporaries to have been an oracle for another world of
mathematical perfection from which he uttered replete proofs, spoken for the enlightenment of all.
Understanding is able to spring forth, holistic and full-fledged, and the ancients attributed the source of such
disclosure to an inner kinship with the world. They often referred to the natural world using terms that in
some way imply sudden manifestation or revealing. Even the root of our modern word technology is a term
which, for the Greeks, meant a kind of appearance, an upsurge or manifestation of the transcendent revealing
itself into our world.
The end-result of what they considered to be kinship with the world is what we today call common sense. It is just the ability to see the correspondence between reason and the world. The concepts of reason and of world are interrelated on many levels and, therefore, to explain either one, the other must be brought into the discussion. This is because the concepts of both reason and world form a hub about which spin
their differences. In other words, reason and world are two easily distinguishable concepts but, in actuality,
they delimit one another. They do this so mutually that practically any level of comprehension about the one
necessarily involves a corresponding level of comprehension about the other. (Cognitive scientists call
thoughts of this type "integral thoughts." A similar degree of correspondence exists between space and time,
i.e. space-time, though relating reason to world may not seem as straightforward as relating time to space.)
Reason is a thinking process that always exists within the context of a world, whether that world be narrow or
broad. It follows that insights into the world revealed by ontology should bear, perhaps directly, upon our
understanding of reason. It is possible for the study of reason, as a being among other beings-in-the-world, to
be of significance to overall understanding of the reasoning process mainly because of the special insight such
ontological thinking lends to the context of reason. Because the world at large is the context of everything
which can be reasoned, renewed understanding of the world brought about by a revitalized and credible
ontology will revise our way of thinking, too. The promise of ontology is fresh insight into the essence of reason, insight which would provide, among other things, concrete measures against which theoreticians can gauge the reciprocal relationship between the reasoning process and that which is reasoned about. Anyone who wishes to consider reason fully will have to delve into subtle ontological distinctions which affect meaning.
This is due to the world reciprocating with us as we employ reason to understand it. The world changes as it
and its constituents are studied. There are both phenomenal and transphenomenal changes. As an example of
what is commonly considered a phenomenal change: according to the Heisenberg uncertainty principle, subatomic particles will change state solely as a consequence of being observed. (Even the profile of what
constitutes evidence that a subatomic particle has been observed changes as physicists take different
philosophical positions on the interpretation of parameters in their mathematical formulae!) This is a
surprising basic assumption. That the assumption helps produce such a successful theory of subatomic matter
as quantum mechanics is indicative of the paradoxical deviations from rationality which natural intelligence
must take in order to successfully theorize following modern science's radically reductionist bent.
Known entirely through theoretical calculation, subatomic particles lie along a borderline between
phenomenal and transphenomenal existence. What lies beyond that thin line is the transcendent subject of
ontology. We know of transphenomenal change mainly through the writings of ancient people who, as a
whole, seem to have been more attuned to such metaphysical subtleties than we are. Amazingly, their writings
record that as any entity becomes observed it undergoes an ontological change. The ontological change that is
involved is a harbinger of its manifestation as a thing in the world.18 Thus, the ancients would not have been
at all surprised by the seemingly nonsensical basic assumption of quantum mechanics: that subatomic particles change as they are observed, that they undergo state changes.
Traditionally, the techno-scientific community does not recognize ontological change, of course. They have
no means to do so. Their philosophical foundation became divorced from ontological investigation centuries
ago. Instead, the physics of subatomic particles is reasoned in terms of state changes. But as we see, an
ancient raison d'être exists which explains the basis for the modern theoretical artifice known as state
changes. This fact is not a coincidence and illustrates the profound epistemic ramifications which can ensue
from academic, seemingly irrelevant ontological issues. The dominant means of advancing knowledge in our
day and age, abstract theorization, appears to be underpinned by reasoning faculties which are greatly
influenced by metaphysical ambiguities. Methodological questions abound throughout science involving
metaphysical ambiguities on a par with those which beleaguer the physics of subatomic particles. Quite
possibly, lack of awareness of the ontological substrate which underlies conceptions of objects as manipulated
in abstract theories creates the conditions which make such misconception likely. AI is challenged by the same type of misconception because it relies heavily upon abstract theorization. According to philosophers,
the nature of the world's ontology changes over time. It may be that the West's approach to abstract theorization is no longer suited to the types of problems now demanded of it. Increased understanding of the ontology of reason seems warranted all around.
Unfortunately, this is very difficult to achieve because many centuries of neglect for Being and its ontology
have resulted in a modern mindset that is unresponsive to ontological subtleties. We do not enjoy conscious
apprehension of the world's being, as did our forebears, nor are we able to perceive palpable relationships
among the essences of its entities. However, as long as the techno-scientific community completely ignores
the ontology of reason and conceives of it as just a myth, preferring to treat reason as mere cerebral
processing divorced from any ontological context within the world, the above-mentioned metaphysical
ambiguities must continue to be introduced into theorization unabated. This is because dissociating reason
from the reasoner's world reduces the human act of cogitation to the formal act of rationalization, reduces curiosity to mere investigation, and compromises the reasoner's continuity of thought through the presupposition
that it is inherently mappable onto a system of sententially linked generic representations. In general, natural
intelligence is incompatible with this reduction. Although the reduction occurs in the name of codifying
common sense to make it objective, in fact, it is a principal daemon among the causes which lie at the heart of
the cyclic paradigm shifts of which Thomas Kuhn wrote in his The Structure of Scientific Revolutions. It discredits the reasoner's dynamic experience of subjective rationality, and so leads theoreticians to discount perhaps the most important system of checks and balances available to natural intelligence: its sensibility. The meaning so lost affects the whole endeavor of abstract theorization, as described earlier, and ultimately
leads to a cycle of reassessment and cathartic revision. Between such paradigm shifts, stopgap measures are
employed to remedy inconsistencies interspersed throughout the prevailing theory of the day. As theorists
discover glitches ultimately derived from metaphysical ambiguities, they are forced to adopt rules of
thumb which readjust theory in such a way as to patch over the discontinuities.
Despite the constant upheaval, no comprehensive reexamination of logic is underway. Even logicians who
work with nontraditional logic shy away from its metaphysical foundations, despite a thorough introduction
given by Heidegger.19 The result is a techno-scientific community with tunnel vision. Their present epistemic
methodology is indifferent to the important philosophical idea that cardinal sources of ambiguity derive from
tensions caused by metaphysical equivocalities. Rather than regard contradictory experimental evidence as a
reason to reexamine the logic that is used to develop paradigms in the first place, the Western practice is to
treat it as cause for a paradigm shift. Likewise, instead of treating some strikingly new theory as perhaps a
radical departure from the West's rational tradition, proponents concentrate their efforts upon demonstrating
its paragon-like generality meant to subsume already existing theory. Such selective oversight just increases
the Byzantine overhead of an already top-heavy Western world-view.
To illustrate: the once important mathematical subject matters of upper-bound limits, commutative algebra, Euclidean geometry, and differential geometry eventually came to be treated in the nineteenth century as just particular instances of "broader" subjects (i.e. Cantor's infinities, noncommutative algebras, non-Euclidean geometries, and n-dimensional differential topologies, respectively). Seen only as innovative generalizations,
such "broader" subjects became more and more frequently applied to branches of science, but without due
attention given to assessment of the changes in overall tenor which Western thought had to undergo in order
to accommodate them. In the twentieth century, physicists broached a new plateau by applying such a
"broader" subject known as potential field theory to subatomic physics. The ensuing paradigm shift left
everyday understanding in the dust, and, what's more, made headlines in the newspapers. Matter was no
longer solid, but actually made up mostly of empty space. Subatomic matter had both particle and wave-like
properties, and could exist in two different states at the same time. The very act of observing a subatomic particle irrevocably changed it. Also during this surprising era of the 1930s, the paradoxes of the infinitesimal
became matched by other paradoxes involving the infinite, courtesy of an application of tensor calculus to
geodesics known as the general theory of relativity. It held that gravity was equivalent to acceleration. Yet
intense gravitational fields could deflect light rays. Time was proven to slow down as one's speed approaches
that of light. The world came to look very strange indeed.
More recently, the AI theory of connectionism also presented the public with its host of curious anomalies.
Connectionists embrace a "broader" subject which happens to be the "broadest" type of subject
there can be, on a par with the fundamental catholicity of the West's traditional logic. Connectionist theories
are premised upon an entirely different set of preconditions than those which give rise to the epistemic
methodologies of Western tradition. They adopt a doctrine of skepticism so radical that, epistemologically, it
withdraws to a class by itself beyond the incipient doubting native to both Cartesian rationalism and the
hypothetico-deductive method. It is so rarified an acumen that it has been tamed principally through heuristic
experiment rather than as is the Western convention by logic and mathematics. At the start, this
doctrine, known as behaviorism, was a specialized field of psychology involving the study of behaviors in
humans and animals. Even in the beginning, behavioral psychologists claimed that self-conscious
introspection did not exist. They would not accept explanations given in terms of conceptual-level semantics.
But today, what was behaviorism has become eliminativism, a general principle of materialism applicable to any epistemic methodology, not just to that which concerns animate objects. Connectionists subscribe to this principle of materialism, which supports their claim that biological system design and
nonbiological system design differ only in the degree of their complexity. Their faith in this principle is
plainly evident in the Physical Symbol System Hypothesis, the presupposition that physical symbol systems
themselves possess the necessary and sufficient means for general intelligent action. They do not adhere to the
strong hypothesis of AI (that simulations can be developed capable of tracing the action of natural intelligence point for point) because that would require explicit delineation of innate cognitive functioning, a state of affairs disallowed a priori by connectionist methodology.
But paradox accompanies the connectionist position and, if anything, is more pronounced than in the
examples above taken from physics. PDP networks must be configured to reach a steady state which reflects
some standard for general intelligent action. But connectionist methodology is incompatible with
conceptual-level semantics. Which standard for general intelligent action should be chosen, then, to gauge a
PDP network's progress? What other choice could there have been but traditional rationalism? And so the standard by which PDP networks are judged involves the very conceptual-level semantics so eschewed by connectionists. On the
face of it, this choice seems good since rationalism arose from a tradition that expressly sought to obviate the
need for detailed knowledge as to how minds actually think. Still, rationalism gained acceptance and
popularity primarily because its chief technique of formal logic was thought (incorrectly) to epitomize and
codify all incipient mental processes. The prototype upon which traditional rationalism is based is that of
conceptual-level semantics, which emblematizes certain dynamic experiences of subjective rationality, such as the notions of tautology, consonance, and contradiction. The design of PDP networks has had no business being
measured against such a standard, and yet it has been and is to this day. Despite underpinnings totally foreign
to those of the formal logic of traditional rationalism, PDP networks are configured for the purpose of
emulating behaviors attributable to Western-styled kinds of traduction. Compounding paradox on top of
paradox, designers of PDP networks seem not to be aware of the origins of the formal logic of traditional
rationalism. They still program their PDP networks in anticipation that, by adopting a standard which is measured in terms of Western-styled traduction, their acumen will thereby one day come to develop AI systems which simulate natural intelligence (as they understand it)! Although connectionism is recognized, especially among its own, to be unique and a "broadest" type of subject, there is little doubt that it has been pursued
without due attention given to assessment of the introduction of its type of tenets into Western thought. Either
the philosophical doctrine underlying connectionism must be reworked, or that underlying Western thought.
In contrast, the "broader" subject advocated in the present paper entails ontological concerns. It, too, is
yet another "broadest" type of subject, being on a par with the fundamental catholicity of the West's traditional
logic. Because ontology is knowledge about beings, it encompasses every study imaginable since every study
involves entities of some sort or other. Ontology encompasses even the relationships between these entities.
This is because such relationships exist as a means to categorize information into canonical forms (logic,
mathematics, heuristics, etc.); and each such canonical form is derived from some unique set of axioms which
constitutes yet another way to settle the fundamental questions of ontology: what is the world, what is the self, what is an entity, what is thinking, and what is the nature of the interaction among them all? Thus,
ontological issues already lie buried at the heart of models of cognition couched in terms of traditional
rationalism. What's more, the fact that ontological questions are intrinsic to the basis of any formal logic
suggests that ontology already is intrinsically fundamental to the philosophical underpinnings of every area of
knowledge, including connectionist models of cognition. This is true by virtue of the nature of ontology,
which is not just the study of beings but the study of Being itself, the wellspring which underlies everything, including media of general intelligent action.
While it is theoretically possible for AI theorists to presuppose a principle of medium independence, practically it is necessary to consider the essence of the medium in order to develop a viable
simulation of general intelligent action. The gains in computability achieved by the nominal decrease in
problem complexity which accompany adoption of medium independence are more than offset by numerous
complications, introduced by the epistemic vacuum so created, which mask ontological insight into the
medium that supports the cognitive functioning. (As always with traditional rationalism, considerations of the
essence of the medium take place as a metalogical inquiry that never quite makes it into the finished work.)
Foremost among these complications is what might be called a phenomenalization of the AI system's
surrounding world. This means that its world is reduced to being nothing more than a problem space. Wholly
phenomenal objects without intrinsic relation one to another populate this virtual world known as problem
space. Virtual world events are composed of entities entirely extrinsic to whatever logical relationships bind
them together. Their every significance must be imputed to them after the fact and, therefore, the calculations
that represent such significance multiply exponentially. Calculations become unmanageable quickly so that de
facto limits exist on the size of the virtual world. It is typically just large enough to contain the embedded
medium (say, a robot) and a few well-prescribed domains (say, a few rooms of furniture) before the burden of
computation outstrips feasible limits. Even at that size, any significance ascribed to focuses of activity in the
virtual world is purely formal, in that it is not complemented by an intrinsic sensibility (that is, by common
sense). This is because, as a result of the phenomenalization, a dichotomy exists in the virtual world between
logical referents (used to describe member objects of a focus of activity) and corresponding logical
relationships among them. There appears no way to bridge such a rift between content and form other than to
reground the existential firmament upon which rests the entire AI systematization. Secondary complications
introduced by the epistemic vacuum are the aforementioned frame-related problems of planning theory, the metaphysical equivocalities which cause ambiguities in abstract theorization, and the classic problems of Husserl's phenomenology.
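The scale problem gestured at above, that imputed significances multiply as the problem space grows, can be made concrete with a small count. The Python sketch below (object and relation names are invented for the illustration) enumerates the ground relational facts a purely formal problem space must in principle track; the count grows quadratically per binary relation, and the space of possible world states built over those facts grows exponentially.

    from itertools import permutations

    def ground_facts(objects, binary_relations):
        """All ordered pairs over which each binary relation could hold."""
        return [(rel, a, b)
                for rel in binary_relations
                for a, b in permutations(objects, 2)]

    relations = ["on", "near", "supports", "blocks"]   # hypothetical
    for n in (5, 10, 20, 40):                          # objects in the virtual world
        objects = [f"obj{i}" for i in range(n)]
        print(n, "objects ->", len(ground_facts(objects, relations)), "ground facts")
    # 5 -> 80, 10 -> 360, 20 -> 1520, 40 -> 6240 ground facts; the number of
    # distinct world states over them is 2 to that power, which is why even a
    # robot and a few rooms of furniture already strain feasible limits.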
All this means that if one wishes to simulate the bourn of human thinking, as ultimately AI theorists desire,
the ideal way to do so is to develop an understanding of the ontological medium of thinking (i.e. reasoning)
and of the ontological environment of thinking (i.e. world). To do this, it must be realized that a human
being's world is not simply a milieu of objects and contingencies. That world has an ambience and a relative
point of view. Likewise, human reasoning is educed based upon a relative point of view about material and
immaterial objects within a specific world situation. This dependence of reasoning upon situational
experience is, in fact, symptomatic of a primordial ontological relativity which makes it possible for humanity
to entertain Western standards of thinking such as the hypothetico-deductive method, even to adopt
connectionist models of reasoning that are diametrically opposed to the practice of grounding knowledge
within the context of Being. Humanity has forsaken the fountainhead that is called Being and, in so doing, has
opened up to itself entirely new realms of thinking.
However, ontological relativity cuts both ways. Connectionism and the West's traditional logic may be
diametrically opposed to ontological explanation, yet oddly, what faint sense modern people still do have
about the world's ontological nature is brought out and amplified in and through such negation. Today,
nowhere else is a sense of the fountainhead of Being more manifest than in the very absence of reference to
Being in the canons of the West's traditional logic and of connectionism. Being is disclosed precisely in the
techno-scientific community's adamant obsession to remove all reference to Being from their work. This
obsession is an epistemic principle which paradoxically reifies Being, because it leaves the techno-scientific
community no other choice but to be ever mindful of what few absolutes there are and to constantly be in
search of new invariant principles. Despite all precaution, however, inevitable paradigm shifts must periodically rock Western thought's foundations. The implication borne upon such paradigm shifts is a
groundlessness that is most indicative, and speaks eloquently in its silence of the ultimate ground to be found
in Being.
This relation of negativity which exists between Being and the metier of the techno-scientific community is
implicit in the following explanation of the Physical Symbol System Hypothesis. John Haugeland in his
Artificial Intelligence: The Very Idea introduces the Physical Symbol System Hypothesis thus:
"[B]rain cells and electronic circuits are manifestly different media; but,
maybe at some appropriate level of abstraction, they can be media for
equivalent formal systems."20 [Note: emphasis is mine]
What does Haugeland mean by "some appropriate level of abstraction"? What relevance could such an
obvious reference to conceptual-level semantics have for the Physical Symbol System Hypothesis, the
cardinal doctrine of AI? On the face of it, there seems no need to refer in the statement to abstraction at all.
From the standpoint espoused in the Physical Symbol System Hypothesis, the statement above would read
just as well if it were:
"[B]rain cells and electronic circuits are manifestly different media, but
they can be media for equivalent formal systems."21
Nonetheless, Haugeland writes, perhaps inadvertently, of a level of abstraction. What is this allusion to a
level of abstraction that stands between media such as brain cells or electronic circuits (which convey
information) and formal systems (which shape information)? Does Haugeland perhaps mean that the Physical
Symbol System Hypothesis is an artifice created for the sake of AI theorization, that it is an axiomatic
stratagem adopted for convenience as opposed to substance? If not, and if indeed the Physical Symbol System
Hypothesis epitomizes a standpoint about mind which vilifies conceptual-level semantics, what room is left
for reference to abstraction in any statement concerning this hypothesis? To what could this phrase allude if
not to some dynamic experience of subjective rationality?
This line of inquiry is pursued not to impugn Haugeland, but to point out that implicit reference to
conceptual-level semantics continually enters into discussion in general, even into discussions of
connectionist methodology. The phrase "at some appropriate level of abstraction" is not just a figure of
speech. There is an implicit level of abstraction which underlies physical symbol systems and it constitutes
the difference that exists between the notion of conveying information and that of communicating knowledge.
The question is not how to develop logic, algorithmics and subsequent program design to compensate for this
level of abstraction, but rather how to most efficiently integrate some simulation of it into AI system
architecture. To do that requires understanding what it is, not banning it from consideration at the start by fiat
with a principle of medium independence. Understanding what it is would fill the epistemic void, the lack
of explicit delineations of innate cognitive functioning which first led connectionists to adopt radical
materialism. Once a recognizable understanding could be acknowledged, it would empower the AI
community to begin to establish a new framework upon which to address its most basic problem: that of the
gulf which lies between electronic media and the mind.
An understanding that the origin of this gulf exists on "some appropriate level of abstraction" of essentially
ontological character would go far to reconcile the disparity between the hierarchical planning of electronic
media and the natural intelligence of the mind. It would lend insight into the actions of cognitive functioning
as they exist in natural intelligence. It would provide a technical context within which could be discussed AI
programs that simulate the organic propagation of integral thought in terms of unabridged information
content, rather than just in terms of figurative representations related through rules of sentential logic. All this
becomes conceivable because such an understanding regrounds the entire concept of AI. The programming of
machines to simulate natural intelligence becomes recognized for what it is, the process of revealing an
ontological connection between electronics and human beings. AI no longer takes place "out there" in a piece
of electronics, but "from within" on a level of abstraction where electronics and their human creators unite as
a single entity. Commonality of ontological foundation opens up the being of electronic technology onto the
vast sociocultural complex that is human being. At this point, the question of building consciousness into
machines becomes moot. On the level of ontology, there is no great leap which natural intelligence must make
to extend from mental space into cyberspace. Natural intelligence yields AI, and vice versa, because, from the
standpoint of their common ontological ground, there only exists general intelligent action regardless of the
media. This ontological principle of medium independence is based upon different premises than that which
connectionists presume. The connectionist principle of medium independence is predicated upon radical
materialism, and declares an equivalence between biological and nonbiological systems. To the contrary,
medium independence should rather be attributed to the commonality of ontological foundation which exists
between human beings and their creations. Just as clothes are not flesh and blood, and yet are integrally
human, so AI should be seen to be in actuality an integral part of the human mind: a different, yet
functional, part of human consciousness which comes about through application of computer science.
This "level of abstraction" is not simply an artifice, an "ontological bionics" as it were; it is a regrounding of
the existential firmament upon which abstract theorization takes place. Such regrounding is called for because
(as mentioned above) the present epistemic methodology is filled with ambiguities derived from tensions
caused by metaphysical equivocalities. These metaphysical equivocalities derive from the reluctance (and
inability) of traditional rationalism to address and adequately represent dynamic experiences of subjective
rationality. Because these experiences cannot be adequately represented, sensibility, which is so important to
cognitive action, cannot be elucidated through traditional rationalism.
Yet philosophical literature contains ontological investigation into abstract theorization which gives definite
knowledge that can be used to develop mathematical models of cognitive faculties. By placing the quest for
understanding about natural intelligence on an ontological footing, new possibilities for the explanation of
these dynamic experiences of subjective rationality become available. Various models of such experiences
become feasible, with the potential to be translated into parameters meant to attenuate mathematical
simulations of natural intelligence. These parameters can then be tested by appropriate experiments designed
to obtain concrete data about the part played by subjective rationality in mediating the human judgment calls
that expedite abstract theorization. Knowledge so gathered can then be used to develop working definitions of
dynamic experiences of situational rationality, which, once mathematized, can be interjected into AI program
design to counterbalance program elements designated to simulate rational cognitive action.
Within the context of current paradigms, AI theorists already have made significant inroads into the
mathematization of specific aspects of common sense. This enterprise could be carried forward, but from the
standpoint of the different epistemic methodology herein, which holds that human reasoning involves
cognitive faculties that are influenced by existential comportments, factors that are manifest as emotive agents
which complement the analytic agents of rationale. Specific knowledge about such existential comportments
is available in philosophical literature and opens a window onto those emotive forces which most directly
influence intelligent action. Introduction of parameterizations of such emotive forces into the design of AI
programs would greatly enhance their capability to simulate specific cognitive faculties and, consequently, to
simulate integral thought. They could conceivably contribute to teleologic schemas which control automated
manipulation of symbolic logic in hierarchical planning systems.
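How such an emotive parameterization might sit inside the teleologic layer of a hierarchical planner can be suggested by a minimal Python sketch. It is an assumption made for illustration, not a published model: the names (Subgoal, analytic_cost, novelty, curiosity_weight, rank_subgoals) are hypothetical, and the single weighted term stands in for whatever mathematization of an existential comportment such as curiosity might eventually be developed.

# Hypothetical illustration: an "emotive" parameter counterbalancing a purely
# analytic heuristic inside the goal-selection (teleologic) layer of a
# hierarchical planner.  All names and numbers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Subgoal:
    name: str
    analytic_cost: float   # estimated effort under the rational, rule-based model
    novelty: float         # stand-in for an emotive comportment such as curiosity (0..1)

def rank_subgoals(subgoals, curiosity_weight=0.3):
    """Order candidate subgoals by analytic cost, attenuated by an emotive term.

    With curiosity_weight = 0 the ranking is the classical, purely analytic one;
    raising it lets the emotive parameter counterbalance the rational cost
    estimate, in the spirit of the proposal above.
    """
    def score(goal):
        return goal.analytic_cost - curiosity_weight * goal.novelty
    return sorted(subgoals, key=score)

candidates = [
    Subgoal("tidy known room", analytic_cost=1.0, novelty=0.1),
    Subgoal("explore unmapped room", analytic_cost=1.2, novelty=0.9),
]

for goal in rank_subgoals(candidates, curiosity_weight=0.5):
    print(goal.name)
# With the emotive term switched on, exploration outranks the cheaper routine goal;
# with curiosity_weight = 0 the purely analytic ordering returns.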
Two viable outlooks on reason exist, then: the way of thinking about reason practiced by scientists and that practiced by
ontologists. Science rigorously builds up a world view, consequent upon consequent, but must remain open to
radical paradigm shift if it is not to become stilted and then outmoded. In contrast, the ontologists look to the
world itself to provide guiding principles round about which reason can develop, and yet ontologists remain
satisfied with lofty ideals far removed from practicality and even shun technological applications. Oddly, both
views about reason achieve success according to their respective standards. They both address the world as a
whole, purport to state truths about the constituent beings of the world, and incorporate the relativity of
human perspective as an integral part of their venue. Modern science enables the furthering of technological
innovation meant to influence the world, and ontology has emerged after more than a score of centuries
poised to tell us more about the way that the world influences human beings than we've ever known before.
Surely a marriage between these two understandings of reasoning should be possible. Without an epistemic
methodology which incorporates both meanings of what it is to reason, attempts to program accurate
simulations of human reasoning cannot be furthered and AI's strong hypothesis becomes defunct. When both
types of definitions of reason are integrated, heuristic methods will no longer be all-important to AI research.
Ontologically accurate models of reasoning will become available, and the quality of what it is to reason,
the reasoning experience itself, will thereby change. The next and final article of this three-part series,
Holographic Intelligence Is Not Enough: AI should acknowledge the ontology of reason, incorporates part of
the first chapter of a book in progress with the working title, Curiosity: Application Of The Learning Emotion
To Artificial Intelligence. That book will outline a sketch of a theory of facultative reasoning which is
delineated in terms of an abstract realm known as the reasoning milieu and a taxonomy of qualitative
attributes of subjective consciousness which interact therein.

ENDNOTES
1 Martin, R., Metaphysical Foundations: Mereology and Metalogic, ed. by H. Burkhardt, Munich,
Germany: Philosophia Verlag GmbH, 1988, p. 11.
2 Stockmeyer, L., Chandra, A., "Intrinsically Difficult Problems," in Scientific American, May 1979,
Vol. 240, No. 5, pp. 140-159.
3 Artificial Intelligence Planning Systems: Proceedings Of The First International Conference, June 15-17, 1992, College Park, MD, ed. by J. Hendler, San Mateo, CA: Morgan Kaufmann Pub., Inc., 1992.
4 Georgeff, M., "Planning," in Readings in Planning, ed. by J. Allen, J. Hendler, & A. Tate, San Mateo,
CA: Morgan, Kaufmann Publishers, Inc., 1990, p. 20.
5 Thinking About Being, Aspects Of Heidegger's Thought, ed. by R. Shahan & J. Mohanty, Norman, OK: University of Oklahoma Press, 1984, p. 44.
6 Rumelhart, D., McClelland, J., Parallel Distributed Processing, Vol. I, Cambridge, MA: The MIT
Press, 1988, p. 442.
7 Ibid, p. 442.
8 Ibid, p. 318.
9 Morton, A., "Semantics and Subroutines," in Modelling The Mind, ed. by K. Mohyeldin Said, et al, New York, NY: Clarendon Press,
1990, p. 30.
10 Rumelhart, Ibid, p. 140.
11 Husserl, E., The Idea of Phenomenology, tr. by W. Alston & G. Nakhnikian, The Hague, Netherlands: Martinus Nijhoff, 1973.
12 Garnham, A., The Mind In Action: A Personal View Of Cognitive Science, New York, NY: Routledge, 1991, pp. 19, 93.
Fetzer, J., Philosophy And Cognitive Science, New York, NY: Paragon House, 1991, p. 92.
Evans, F., Psychology And Nihilism, Albany, NY: State University of New York Press, 1993, p. 121.
Consciousness In Philosophy And Cognitive Neuroscience, ed. by A. Revonsuo & M. Kamppinen,
Hillsdale, New Jersey: Lawrence Erlbaum Associates, Publishers, 1994, p. 262.
13 Heidegger, M., What is Called Thinking?, What Is A Thing?, Holzwege, On The Way To Language,
Modern Science, Metaphysics, And Mathematics, The Question Concerning Technology And Other
Essays.
14 The IRIS Group, Fuzzy, Holographic and Parallel Intelligence, ed. by B. Soucek, New York, NY: Wiley, 1992, p. X.
15 Thinking About Being, Aspects Of Heidegger's Thought, ed. by R. Shahan & J. Mohanty, Norman, OK: University of Oklahoma Press, 1984, p. 31.
Mehta, J., Martin Heidegger: The Way And The Vision, Honolulu: University Press of Hawaii, 1976, p. 371.
Kockelmans, J., On The Truth Of Being, Bloomington, IN: Indiana University Press, 1984, pp.
213-218.
16 Kolakowski, L., The Alienation Of Reason: A History Of Positivist Thought, tr. by N. Gutterman,
Garden City, NY: Doubleday & Co., Inc., 1968.
17 Cussins, A. "The Connectionist Construction of Concepts" in The Philosophy of Artificial
Intelligence, ed. by M. Boden, New York, NY: Oxford University Press, 1990, pp. 368-440.
18 Heidegger, M., What Is A Thing?, South Bend, Indiana: Gateway Editions, 1967.
19 Heidegger, M., Metaphysical Foundations Of Logic, Bloomington, IN: Indiana University Press,
1984.
Haack, S., Philosophy Of Logic, New York, NY: Cambridge University Press, 1978.
20 Haugeland, J., Artificial Intelligence: The Very Idea, Cambridge, MA: The MIT Press, 1985, p. 243.
21 Rosenblatt, F., "Two theorems of statistical separability in the perceptron," in Mechanisation of Thought Processes: Proceedings of a Symposium Held at the National Physical Laboratory, November 1958, Vol. I, London: HM Stationery Office, 1959, p. 423.
