STEVE TORRANCE
School of Engineering and Informatics,
Int. J. Mach. Conscious. 2012.04:483-501. Downloaded from www.worldscientific.com
In this paper, the notion of super-intelligence (or "AI++", as Chalmers has termed it) is considered in the context of machine consciousness (MC) research. Suppose AI++ were to come about: would real MC have then also arrived, "for free"? (I call this the "drop-out question".) Does the idea tempt you, as an MC investigator? What are the various positions that might be adopted on the issue of whether an AI++ would necessarily (or with strong likelihood) be a conscious AI++? Would a conscious super-intelligence also be a super-consciousness? (Indeed, what meaning might be attached to the notion of "super-consciousness"?) What ethical and social consequences might be drawn from the idea of conscious super-AIs, or from that of artificial super-consciousness? And what implications does this issue have for technical progress on MC in a pre-AI++ world? These and other questions are considered.
1. Introduction
Certain tech-enthusiasts proclaim the coming of the "Technological Singularity" or "Super-intelligence Explosion", an event where artificial intelligence will come to match and then rapidly overtake levels of human intelligence [Good, 1965; Vinge, 1993; Kurzweil, 2005; Bostrom, 2005; Goertzel, 2007; Yudkowsky, 2008; Sandberg, 2010]. The dynamics of the development of super-intelligence are usually seen as based on the combination of a number of factors. Two main factors are: (a) the ability of machines to take a progressively more active role in their own improved design, and (b) the inherent tendency for key technology measures (e.g., number of transistors on a given surface of silicon, processing speed, lowering of cost in production of components, etc.) to double every 18 months to two years (variants of Moore's Law [Brock, 2006; Good, 1965]).
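The compounding effect of such doubling periods is easy to underestimate. As a quick illustration (a back-of-the-envelope sketch of my own, not drawn from any of the cited sources), the growth factor implied by a fixed doubling period can be computed directly:

```python
# Growth factor implied by a Moore's-Law-style doubling curve.
# The 18- and 24-month periods bracket the range quoted in the text.
def growth_factor(years, doubling_period_months):
    """Factor by which a quantity grows in `years`,
    doubling once every `doubling_period_months` months."""
    return 2 ** (years * 12 / doubling_period_months)

for months in (18, 24):
    print(f"10 years, doubling every {months} months: "
          f"x{growth_factor(10, months):.0f}")
```

Over a decade this yields roughly a 100-fold improvement at an 18-month doubling period, but only a 32-fold improvement at two years; small differences in the assumed period thus matter greatly to any projected timeline.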
Such factors, taken together, may arguably produce an eventual explosion in super-AIs. In a recent study, Chalmers [2010] uses the term "AI+" to refer to agents whose intelligence exceeds that of average humans by just a little, and "AI++" to refer to agents whose intelligence levels relate to human levels somewhat as human
sive, period of objective time after the appearance of AI+. Even without such an explosive time-scale, the necessary recursive self-improvement process may take place
by UNIVERSITY OF SUSSEX on 07/11/13. For personal use only.
1 It might be appropriate to have a way of singling out potential future AI developments where responsibility for design and production has come largely to rest on the "shoulders" of other, previously constructed, AI systems, rather than on that of the latter's (direct or indirect) human creators. I suggest the term "hyperartefact" to signify an artefact whose existence is due to the actions of other artefacts [Torrance, 1999]. On a strict understanding of such a definition, the vast majority of artefacts that exist are hyperartefacts, so really there is a continuum between weak ones (e.g., manufactured items of the last few centuries) and strong ones (such as future hyperartefacts largely designed and implemented, in a recursive succession, by other hyperartefacts). The term "rich hyperartefact" could be used for items at the strong end of this continuum. It would seem reasonable, given such a term, to distinguish between AI, the research and development area of the last few decades, and a gradually emerging field of (rich) hyperartificial intelligence, or HI, which may well develop out of existing AI work. (Chalmers' "AI++" might best be retitled "HI++".) The emergence of rich HI may take centuries or, as some singularity theorists predict, a few decades. The rich HI nature of the future super-intelligences discussed in this paper (in particular, the largely autonomous nature of their production, relative to the ancestral human input of their predecessors) constitutes an important issue to reflect upon, and may have important implications for the question of consciousness in such systems or agents.
Super-Intelligence and (Super-)Consciousness
carrier. However, the relevance of this proposed technique to the present discussion is that a more or less fully emulated human mind, uploaded to a new (say) computational host, can then be subjected to a variety of AI techniques to increase its cognitive capacities way above normal human levels. So this would be an alternative method for producing AI++. (See the second half of Chalmers [2010] for philosophical discussion of uploading.)
It is tempting to dismiss the singularity and mind-uploading scenarios as entirely utopian. But it would be ill-advised simply to turn one's back on these projects. Moreover, in view of the fundamental change in human (and perhaps evolutionary) history that the emergence of AI++ might bring about, we need to think as clearly as possible about its implications. Quite apart from its far-reaching social and ethical
2 A caveat: of course, the progression to super-intelligence may involve the development of some quite novel technologies completely unknown to us at present, or it may essentially involve major developments in technologies that are currently known but that exist today only in highly embryonic forms, such as biocomputing, quantum computing and synthetic biology. We here concentrate on the possibility that the production of super-intelligence simply builds on current computational and robotic techniques, but in ways that are vastly improved on current achievements in terms of performance, efficiency, design, etc.
The DOQ provides the opportunity to reframe certain important debates within the MC research community. Many key research themes in MC (such as information integration, self-models, neural architectures, embodiment, imagination and so on) can be discussed in the context of the DOQ, i.e., not in terms of the rather limited functionalities of current bench models, but rather in terms of hypothetical future ultra-high-performance systems which would have all or most of the cognitive features of human intelligence (plus maybe some more), but at greater-than-human levels. The DOQ also highlights another important issue concerning MC as a research area. Can MC, as an experimental and theoretical study, be considered as a sub-department of mainstream AI? Or is it something which carries a special set of performance criteria or success criteria, in that it targets, not (just) intelligence, as is
sciousness" (PC) as used here. PC is a fairly recent addition to a long list of terms, some everyday, and some technical or philosophical, which include "feeling", "sentience", "subjectivity", "raw feels", "direct experience", "qualia", "first-person properties", "phenomenology", "what it is like", etc. All these allegedly capture something special about consciousness as contrasted with other features of the mind. "PC" is often contrasted with "functional consciousness" or, according to Block [1995], "access consciousness". The question of whether there really is a philosophically important divide between two (or more) kinds of consciousness has been extensively discussed. In this journal, Sloman [2010] compares PC with access consciousness and argues at length that the term "PC" is "vacuous and a source of confusion" (p. 137).
I prefer, in what follows, to use the term "functional consciousness" (FC) as the contrastive term, rather than "access consciousness": the former term may be thought to subsume, but be wider than, the latter. Franklin [2003] offers a concise and instructive characterization of the phenomenal/functional distinction in the context of MC research [pp. 47–48]. He suggests that trying to convey what PC is is like "trying to explain the colour purple to a person who has been sightless from birth...[it]...must be experienced to be understood". He then gives examples of several varieties of PC state, which include (as well as the waking state of ordinary, day-to-day life) states produced by hallucinogens; hypnagogia; states of pre-language infants, cats, meditative adepts, etc. He then points out that consciousness has a variety of functions, e.g., helping us "deal with novel or problematic situations for which we have no automatized response", and so on. Having FC is characterized by Franklin as having a cognitive architecture, or concomitant mechanisms, which support such functions. One can attribute FC to future machines and maybe current ones (such as his own IDA and now LIDA systems).

I hope this gives an example of how the PC/FC distinction can be delineated, at least in a rough, provisional fashion (alternative definitions are legion). I do not have space to offer an extended defence here of the appropriateness of the distinction;
however, see Sec. 4 for some further remarks. I would stress that talk of PC as distinct from FC is not intended to foreclose the question of whether or not the phenomenal can be reduced to, or explained in terms of, the functional (or access, cognitive, or indeed computational) aspects of consciousness. Nor is the use of the term PC intended to enshrine the view that there is an intrinsically insoluble "hard" problem of consciousness [Chalmers, 1996], such that consciousness (of this sort) cannot, in principle, be fully explained in terms of the physical mechanisms of brains or other types of organic or non-organic system, or in terms of virtual machines implemented in such systems. Nor (pace [Sloman, 2010]) does use of the term "PC" presuppose that any creature or physical entity possessing the latter could have a non-PC, zombie physical duplicate which replicates all causal and functional properties of the original, apart from the presence of phenomenality. Perhaps such total physical duplicate "zombies" are empirically or metaphysically possible (although I am
inclined to think they are neither). Nor (again pace Sloman) does use of "PC" necessarily commit one to epiphenomenalism about phenomenal experience [Prinz, 2003, pp. 129–130]. I believe that it is conceptually permissible to use the term "phenomenal consciousness" to communicate something definite and easy to grasp about our own conscious existence, and to ask a clearly intelligible question about the existence of such a consciousness in non-biological beings such as hypothetical AI agents of different kinds. Nor am I using "PC" in a way that commits its user to denying that a purely information-processing device, or a computationally-based robot, could have PC, or to denying that a computational or "functional" account or "reduction" of PC is possible. In using the term "PC", such questions are left quite open, to be discussed on their merits. So, I claim, it makes perfect sense to ask whether a particular kind of AI agent or robot, and in particular whether an AI++ (or a whole-brain-emulation silicon upload), might have PC in this sense.
There is one further thing about "PC", as I am proposing to use it, that is distinctive about the notion. This is not, perhaps, true in virtue of the definition of the term, but it seems to hang around as part of the "semantic aura" surrounding the term. Having phenomenal consciousness matters to its bearer, and indeed having such consciousness seems to lie at the root of anything mattering to a being. In this way, there seems to be an ethical significance to the idea of a creature's being phenomenally conscious, a significance that does not seem to attach in the same way to a being which is thought of as simply functionally conscious (unless the latter is thought of as equated to PC). Indeed, the ethical consequences of the question about PC in AI agents (and in particular in AI++s) will be considered in some detail below.

Some people may still have their doubts about the PC/FC distinction. All I can do is ask the reader to see where the argument leads (particularly in terms of its consequence for our ethical thinking about consciousness in AIs and super-AIs, something I focus on in later stages of this paper) and then to judge the suitability of my use of the phenomenal/functional distinction at that point.
formance characteristics.
(c) Agnosticism: We can never be sure, however good our science and however
It is likely that people who are sceptical about the success of the super-intelligence project will also give a sceptical response to the DOQ. The converse, however, is not necessarily true. One may feel optimistic that, given the hypothetical arrival of AI++, such super-intelligent agents would instantiate artificial consciousness, while being far from confident about the likely emergence of AI++.

The second option above (that super-AIs would instantiate FC but not PC) is likely to be favored by people who see the development of comprehensive intelligence of a kind that could assist the emergence of AI+ or AI++ as necessarily going hand-in-hand with certain functional features of consciousness, such as the development of a unified self-model [Metzinger, 2003, 2009; Holland and Goodman, 2003], or of a self-aware Global Workspace (GW) system for phenomenal awareness or conscious deliberation [Baars and Franklin, 2009; Franklin, 2011; Shanahan, 2005,
2010]. Many MC researchers are thus perhaps likely to agree that progress toward AI+ and AI++ is best served by pursuing cognitive models that incorporate the best results from MC. It is worth pointing out that approaches to MC based on GW models (perhaps in common with the majority of MC approaches) often focus primarily on FC. However, it is sometimes additionally claimed that PC may be implemented within a computational GW architecture; see, e.g., Ramamurthy and Franklin [2009]. This latter approach is more in line with the "generous" response (d) above.
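The core dynamic of a GW architecture (specialist processes competing for access to a workspace, whose winning content is then broadcast globally) can be conveyed in a few lines. The sketch below is a deliberately minimal illustration of that competition-and-broadcast cycle; the class names and the scalar-activation competition are my own simplifications, not the mechanism of LIDA or of any other published GW system.

```python
# Minimal global-workspace sketch: specialists compete by activation
# level; the winner's content is broadcast to every other specialist.
class Specialist:
    def __init__(self, name, activation, content):
        self.name = name
        self.activation = activation  # strength of bid for the workspace
        self.content = content        # what it would broadcast if it wins
        self.received = []            # broadcasts heard from the workspace

    def receive(self, broadcast):
        self.received.append(broadcast)

def workspace_cycle(specialists):
    # Competition: the most active specialist gains the workspace.
    winner = max(specialists, key=lambda s: s.activation)
    # Broadcast: its content is made globally available to the rest.
    for s in specialists:
        if s is not winner:
            s.receive(winner.content)
    return winner

modules = [Specialist("vision", 0.9, "red object ahead"),
           Specialist("audition", 0.4, "humming sound"),
           Specialist("memory", 0.6, "red usually means stop")]
winner = workspace_cycle(modules)
print(winner.name)          # vision
print(modules[2].received)  # ['red object ahead']
```

The point of the toy is only that "conscious" content, on this family of models, is defined functionally: it is whatever wins the competition and is broadcast, which is why GW approaches speak so naturally to FC rather than PC.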
Many MC researchers seem to be reluctant to recognize the possible existence of PC, other than in terms that reduce the phenomenal to the functional. The general thrust of much MC research may thus suggest the following as a broad response to the DOQ. "The development of intelligence in humans and other animals is closely associated with the parallel emergence of sophisticated forms of awareness. These
when the rest of the cat had disappeared. It might, then, be objected that a system that instantiated "only" FC (and not PC) would not be a conscious system or agent at all. Without the "what-it's-like" of phenomenality, there would be no experiential awareness, and hence, it might be said, no reason to consider that consciousness was present at all in the agent. To some this argument may seem specious, but to others it may seem all-important.

At the very least, the phenomenal/functional split seems to have important ethical consequences. Many people would agree that it is the phenomenal features of consciousness, rather than the functional ones, that matter ethically. Or, putting it in a more circumspect way: if functional features matter, that would only be because, and insofar as, they also have phenomenal "feel" bound up with them.
Thomas Metzinger [2003, 2009], in particular, has stressed the ethical implications of machine consciousness as a research programme: we should care about how we treated
research is, presumably, to try to work out what these global, architectural conditions for PC are, rather than just to reproduce relatively specific, isolated instances of "qualia", such as those experienced by people subject to the McGurk effect. As Tononi [2008] has remarked, "Phenomenologically, every experience is an integrated whole...". So the human subject's phenomenal experience of the "dig" syllable is a small chunk of the overall experiential manifold that the subject is undergoing at that moment. It is the integrated nature of this experiential manifold that gives it the phenomenological quality that it has (maybe in conjunction with other global features). A dedicated artificial phoneme recognition system, in contrast, would have no complex experiential manifold: it would only register a limited set of speech sounds. It is the overall cognitive or information architecture of the human experiential system
them, surely.)
Almost certainly, then, the conditions of phenomenality will not be fulfilled simply by reproducing this or that phenomenal or qualia-type state in isolation (as in the McGurk effect), but by reproducing some broad, global, architectural conditions. These conditions may be broad cognitive-architectural features (e.g., a global workspace architecture, or a cognitive structure supporting something like Metzinger's transparent phenomenal self-model). But equally, they may have more to do with deep biological, or metabolic, features of the system: for example, whether the system is sufficiently like a living organism to be dependent upon, or constituted by, a dynamically self-maintaining network of processes existing in a far-from-equilibrium state. No one can pretend that the debate between these two broad options is currently closed; and if the latter viewpoint is correct (i.e., if the phenomenality of consciousness emerges from these deep-seated bio-functional properties rather than merely from cognitive-functional properties), then there is little ground for assuming that an AI++ would, merely in virtue of having a rich set of intelligence-related performance characteristics, also possess phenomenal consciousness.
be so, but it cannot simply be taken as read. For one thing, all forms of natural intelligence are found in creatures which have a long and interlinked evolutionary history, and in which the various skills that are clustered under the term "intelligence" developed in a relatively integrated manner as those creatures coped with various kinds of environment. Intelligence in machines, as it has been studied and modeled in AI over the last six or so decades, has not emerged as a result of a natural evolutionary process (and the kinds of artificial evolution found in A-Life models are, on the whole, relatively simplistic and abstract). This is one evident ground for asserting a strong discontinuity between naturally occurring intelligence-features and "intelligence" as displayed in machine models and agents.

On the other hand, there are clearly some important evolutionary discontinuities
and what might qualify for that title in other natural creatures. This makes the idea of any smooth scale of comparison in "intelligence" from (e.g.) rodent to human to machine rather problematic.
Both these discontinuities seem to be disregarded by writers on the singularity.
Chalmers [2010], for example, defines "AI++" in a way that explicitly appeals to such a continuous scale. Of course, some broad comparisons can be made: mice can solve maze problems but cannot do calculus, and there are no doubt many "cognitive" feats that future computational systems may perform which are beyond human capability. But something important distinguishes the intelligence of both a human and of a mouse from that of an AI system, at least current AI systems. Whereas the intelligence of both humans and mice allows them to be pretty well self-standing in their respective niches, the intelligence of any present-day artificial cognitive agent does not free that agent from being fundamentally dependent upon human support for its continued existence. Of course, this may change if the predicted progression to AI+, and onward to AI++, occurs (perhaps a necessary criterion of reaching AI+, let alone AI++, is indeed that any agent qualifying for such a title must be more or less self-sustaining in this sense). But there seems to be no obvious guarantee that such an independent, self-standing status will be developed.
This relates to another important feature of artificial forms of intelligence: the fact that current AIs are usually designed to perform a specific set of tasks, and have little or no abilities beyond the boundaries of those tasks. Much work is being done on developing models of "artificial general intelligence" (AGI), where, instead of "islands" of domain-specific competence, such systems will exhibit mainlands of operative capacity [Goertzel, 2007, 2010; Goertzel and Wang, 2007]. Significant, and interesting, approaches to AGI are being developed by some workers who also strongly identify with the machine consciousness research programme. For example, Stan Franklin has developed, with Bernard Baars and colleagues, LIDA, a prototype system for modeling general decision-making [Baars and Franklin, 2009; Franklin, 2007]. (LIDA is a development from the IDA of Franklin [2003], enhanced with
learning capabilities.) LIDA's design revolves around the insight that the deliberations of any naturally intelligent decision-maker must draw upon the resources of that creature's consciousness. So a good model of general decision-making must, on this approach, also incorporate a good model of consciousness (in LIDA's case, Baars' Global Workspace model). Indeed, it may well be the case that much other work in machine consciousness will help to progress work in AGI, because of the clear connections between functional consciousness and generalized intelligent cognitive capacities.
Nevertheless, the development of really effective generalized intelligent decision-makers or agents (irrespective of whether they are FC or not) may be fraught with difficulties. It seems likely that any agent that might qualify as an AI++ has to be built around a robust AGI model. If that were so, the likelihood of the super-intelligence explosion (or its emergence at a more leisurely pace) would appear to be crucially dependent upon the success of AGI research. Yet, prototype models aside, no one has produced anything that could be taken as a serious practical contender for a true AGI at present.
Moreover, how general need an AGI be to be an AGI? One could imagine a system being developed that had a broad set of "intellectual" capacities, so that it performed faultlessly and seamlessly in many contexts, but which still had serious gaps when compared with certain aspects of human performance (or even canine or corvid performance). One might even have a system that appeared, when judged against a whole raft of criteria, to clearly rank as an AI++, but which, in a minority of respects, perhaps, failed miserably. Some of these areas of underperformance may be pretty obvious, and therefore discountable or remediable (compare a Nobel laureate's ineptitude at practical tasks like driving a car or buying groceries). But some deficits might be subtle and hard to spot, and yet may vitiate the judgment or the wisdom of the AI++ agent in ways that could have crucial consequences for human destiny.
This leads to a wider question: what kinds of aptitudes count as "intelligence"? The field of intelligence-measurement has been plagued by the street-lamp syndrome. Like the drunk looking for a lost set of keys under a light because it was easier to see there, psychometricians concentrate on easily measurable features of cognitive fitness, such as deductive or verbal reasoning, focused attention, visuo-spatial working memory, etc., while paying relatively little heed to what may be crucial, but less operationally amenable, features such as appropriate emotional response, physical manipulation skills, creative aptitudes and so on. The kind of bias that is built into the term "intelligence" by frequent usage makes people naturally think of skill in chess or mathematical problem-solving; yet the sort of embodied, empathetic intelligence that is necessary to handle and nurture a newborn baby, for example, may draw upon quite different kinds of intelligence, ones which are far less frequently studied or measured.
An AI++ may, by definition, possess an abundance of skills of the first sort, but yet may have few skills of the latter sort. The notion of intelligence is, plausibly, a cluster notion with many different facets [Gardner, 1983]. Certainly it would be dangerous if an AI++ built on a highly restricted notion of intelligence were to be given (or were to take upon itself) the responsibility to address the intractable problems of human society that humans themselves have been so poor at solving (yet it is part of the rhetoric of Singularity theorists that super-AIs could serve as saviors of humanity in just this way).
For all these reasons, and others, the notion of super-intelligence is far from straightforward. There may thus be perilous hidden reefs along the channels which lead to the development of AI+ and AI++. Such developments may fail because of the inherently vague, multivocal and open-ended nature of the notion of "intelligence"; and worse, there may be no clearly assignable success-conditions
that AI++ is an impossibility, so the DOQ still remains a key problem worth considering.
6. "Super-Consciousness"?
How does thinking about AI++ help us to hypothesize about the kinds of artificial consciousness that might emerge? Let us suppose that some form of consciousness (and consciousness, indeed, of a phenomenal kind) were to be present in an emerging super-AI agent. What kind of consciousness might this be? Would it be "just like" our consciousness in most major respects? Or would it be appropriate to say, by analogy with Chalmers' characterization of AI++, that it was as different from human consciousness as the latter is from rodent consciousness? Would a super-AI which had PC experience the same kinds of "feels" as a smart human, or would the large disparity in "intelligence level" mean that the conscious states of the super-AI system would involve some kind of quantitative, or qualitative, difference from human conscious states? And, indeed, how might one know? (For that matter, do the PC states of a human whose cognitive capacities are at the high end of the intelligence distribution differ quantitatively or qualitatively from those of a human with scores at the low end?)
It is no doubt tempting to talk in terms of "super-consciousness" or "super-experience" as a state concomitant to super-intelligence. But what might such a notion encompass? Assuming that the problems concerning creating PC (as well as FC) in an AI++ have been solved, would a PC super-AI feel pleasures and pains more keenly than all humans? Would it have the capacity for a more vivid range of sensory experiences? Would such a being be prone to being rocked by far more tempestuous emotions than even the most highly strung human? Or would its highly developed intelligence enable it to master emotional forces to a greater degree than humans can generally do? Or, by contrast, could it be that the kind of AI++ that is most likely to emerge is one which has highly developed cognitive powers, but no emotional capacity whatsoever?
There has been some discussion within the MC literature of the idea that consciousness may come in graded levels or quantities. Perhaps the most general such measure is Tononi's Φ, which equates the degree of consciousness in a system with a combination of integration and differentiation in the information held by the system [Tononi, 2008]. Tononi is explicit that artificial systems can have Φ measures in the same way as biological organisms can [Koch and Tononi, 2008], but it is not clear whether any future super-intelligent agent would be likely to have measures of information integration, or Φ, in excess of human levels in proportion to the degree that their intelligence levels exceed human intelligence levels.3 Discussions of Tononi's Φ, and other such measures of consciousness, have mainly taken an anthropocentric stance, the interest being focused on how to find a systematic way to
3 Seth, who has proposed a measure of degrees of consciousness in terms of causal integration, has argued that the measure can be trivialized, as it could be found in arbitrarily high amounts in relatively simple systems such as fully connected Hopfield nets [Seth et al., 2006].
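To make the idea of a graded, information-theoretic measure concrete, the following toy computes total correlation (multi-information): the sum of a system's marginal entropies minus its joint entropy. This is only a crude stand-in for the integration component of Tononi's Φ (real Φ involves system partitions and effective information, which this sketch omits); the function names and the two example systems are illustrative, not taken from Tononi's papers.

```python
import math
from collections import Counter
from itertools import product

def entropy(samples):
    """Shannon entropy (bits) of the empirical distribution of `samples`."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def total_correlation(states):
    """Sum of per-node entropies minus joint entropy: zero for independent
    nodes, large when the nodes carry highly redundant information."""
    k = len(states[0])
    marginal_sum = sum(entropy([s[i] for s in states]) for i in range(k))
    return marginal_sum - entropy(states)

# Three binary nodes, observed over a set of equally weighted system states:
independent = list(product([0, 1], repeat=3))   # all 8 joint states occur
coupled = [(0, 0, 0), (1, 1, 1)]                # nodes always agree

print(total_correlation(independent))  # 0.0 bits: no integration
print(total_correlation(coupled))      # 2.0 bits: maximal redundancy
```

Note how easily such an aggregate measure is inflated, echoing Seth's worry: the coupled toy system scores highly simply by being redundant, without anything resembling rich experience.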
of Human Rights, the 1950 European Convention on Human Rights, and so on. Humanity has generally taken it for granted that such rights do not apply to non-human organisms, despite the fact that degrees of consciousness are often readily attributed to many other biological species. As a moral community, humans do often concede that other creatures have some level of ethical status, and they do so (at least in part) because of their possession of cognitive and phenomenological capacities in varying degrees. Because of these variations, a clear moral hierarchy is generally taken to operate between humans and other creatures, even though there is little consensus about the detailed structure of such a hierarchy.
So, with the advent of AI++ agents, it seems that we should, in fairness, expect new, higher layers to be added to that moral hierarchy, especially if such beings possess not just super-human levels of intelligence, but also levels of phenomenal experience that far exceed human experiential capacities. Would it even have to be the case that the normative principles inherent in the best human ethical systems commit humans to conceding that the moral interests of super-AIs should take precedence over the moral claims of humans? It is difficult to see how such a moral hierarchy extending beyond humans could be avoided if we are to be consistent. And remember, this may well not be just a "private" debate by humanity entre nous.
This is a bleak conclusion. Humans have been used to thinking of themselves as at the top of the pile. But it seems to be an implication of the super-intelligence thought-experiment, at least if some kind of super-consciousness were to emerge alongside the super-intelligence. But surely there would then, in turn, be compelling reasons to prevent the emergence of super-AIs in practice. Far from such super-AIs being able to solve the ills of humanity, super-AIs may well feel justified in subjugating humans in just the ways that we humans have, for millennia, subjugated other, less intelligent animal species. Moreover, consistency with the dominant human practice of subjugating less intelligent animals would require humanity to approve of our being treated in this discriminatory way (or else we had better all become vegetarians fast!).
they us? What kind of mental properties do we need to assume are present before our treatment of them becomes ethically relevant? How do we validate ascriptions of such properties? Further, is there an ethical hierarchy of treatment or consideration which could include humans, various kinds of animals, and various sorts of artificially conscious agents, including ones that far transcend current human levels?
Leaving aside the specific ethical and social impact issues of a potential super-AI
era, there are clearly some points to be made about the relation between the scientific
and philosophical study of consciousness in both human and artificial agents, and
moral questions concerning such agents. As already observed, questions concerning
machine consciousness appear to have an ethical "bite" in a way that questions
concerning "mere" AI systems do not. Consciousness, at least of the phenomenal
sort, "matters" in a way that merely functional aspects of mind do not. One might
say (although this will no doubt require careful qualification) that questions
concerning the attribution of consciousness have an ethical status that other kinds of
psychological attribution lack [Sloman, 1985; Torrance, 1986].
Perhaps this will also help to explain, at least in part, why it is, as some have
claimed, that MC is very closely associated with the field of artificial, or machine,
ethics [Torrance, 2008; Levy, 2009; Wallach et al., 2010; Torrance, 2012a, 2012b].
Clearly, claims about the conscious states of artificial agents seem to raise other
claims to do with how such agents should be treated, i.e., about their status as
moral patients or recipients. Metzinger has suggested that machine consciousness,
when properly instantiated, would introduce the possibility of suffering in the
machine agents to which that property applied. He has gone so far as to say that, in
virtue of that possibility, and of the resulting circumstance of our bringing unnecessary
suffering into the world, the research activity of MC should perhaps be banned, and
certainly considered in a morally precautionary light [Metzinger, 2003, 2009].
There is another important reason why MC and Machine Ethics have a close
association, why they are, as Wallach et al. [2010] put it, "joined at the hip".
Consider ethical agents, or "producers", in the sense of actors who make morally
significant decisions and who are deemed to have moral responsibilities (in contrast
to ethical "consumers" or recipients, who are the targets of such morally significant
action) [Torrance, 2008]. If super-AIs are to be the kinds of ethical producers that we
should take seriously in a moral sense, and that we should want to accept as full
members of our moral community, they surely also have to be conscious agents in a
rich sense. Arguably, because of the central role played by consciousness in
determining what counts as morally significant for ethical action, any ethical agent must
surely understand consciousness "from the inside", as it were; that is, as a being
who is conscious in the ethically engaging sense.
Wallach et al. [2010] make a specific claim that any system which fits the bill of
being a moral agent in this sense will also qualify as a conscious agent in having the
functional properties of consciousness. Suppose, then, that an AI++ agent's architecture
conforms reasonably closely to, say, the LIDA/GW pattern described by Franklin
and others. Then if the above claim is correct, such AI++ agents will at least have the
functionality of conscious creatures. We questioned earlier whether any such system,
merely in terms of its computationally specified cognitive architecture, will have
anything other than functional consciousness. But this is clearly an open question at
present; nothing said here has been intended to foreclose it. So there is indeed force
to the argument that an AI agent with all the functionality of consciousness will be
able to function as a moral "producer", in the sense outlined just now.
Clearly, one must be cautious, for two reasons. First, suppose it is correct to claim
(as we earlier intimated) that if a being possesses PC it has genuine moral interests (is
a genuine moral "recipient"); and that if it has not got PC, it does not. Then there are
real issues, to do with false positives and false negatives, affecting how we treat
artificial agents that we consider to be PC. If we cede moral interests to super-AIs
who are, in fact, non-conscious in the phenomenal sense, we may be wasting valuable
resources on beings that have no moral need for them. Conversely, if our belief that
they are non-conscious, phenomenally speaking, is mistaken, then we would be doing
them a great moral injustice by refusing to grant their needs (assuming we had the
power to do so). So attributing consciousness to AI agents in a way that has ethical
"bite", in itself carries great (moral) responsibilities. (But see Coeckelbergh [2009,
2010a, 2010b, 2012] and Gunkel [2012] for contrasting views of how consciousness-attributions
do or do not relate to how we think about the moral interests of both
humans and of future AI agents, and Torrance [2012b] for a response.)
Second, in our conception of moral agents or producers mentioned above, we
considered the view that consciousness (at least of a functional kind), and the
capacity for moral decision-making, go together. We would surely hope that any
super-AI agents that we brought into existence (or that we seeded for other artificial
agents to bring into existence) would deliberate in ways that were responsible, just,
benevolent, and so on; that is, in ways that would be "friendly" to humans
[Yudkowsky, 2001, 2008]. Yet, in the generic sense of "ethical agent" (or "producer"),
villains are no less ethical agents than saints are: a villain has the capacity for
benevolence and other kinds of moral virtue, but opts for a different path. (Also, of
course, villains will be no less conscious than morally upright individuals.) Perhaps
moral venality always results from some rational failure, which in turn can be laid at
the door of some intelligence deficit that an AI++ can be guaranteed, constitutively,
not to have. However, that point would need a lot more debate and reflection.
Surely any project to produce super-intelligence needs to put its money on the idea
that increased rationality or general intelligence also brings increased ethical insight,
in the sense of commitment to the broad interests of conscious beings as such, and on
ensuring, as far as is humanly (!) possible, that such commitment is maximized. The
supposition would thus be that super-AIs, were they to come about in a relatively
benign way, would exhibit a
moral universality (in a revised, enhanced sense of the term) that gives due weight to
the interests of both humans and artificial conscious agents (not to mention other
creatures). But this, of course, is a rather tenuous supposition, and at present we have
only the most slender hope that a Singularity or super-AI era would not develop in a
way that would threaten human interests and get to be progressively more and more
beyond human influence.
Acknowledgments
Thanks to Mike Beaton, Mark Coeckelbergh, Ron Chrisley, Robert Clowes, David
Gamez, David Gunkel, Joel Parthemore, Murray Shanahan, Wendell Wallach and
Blay Whitby for useful discussions on issues covered in this paper. I owe a great debt
of gratitude to Maggie Boden and Aaron Sloman for decades of conversation and
inspiration on topics related to the present paper. I would also like to thank the
EUCogII network for support in connection with this work.
References
Arrabales, R., Ledezma, A. and Sanchis, A. [2009] "Strategies for measuring machine
consciousness," Int. J. Mach. Conscious. 1(2), 193–201.
Arrabales, R., Ledezma, A. and Sanchis, A. [2010a] "The cognitive development of machine
consciousness implementations," Int. J. Mach. Conscious. 2(2), 213–225.
Arrabales, R., Ledezma, A. and Sanchis, A. [2010b] "ConsScale: A pragmatic scale for
measuring the level of consciousness in artificial agents," J. Conscious. Stud. 17(3–4),
131–164.
Baars, B. J. and Franklin, S. [2009] "Consciousness is computational: The LIDA model of
Global Workspace Theory," Int. J. Mach. Conscious. 1(1), 23–32.
Block, N. [1995] "On a confusion about the function of consciousness," Behav. Brain Sci. 18,
227–247.
Bostrom, N. [2005] "A history of transhumanist thought," J. Evol. Technol. 14, 1–25.
Brock, D. C. (ed.) [2006] Understanding Moore's Law: Four Decades of Innovation (Chemical
Heritage Press, PA, USA).
Sloman, A. [1985] "What enables a machine to understand?" Proc. Int. Joint Conf. Artif.
Intell., Los Angeles, CA, pp. 995–1001.
Sloman, A. [2010] "Phenomenal and access consciousness and the 'hard' problem: A view from
the designer stance," Int. J. Mach. Conscious. 2(1), 117–169.
Tononi, G. [2008] "Consciousness as integrated information: A provisional manifesto," Biol.
Bull. 215, 216–242.
Torrance, S. B. [1986] "Ethics, mind and artifice," in Artificial Intelligence for Society, ed. Gill,
K. S. (John Wiley, Chichester), pp. 55–72.
Torrance, S. B. [1999] "Natural, artificial, hyperartificial minds," Proc. 4th Australasian Cog.
Sci. Conf., Newcastle, NSW, Australia.
Torrance, S. B. [2007] "Two conceptions of machine phenomenality," J. Conscious. Stud.
14(7), 154–166.
Torrance, S. B. [2008] "Ethics and consciousness in artificial agents," Artif. Intell. Soc. 22(4),
495–521.
Torrance, S. B. [2012a] "Artificial agents and the expanding ethical circle," Artif. Intell. Soc.,
doi: 10.1007/S00146-012-0422-2.
Torrance, S. B. [2012b] "The centrality of machine consciousness to machine ethics: Between
realism and social-relationism," Proc. AISB/IACAP World Congress, 2012, Workshop
on The Machine Question: AI, Ethics and Moral Responsibility, University of Birmingham,
pp. 54–60.
Vinge, V. [1993] "The coming technological singularity," http://mindstalk.net/vinge/vinge-
sing.html.
Wallach, W., Allen, C. and Franklin, S. [2011] "Consciousness and ethics: Artificially conscious moral
agents," Int. J. Mach. Conscious. 3(1), 177–192.
Yudkowsky, E. [2001] Creating Friendly AI 1.0, http://singinst.org/ourresearch/
publications/CFAI/index.html.
Yudkowsky, E. [2008] "Artificial intelligence as a positive and negative factor in global risk," in
Global Catastrophic Risks, eds. Bostrom, N. and Cirkovic, M. (Oxford University Press),
pp. 91–119.