Artificial Intelligence and Ethics
Abstract

The possibility of constructing a personal AI raises many ethical and religious questions that have been dealt with seriously only by imaginative works of fiction; they have largely been ignored by technical experts and by philosophical and theological ethicists. Arguing that a personal AI is possible in principle, and that its accomplishment could be adjudicated by the Turing Test, the article suggests some of the moral issues involved in AI experimentation by comparing them to issues in medical experimentation. Finally, the article asks questions about the capacities and possibilities of such an artifact for making moral decisions. It is suggested that much a priori ethical thinking is necessary and that such a project can not only stimulate our moral imaginations, but can also tell us much about our moral thinking and pedagogy, whether or not it is ever accomplished in fact.

In a book written in 1964, God and Golem, Inc., Norbert Wiener predicted that the quest to construct computer-modeled artificial intelligence (AI) would come to impinge directly upon some of our most widely and deeply held religious and ethical values. It is certainly true that the idea of mind as artifact, the idea of a humanly constructed artificial intelligence, forces us to confront our image of ourselves. In the theistic tradition of Judeo-Christian culture, a tradition that is, to a large extent, our "fate," we were created in the imago Dei, in the image of God, and our tradition has, for the most part, held that our greatest sin is pride: disobedience to our creator, a disobedience that most often takes the form of trying to be God. Now, if human beings are able to construct an artificial, personal intelligence (and I will suggest that this is theoretically possible, albeit perhaps practically improbable), then the tendency of our religious and moral tradition would be toward the condemnation of the undertaking: we will have stepped into the shoes of the creator, and, in so doing, we will have overstepped our own boundaries.

Such is the scenario envisaged by some of the classic science fiction of the past, Shelley's Frankenstein, or the Modern Prometheus and the Capek brothers' R.U.R. (for Rossum's Universal Robots) being notable examples. Both seminal works share the view that Pamela McCorduck (1979), in her work Machines Who Think, calls the "Hebraic" attitude toward the AI enterprise. In contrast to what she calls the "Hellenic" fascination with, and openness toward, AI, the Hebraic attitude has been one of fear and warning: "You shall not make for yourself a graven image . . ."

I don't think that the basic outline of Frankenstein needs to be recapitulated here, even if, as is usually the case, the reader has seen only the poor image of the book in movie form. Dr. Frankenstein's tragedy, his ambition for scientific discoveries and benefits coupled with the misery he brought upon himself, his creation, and others, remains the primal expression of the "mad scientist's" valuational downfall: the weighting of experimental knowledge over the possibility of doing harm to self, subject, and society. Another important Hebraic image is that of R.U.R., a 1923 play that gave us the first disastrous revolt of man-made slaves against their human masters. In both works theological insight and allusion abound; God and creation are salient issues, and both works condemn the AI enterprise.

Alongside the above images, of course, there have always lurked Hellenic or, perhaps better, "Promethean" images. Western history is replete with examples of attempts to construct an AI, with miserable and comical flops, with frauds, and with, as of late, some feeble approximations. The more sophisticated our reason and our tools, the more we seem to be inexorably drawn to replicate what has seemed to many to be what marks human persons off from the rest of creation: their cogito, their nous, their reason. We seem to want to catch up with our mythology and become the gods that we have created.

In the layperson's mind, however, the dominant imagery appears to be the Hebraic; many look upon the outbreak of AI research with an uneasy amusement, an amusement masking, I believe, a considerable disquiet. Perhaps it is
Adjudicating the Achievement of a Personal AI

If these difficulties can be surmounted, how can we finally reach agreement that a personal AI has been achieved? The quasi-behavioral criteria I outlined earlier facilitate our ability to specify the conditions under which a machine

6. For example, Joseph Fletcher (1972) uses the term "human" rather than "person," but his criteria distinguish the personal from the merely biomorphologically human.

7. It ought to encompass even the possibility of a "religious experience," which Weizenbaum has asserted in a popular source it could never attain to. Weizenbaum does not tell us why it could not have such an experience. See the interview of Weizenbaum by Rosenthal (1983), pp. 94-97.