
Robots and AI:

The Challenge to Interdisciplinary Theology

by

Erin Elizabeth Green

A Thesis submitted to the Faculty of Emmanuel College and the Graduate Centre for Theological Studies of the Toronto School of Theology in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Theology awarded by the University of St. Michael’s College.

© Copyright by Erin Elizabeth Green 2018


Robots and AI:
The Challenge to Interdisciplinary Theology
Erin Elizabeth Green

Doctor of Philosophy

University of St. Michael’s College

2018

Abstract

The growing presence of increasingly sophisticated and humanlike robots and artificial intelligence (AI) brings about new theological challenges. In military, biomedical, industrial, and other applications, these technologies are changing how humans think about themselves, their futures, and how their societies are organized. Such an unstoppable and global force requires scrutiny through a new interdisciplinary theological lens. Some researchers have made tentative theological responses, but this work lacks cohesion and leaves open many crucial questions. Insights from contextual and ecological theologies make significant contributions in addressing gaps in interdisciplinary theological discourse about robots and AI.

This thesis applies a postfoundationalist approach, especially as expressed by Wentzel van Huyssteen, to the transversal intersection between robots, AI, and theological reasoning. Contextual analysis complements this methodology. Study of four key roboticists and AI researchers—Hans Moravec, Rodney Brooks, Cynthia Breazeal, and Heather Knight—illustrates the complexity of this field, including important methodological differences within the robotics and AI community. Diverse and disparate theological literature on robots and AI is collated into two broad types of responses. The first has Anne Foerst’s work as its hub, the second Noreen Herzfeld’s. Critical engagement with these theological contributions makes clear the way to a third theological approach, one as yet undeveloped in theological writing. The fourth chapter details this approach through the application of insights from ecological theology to the vision of the human found in robotics research, detailing the kind of contextual analysis required to radically enhance interdisciplinary discourse in this area, and through further consideration of the historical and methodological issues that will shape these questions in the years to come.

This process of collating and analyzing both scientific and theological literature on humanoid robots and AI spurs growth in a to-date disorganized area of theological enquiry. It identifies and provides a first analysis of some of the most pressing ethical aspects of robotics and AI research, and develops a structure for further debate about and engagement with these issues. Importantly, this thesis emphasizes the practical ways theologians, churches, and civil society can respond to these unprecedented historical forces.

Acknowledgements
This research project—and most of those leading to it—has been made possible by the never-ending support, friendship, and good humour of my dear friend and supervisor, Dr. Michael Bourgeois. Credit for any good that comes from this work goes as much to him as to its author. I’m also incredibly grateful to all those who’ve taken time to read sections of this project or talk about robots and AI with me, up to and including its defence. This has helped me stay connected to the work, even when the demands of other parts of my life pulled me away from it. Of special note, of course, are professors Dr. Dennis O’Hara and Dr. Tom Reynolds, who helped with my proposal, served on my committee, and have been an instrumental part of my education at TST. I appreciate the time and care they took with the pages and paragraphs that follow. Big thanks to my friends Chris Zeichmann and Michael Buttrey, who were essential in overcoming practical and bureaucratic barriers in finishing a Canadian degree from my Belgian home. I was also pleased to revisit postfoundationalism and the work of Wentzel van Huyssteen in this project, which made this work entirely possible and much more interesting and constructive.

I am also grateful for the steadfast steadfastness of my beloved pet, Fattie. Through 17 years of stress, uncertainty, poverty, three countries, 13 apartments and even more roommates, she has been an unchanging presence amid sometimes too-much change—an orb of blessed indifference. Though appearing late to this PhD game, my partner in life and all that goes with it has seen me through the struggle of finishing this beast of a project, while at the same time supporting me as I cope with and delight in all things that come with a transatlantic move. My dear Winny, I so look forward to our new life together post-PhD. Finally, to Cleo, whose maternity leave I selfishly used to finish this project. Thank you, baby girl—now let’s go to the zoo.

Contents

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
The Quest for a Counterpart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Alan Turing and the Turing Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Rationale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Implications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Chapter One . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Hans Moravec . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Intelligence as computing power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Understanding intelligence, understanding the human . . . . . . . . . . . . . . . . . . . . . . 42
Embodiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Mind and evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
The importance of perception . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Rodney Brooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Robots in history . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
The importance of perception . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Functional interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Situated and embodied robots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Cynthia Breazeal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Relational interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Weak AI, strong impression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Motivations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Heather Knight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Social media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Robot optimism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Chapter Two . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Robotics and robots as self-discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Robots as Stories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Robots as Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Embodiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Embodiment in community . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
God . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Imago Dei . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Tillich’s influence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

Chapter Three . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Intelligence and the Imago Dei . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Substantive approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Functional approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Relational approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Other contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Ethics and Context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Actual hybridity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Virtual hybridity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Human Distinctiveness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Chapter Four . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
The Human . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
A theological response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Holism and relationality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Context in cosmogenesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Embodiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Contextual Awareness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Attending to diversity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
Application of Robots and AI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
Industrial and commercial settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Biomedical applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Military applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Drawing boundaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Intradisciplinary diversity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
A theological response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Key Findings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Implications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
Directions for Future Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263

Introduction

The Quest for a Counterpart

In 1956 John McCarthy convened the first conference on artificial intelligence (AI) at Dartmouth College, New Hampshire.1 Soon after, along with Marvin Minsky (another giant in the fields of computer science, mathematics, robotics, and AI), he founded the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology (MIT).2 At nearly the same time, researchers began placing AI systems in mechanical bodies and within a few decades made the first humanoid robots. Despite this short history, humans contemplated creating something like a humanoid robot with intelligence rivalling or surpassing our own long before the gathering at Dartmouth. The historical roots of robots run much deeper. In Living Dolls: A Magical History of the Quest for Mechanical Life, Gaby Wood traces these to at least the second century BCE.3 Here, in Classical Greece with Prometheus and Pygmalion, Wood finds the beginnings of our robot story. Others, too, including John Cohen in Human Robots in Myth and Science, find the “first ancestors of modern automata in the twilight figures of remote mythology.”4 In Greek mythology, Prometheus created humans from clay, an action not dissimilar to that found in Judeo-Christian and other creation narratives. He is a trickster figure who disobeys Zeus and gives fire to humans, a story that cautions the hearer against transgressing God-given or natural boundaries. Mary Shelley’s 1818 novel Frankenstein; or, The Modern Prometheus is a well-known iteration of this myth that also inspired debate about humans making a counterpart for themselves, this time through alchemy-like experimentation. The humanoid beast, Frankenstein’s nameless monster, makes the parallel with the myths of Genesis clear, saying, “I ought to be thy Adam.”5

1. Peter Heltzel, “Cog and the Creativity of God,” Journal of Faith and Science Exchange 2 (1998): 21. The phrase Artificial General Intelligence (AGI) also has some currency in technical literature, most commonly referring to strong AI or AI that is exceptionally humanlike. This phrase has nearly no currency in theological literature, so will not be used in this thesis. See for example Jean-Christophe Baillie, “Why AlphaGo is not AI,” IEEE Spectrum, March 17, 2016, accessed July 31, 2017, http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/why-alphago-is-not-ai. Occasionally science commenters will also speak of artificial life, which goes beyond this project’s interest in humanoid robots and AI. These broader concerns are also subject to theological interest. See for example Antje Jackelén, “The Image of God as Techno Sapiens,” Zygon 37, no. 2 (2002): 289-302; R.E. Lenski, “Twice as Natural,” Nature 414, no. 6861 (2001): 255-256.
2. Susan Hassler, “Marvin Minsky’s Legacy of Students and Ideas,” IEEE Spectrum: Technology, Engineering, and Science News, February 18, 2016, accessed February 18, 2016, http://spectrum.ieee.org/computing/software/marvin-minskys-legacy-of-students-and-ideas. See also Marvin Minsky, “Steps Toward Artificial Intelligence,” Proceedings of the IRE 49, no. 1 (1961): 8-30. Minsky develops his approach to AI in Society of Mind (London: Simon & Schuster, 1986).
3. Gaby Wood, Living Dolls: A Magical History of the Quest for Mechanical Life (London: Faber and Faber, 2002), xv.

Ancient Roman civilization is also a rich source of robot inspiration, notably in the story of Pygmalion. Though there are many versions of the myth, including in Cypriot and Grecian cultures, the best known comes from Ovid’s “Metamorphoses,” wherein a gifted sculptor makes a statue so beautiful and compelling that he falls in love with it.6 By transforming the ivory form into a human one, Venus grants Pygmalion’s wish to marry a woman whose beauty equals that of his statue. The account speaks of the human desire to construct for ourselves an equal, a counterpart worthy of relationship. These themes resurface again and again, leading to contemporary robotics research.

It is not only the great civilizations of Rome and Greece that gave rise to such aspirational thinking. Elsewhere throughout the world, in narratives less well-known to western audiences, people also created in their own image and wrestled with related philosophical and metaphysical questions. For example, some accounts of ancient Egyptian customs suggest that these early civilizations believed that statues could somehow blur the line between living and dead, material and spiritual. Priestly rituals could infuse a statue with the soul of a deceased person, and along with it other supernatural powers.7 The boundary between human and a created counterpart continues to preoccupy researchers even today, especially in biomedical applications of robotics and AI that require hybridity between the thoughts and behaviours of humans and machines. Proto-robot stories continue nearly ad infinitum. From Rome to India to China and beyond, records of the earliest civilizations hint at a curiosity and mystery that now plays itself out in the development of humanlike robots and AI. These ancient accounts show humans already wrestling with many of the issues that arise now in the fields of contemporary robotics and AI research: What does it mean to be human? What is the nature of human relationships with humanoid creations? What should be the limits of human creative activity?

4. John Cohen, Human Robots in Myth and Science (London: George Allen & Unwin Ltd, 1966), 15.
5. Mary Shelley, Frankenstein, or The Modern Prometheus (Boston and Cambridge: Sever, Francis, & Co., 1869), 78.
6. Stanley Lombardo, trans., Metamorphoses (Indianapolis/Cambridge: Hackett Publishing Company, Inc., 2010).

Modern history is continuous with these ancient stories and pervasive questions. Wood’s and Cohen’s thorough histories reveal that many great minds were captivated by both the mechanical and philosophical challenges presented by automatons.8 As early as the late Middle Ages, inventors and scientists turned their attention to building automatons that both delighted and frightened. In Japan, early automatons called karakuri performed in theatre and at parades, and participated in the detailed social customs of Japanese tea ceremonies. These proto-robots appeared as early as the seventeenth century and were items of luxury for Japan’s ruling class.9

7. Ibid., 20.
8. Automatons are self-operating machines that often take human or animal forms. The word “automaton” is derived from the Greek for self and move, first appearing in English-language dictionaries in the early 1800s. Isaac Asimov and Karen A. Frenkel, Robots: Machines in Man’s Image (New York: Harmony Books, 1985), 18. The word “robot” was only introduced in the 1920s and therefore is anachronistic in the instances of Vaucanson, von Kempelen, and Edison. Robots that both look and act as humans do are often also called androids. I will, however, use humanoid rather than android throughout to underscore a stronger semantic parallel between the two objects of study at hand: humans and humanoid robots. Additionally, the term “android” rarely appears in theological literature on robotics and AI, with preference given to “humanoid robot” or simply “robot.”

In Europe, the Strasbourg clock (1354) is another historically important example. Though the model now standing in the Alsatian cathedral dates from the nineteenth century, the original featured several automatons, including a rooster as “a reminder of St. Peter’s denial of Jesus,” that performed a series of noontime rituals.10 Perhaps to twenty-first century sensibilities the first Strasbourg clock seems as intriguing as the average cuckoo clock; in its original setting, however, it was remarkable. The Strasbourg clock “seemed a model of the cosmos. God has set it in motion at the start and then withdrew from the scene. He was no more necessary for the continuous supervision of the cosmos than the clockmaker was needed to regulate each tick of the clock.”11 Cohen notes that many great names in history considered this clock with interest, including French polymath René Descartes, who would later become a protagonist in his own robot story.12 Though far better known for his contributions to philosophy and mathematics, Descartes also emerges as an unavoidable figure in the history of robotics and AI. One of the stranger places he appears is in apocryphal accounts that he tried to build a “simulacrum” of his daughter, who had died as a young child.13 While on a voyage to Sweden, Descartes’ shipmates grew suspicious of a never-seen daughter who was supposed to be accompanying him. They eventually broke into his cabin, discovered a girl-child automaton, were horrified, and threw the doll overboard. The context of this odd tale reveals much about its significance. Wood situates this story in the world of Galileo Galilei and the Roman Inquisitions, a time when the powerful Church easily—and often violently—dismissed scientific claims. Descartes and his automaton were but another vivid example of the exercise of this power. As she writes, “In this context, what the fable about the ship finally represents is the throwing overboard of one of Descartes’ great contributions to philosophy, anatomy and mechanics. Science was cast out to sea.”14

9. Kristy Boyle, “Homepage,” Karakuri.info, last modified January 14, 2008, accessed April 1, 2018, http://www.karakuri.info/.
10. Cohen, 81.
11. Ibid.
12. The clock carries important symbolic weight in the history of robotics and AI. Cohen, for example, uses the clock as an important illustration of where the so-called East and West have divergent robot histories, and especially where early European societies lagged others. The Strasbourg clock is no novelty considering that Caliph Harun al-Raschid (of present-day Iraq) presented Charlemagne with a clock of equal—if not superior—complexity some 500 years earlier. Ibid., 82. MIT historian Bruce Mazlish, writing in the 1960s as contemporary AI research was beginning, also remarked on Descartes’ role in this robot story. He held that Descartes was responsible for propagating a strong divide or “discontinuity” between human and machine that has held through the twentieth century. Mazlish further remarks on Descartes’ confidence in two infallible criteria for distinguishing humans from machines: the human ability to use “words or other signs” for communication, and the human ability to act from understanding. These, of course, are increasingly challenged by AI in the twentieth and twenty-first centuries. Bruce Mazlish, “The Fourth Discontinuity,” Technology and Culture 8, no. 1 (January 1967): 6.

From Descartes onward, interest in building humanoid counterparts only grew. This interest was reflected in all manner of human expression, ranging from literature to philosophy to science.15 Those who have prepared detailed histories of robots and AI find this interest reflected everywhere, including among the west’s most influential thinkers. While Thomas Hobbes is mostly known for his contributions to the development of political philosophy, even he wondered about primitive automata and whether they had some sort of “artificiall [sic] life.”16 Elsewhere, the great Spanish writer Miguel de Cervantes Saavedra included a wooden horse with the ability to fly in his famed Don Quixote, and Cyrano de Bergerac gave his flying automaton the body of a locust.17 These examples stretch contemporary imaginations about the link between twenty-first century robots and sixteenth century fiction. Such accounts, however, show that technical challenges and philosophical questions related to building humanoid robots long predate McCarthy, Minsky, and others.

13. Wood, 4.
14. Ibid., 8.
15. Cohen, 54.
16. Lenski, 255.
17. Wood, 54-55. My account of medieval and early modern humanoid robots and proto-robots relies primarily on Wood.

Approximately a century after Descartes, Jacques de Vaucanson emerged as one of the most influential figures in early robotics and AI. Best known for his mechanical duck, which could replicate both digestion and excretion, Vaucanson was called the “new Prometheus” by none other than Voltaire. The prolific inventor also built a flute-playing automaton, made of wood and painted to resemble a well-known statue of his day. The flute player was remarkably complex, with a moving tongue and the ability to simulate breathing. Its contemporary observers were unsettled and claimed it was encroaching on the very “essence of life.”18 The flute player and duck resurfaced sporadically throughout Europe for some time, but both were eventually lost.

Vaucanson also made other lesser-known—but equally important—contributions to the foundations of robotics and AI. His aptitude for inventing, combined with the success of his automatons, allowed Vaucanson to become a crucial figure in the history of digital computing. In 1741, Louis XV of France appointed him Inspector of Silk Manufacture for the kingdom.19 In response to the challenges facing France’s lagging silk industry, Vaucanson designed his own loom. His automated loom was greeted with protests from weavers who “insisted that their skills were needed, that no machine could replace them.”20 Vaucanson’s invention would later serve as one of the bases for the Jacquard loom, which was the first machine to use a punch card system to control sequences of operations. This system was then adapted for use in the earliest digital computers in the mid-twentieth century.

18. Ibid., 15-31.
19. Ibid., 37.
20. Ibid., 38.

In this period, technical advances converged to make proto-robots and AI an ever-more viable endeavour. Importantly, during Vaucanson’s lifetime, the art of building anatomical models out of wax developed. A contemporary of his, Marie Grosholtz—better known as Madame Tussaud—perfected this craft.21 This development, along with the increasing social acceptance of human anatomical study, contributed to an environment where inventors like Vaucanson could thrive. These conditions were only further buoyed by support from the likes of Louis XV, who wanted his own humanoid automaton, complete with a circulatory system. He believed this would help surgeons increase their social stature, move away from the guilds of barbers, and establish their own profession.22 Vaucanson became embroiled in competition with surgeon Claude-Nicolas Le Cat to build the king’s automaton. Le Cat presented his plans to the French Academy of Sciences and concluded with the somber warning that “it will resemble a man too much.”23 This personal unease speaks to broader discomfort with creating or recreating life using the tools of science and technology. Despite social mores, Vaucanson pressed on in pursuit of the king’s ambition. The discovery of rubber by Europeans in the mid-1700s seemingly held great promise for this project. This material was unlike others in that it could be used to craft fine and flexible tubes for the automaton’s circulatory system. Ultimately, however, the supply of rubber and knowledge of how to use it were too limited. The project was abandoned, and Vaucanson’s role in the history of robotics and AI came to an end.24

21. Ibid., 43.
22. Ibid., 44.
23. Ibid., 49.

Elsewhere in Europe other curious and creative people were also fixated on automatons.

At the end of the eighteenth century, Wolfgang von Kempelen of modern-day Hungary emerged

as an important figure in this history. Like Vaucanson before him, he was also called a “modern

Prometheus” by his contemporaries.25 Von Kempelen built a chess-playing automaton

nicknamed The Turk, in reference both to its costume and to the “unknown, the spirit-like forces

of darkness” of popular Orientalist mythology of his time.26 Inventor and invention went on a

two-year tour of Europe and even played Napoleon, Catherine the Great, and other notable

opponents. The Turk was also the subject of an essay by Edgar Allan Poe and would tour the

United States and Canada in later years under the guardianship of Johann Nepomuk Maelzel. By

the close of the eighteenth century, von Kempelen’s chess player was an object of controversy.

Pamphlets began to circulate trying to “expose and detect” the “nature of the mechanism” at

work in the automaton.27 As Wood notes, this historical anxiety was “connected with the

question of what was human, and only human. A machine could not be intelligent.”28 Ultimately

the Turk proved to be an elaborate hoax and not an automaton at all, but simply by existing, it

advanced scientific and philosophical conversations about what it means to be human and create

24. Ibid., 51.


25. Ibid., 57.
26. Ibid., 58.
27. Ibid., 60.
28. Ibid.

such a counterpart. For nearly three generations, as a direct result of The Turk, these debates on

the nature of machine intelligence endured on both sides of the Atlantic.

In the later decades of the nineteenth century, Thomas Edison, motivated by desires

similar to those of Vaucanson and von Kempelen, established an elaborate industrial operation in New

Jersey. The scope of Edison’s lab in 1887 was impressive even by today’s standards: it occupied

22 acres, had 10,000 staff, and churned out inventions warranting nearly 550 patents.29 While he

is better known for his contributions to electric lights, clock mechanisms, and recorded sound

and images, Edison was also enamoured by automatons and invested heavily in the production of

dolls with recorded voices. Interestingly, his involvement in the history of robots and AI also

reveals a new kind of objection to the overall project of building humanoid counterparts. The

main theological and philosophical objection to Vaucanson and von Kempelen’s humanoid

automatons was that they too closely resembled humans. People were uneasy with the idea of

turning machines into humans. When it came to Edison’s work, the inverse was true. His

contemporaries protested his efforts to turn humans into machines. As precursors to Henry

Ford’s assembly lines, Edison’s factories were broken into separate, task-specific areas. This

allowed tasks to be distributed across a greater number of workers, increasing output. Clergy in

America objected to these trends. Some argued that humans would lose their “manhood” if they

assumed too much the role of machines in Edisonian production lines.30

Though Edison’s dolls are not his best-known invention, he invested significant

intellectual and material resources in them, often drawing on his other achievements to make

29. Ibid., 107.


30. Ibid., 113.

them as lifelike as possible. For example, his knowledge of phonograph production allowed

Edison to give the dolls human voices, and the joints used in the dolls had already been

developed as part of a “steam-excavating machine” for railways.31 Edison hoped his product

could compete with fine, expensive imports from Europe. To achieve this, he employed

hundreds, built new production facilities, and scaled down the tools and objects needed for the task.

Ultimately his efforts were in vain. Consumers balked at the dolls’ eerie voices and hard metal

bodies with a “perforated chest.”32 The finished product was also too heavy for young children to

play with easily. The project failed.

Edison’s fixation with dolls lived on in imaginary worlds. In 1886 Auguste Villiers de

l’Isle-Adam published The Eve of The Future. The novel featured a mythologized Thomas

Edison falling in love with a “mechanical, electrical, magical” perfect woman.33 This is a

nineteenth-century variation on the themes found in stories about Pygmalion and serves as

something of a commentary on Edison’s ambitious attempts to make a counterpart in the human

image. Literary imagination about such matters did not stop here. An early work of science

fiction indispensable to any history of robotics is Karel Čapek’s 1920 play R.U.R. (Rossum’s

Universal Robots).34 This work introduces the word “robot” in relation to humanoid

counterparts. Derived from Čapek’s native Czech, it means worker or slave. In this short

31. Ibid., 114.


32. Ibid., 117.
33. Ibid., 128.
34. Though Čapek’s work is among the most famous in western literature about robots and AI, others wrote of
similar themes at the same time or even before him. For example, Samuel Butler’s 1872 novel Erewhon is mostly a
satire of Victorian culture but also contains a section on machines achieving consciousness according to the
principles of Darwinism. Unlike Čapek’s R.U.R., it is the humans who rise up and destroy the machines before it is
too late. Karel Čapek, R.U.R. (Rossum’s Universal Robots) and War with the Newts (London: Gollancz, 2001);
Samuel Butler, Erewhon: or, Over the Range (London: Trübner & Co., 1872).

dystopian play, mass-produced humanoid automata are built as servants, “but in the end they

rebel, wipe out humanity, and start a new race of intelligent life themselves.”35

A generation later, Isaac Asimov began writing about robots and machine intelligence in

response to “unrealistically wicked or unrealistically noble” portrayals in science fiction to that

point.36 His best-known contributions to the genre include a 1950 collection of short stories, I,

Robot, which includes his famed Three Laws of Robotics:

1) A robot may not injure a human being, or, through inaction, allow a human being to
come to harm.
2) A robot must obey the orders given it by human beings except where such orders would
conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with
the First or Second Law.37
The short stories that make up I, Robot play with the limits and interpretation of these laws, often

showing how humans and robots can bend seemingly straightforward boundaries. In this way,

the collection is a drawn-out thought experiment about what can go wrong when humans try to

enact sophisticated programming in machines. After Asimov, interest in science fiction

representations of robots and AI only grew. Today they appear in all manner of media, ranging

from video games to manga to Hollywood blockbusters to dystopian novels. These

representations serve as an evolving record of how humans think and feel about increasingly

humanlike robots and AI.

35. Wood, 128.


36. John M. Jordan, Robots (Cambridge, MA: MIT Press, 2016), 32.
37. Ibid.

Alan Turing and the Turing Test

With the birth of AI research in the 1950s came renewed questions about litmus tests for

the success of machine intelligence. Here, Alan Turing emerges as a major influence on early AI

research and everything that follows. A British mathematician and early contributor to the

emerging field of computer science, Turing also played a crucial role in code-breaking efforts

during the Second World War. Despite his obvious genius and contributions to his homeland’s

national interests, he was persecuted for his sexual orientation, underwent experimental chemical

castration, and his career withered. He eventually died of a presumed suicide at the age of forty-

one.38

In the late 1930s Turing proposed what would eventually be called the Universal Turing

Machine, “an abstract machine that helped characterize what kinds of problems could be

computed.”39 Such thought experiments have helped give shape to the central processing units

found in computers today. His well-known 1950 essay “Computing Machinery and Intelligence”

tests not the mathematical limits of computational power, but the nature of machine intelligence

itself.40 The essay outlines a test based on a popular parlour game for judging machine

intelligence. In the game, a human and a computer answer a series of simple questions about hair

or poetry (for example), and perform simple calculations. Another human must then discern,

38. Caroline Davies, “Enigma Codebreaker Alan Turing Receives Royal Pardon,” The Guardian, December 24,
2013, accessed December 13, 2016, https://www.theguardian.com/science/2013/dec/24/enigma-codebreaker-alan-
turing-royal-pardon.
39. Asimov and Frenkel, 192.
40. Alan M. Turing, “Computing Machinery and Intelligence,” Mind 59, no. 236 (1950): 433-460. This article
was written a few years before the 1956 Dartmouth College conference that inaugurated a new era of artificial
intelligence research in the United States and English-speaking world and solidified its nomenclature. While Turing
speaks of machine intelligence, he is in fact referring to artificial intelligence.

using only the answers to the questions, which is the human and which is the machine. Turing’s

version of this game became known as the Turing Test, a controversial but indispensable aspect

of current AI discourse. The Turing Test features in the annual contest for the Loebner Prize, in

which research groups from around the world submit chatbot software that is pitted against a

series of human interlocutors.41 Judges then try to decipher which of these are human and which

are machine.42 As of early 2018, no AI has claimed the Loebner Prize by convincing more judges

than not that it was human, though the margin of victory for humans is narrowing and

researchers have elsewhere claimed Turing Test success for their AI.43 Importantly, in both the

game outlined in Turing’s article and in the competition for the Loebner Prize, success is judged

only by how a human perceives the machine’s intelligence. No consideration is given to the

engineering or programming of that intelligence, nor to whether it mimics or recreates the

processes of human cognition. All that matters is that humans interact with it as they would with

human intelligence.
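
The structure of the game — hidden respondents, and a judge who sees only their answers — can be sketched in a few lines of Python. This is purely an illustration of the protocol’s shape, not any actual contest software; the respondent functions and the random judge are hypothetical stand-ins:

```python
import random

# A minimal, purely illustrative sketch of the imitation game's structure.
# The respondent functions below are hypothetical stand-ins, not real chatbots.

def human(question):
    # Stand-in for a human's free-form answers.
    return "Count me out on arithmetic; ask me about poetry instead."

def machine(question):
    # Stand-in for a chatbot's canned answers.
    return "Processing... the answer is 105721."

def run_game(questions, respondents):
    # Each hidden respondent answers every question; the judge will see
    # only these transcripts, never the machinery that produced them.
    return {label: [r(q) for q in questions] for label, r in respondents.items()}

def judge(transcripts):
    # The judge's only evidence is the answers themselves. Here the guess
    # is random; a real judge reasons over the transcript content.
    return random.choice(sorted(transcripts))

questions = ["Please write me a sonnet.", "Add 34957 to 70764."]
transcripts = run_game(questions, {"A": human, "B": machine})
print(judge(transcripts))  # prints "A" or "B"
```

The sketch makes the point in the paragraph above concrete: nothing in the judging step inspects the engineering of either respondent, so success is defined entirely by human perception of the answers.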

Turing’s landmark paper also anticipates a number of objections to machine intelligence

and then refutes them. He identifies one explicitly theological objection: “Thinking is a function

of man’s immortal soul. God has given an immortal soul to every man and woman, but not to

41. “The First Turing Test,” Home Page of the Loebner Prize in Artificial Intelligence, accessed October 11,
2013, http://www.loebner.net/Prizef/loebner-prize.html.
42. For a detailed narrative account of the Loebner Prize competition see Brian Christian, The Most Human-
Human: What Talking with Computers Teaches Us About What It Means to Be Alive (New York: Doubleday, 2011).
43. See for example Giles Fraser, “A Computer has Passed the Turing Test for Humanity – Should we be
worried?,” The Guardian, June 13, 2014, accessed August 21, 2014, http://www.theguardian.com/commentisfree/
belief/2014/jun/13/computer-turing-test-humanity. This result is questionable because researchers gave the chatbot
the persona of a non-native English speaker who was speaking English. This makes awkward expressions and poor
grammar much more forgivable. The Loebner Prize is conducted in English, therefore reinforcing the global,
colonial status of the language. In restricting it to English the contest puts those who speak and understand the
language well at an advantage.

any other animal or to machines. Hence no animal or machine can think.”44 Turing rejects this by

saying that animals and humans are more alike than humans and machines, so it makes little

sense to classify animal and machine together in the non-thinking category. He also notes that

the concept of soul may have different or no currency in other religions.45 This means that such

theological objections cannot be levied against AI because they are not universal. He goes on to

suggest that this theological objection even limits the freedom and omnipotence of God. In

Turing’s estimation, “should we not believe that He has freedom to confer a soul on an elephant

if He sees fit?”46

Rationale

Alan Turing, Marvin Minsky, John McCarthy, and countless others contributed to the early days

of AI research, expanding the frontiers of science and technology. These first pathways

eventually led to sophisticated robots and AI systems implicated in all aspects of human life,

from warfare and espionage to remote-controlled surgery and self-driving cars. Today

applications of robots and AI proliferate at an astonishing rate thanks to academic,

military, and commercial interests. Robots, AI, and the rest of the world exist in ever-shifting

equilibrium. Geopolitics, philosophy, culture, and history all inform this research as much as

scientific ambition. All this makes for a landscape ripe for theological, philosophical, and

technical inquiry, where the future remains permanently indeterminate. As one commentator

appropriately notes, AI wrestles with the “paradoxical notion of a field of study whose major

44. Turing, 443.


45. Ibid.
46. Ibid.

goals include its own definition.”47 There is no firm consensus on the definition of robots or AI

within robotics and AI research. While some elements remain fairly consistent, the boundaries of

these categories are blurry at best. For this reason, this project will not attempt to force consensus

where there is none. Instead, I will highlight relevant aspects of the definition of robots and AI as

they arise in Hans Moravec, Rodney Brooks, Cynthia Breazeal, Heather Knight, Anne Foerst,

and Noreen Herzfeld.

Though the first humanoid robots and AI systems date only to the 1990s, they are already

displaying behaviours and actions that were once known only in humans. This makes for a rare

moment in evolutionary history. Not since the time of the Neanderthals have humans come face-

to-face with something so humanlike in approximating our unique constellation of cognitive,

social, and motor skills. For a very long time, humans have been “alone in the world.”48 Robots

and AI challenge all manner of beliefs about the human, especially that we are somehow special

or unique. In this light, theologians have warned against hinging human worth on “constitutional

uniqueness,” which is all too easily dissolved by advances in strong AI.49 Robots and AI also

challenge human social and evolutionary futures. Some commentators argue that these artifacts

47. George Luger, Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 5th ed.
(Reading, MA: Addison-Wesley, 2005), 2. See also Russell C. Bjork’s article for different and incompatible
definitions of AI and how these might affect theological responses. Russell C. Bjork, “Artificial Intelligence and the
Soul,” Perspectives on Science and Christian Faith 60, no. 2 (2008): 95-102.
48. The phrase “alone in the world” refers to J. Wentzel van Huyssteen’s work on theological anthropology,
which emerged from his 2004 Gifford Lectures and contributes to my responses found in later chapters of this thesis.
J. Wentzel van Huyssteen, Alone in the World? Human Uniqueness in Science and Theology (Grand Rapids, MI:
Eerdmans, 2006). Others have noted the distinctiveness of this moment in history. MIT historian Bruce Mazlish, for
one, framed this new age in terms of a “fourth discontinuity,” a world-changing “ego-smashing” for humans. The
first three were brought about by Copernicus, Darwin, and Freud, respectively. This fourth discontinuity is marked by
fluid boundaries between "man and machine" and increasing continuity between humans and the technologies they create.
Bruce Mazlish, “The Fourth Discontinuity,” Technology and Culture 8, no. 1 (January 1967): 2.
49. Bjork, 99.

of science and technology, and their role in a new age of digital innovation, change us to the

point where we are no longer human but “cyborg, or posthuman, or techno-sapien.”50

In robotics and AI research there is little distinction between the laboratory and the rest of the

world, between applications in war and in healthcare, and, increasingly, between humans and

their robot and AI creations. For example, the same technology that was used in the Canadarm for

space shuttle repairs was modified for heart surgery in small children.51 Similarly, technology

that “cleaned up cluster bomblets from airfields” is the same technology that now whirrs around

living rooms sucking up dirt and dust.52 These conflated industrial, biomedical, and military

applications of robotics and AI bring into sharp relief the need for contextually grounded and

critical theological research on the topic. In the United States, a global power in robotics and AI,

these interests are especially intermixed and university research in this area is largely funded by

the military.53 Academic freedom is threatened when researchers must increasingly balance the

need for lucrative military contracts with the integrity of their research programmes. As warfare

changes, nation-states and non-state actors without access to such resources stand unprepared

50. Gregory R. Peterson, “Imaging God: Cyborgs, Brain-Machine Interfaces, and a More Human Future,”
Dialog 44, no. 4 (2005): 338. Even the first theological responses to robots and AI were seemingly aware of the
power of these technologies to impact human lives and the world as a whole. Allen Emerson and Cheryl Forbes, for
example, writing in the mid-1980s remarked that robots and AI might even be more revolutionary than the invention
of writing and printing. Allen Emerson and Cheryl Forbes, “Living in a World with Thinking Machines: Intelligence
Will Not Separate People from Machines,” Christianity Today 28, no. 2 (1984): 14.
51. “SickKids receives $10 million in funding to support medical research and the development of KidsArm,”
SickKids Newsroom, March 10, 2010, accessed August 30, 2011, http://www.sickkids.ca/ AboutSickKids/
Newsroom/Past-News/2010/KidsArm.html.
52. P. W. Singer, Wired for War: The Robotics Revolution and Conflict in the 21st Century (New York:
Penguin, 2009), 22.
53. Singer notes that in a five-year span after the September 2001 terrorist attacks on the United States the
“annual national defense budget has risen by 74 percent, to $515 billion. This figure does not include the several
hundred billion dollars additionally spent on the cost of operations in Afghanistan and Iraq, which have been funded
in separate budget supplementals.” The Pentagon also has a so-called ‘black budget’ that remains classified. Ibid.,
61.

and vulnerable. This new reality simultaneously rejects and protects the value of human life. On

one hand, deploying robots and AI systems protects the lives of military personnel. On the other,

it makes the destruction of enemy infrastructure and lives even more effective, rapid, and

creative. Meanwhile, medical robots and AI systems remain largely confined to

well-funded health care systems, accessible only to the world’s very richest. Robotics and AI are

increasingly put to service in surgery, diagnostics, and an array of therapies;54 they are applied in

prosthetic devices, exoskeletons, and brain-computer interfaces. Though still in their infancy,

these technologies will figure prominently in the future treatment of spinal cord injuries, vision

loss, paralysis, and so on. They also create new social inequality. For example, superior

prosthetic limbs cost tens of thousands of dollars (excluding research and development costs)

and are therefore inaccessible to all but the most privileged patients who have access to high-

quality private or state-funded health care. A good prosthesis can significantly increase an

individual’s potential to gain or regain autonomy, income-earning potential, and a social life.

Through the development of sophisticated robotic prostheses, humans can gain or regain

functionality that was either missing or underdeveloped at birth or lost later on. For those who

are paralyzed, this means that through thought alone they can again interact with their world in

significant ways (e.g., manipulate a cursor on a computer, change channels on a television set,

turn lights on and off). Researchers connect the human brain and computer through brain-

computer interfaces (BCIs). These devices take a number of forms, roughly divided between

invasive (where arrays of sensors are embedded in the brain) or non-invasive (where electrodes

54. For example, da Vinci robot systems are increasingly used for minimally invasive surgical procedures as
part of cancer treatment. See “da Vinci Surgery: Minimally Invasive Surgery,” da Vinci Surgery, accessed October
11, 2013, http://www.davincisurgery.com/.

resting on the surface of the skull read the electromagnetic patterns emitted by the brain).55

in the earliest incarnation of these technologies, scientists imagined they would “elevate the

computer to a genuine prosthetic extension of the brain.”56 For those who do not have typical

control over human limbs and joints, this means they might play and work and care for

themselves in ways that are comparable to the phenotypically expected human form.57 There is

even serious conversation about the potential of robotics and AI to augment the human form not

just for therapy but for enhancement.58 These technologies are often a result of the enormous

financial and political interests supporting military applications of robotic and AI systems. For

example, the rugged terrain of Afghanistan demands very sturdy, very dexterous robots to assist

or replace human military personnel. Military support for this kind of research and development

might ultimately benefit anyone in need of more affordable, higher-functioning prosthetic limbs

or AI systems to compensate for compromised natural abilities as those technologies developed

55. The ambitions for brain-machine interfaces (BMIs) or brain-computer interfaces (BCIs) are high. Gregory
R. Peterson notes that some researchers see these expressions of robotics and AI technologies as a primary pathway
to mobility in paralyzed individuals. Peterson indicates that some researchers have “successfully connected a device
to the motor cortex of the brain of a quadriplegic human volunteer, who is able to successfully move a computer
cursor about on a television screen, to check his email, change the volume or doodle.” Peterson, 337.
56. Jacques J. Vidal, “Toward Direct Brain-Computer Communication,” Annual Review of Biophysics and
Bioengineering, no. 2 (1973): 158.
57. For example, there are devices that users fit over different parts of the body. Noreen Herzfeld (as further
discussed in Chapter Three) names the Active Ankle-Foot Orthosis (AAFO) from MIT and the Berkeley Lower
Extremity Exoskeleton (BLEEX) as important examples of this kind of technology. The latter was developed with
funding from Defense Advanced Research Projects Agency (DARPA), which is an agency of the United States
Department of Defense. Noreen Herzfeld, Technology and Religion: Remaining Human in a Co-Created World
(West Conshohocken, PA: Templeton Press, 2009), 47.
58. On this point, Noreen Herzfeld discusses the case of Oscar Pistorius who was the first amputee sprinter to
be ruled eligible to compete in the Olympics against protests that his prosthetic limbs were a competitive advantage.
In 2015, Pistorius was convicted of murdering his girlfriend, Reeva Steenkamp. Ibid., 48. Gregory R. Peterson also
comments on this close relationship between therapy and enhancement. He suggests that the line between therapy
and enhancement is context dependent and always blurry. He uses as an example the implantation of electronic
devices in the brain as a treatment for obesity. This is simultaneously therapeutic because it secures the health of the
individual by keeping at bay obesity related diseases and enhancement because size and body image are significant
sources of social stigma in the United States. Peterson, 340.

for warfare are later adapted to life at home.59 Such examples illustrate how, in robotics and AI,

hope and alienation are always mixed.60

While the interplay of military and biomedical interests is a major motivator of robotics

and AI research, commercial and industrial applications are also an inextricable force in an

increasingly roboticized world. Today robots are deployed in warehouses to expedite cycles of

consumption. Soon, delivery drones will take to the sky, making online shopping even faster

than traditional brick-and-mortar retail. AI is at work everywhere in digital spaces, subliminally

nudging the consumer toward the acquisition of more and more material goods. All of this comes

at a steep price for the health and wellbeing of the planet, its ecosystems, and those millions, if

not billions, of humans who are exploited for the sake of keeping the comfortable classes

comfortable.

The examples above show the widespread and profound impact of robots and AI in our

world—an impact that only intensifies with each passing month and year. As the age of robots

and AI continues to unfold, theologians must be equipped to respond to these changes in history

on theological and ethical grounds, emerging from already established intellectual traditions.

Efforts to address the significance of these technologies in theological terms remain somewhat

scattered. Research projects like this one are an essential step in developing this new area of

interdisciplinary theological discourse. Bringing together the disconnected array of theological

59. Singer notes that there is movement in the United States military to use humanoid robots as soon as
possible. This will benefit people in need of artificial limbs as there will undoubtedly be a surge of research activity
in developing robotic joints, limbs, and systems that can better mimic or improve upon human behaviour and
movement. Singer, 98.
60. The idea that these technologies blend alienation and liberation comes from Gregory R. Peterson who
suggests that this divide should be the theological and ethical litmus test for assessing the impact of these
technologies. For example, people who are paralyzed could experience profound liberation should a brain-computer
interface restore their ability to interact with and manipulate the material world. Peterson, 343.

resources on robots and AI is a critical first step in developing theological responses, identifying

areas for ongoing research, and understanding the multiple social justice and ethical issues

embedded in robotics and AI research. The chapters that follow focus on the robot as a

counterpart to humans, emphasizing robot embodiment and the boundaries between AI and

human intelligence. Through real and virtual hybridities, robots and AI challenge interpretations

of the human and our status in the world. These blurry areas, where the conventional definitions

of the human start to dissolve, give rise to theological, philosophical, and ethical questions that

remain largely unexplored in theological literature.61

Method

This research project draws on a postfoundationalist understanding of interdisciplinarity. A

leading exponent of this approach is Wentzel van Huyssteen, professor emeritus at Princeton

Theological Seminary, who has spent many years developing it, especially in relation

to the natural sciences and questions about human uniqueness. As his theological thought

matured, van Huyssteen moved from general interest in science and theology, to developing a

new interdisciplinary approach, to testing the transversal moment where these two reasoning

strategies meet on questions of human uniqueness. The origin of van Huyssteen’s impressive

work in these three areas reaches back decades, to his formative years as a young South African

theologian. In the midst of apartheid, with no meaningful end in sight, he saw churches and

61. Many theologians, philosophers of religion, and interdisciplinarians have contributed just one or a few
articles to the topic, often as a tangent to their primary research interests. There is much fruitful ground for
developing a robust theological response to robots and AI in these contributions. For example, Robert F. DeHaan
published a short piece in 2000 that is interesting for its comparison of three different ways—scientific, popular, and
theological—of interpreting a recent advance in self-replicating robots. Robert F. DeHaan, “Robotics: Darwinism,
Intelligent Design, and Genesis,” Perspectives on Science and Christian Faith 52, no. 4 (2000): 231-232.

church leaders lend theological support for racial segregation.62 “Doctrinal foundationalism,

biblical fideism, and ecclesial authority contributed to a world infused with violence of every

ilk.”63 These observations prompted van Huyssteen to question not only the claims of his

theological tradition, but the very methods by which these were developed. This turn to

methodology was—and remains—a concern for justice and liberation above all else. From the

outset, these methodological concerns were also necessarily about interdisciplinarity, particularly

the relationship of science and theology. In scientific reasoning, van Huyssteen sees both

challenge and opportunity for theologians. Science, as we know it in the west, is impressive in

both its claims and methods. The products of scientific reasoning touch every aspect of modern

life; science and scientists are often afforded generous public perception in secular societies, and

the enterprise does well in resolving its own mysteries and advancing human knowledge. For

these reasons, theology must understand and consider scientific reasoning when it seeks to make

sense of the world and the human role in it. In an expression of epistemological humility, van

Huyssteen recognizes that theological reasoning alone cannot satisfy the whole of human

curiosity, and that contact between reasoning strategies can ultimately only strengthen theological

method.64

The aims of postfoundationalism are plural. Foremost among them, it seeks a middle

ground between foundationalism and nonfoundationalism, both of which (according to van

Huyssteen) are unviable starting points for robust interdisciplinary discourse. Van Huyssteen

62. Wentzel van Huyssteen, Theology and the Justification of Faith: Constructing Theories in Systematic
Theology (Grand Rapids, MI: William B. Eerdmans, 1989), x.
63. Erin Green, “A Primer in Interdisciplinarity: J. Wentzel van Huyssteen and the Post-Foundational
Approach,” Toronto Journal of Theology 27, no. 1 (2011): 28.
64. J. Wentzel van Huyssteen, Duet or Duel? Theology and Science in a Postmodern World (London: SCM
Press, 1998), xiv.

appreciates that it is a monumental—if not endless—task to describe foundationalism and

nonfoundationalism, and their role in western philosophy and theology. He argues, however, that

even if difficult to define, both these dominant approaches have failed theological reasoning and

its interaction with scientific claims and methods. In sum:

Foundationalism is an affinity for the terra firma of empiricism, logic, and objectivity.
This is an approach that embodies epistemic confidence. It is also an approach that
affords science the perception of a public, objective, empirical, and justifiable endeavour.
In contrast foundationalism affords theology the perception that it is private, full of
compromising values, speculative and closed to justification.65

This understanding of human reasoning has not served theology well. It leaves no place for

theological reasoning except as science’s antithesis. If science is public, then theology is private.

If science is objective, then theology is subjective, and so on. Such an approach makes

meaningful interdisciplinary engagement practically impossible. It presupposes the superiority of

science and does little to honour the nuance and complexity of both science and theology as

reasoning strategies. Conversely, theological foundationalism also limits interdisciplinarity, as

evidenced by the historical context that originally prompted van Huyssteen’s efforts.

Nonfoundationalism emerged as a critique of foundationalism, yet ends up being equally

problematic for interdisciplinary theology. For example, as van Huyssteen notes, “the whole idea

of unity, totality, identity, sameness, and consensus, is . . . radically rejected by postmodernity in

all its various constructive and deconstructive forms.”66 Commitment to these epistemic values

leads to endless relativism and hypercontextuality. This again makes interdisciplinary dialogue

very difficult as there are few grounds for adjudicating the way forward or how to articulate the

65. Green, “A Primer in Interdisciplinarity,” 30.


66. J. Wentzel van Huyssteen, The Shaping of Rationality: Toward Interdisciplinarity in Theology and Science
(Grand Rapids, MI: William B. Eerdmans, 1999), 24.

relationship between science and theology as influential reasoning strategies. Both

foundationalism and nonfoundationalism leave theology isolated and without the benefits of rich

interdisciplinarity.

Postfoundationalism tries to find a middle ground between these two approaches, and as

such is also difficult to define. Van Huyssteen has spent some thirty years working to give shape

to this moderate, constructive form of interdisciplinarity. This is well articulated in his numerous

publications, and especially in Duet or Duel?, The Shaping of Rationality, and his magnum opus,

Alone in the World?, which emerged from his 2004 Gifford Lectures. As an intermediate

position, postfoundationalism tries to balance a number of opposing features of foundationalism

and nonfoundationalism. For example, postfoundationalism is suspicious of both universality and

hypercontextuality and instead tries to honour local experience while always connecting it to

broader trends and historical settings. This continues with balancing objectivity and subjectivity,

theory and practice, and even science and theology. Importantly, postfoundationalism seeks

moments where two or more reasoning strategies converge on shared concerns, research

questions, or objects of study. This transversal rationality is an integral feature of the

postfoundationalist approach to interdisciplinarity. Importantly, transversality expects to find

points of divergence as well as convergence. It is not about over-emphasizing what theology and

science might have in common. In van Huyssteen’s words, this approach is about “exposing

areas of disagreement and putting into perspective specific divisive issues that need to be

discussed.”67 In terms of interdisciplinary methodology, this means that roboticists and

theologians “should be empowered to identify the rational integrity of their own disciplines by

67. van Huyssteen, Alone in the World?, 9.



offering their own sources of critique and justification.”68 Current theological and scientific

inquiry into robotics and AI is just such a moment. Robots and AI bring together diverse

reasoning strategies on common questions about what it means to be human and how we should

live our lives and organize our societies. These questions, shared by roboticists and theologians

alike, include ones related to intelligence, uniqueness, and development. For responses to these,

postfoundationalism looks to both science and theology, but does not limit itself to

traditional academic disciplines. Instead, such an approach demands engagement with

application, public forms of discourse, discernment in community, and more. It is a lived

interdisciplinarity that is informed not only by the theories behind robotics and AI, but also by the

applications that surround us and lie in near futures.69 Such an approach relies on a broad range

of sources and methods for inspiration. It retains the role for experts, but grounds this expertise

in community and communal reasoning. This encourages contact with traditions of enquiry,

while opening space for creativity and innovation in pursuit of new interdisciplinary moments. It

also assumes that the quest for optimal intelligibility is revealed in what we do, what we make,

how and when we speak, and so on. This experiential dimension is particularly important to the

theological study of robotics and AI, given the materiality of the subject. Makers of robots and

AI systems want these objects to be social actors, accessible through multimedia and

interactivity, and integrated into daily human life. It is through these day-to-day experiences that

theological reasoning about robots will take shape. In this spirit, this research project uses the

term ‘the human’ instead of ‘theological anthropology.’ This broader category is helpful for

interdisciplinary discourse about robots and AI for several reasons. It encompasses discourse

68. van Huyssteen, The Shaping of Rationality, 3.


69. Ibid., 136.

about theological anthropology, that is, explicitly theological conversations about the human and

our relationship with God, but also includes non-theological reflections. David Kelsey notes that

traditional theological anthropology is preoccupied with three areas of inquiry: defining the

human, understanding the human in the world, and discerning personal and communal identity.70

These broad categories, especially when rearticulated using the language with currency in

science and technology, are suitable for interdisciplinary discourse about robots and AI. This

move toward ‘the human’ as preferred term also helps imagine us as a species among species—

Homo sapiens—which facilitates dialogue with contextual and ecological theologies that work to

deconstruct overly anthropocentric worldviews. The more evocative term also helps bring

theological thinking into closer dialogue with roboticists and AI researchers who very often

simply speak of the human, especially in popular writing and public discourse.

Postfoundationalism is a constructive and optimistic approach to interdisciplinary

theology, and encourages close interaction between theological reasoning and the products of

robotics and AI research. It honours the success and influence of the natural sciences in the

world, while also acknowledging the significant contributions of theological reasoning. From a

postfoundationalist view, science offers the world a great deal in terms of clarity, intelligibility,

and optimal understanding, and this should be appreciated through interdisciplinary theological

reflection. Engagement with science enhances a theologian’s ability to speak a contextually

relevant message and as such postfoundationalism is well suited to the theological challenges

presented by increasingly humanlike robots and AI. Much of my master’s research, including my

70. David Kelsey, “Personal Bodies: A Theological Anthropological Proposal,” in Personal Identity in
Theological Perspective, eds., Richard Lints, Michael S. Horton, and Mark R. Talbot (Grand Rapids, MI: Wm. B.
Eerdmans Publishing Co., 2006), 140.

thesis, was devoted to the study of this approach to interdisciplinarity and evaluating its merits,

especially against the values and priorities of contextual theology in the United Church of

Canada.71 This current research project, then, is the culmination of these efforts and a case study

in postfoundationalism not unlike what van Huyssteen attempted in Alone in the World?. Here,

he put his postfoundationalism into practice looking at human uniqueness from an

interdisciplinary perspective. Similarly, this project is a case study about the human and testing

postfoundationalism as an approach to the relationship between science and theology. Following

in the heritage of van Huyssteen, I take seriously his charge to drop “talk about ‘theology and

science’ in any generic, abstract sense” and replace this with a detailed, contextual account of

robots and AI.72 With this in mind, the chapters that follow focus on specific theologians brought

into dialogue with specific roboticists about specific questions. These efforts are supported by

contextual analysis and recent examples from robotics and AI research.

Objectives

This project has three main objectives. First, it argues for the necessity of sustained theological

research into issues in and approaches to contemporary robotics and AI research. The project

gives specific attention to the rise of humanoid robots and AI (i.e., robots and AI systems

inspired by and designed for social interaction with humans) and to their military, commercial,

and biomedical applications as outlined in the rationale above. Such robots and AI systems touch

on all aspects of life. The changes they bring and will continue to bring challenge theologians to

imagine anew what it means to be human. Second, it collates and organizes a hitherto fractured

71. Erin Green, “J. Wentzel van Huyssteen and Interdisciplinarity: Is the Postfoundationalist Approach to
Rationality an Intellectually Viable Way of Relating Theology and the Natural Sciences?” (master’s thesis,
University of St. Michael’s College, 2007).
72. van Huyssteen, Alone in the World?, 40.

body of theological literature on robotics and AI. This process makes clear the gaps in

theological treatment of robotics and AI, especially with respect to understandings of the human.

It also makes clear that robotics and AI is an area of study requiring insight from Christian

theology and ethics, keen contextual awareness, and well-developed interdisciplinarity. Third, it

outlines the elements of a more complete theological response to robotics and AI by identifying

lacunae in the existing theological literature and suggesting directions for how they might be

addressed.

The first objective is accomplished in two ways. The first, described in this introductory

chapter, is a methodological argument influenced by van Huyssteen’s postfoundationalist

approach to interdisciplinarity. Van Huyssteen argues extensively and persuasively that theology

must engage with history as a whole and the natural sciences in particular.73 An examination of

both the claims and methods of the natural sciences is integral to this approach. This entire

project, therefore, is a case study in postfoundationalist interdisciplinarity. The second argument

for sustained theological treatment of this subject takes place through a survey of influential

robotics and AI research from Hans Moravec, Rodney Brooks, Cynthia Breazeal, and Heather

Knight. The discussion of their work highlights the world-changing military, biomedical, and

social applications of these technologies and pivotal moments in their development. This survey

orients a theological reader to some of the key themes and debates in contemporary robotics and

AI research and highlights its importance as an object of theological inquiry. It also illustrates the

73. See J. Wentzel van Huyssteen, Duet or Duel: Theology and Science in a Postmodern World (London: SCM
Press, 1998); The Shaping of Rationality (1999); and Alone in the World? Human Uniqueness in Science and
Theology (2006).

impressive scope of robotics and AI research, its pervasiveness in day-to-day life, and its

potential as a destructive historical force.

The second objective is accomplished through analysis of three distinct treatments of

robotics and AI within theological scholarship and identification of how theologians align with these

treatments. The first sees robots and AI largely as a tool for human self-understanding. Thinkers

in this group find that robots and AI reveal to us the complexity of the human mind, the limits of

human creative activity, and the elements of human uniqueness. The work of Anne Foerst, a

theologian with a background in computer science and experience working at MIT with Cog and

Kismet (two important early efforts in humanoid robotics), is central to the development of this

first treatment. She sees humanoid robots as an exciting new opportunity for theological

consideration of the imago Dei. Her contributions are largely uncritical of robotics research and

leave room for improved contextual awareness and insight from Christian ethics. This opens the

possibility for a second kind of treatment of robotics and AI. Here, theologians argue that robots

and AI are able not only to influence human self-understanding, but also to raise difficult

relational and ethical questions. Thinkers in this group, most notably Noreen Herzfeld, explore

some of the ethical and historical questions brought about by the proliferation of robots, AI, and

related technologies. A third kind of treatment of robotics and AI is only partially developed in

theological literature. This third kind subsumes, but also goes beyond, the concerns of the other

two. This treatment takes into account the concerns about self-understanding,

relationships, and doctrines outlined above, but also attends to some of the gaps in the

first two treatments of robotics and AI. These gaps include their lack of robust

contextual analysis, their neglect of ethical questions, and the limited scope of their

understandings of the human.



No single author or work yet represents this new kind of treatment of robotics and AI and

so it is developed here in this research project as a novel approach. Unlike the first two kinds, of

which Foerst and Herzfeld are good representatives, this third way is drawn together from partial

contributions from a variety of sources, especially contextual and eco-theologies.

The third objective of the thesis outlined above responds to the challenges of robotics and

AI by drawing on insights about the human found in ecotheology, combined with insights

emerging from postfoundationalist interdisciplinarity. This approach also makes room for critical

contributions and insights from ecotheologians like Thomas Berry and Sallie McFague, who are

not represented in the first two treatments of robotics and AI in theological literature. Applying

these insights to questions about robots and AI forms the bulk of the novel, creative work of this

thesis and is essential for developing rich ethical and contextual theological reflection on

robotics and AI. Introducing these thinkers to the dialogue is also important in critiquing the

understanding of the human found in writing about robots and AI to date. So far, theologians

writing about robots and AI tend to espouse an individualistic anthropology that focusses on

traits associated with the human mind or individual human bodies. Ecotheologians, however,

argue for a worldview grounded in cosmogenesis—the whole universe story—which

decentralizes the human as individual and promotes holistic thinking about both the human and

its broader context in creation.74 These critical voices reveal that the anthropology found in

theological literature on robotics and AI represents an incomplete portrayal of the human. They

challenge the focus on individuals as the locus of wisdom and relationality and expand

theological imagination to include all webs of relationship, including societies and ecosystems.

74. The term cosmogenesis and its implications for this project are given fuller treatment in Chapter Four.

Using this approach inspired by ecotheology and other contextual thinkers, interdisciplinary

theologians interested in robotics and AI can better understand the wisdom, embodiment, and

relationality embedded in humans and any mechanical counterparts we bring into existence.

Overview

This research project is organized according to the three different treatments of robotics and AI

outlined above, with each forming a chapter. An overview of key figures in robotics and AI

precedes this theological analysis and forms its own chapter.

Chapter One. This chapter examines the contributions to the fields of robotics and AI by

four influential researchers: Hans Moravec, Rodney Brooks, Cynthia Breazeal, and Heather

Knight. The first three—Moravec, Brooks, and Breazeal—have shaped and continue to shape the

overall directions of robotics and AI the world over. Their contributions go beyond strict

scientific and technological boundaries. Moravec is an enthusiastic supporter of posthuman

futures and has written detailed speculations about future humans in light of increasingly

sophisticated robots and AI. He was also one of the earliest researchers to place AI systems in

mechanical bodies (i.e., robots) and take on the challenge of autonomous locomotion. Brooks

founded the embodied AI camp, which contrasts sharply with classical AI in its argument that

intelligence is radically informed by embodiment and interaction with environments. Brooks is

also notable for his industrial and military interests, particularly through iRobot and Rethink

Robotics. Breazeal is an important figure in this research project due to her important role in

social robotics, which argues that AI must not only be embodied (as Brooks asserts) but also

emerge socially (i.e., gradually through interaction with other actors, as is seen in human

development). Heather Knight is worthy of consideration in this chapter, as her methodology



offers new and important challenges to theology. Her work is also in the vein of social robotics,

but she goes further than Breazeal in her efforts to introduce the products of robotics and AI ‘into

the wild’ (i.e., outside the laboratory) and capture a broader, non-scientific audience. This first

chapter gives an overview of the challenges theology faces when dealing with robotics and AI,

and offers basic critique and reflection on each of these thinkers in preparation for the following

chapters devoted to theological treatments of robots and AI.

Chapter Two. This chapter covers theological approaches to robotics and AI that largely

treat these products of science as means by which humans can learn more about themselves. The

foremost thinker in this category is Anne Foerst, who worked on the famed Cog and Kismet

projects while researching with Rodney Brooks at MIT. Foerst characterizes her approach to

robotics and AI as a symbolic one. This approach holds that “both scientific and theological

descriptions of reality can be seen as partly constructed.”75 Foerst argues that within the scope of

her approach “the term symbol is used in a transcendent sense” and that a symbol “brings

together two different spheres; it points beyond itself to something else and participates in both

realities.”76 Finally this chapter also contains a critique of this approach, including its lack of

contextuality, absence of ethical considerations, and its reliance on an individualistic view of the

human.

Chapter Three. This chapter focusses on the ways robots and AI can change humans, our

relationships, and the way we understand God and each other. This treatment brings robotics and

AI into conversation with the imago Dei among other Christian concepts. Related issues

75. Anne Foerst, “Cog, a Humanoid Robot, and the Question of the Image of God,” Zygon 33, no. 1 (1998):
96.
76. Ibid., 97.

discussed include the potential for humans to make gods out of ourselves or our robot creations,

and the ways in which robotics and AI can be part of religious communities, have rights and

responsibilities, and so on. The foremost figure in this category is theologian and computer

scientist Noreen Herzfeld. Her work attends to historical changes in the interpretation of the

imago Dei and how this is mirrored by similar shifts in approaches to AI research. Compared to

Foerst and others, she gives more attention to ethical concerns and does a better job of analyzing

the historical setting of robotics and AI research. Herzfeld’s approach, however, also leaves

room to further develop contextuality and appreciation for the ethical issues embedded in all

conversations about robots and AI.

Chapter Four. This chapter outlines the elements of an approach to robotics and AI not

yet employed in theological scholarship. It takes into consideration the work done to this point

and the questions emerging from it. It acknowledges the potential for robots and AI to impact

theological understanding of the human, and affect how humans understand and relate to each

other and God. Importantly, this chapter identifies the shortcomings of the treatments discussed

in Chapters Two and Three, especially relating to the human, contextual awareness, application of

robotics and AI, and methodology, and suggests how theology might address these matters more

completely. First, it sketches a view of the human that responds to questions raised by Foerst,

Herzfeld, roboticists, and others by applying insights from contextual and ecological theology

(e.g., Thomas Berry and Sallie McFague) to argue for a stronger sense of the embodied and

holistic qualities of the human. This approach also contains considerations emerging directly

from contextual theology, which counters the rather one-dimensional image of the human often

found in discourse about robots and AI. The contextual analysis entailed in this approach also

considers factors like race and gender in the development and portrayal of robots, attention to

which is essential for developing interdisciplinary discourse that promotes justice and dismantles

oppressive structures both in theological reasoning and in development and application of

robotics. This chapter also returns to the industrial, biomedical, and military concerns that

motivate much of this research project. These include the use of industrial robots that expedite

and facilitate unsustainable patterns of consumption. Some of the biomedical applications named

above are discussed in greater detail, alongside related ethical considerations. Finally, military

applications of robotics and AI are discussed in greater detail as a critical global force requiring

both theological reflection and advocacy from a faith-based perspective.

The outline of this more complete approach concludes with a brief return to some

methodological issues that will help propel the discourse further. While not at the core of the

approach outlined here, attention to these issues—such as the selection and use of sources and

attending to interdisciplinary diversity—supports other efforts made throughout the thesis to

ensure that methodological issues are not set aside when discussing robotics and AI.

Conclusion. This final chapter summarizes the work presented throughout and clarifies

why this project is important for interdisciplinary research. It highlights some current work

undertaken by churches and seminaries to further develop this field, and makes proposals for

future research and action. The conclusion also outlines the limitations of this research and

rearticulates the implications of this project for the theological community.

Implications

This research project seeks to create new possibilities for theological enquiry into robotics and

AI, and as such has unforeseen implications that will become clearer as the discourse unfolds.

Among the most important implications so far are the role such a project plays in identifying and

translating key issues in robotics and AI for a theological audience. This research project also

takes on the significant task of defining different approaches to robotics and AI through collating

and organizing research in this emerging interdisciplinary field. Such a contribution will

facilitate further theological reflection and helps identify critical gaps in existing theological

discourse. These gaps include both ones relating to content (e.g., calling into question Anne

Foerst’s omission of military robots) and to methodology (e.g., identifying the ways in which

popular culture impacts academic theological discourse about robots and AI). The contrast

between two existing theological treatments of robots and AI gives an evaluative tool for future

proposals in this area. Most importantly, this project offers an important constructive approach

to robots and AI that is not yet well developed in theological literature. In addressing some of the

questions unanswered by other contributors, this work advances discourse in a constructive and

contextually grounded way. Its consideration of the constellation of ethical and theological

questions also encourages theological thinking about robotics and AI that takes the experience of

power, privilege, oppression, and violence far more seriously than any theological thinking to

date. If theologians continue in this way, robots and AI may become yet another opportunity to

pursue justice in this world and understand more deeply how to uphold the dignity of all in word

and action.
Chapter One

Introduction

Since the advent of AI research in the 1950s and humanoid robotics shortly thereafter, a small

constellation of researchers has held enormous influence in the English-speaking robotics and AI

world. These foundational thinkers have not only been responsible for technical advances in their

fields, but have also applied their talents and imaginations to broader questions of metaphysics,

methodology, and anthropology. They have challenged widely held ideas about intelligence and

human uniqueness, and disagreed with each other on how to structure and develop intelligence in

robots. They have also aggressively expanded industrial, biomedical, and military application of

their research, translated technical work for popular audiences, and worked to dissolve

boundaries between the laboratory and the rest of the world. Their work presents clear, if

debatable, ideas about what it means to be human. Among these central contemporary figures are

Hans Moravec, Rodney Brooks, and Cynthia Breazeal. In the following sections, they are

discussed in chronological order of their rise to prominence, starting with Moravec who emerged

on the robotics and AI scene in the 1970s. A fourth figure, Heather Knight, complements their

work and is included here as an excellent example of how new generations of roboticists

continue to challenge the methods, aims, and claims of conventional robotics and AI research.

The work of these four roboticists shaped and continues to shape the landscape of

robotics and AI research in the United States and beyond. Their work goes beyond academic

circles and ripples into the domains of military, biomedical, industrial, and other applications,

impacting the lives of millions of people worldwide. Through innovative use of new media,

popular writing, and other tools for disseminating research, each has contributed to expanding


public discourse about robots and AI, and to philosophical and theological reflections on related

issues. Moravec stands out for the hope he places in posthuman futures in his books for a popular

audience, Robot: Mere Machine to Transcendent Mind and Mind Children: The Future of Robot

and Human Intelligence.77 He was also among the first to place an AI system in a mechanical

body (i.e., build a robot). Rodney Brooks, a graduate student from Australia, arrived at Stanford

just at the peak of Moravec’s efforts with the Stanford Cart, a primitive robot used to test robot vision

and locomotion. Brooks went on to found one stream of embodied AI research, marking a

departure from dominant approaches to AI research at the time. Brooks was among the first—

and most successful—researchers to argue that intelligence, whether naturally or artificially

occurring, is inextricable from the bodies and ecosystems it inhabits. Brooks also stands out

today as a giant in industrial and military applications of robotics, including significant influence

through the founding of iRobot and Rethink Robotics. Brooks’s doctoral student, Cynthia

Breazeal, again radically challenged some mainstream perspectives and methodologies in

robotics and AI research. Through observation of her own children and commitment to

interdisciplinary research, she came to see developmental learning as foundational and essential

for the success of AI. By incorporating this core assumption into her research, Breazeal helped

give rise to the field of social robotics, which builds upon the embodied assumptions of Brooks

but goes further in arguing that intelligence emerges socially.

Heather Knight is the final robotics and AI researcher discussed in this chapter. She

represents both a continuity with and departure from Moravec, Brooks, and Breazeal. She

continues in this line of academics (Breazeal was her master’s research supervisor at MIT), but

77. Hans Moravec, Mind Children: The Future of Robot and Human Intelligence (Cambridge, MA: Harvard
University Press, 1988); and Robot: Mere Machine to Transcendent Mind (New York: Oxford University Press,
1999).

much like Brooks and Breazeal, Knight makes significant departures from those who were most

formative in her introduction to robotics and AI. Though she is still early in her career (currently

Assistant Professor of Robotics at Oregon State University), Knight shows signs of being a

game-changer when it comes to the methods roboticists use to advance their research. These new

methodological frontiers offer challenges and opportunities for interdisciplinary theology aimed

at robotics and AI research.

What follows is a critical analysis of their contributions to robotics and AI research.

Special attention is given to aspects of their research that dovetail with the theological questions

and concerns discussed in Chapters Two and Three. This overview helps orient a theological

audience to some basic issues that shape contemporary robotics and AI research. The following

analysis also highlights the diverse and sometimes incompatible understandings of human and

artificial intelligence at play in robotics and AI research. Further to this, the discussion of

Moravec, Brooks, Breazeal, and Knight looks at the ways these researchers respond to their

historical setting and the resultant biases present in their work. Such a comprehensive study is in

keeping with the spirit of postfoundationalist interdisciplinarity, which requires that historical

location, context, and the methods of science and theology be put into close contact with

theological reasoning.

Hans Moravec

As a doctoral student at Stanford University and researcher at Carnegie Mellon University,

Austria-born Hans Moravec was at the forefront of robotics and AI research in the United States

during the 1970s and 1980s. One of his earliest projects was the Stanford Cart, a simple and

early effort to develop robotic ‘vision’ sophisticated enough to navigate Earth’s landscapes

without any preprogramming. His persistence against the limitations of 1970s computing power

and social imagination for robotics research opened the way for later contributions to the Mars

rover and self-driving cars that are currently in development around the world. Today, Moravec

continues to work on the interconnected problems of robot vision and robot locomotion in

industrial settings. Though he retains a research position with Carnegie Mellon University,

Moravec now funnels most of his energy toward his role as Chief Scientist at Seegrid

Corporation, a company devoted to developing “vision-guided driverless robotic industrial

trucks.”78

Moravec is also notable for his efforts to popularize discourse about robots and AI. This

is especially evident in his two books written for broader audiences, Mind Children and Robot.

In these he details many of the technical challenges of building robots with vision suited for self-

directed locomotion, especially those relating to advances in computing power. More interesting

for the purposes of this research project, however, is Moravec’s vision for the future of robots

and AI. Mind Children and Robot are as much reflections on the purpose and future of humans as

they are works of science and technology. In both, he draws on other reasoning strategies to

make claims about the progress of humans and their societies, and the meaning of biological

evolution in a future with robots. His social commentary on robots and AI details a future where

humans will have to negotiate their ongoing existence with robotic successor species.

Some commentators have responded to the apocalyptic themes present throughout

Moravec’s work. Religious studies scholar Robert Geraci, who specializes in the intersection of

AI and religion, claims that Moravec is the founder of “Apocalyptic AI” and that he along with

78. “Seegrid: Flexible VGVs, Robotic Industrial Trucks,” Seegrid, accessed December 21, 2012,
http://seegrid.com.

noted futurist Ray Kurzweil “eloquently conjure a fantastic paradise in which robotics and AI

improve humankind and the world. In doing so, these AI advocates lead a scientific movement

that never strays far from the apocalyptic traditions of western culture.”79 Geraci is not the only

commentator to note the ways in which Moravec’s work takes up such quasi-theological themes.

Russell C. Bjork makes the same case as Geraci and notes the parallels between Moravec and

Kurzweil and Christian claims about redemption.80 Michael DeLashmutt also brands Moravec’s

approach as “techno-theology,” in which scientific and technological pursuits “ground

theological aspirations (hope for a better life, concern over human destiny, notions of the good)

in technological realities.”81 In this way, Moravec’s scientific writing takes up traditionally

theological discussions while never fully acknowledging that this is taking place. DeLashmutt, a

theologian, is skeptical of such methods for interpreting humanoid robotics and our common

future. He finds Moravec’s views on the human especially unsatisfying and critiques his

“reductionistic philosophical anthropology” and its distilling of the “complexity of the human

subject” into a technological problem.82 Such theological critique foreshadows Chapter Four,

which emphasizes a complex and polyvalent view of the human. These efforts are critical for

interdisciplinary theological reflection about robots and AI as they clarify the ways distinctively

non-scientific ideas influence researchers like Moravec. Together these theological critiques

79. Robert M. Geraci, “Apocalyptic AI: Religion and the Promise of Artificial Intelligence,” Journal of the
American Academy of Religion 76, no. 1 (2008): 139-140.
80. Bjork, 101.
81. Michael W. DeLashmutt, “A Better Life through Information Technology? The Techno-Theological
Eschatology of Posthuman Speculative Science,” Zygon 41, no. 2 (2006): 268. See also DeLashmutt 274-277 for a
summary of the theological themes and eschatological promises found in Moravec’s work.
82. Ibid.

form a good argument against Moravec’s close association of computing power with intelligence

and intelligence with what it means to be human.

Intelligence as computing power

A striking feature of Moravec’s approach to robotics and AI research is the consistent

positive correlation of computing power with intelligence. This was a common approach in the

early decades of robotics and AI, as it emerged from the related fields of mathematics and

computer science.83 Moravec, however, maintained this view for decades even as his

contemporaries abandoned the model in light of its inability to give rise to successful humanoid

robots.84 He argues for this approach in both Robot and Mind Children, where he says that

“humanlike performance” requires “greater computer power.”85 The importance of this

relationship for Moravec is clear even from his research at Stanford in the 1970s. At that time,

researchers struggled to develop and afford the computing power to process the amount of data

required to give robots even rudimentary vision. A computer program requires an impressive

amount of computational ability to match the functionality of a human retina. As Moravec notes:

“It takes robot vision programs about 100 computer instructions to derive a single edge.”86 A

83. This view, of course, is not uncontroversial for reasons that are described more fully below and in Chapter
Four. Other commentators, however, have also directly challenged such a strong and linear correlation. Christopher
B. Kaiser, for one, raised the critical question of the relationship between the computational power of the brain and
the extent to which this is determined by genetic material. “How Can a Theological Understanding of Humanity
Enrich Artificial Intelligence Work,” Asbury Theological Journal 44, no. 2 (1989): 62.
84. Moravec repeatedly stated this argument in many of his publications. One of the latest examples appeared
in 2003 in the publication of the Association for Computing Machinery, where Moravec repeats nearly verbatim his
argument that the success of AI, and the definition of intelligence are in direct correlation to advances in computing
power. Hans Moravec, “Robots, After All,” Communications of the Association for Computing Machinery 46, no. 10
(2003): 90-97.
85. Moravec, Robot, vii.
86. Ibid., 54.

quick scan of any environment reveals edges upon edges upon edges (e.g., a simple setup of a

laptop, a coffee cup, and an iPod on a table has dozens of edges). Most environments have

moving edges (e.g., humans, traffic, litter blowing in the wind, and so on), which further

complicates the computational task of robot navigation. The results of these early efforts seem

almost unimaginably primitive today with Moravec recalling that with one program “. . . the

robot moved in meter long steps, thinking about 15 minutes before each one. Crossing a large

room or a loading dock took about five hours, the lifetime of a charge on the Cart’s batteries.”87

In contrast, the human eye can perform about one million detections ten times per second, which

is the equivalent of 100 million computer instructions.
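Moravec's retina comparison rests on simple arithmetic, which can be made explicit. The sketch below uses his own estimates (100 instructions per edge, one million detections per scan, ten scans per second); the variable names are illustrative, not his:

```python
# Back-of-the-envelope arithmetic behind Moravec's retina comparison.
# All figures are Moravec's estimates, not measured values.

INSTRUCTIONS_PER_EDGE = 100      # robot vision instructions to derive one edge
DETECTIONS_PER_SCAN = 1_000_000  # edge/motion detections in one retinal 'scan'
SCANS_PER_SECOND = 10            # the retina repeats this work ten times a second

instructions_per_scan = INSTRUCTIONS_PER_EDGE * DETECTIONS_PER_SCAN
instructions_per_second = instructions_per_scan * SCANS_PER_SECOND
mips_required = instructions_per_second / 1_000_000  # millions of instructions/sec

print(f"{instructions_per_scan:,} instructions per scan")      # 100,000,000
print(f"{mips_required:,.0f} MIPS to keep pace with the eye")  # 1,000
```

On these assumptions, matching the retina in real time demands on the order of a thousand MIPS, far beyond what 1970s hardware could affordably supply.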

Such a striking contrast between the early capabilities of AI and the human eye makes it

very clear why Moravec and his contemporaries believed advances in computing power were the

key to success in developing humanoid robots. In such an approach, the only barrier to robot
locomotion was a lack of processing power. This assumption failed, as it was destined to, when
researchers realized that developing humanoid robots posed many challenges more significant
than processing power alone. As the fields of robotics and AI grew throughout the 1970s

and 1980s, computing power (and its affordability) improved to the point where researchers saw

new technical and methodological issues apart from these first hurdles that preoccupied

Moravec. It is here that Brooks, Breazeal, and Knight took up the torch, and with it robotics in

unforeseen directions.

Early twenty-first century robotics and AI research largely encourages different

approaches than those espoused by Moravec. As the fields matured, the widespread observation

87. Hans Moravec, “Robots that Rove,” Newsletter of the Special Interest Group on Bioinformatics,
Computational Biology, and Biomedical Informatics (SIGBIO) 7, no. 2 (June 1985): 13-15.

that what is easy for computers is hard for humans, and what is easy for humans is hard for

computers, challenged a strong link between computational power and successful AI. For

example, a computer’s ability to calculate energy profiles of chemical reactions far outpaces

what even the most talented theoreticians could do with Schrödinger’s equation in a lifetime. In

contrast, researchers have discovered it is enormously difficult to develop AI that can distinguish

between an apple and a tomato—a trivial task for most young children. In the face of these

significant challenges, researchers have responded with new approaches to AI and new ways to

evaluate its success. Rodney Brooks turned away from computation and toward sensation and

perception to build robots who could deftly navigate their environments without sophisticated,

humanlike vision and the processing power required to mimic it. Cynthia Breazeal offered

another response to these early understandings of AI. She argued that for AI to be analogous to

human intelligence it must be developmental; that is, robots must learn as humans learn,

embedded in human social environments. Other developments in engineering further dismantled

Moravec’s approach. For example, the difficulty in developing a robotic hand that mimics

human dexterity again stressed the importance of non-computational aspects of robotics and AI.

Regardless of these later developments, Moravec’s work rested on the foundational assumption

that the greatest barrier to success in AI was insufficient computational power. This assumption shaped

not only how he approached challenges in robotics, but also how he viewed the human and

human futures on this planet and beyond.

Understanding intelligence, understanding the human

The development of robots and AI often takes place in parallel with debates about what it

means to be human and the nature of human intelligence. For both roboticists and theologians

interested in robotics and AI, such discussions figure prominently in their work. As detailed in

Chapters Two and Three, in theological terms the relationship between intelligence and

understanding the human is often framed in terms of the imago Dei. Noreen Herzfeld draws on

several historical interpretations of the imago Dei and how they correspond to movements in AI

research. Her 2002 book, In Our Image: Artificial Intelligence and the Human Spirit, outlines

three influential interpretations of the imago Dei—substantive, functional, and relational.88

Though discussed in greater detail later in this thesis, they provide a helpful hermeneutic for

Moravec’s work. Briefly, the substantive interpretation of both the imago Dei and human

intelligence focusses on what people are, the functional interpretation on what people do, and the

relational approach emphasizes the social bonds between people and their communities. While

these emerge from theological reasoning, Herzfeld creatively applies them to similar transitions

in AI research. Such an approach allows for close interdisciplinary contact and the evaluation of

scientific claims through a theological lens.

The first among these—the substantive interpretation—is essentially Cartesian in spirit.

This approach sees the ‘rational,’ disembodied mind as the ground of human uniqueness and

intelligence. It recalls early AI research from the 1950s and the decades immediately thereafter,

when that research was emerging from a tradition of mathematicians and computer scientists.89

In this time, most of what was taken as success in developing AI was mathematical, algorithmic,

or logical reasoning—the kind of efforts Moravec valued. Social intelligence, development and

learning, and the role of embodiment did not figure in these early efforts. This approach gave rise

to IBM’s Deep Blue, which trumped Garry Kasparov at chess but could not distinguish between

88. Noreen Herzfeld, In Our Image: Artificial Intelligence and the Human Spirit (Minneapolis, MN: Fortress
Press, 2002).
89. Noreen Herzfeld, Technology and Religion, (West Conshohocken, PA: Templeton Press, 2009), 58.

a picture of a duck and a picture of a tree, nor could it understand the arc of a story, learn any

new skills, or describe something beautiful when prompted. This emphasis on algorithmic

intelligence came at the expense of other facets of human intelligence, notably the ability to

understand, make meaning, experience and express emotions, enter into relationship and build

community, and take on new challenges.

Moravec’s emphasis on intelligence as computational power aligns him with these early

AI researchers and the substantive interpretation. Like those before him, Moravec believed that

success in AI was imminent, depending only on higher computational power and lower prices. In

1984 he remarked, “The new interest in crunch power should insure that AI programs share in

the thousandfold per decade increase from now on. This puts the time for human equivalence at

twenty years.”90 Like so many other tech predictions, this bold hope faded to quaint dreaming as

the two-decade deadline came and went. Though such claims align Moravec very closely with

what Herzfeld describes as the substantive approach to AI, there are nuances in his work that

complicate this categorization.
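Moravec's twenty-year timetable follows from straightforward exponential extrapolation. A minimal sketch of that reasoning, assuming only his thousandfold-per-decade growth figure (the function name is illustrative):

```python
# Moravec's extrapolation: if computing power grows ~1000x per decade,
# two decades yield a millionfold increase, which he took to be enough
# for "human equivalence." The growth figure is Moravec's own.

GROWTH_PER_DECADE = 1000

def growth_factor(years: float) -> float:
    """Total capacity multiplier after `years` of thousandfold-per-decade growth."""
    return GROWTH_PER_DECADE ** (years / 10)

print(growth_factor(20))  # a millionfold increase over his twenty-year horizon
```

The sketch also shows why such predictions are fragile: the conclusion hangs entirely on one growth constant and on the assumed equivalence between raw instruction counts and intelligence.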

Moravec’s interest in developing AI systems that can move and navigate in robot bodies

through human environments points not to a substantive interpretation, but a functional one. As

described by Herzfeld, the functional interpretation fits well with AI research emerging in the

1980s.91 At this time researchers increasingly set their sights on having robots and AI perform

tasks that humans usually do and increasingly “act as a human deputy.”92 While substantive

90. Hans Moravec, “Robots that Rove,” 14.


91. Noreen Herzfeld, “Creating in Our Own Image: Artificial Intelligence and the Image of God,” Zygon 37,
no. 2 (2002): 307.
92. Ibid.

interpretations focused on artificial intelligence alone, the functional interpretation focused on

the integration of simulating thought and behavior in robots. Though Moravec is primarily

concerned with the technical challenges aligned with substantive approaches, he situates these

within research that very much has functional goals (e.g., getting a robot to ‘see,’ move, and

navigate). This reveals an interesting tension in Moravec’s work. On the one hand, he is

captivated by posthuman, apocalyptic AI, which requires a substantive approach. On the other,

his day-to-day efforts are grounded in a task-oriented, functional approach. His work as founder

of Seegrid, a company devoted to automating commercial processes through autonomous

‘seeing’ robot vehicles, is the functional approach par excellence. This hybridity of substantive
and functional approaches ultimately gives theologians two significant ways to relate to his work.

While Geraci and others have responded to the former, the latter contributes to the acceleration

of cycles of consumption, the depletion of resources, and the amplification of inequality, which are
the subjects of theological reflection in Chapter Four.

Embodiment

Moravec shows an equally mixed relationship with embodiment, both prioritizing and

rejecting human and robot bodies in his research. In his more functional efforts robot bodies are

central. In his research at Stanford and later at Seegrid, Moravec devoted attention to building

robots fit for working in and negotiating human environments. With such goals on the horizon,

robot bodies are as important as the computational power needed to move them. Success is

contingent on embodiment. These pursuits stand in sharp contrast to Moravec’s posthuman

speculation about the future of robotics and AI research. Here, his emphasis on mind—both in

the common sense related to human cognition and in the futurist sense of a disembodied, eternal

form of existence he calls ‘Mind’—suggests an approach to robotics and AI where embodiment



features little at all. This second understanding—mind as a disembodied, eternal existence free

from physical constraints—is the ultimate goal of his robotics and AI research.93 In these futures,

humans surrender their place in evolutionary history, AI takes over, and bodies are but memories

of a bygone time. Even though Moravec’s day-to-day research projects touch on the problems of

embodiment, these seem to be rewarding technical challenges on the way to higher, disembodied

heights.94

Mind and evolution

Moravec situates robotics and AI research within the scope of evolutionary history,

emerging as an extension of human evolution and the result of the “same pressures” that gave

rise to all life and ecosystems.95 This evolutionary reading of robots supports his arguments for a

strong association between successful AI and computational power. In a reduction of cosmic

complexity, Moravec focuses on the place of the brain in evolution and organizes his

interpretation of evolution according to this single feature. The more instructions per second a

brain can process, the more ‘evolved’ it is. This scheme leaves no other option but for humans to

sit at its apex, followed closely by other large-brained animals like whales and elephants. In

stripping away all other features of life and intelligence, Moravec sets the stage for the inclusion

of robots and AI in evolutionary history. If computational power is the defining attribute of an

organism and is uniquely linked to its place in evolutionary hierarchies, then robots and AI can

93. Robert Geraci notes that exponents of Apocalyptic AI hold a low view of embodied intelligence and that “a
mind trapped inside a human body learns only with difficulty, thinks slowly, and has difficulty passing on its
knowledge.” Geraci, “Apocalyptic AI,” 147.
94. On this point Moravec again aligns himself with Apocalyptic AI, which “looks forward to a mechanical
future in which human beings will upload their minds into machines and enjoy a virtual reality paradise in perfect
virtual bodies.” Ibid., 140.
95. Moravec, “Robots that Rove,” 13.

easily fit into this scheme without objection. This linear evolutionary scale closely parallels

advances in computing technology measured in a similar way (i.e., by millions of instructions

per second, or MIPS).

For Moravec, it makes sense to align artificial intelligences with those already occurring

in nature because, “machine intelligences will be direct imitations of something already existing

biologically. Every step toward intelligent robots has a rough evolutionary counterpart, and each

is likely to benefit its creators, manufacturers, and users.”96 To support his vision of evolutionary

futures where AI displaces other forms of intelligence, Moravec ignores all other features of

evolutionary history and conflates organic evolution with scientific progress. In doing so, he

develops a certain tunnel vision that leaves outside its field of view much of the complexity and even mystery

found in most readings of evolutionary history, both scientific and theological. It is his hope that

robots and AI will soon come to represent a successor species in such a schema. As for the
non-computational features of humans, Moravec expects that these will be easily acquired by

sophisticated robots of the future. Once robots and AI achieve a certain escape velocity, freeing

them from the messy and inferior processes of organic evolution, “machines more capable than

any we know will, without further help from us, grow more capable still, learning from the

world, as we did in our biological and cultural evolution.”97

This junction in evolutionary history, where robots surpass humans in intelligence and

become responsible for their own evolutionary futures, is Moravec’s expected outcome of a

moment where “our stone-age biology and our information-age lives grow ever more

96. Moravec, Robot, 13.


97. Ibid., 2.

mismatched.”98 He is not skeptical or fearful of these futures, and is instead excited by their

prospects. In Robot and Mind Children, he gives little consideration to the details of these futures

and only indicates that artificial minds born out of human minds should bear some semblance of

human ethics, values, and so forth, noting that “the historical roots of their transcendence will be

preserved in some form.”99 Many, including Bill Joy in his seminal 2001 article “Why the Future

Doesn’t Need Us,” have expressed hesitation about such lofty goals and warn against allowing

ourselves to “drift into a position of such dependence on the machines that [we] would have no

practical choice but to accept all of the machines’ decisions.”100 Moravec makes some effort to

reassure those unsettled by the prospect of surrendering their place at the apex of evolution. He

suggests that humans could manage robotics and AI through democracy. “We voters should

mandate installation of an elaborate analog of Isaac Asimov’s ‘Laws of Robotics’ in the

corporate character of every powerful intelligent machine.”101 For such a bold claim, this

proposal and its variants woven throughout Robot are surprisingly underdeveloped. Moravec

offers nothing by way of articulating who these voters are, how this analog would be developed,

identifying competing interests in robots and AI, or even the impressively different approaches to

democracy found throughout the world. Nor does he make any allowance for robots and AI

systems that will one day be responsible for their own programming and evolutionary futures,

possibly removing this elaborate analog from their own code. He even argues that robotics and

AI systems are responsible for “stretching the limits of both our biological and institutional

98. Ibid., 7.
99. Ibid., 11.
100. Bill Joy, “Why the Future Doesn’t Need Us,” Ethics & Medicine, 17, no. 1 (Spring 2001): 14.
101. Moravec, Robot, 140.

adaptability,” without any consideration for how these limits may be stretched, resulting in

violence, death, or destruction.102 This overly optimistic and insufficiently critical view does not

escape other commentators, and as Geraci remarks, “Moravec does not believe that war and

strife will characterize the future.”103 In the end, Moravec’s hope that AI soon will pursue

“uncomfortable, uncharted but grand opportunities” seems detached from critical analysis

grounded in the real conditions of history.104

Moravec’s account of evolutionary history chafes against those found in ecotheology.

Thomas Berry, for one, proposed an interpretation of evolution—or universe story in his

parlance—that is radically different from Moravec’s. Whereas Moravec sought to simplify

evolution, Berry seeks to magnify its complexity with a view to deepening appreciation for

humans and their humble place within cosmogenesis.105 In The Dream of the Earth, Berry writes:

One of the more remarkable achievements of the twentieth century is our ability to tell
the story of the universe from empirical observation and with amazing insight into the
sequence of transformations that has brought into being the earth, the living world, and
the human community. There seems, however, to be little realization of just what this
story means in terms of the larger interpretation of the human venture. For peoples,
generally, their story of the universe and the human role in the universe is their primary
source of intelligibility and value.106
Berry’s corrective to Moravec’s interpretation of evolutionary history is just a first taste of how

contextual theologians can engage directly with the claims of roboticists and AI researchers, and

offer important counterpoints to anthropocentric, materialist worldviews. These are developed

more fully in Chapter Four.

102. Ibid., 8.
103. Geraci, “Apocalyptic AI,” 155.
104. Moravec, Robot, 10.
105. The term cosmogenesis and its implications for this project are given more attention in Chapter Four.
106. Thomas Berry, The Dream of the Earth (San Francisco: Sierra Club Books, 1988), xi.

The importance of perception

Among the most important debates to emerge in the first decades of robotics and AI
research is that between strong and weak AI. Proponents of weak AI argue that artificial

intelligence will only ever be a representation or mimicry of intelligence and that AI

can never constitute intelligence in the sense in which it exists in humans.107 Proponents of

strong AI believe intelligence is possible for AI systems, however far into the future this

achievement may be. As Edmund Furse succinctly notes, the “weak AI hypothesis states that it is

possible to simulate human behaviour, whereas the strong AI hypothesis says it is possible to

replicate human behaviour.”108 Christian thinkers, including Russell C. Bjork, have given this

internal debate some limited attention in developing theological responses to robots and AI.

Bjork proposes that advances in weak AI prompt “significant ethical issues related to the

appropriateness of entrusting certain tasks to machines” and by comparison it is strong AI that

poses anthropological questions in raising “issues related to the essential nature of humanity.”109

On the issue of strong versus weak AI, Moravec again represents a hybrid position. He is

easily in the strong AI camp in his visions of a future where “artificial progeny will grow away

from and beyond us, both in physical distance and structure, and in similarity of thought and

motive.”110 In this he hopes for a future where AI reaches—then surpasses—human intelligence,

becoming self-directing and responsible for its own unfolding future. This is no mere simulation

or representation, and clearly an espousal of strong AI. Moravec, however, is not so

107. Defining and understanding human intelligence plays an important role in robotics and AI research. These
debates also impact theological responses and are discussed in more detail in the following chapters.
108. Edmund Furse, “The Theology of Robots,” New Blackfriars 67, no. 795 (1986): 378.
109. Bjork, 96.
110. Moravec, Robot, 11.

straightforward. He seemingly sets aside the strong versus weak debate and argues for an

approach that goes beyond this distinction. He suggests that what is important for humankind

and robotics is not the particularities of AI, but the ways in which humans perceive it. He

encourages his reader to understand the features of robot intelligence in terms of what will

prompt humankind to ascribe intelligence to mechanical others. In other words, AI is replicated

intelligence if humans judge it to be so, regardless of the actual makeup of the computer

programming underlying it.111 As robotics and AI become increasingly present in our day-to-day

lives and popular culture, there is much empirical support for Moravec’s emphasis on

perception.112 One recent experiment, for example, reveals that many people have the same

emotional response when witnessing the abuse of a robot toy dinosaur as when witnessing the abuse of a human. It

appears that humans are primed to perceive and respond to robots as though they were no

different than a defenseless cat or human baby.113 Such internal debates are important not only

for roboticists and AI researchers, but for theologians as well. Edmund Furse, for one, regarded “a

machine intelligent if it behaves in an intelligent manner”114 but was rebutted in a response
from Sean Barker, who questioned whether intelligence was sufficient or even necessary for

111. Moravec notes that if robots are “properly communicative” most people will interpret the artifacts as
having “thoughts and beliefs and feelings,” regardless of “how they internally achieve the behavior.” Ibid., 83.
Many science fiction portrayals of robots and AI echo such an argument, including in Steven Spielberg’s 2001 film
A.I.: Artificial Intelligence, which explores robot-human filial love and other related themes.
112. Noted futurist Ray Kurzweil advances a similar position in The Age of Spiritual Machines where he argues
that in the future artificial intelligences will have humanlike consciousness enough to convince us that they have
humanlike consciousness. Ray Kurzweil, Age of Spiritual Machines: When Computers Exceed Human Intelligence
(New York, NY: Penguin Books, 1999), 63.
113. Charles Q. Choi, “Brain Scans Show Humans Feel for Robots: A Show of Affection or Violence toward
Either Robots or Humans Causes Similar Changes in the Brain,” IEEE Spectrum, April 24, 2013, accessed April 24,
2013, http://spectrum.ieee.org/robotics/artificial-intelligence/brain-scans-show-humans-feel-for-robots.
114. Furse, 379. Others have also worked on these important philosophical questions. Alexander R. Pruss, for
one, addresses how “mere appearance of thinking and acting rationally” is in fact easy for AI to achieve, and in
many ways already has. Alexander R. Pruss, “Artificial Intelligence and Personal Identity,” Faith and Philosophy
26, no. 5 (2009): 487.

defining the human. Barker argued that “we do not know what is intelligent human behaviour,”

which echoes Anne Foerst’s resistance to defining the human in terms of what robots are not

(yet).115 These examples of strong and weak AI push theologians to clarify their own
understanding of AI and its role in defining the human, and to consider seriously the

importance of these debates for theological reasoning.116

Conclusion

Moravec offers his reader a detailed account of the technical challenges that early

roboticists and computer programmers faced in building robots with vision and locomotion. He

also reflects on evolutionary history and what it means to be human, and expresses his views on

the purpose and ends of robotics and AI research. In doing so, Moravec plays with the

boundaries of traditional robotics research and is a harbinger of the cross-platform,

multidisciplinary, commercial enterprise that robotics and AI has now become. His work is

especially rich for theological dialogue given its interest in posthuman futures and anthropology.

His unchecked optimism for a world without humans naturally invites theological critique. Further

unpacking his work in terms of embodiment, views of the human, and other methodological

concerns makes Moravec an essential figure in developing a dialogue between robotics and

theology.

115. This point is developed more fully in Chapter Two.


116. Anne Kull also remarks on the challenges humans face in self-definition and that historically this is
accomplished in part by “clarifying the differences between humans and those nonhumans and things that share this
planet.” Anne Kull, “Speaking Cyborg: Technoculture and Technonature,” Zygon 37, no. 2 (2002): 279.

Rodney Brooks

Rodney Brooks came to Stanford from Australia in the late 1970s. In the decades that followed

he challenged many of the assumptions about AI held by his early mentor and took the discipline

in unprecedented directions, especially in the area of military application.117 Brooks also writes

for both his immediate scientific community and popular audiences where he often discusses

how his approach to robotics is technically and philosophically a departure from Moravec. One

of the most significant differences between these two thinkers is about robot cognition. In the

development of the Stanford cart, Moravec argues that robots require three-dimensional models

to navigate and interact with their worlds. The more detailed the model and the more that is

demanded of the robot moving through these spaces, the greater the computational power

required. Brooks rejects this premise, insisting that “animals, including humans, do not make

accurate three-dimensional maps” of their worlds and are still able to act intelligently.118 With

this as central to his approach to robotics, Brooks’s robots rely on sensation and perception,

rather than models, for navigation and interaction.119 This departure from Moravec put Brooks at

the forefront of a larger historical transition in robotics and AI research. By the early 1980s

researchers began to turn away from rule-based action toward neural networks, which effectively

allow AI to learn. This transition is also a movement toward robot autonomy, which is essential

for AI success by all definitions.120

117. For more on the differences between Brooks and the traditional approach espoused by Moravec and others,
see Rodney A. Brooks, “New Approaches to Robotics,” Science 253 (September 1991): 1227-1232.
118. Rodney Allen Brooks, Flesh and Machines (New York, NY: Pantheon Books, 2002), 28.
119. Rodney A. Brooks, “The Cog Project,” Journal of the Robotics Society of Japan 15, no. 7 (October 1997):
968.
120. William Sims Bainbridge, God from the Machine: Artificial Intelligence Models of Religious Cognition
(Lanham, MD: AltaMira Press, 2006), 4.

Today Rodney Brooks is one of the most influential roboticists on the planet. He holds

significant influence not only in academic robotics, but now also in industrial and military

domains. He held a senior post at MIT and launched robot juggernauts iRobot and Rethink

Robotics, which under his direction have launched popular robots including the Roomba (a small

household vacuum) and the PackBot (a bomb-disposal robot). With telling names like Attila and

Genghis, even his early robots foreshadowed his rise in the world of military applications.121

Like Moravec before him, Brooks’s popular writing also reveals much about his assumptions

about the human, and the purpose of robotics and AI research. These assumptions are the

grounds for fruitful interdisciplinary interaction with theological reasoning.

The differences between Moravec and Brooks start to show the diversity and even

competition within robotics and AI research, which is often glossed over in theological accounts

of the discipline. The more theologians interact with such differences, the better they can respond

to challenges brought about by increasingly humanlike robots and AI. The case study of Brooks

also highlights several ethical and historical issues important for theological analysis, including

the role of privilege, power, and social location in shaping technical research and the interplay of

military and academic interests in driving robotics and AI in modern-day America.

Robots in history

Throughout his writing Brooks argues for a clear distinction between representation and

reality, future and present. In clear contrast to Moravec, Brooks insists on discussing robotics

and AI research based on current realities and only very immediate futures. He keeps speculation

to a minimum and argues that it is too difficult to predict with much accuracy the future of

121. Rodney Brooks, “Integrated Systems Based on Behaviors,” SIGART Bulletin 2, no. 4 (August 1991): 47.

robotics and AI research, and to do so distracts from pressing scientific and historical challenges.

In support of this approach he points to “disruptive technologies,” which are those technologies

“that fundamentally change some rules of the social games we live with.”122 Disruptive

technologies emerge unpredictably in history and can derail current research pathways and the

assumptions that support them. Though Brooks is reluctant to speculate too much about the

future, he suggests that remote presence robots will be one such disruption emerging from his

field. “Disruptive technologies have no respect for age-old traditions and practices . . . it is

foolish, arrogant, and unwise to try to predict the future very far at all.”123 Brooks further

grounds his research by making a sharp distinction between his robots and those of the

imagination; there “are machines of science fiction fantasy, and then there are the machines we

live with. Two completely different worlds.”124 He himself, however, betrays this commitment.

Interestingly, while Brooks wants to separate representation of robots from

reality, it was the infamous HAL in the film 2001: A Space Odyssey that helped spark his interest

in robotics: “HAL turns out to be a murdering psychopath, but for me there was little to regret in

that. Much more importantly HAL was an artificial intelligence that could interact with people as

one of them, using the same modalities that people use to interact with each other. HAL was a

being. HAL was alive.”125 Indeed, Brooks also seems to want to cultivate vivid

imaginative worlds about his robots. At the turn of this century he offered a telling soundbite to

122. Brooks, Flesh and Machines, 100.


123. Ibid., 101.
124. Ibid., 5.
125. Ibid., 64.

Australasian Science: “I will feel my life is complete when my graduate students feel

uncomfortable turning off their creations.”126

Like Moravec, Brooks reaches audiences beyond the robotics and AI community through

popular writing. In these works he explores how robots and AI fit with broader historical patterns and

organizes history into a sequence of revolutions—agricultural, civilizational, and industrial—that

have already come and gone. Three more revolutions have already started, but are not yet

complete: information, robotics, and biotechnology. Brooks suggests that the information

revolution provides groundwork for the nascent robotics revolution, which will in turn prompt a

full biotechnological revolution. The robotics revolution is the fulfillment of the “centuries-long

quest to build artificial creatures.”127 He suggests that its defining feature is that “relationships

with these machines will be different from our relationships with all previous machines. The

coming robotics revolution will change the fundamental nature of our society.”128 The

biotechnology revolution will go further and “change the fundamental nature of us.”129

This chronology, outlined in Brooks’s 2002 work Flesh and Machines, predates a social

media-saturated, cloud-based World Wide Web; the popularization of robotics in military

application; open source robotics, three-dimensional printing, and many other technologies that

reveal the interwoven nature of information, robotics and AI, and biotechnologies. These three

revolutions are in fact unfolding in parallel and interconnected ways that have seemingly not

been anticipated by Brooks.

126. Guy Nolch, “Face-to-Face with Machine Intelligence,” Australasian Science 22, no. 10 (Nov/Dec 2001):
20.
127. Brooks, Flesh and Machines, 10-11.
128. Ibid., 11.
129. Ibid.

The importance of perception

As noted above, Rodney Brooks departed from some standards for robotics and AI

research established by Moravec and his contemporaries. A point of convergence between the

two, however, is their insistence on the importance of human perception of robots above all other

measurements of success. This is an emphasis that Brooks and Moravec share with AI pioneer

John McCarthy, who maintained that to “ascribe certain ‘beliefs’, ‘knowledge’, ‘free will’,

‘intentions’, ‘consciousness’, ‘abilities’, or ‘wants’ to a machine or computer program is

legitimate when such an ascription expresses the same information about the machine that it

expresses about a person.”130 Brooks turns to the history of robotics to show how easily humans

will ascribe agency, emotion, and thought to robots, AI, and their precursors. Some of the earliest

robots were simple “tortoises” built by British researcher William Grey Walter in the late 1940s

and early 1950s. These robots had basic sensors and could return to a power source when their

batteries started to run low. What Brooks finds compelling about these robots is that an “observer

finds it easier to describe the behavior of the tortoises in terms usually associated with free will.

For example, observers reported that ‘it decided to go into its hutch’ rather than describing

tortoise behaviour in terms of mechanism and programming.”131 In other words, observers

defaulted to language typically used to describe animal or human behaviour when describing

what the “tortoise” robots were doing. Regardless of how the robot was negotiating its

environment via sensation, perception, and artificial intelligence, observers assumed it was an

130. John McCarthy, “Ascribing Mental Qualities to Machines,” in Formalizing Common Sense: Papers by
John McCarthy, ed. Vladimir Lifschitz (Norwood, NJ: Ablex Publishing Corporation, 1990), 93.
131. Brooks, Flesh and Machines, 21.

agent deciding to return to its hutch.132 Walter’s behaviour-based robotics is the historical

inspiration for Brooks’s own approach, which he has variously described as subsumption and

behaviour-based programming.133
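Brooks describes Walter’s tortoises and his own subsumption architecture only in prose; the following is a minimal illustrative sketch, not his or Walter’s actual code, of how a behaviour-based controller of this kind might be organized. The sensor names, threshold, and actions are hypothetical.

```python
# A minimal sketch, assuming hypothetical sensors and actions, of a
# subsumption-style, behaviour-based controller in the spirit of
# Walter's tortoises. Each layer either proposes an action or defers.

def seek_hutch(sensors):
    # Highest-priority layer: head home when the battery runs low.
    if sensors["battery"] < 0.2:
        return "drive toward hutch"
    return None

def avoid_obstacle(sensors):
    # Middle layer: turn away from anything the bump sensor detects.
    if sensors["bump"]:
        return "turn away"
    return None

def wander(sensors):
    # Lowest layer: the default exploratory behaviour.
    return "wander"

# Higher layers subsume (override) lower ones; there is no world model,
# only a direct mapping from present sensation to action.
LAYERS = [seek_hutch, avoid_obstacle, wander]

def act(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action
```

An observer watching such a controller run might naturally report that the robot “decided to go into its hutch,” even though only three fixed rules, and no internal model of the world, are at work.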

Brooks repeats his argument for the importance of perception using a few contemporary

examples, including Sony’s turn-of-the-millennium robot dog, AIBO. These robot puppies

gripped their owners’ imaginations in impressive ways. As Brooks recounts, Sony had to

convince AIBO owners that certain features were not present in the robotic dogs. People believed

that the small robot could recognize faces and voices, despite these features being absent from

the actual programming.134 This is an extension of a broader phenomenon that Brooks describes,

where people regularly “project too much intelligence on our children” and

“overanthropomorphize” pets.135 This, however, is not a problem for researchers like Brooks.

Instead, he and others can capitalize on human willingness to anthropomorphize or over-ascribe

intelligence for the sake of developing robots that integrate seamlessly into human worlds.

Brooks’s famous robot Cog also underscores the importance of perception in human–

robot interactions. He describes how one critic of “the claims of artificial intelligence research”

had a conversion experience in an encounter with Cog.136 Sherry Turkle, before meeting

132. The evidence for this tendency has kept coming over the years from different areas of research. William
Sims Bainbridge, for one, remarks that, “On the basis of psychological and anthropological data, some have argued
that the human brain evolved in such a way that people tend to see complex phenomena as the actions of conscious
beings.” Bainbridge, 120.
133. See Rodney A. Brooks, “From Earwigs to Humans,” Robotics and Autonomous Systems 20, nos., 2-4 (June
1997): 291-304; Brooks, “Integrated Systems Based on Behaviors,” 1; Rodney Allen Brooks and Rosalind W.
Picard, “Living Machines: Can Robots Become Human?,” in Place for Truth (Downers Grove, IL: IVP Books,
2010): 195.
134. Brooks, Flesh and Machines, 106.
135. Ibid., 107.
136. Ibid., 149.

Cog, was skeptical of the influence robots could hold over human emotion and rejected them

outright as mere artifacts of the laboratory. After meeting Cog, she wrote about how interacting

with the robot elicited strong visceral reactions, and she confessed her behaviour around the

robot led her to abandon her previously staunch distinction between alive and not-alive.137

Brooks reflects on the paradoxical and intuitive nature of such judgements:

On the one hand, I believe myself and my children all to be machines. Automatons at
large in the universe. Every person I meet is also a machine – a bag of biochemicals
interacting according to describable and knowable rules…. But this is not how I treat
them. I treat them in a very special way, and I interact with them on an entirely different
level. They have my unconditional love, the furthest one might be able to get from
rational analysis. Like a religious scientist, I maintain two sets of inconsistent beliefs and
act on each of them in different circumstances.138
Brooks’s attention to human perception of robots illustrates that distinctions between alive or

not-alive, being or non-being, intelligent or non-intelligent, are ultimately not that important.

Though perhaps a compelling philosophical question, this issue does not really impact day-to-

day experiences with robots. Humans are primed to receive robots as social actors and

companions, and little will dissuade us from doing so.139

Functional interpretation

Brooks’s approach to robotics and AI is quite different from the one espoused by Hans

Moravec in part because they have different understandings of intelligence. Whereas Moravec

rested his understanding of intelligence on the measurable computational ability of an organism’s

brain, Brooks turned to animals like earthworms and insects for a new way of thinking about

137. Sherry Turkle, Life on the Screen: Identity in the Age of the Internet (New York and Toronto:
Touchstone/Simon & Schuster, 1995), 266.
138. Brooks, Flesh and Machines, 174. Brooks expressed similar sentiments in a panel dialogue with Rosalind
Picard, where he says, “I am a robot . . . I view myself as a living machine. I am made up of biomolecules. Those
biomolecules interact in rule-like patterns and out of that emerges this thing you see before you. But I also claim to
be a human. I claim I am both a human and a robot, and all of us humans are robots.” Brooks and Picard, 196.
139. Brooks, Flesh and Machines, 154.

intelligence and AI. Rooted in the task-oriented functional interpretation of intelligence, Brooks

turns to biomimicry to develop his novel approach to robotics and AI. While such inspiration is

common in the early twenty-first century, a generation or two earlier Brooks was truly a

trailblazer, breaking with academic tradition and risking his own reputation within the

community. Brooks rejected Moravec’s insistence on sophisticated robot vision, noting that

many organisms with far less complex eyes and brains than humans successfully navigate all

manner of environments. What these species could do, computers could not yet do, and by

extension intelligence—as conventionally understood in AI research—is not the purview of

humans alone.140 He looked for robotics and AI solutions that mimicked the adaptations of these

simpler creatures.141 In this new approach, sensation and perception are paramount, robot

embodiment matters, and computational power recedes as a distant concern.142 These revised

priorities meant that Brooks completely avoided the computational problems that burdened

Moravec in his efforts to address robot vision through “detailed representations of the world.”143

This approach subverted conventional thinking, which made a strong connection between

successful AI and the kind of intelligence valued by and represented in early AI researchers.

In this new way of dealing with robot locomotion, Brooks represents a variation of a

functional interpretation of intelligence. His robots are built for task-based performance in

household, industrial, and military application, with a view to improving on human performance

in these domains. An endorsement of John McCarthy by Brooks is telling on this front:

140. Rodney A. Brooks, “Intelligence without Representation,” Artificial Intelligence Journal 47 (1991): 140.
141. Rodney A. Brooks, “Elephants Don’t Play Chess,” Robotics and Autonomous Systems 6 (1990): 5.
142. Brooks, “From Earwigs to Humans,” 293.
143. Brooks, Flesh and Machines, 38.

“McCarthy was not impressed by the levels of emotion and illogic that were displayed by

humans driving cars. He wanted to replace human drivers by safe computers.”144 Remarks like

these indicate Brooks’s allegiance to a functional interpretation of intelligence, where the human

is defined largely by what it can do, and the robot by what it can do better. In this expression of

robotics and AI research, emotion is a hindrance to intelligence, rather than a part of it. In

relatively short order, one of Brooks’s own students would reject this assumption and move

toward the integration of emotion in her embodied and social approach to AI. Cynthia Breazeal’s

work on this is detailed in the next section of this chapter.145

Situated and embodied robots

The turn to sensation and perception, especially as inspired by non-human animals,

forced Brooks to more seriously consider robot embodiment.146 His robots would have to be built

for success in particular environments, rather than rely on an abstract general intelligence, as was

the aim of traditional AI before him. This means that robots must have physical features

compatible with their environment, which in turn relies on advances in areas like material

sciences, and mechanical and electrical engineering. Robot body structure becomes inextricable

from AI success, with one informing the other. For Brooks, this means dealing with situatedness

and embodiment ahead of the computational concerns that dominated Moravec’s work.

Situatedness points again toward the importance of sensation and perception in developing

successful AI without sophisticated models of the world. Situatedness means that a robot “does

144. Ibid., 26.


145. Breazeal’s robot Kismet in particular was programmed to “extract these emotional messages from human
speech.” Ibid., 95.
146. Brooks, “Intelligence Without Representation,” 140.

not deal with abstract descriptions, but through its sensors with the here and now of the world,

which directly influences the behavior of the creature.”147 Here researchers orient themselves to

the complex worlds their robots will inhabit, and especially to the dynamic and unpredictable

aspects of these environments. Roboticists concerned with situated success also focus on a

robot’s ability to respond to unforeseen circumstances and learn from them. In robotics,

embodiment is closely related to situatedness, focussing on the bodies required to navigate in and

interact with these dynamic and unpredictable environments. According to Brooks, an

“embodied creature or robot is one that has a physical body and experiences the world, at least in

part, directly through the influence of the world on that body.”148 This emphasis on body over

robot ‘brain’ has significant implications for both robotics research and theological responses to

it. First, it decentralizes the human. In Moravec, the human is the apex of evolution, the most

successful brain. In Brooks, the human is burdened with an energetically expensive brain and no

more successful in evolutionary terms than the average earthworm or moth. Second, it challenges

foundational assumptions about the success of AI. Greater and cheaper computational power no

longer correlates with success. Third, the focus on bodies and context is a first step toward more

inclusive and representative robotics and AI research. This shift allowed for the kind of research

undertaken by Breazeal and Knight, and opens the way for robotics that is more representative of the

diversity of human embodied experience.

Brooks frames his discussion of situatedness and embodiment in terms of robot ecology,

which resembles organic ecology in most ways. This means that Brooks is uninterested in

147. Brooks, Flesh and Machines, 51-52.


148. Ibid., 52.

studying robots in isolation; rather he is interested in how robots interact with each other and

their environments.149 The language of ecology is symbolic and important. It points to a desire

for strong integration of robots into human environments, where robot and human ecologies

intertwine. Perhaps more importantly, it points to futures where robots interact with each other

and form their own social structures. These robots then could bear characteristics of not just

individual humans (as per substantive or functional interpretations of intelligence), but also of

humans as groups, communities, or species. Brooks imagines for his robots cooperation in

“ecological niches” very much like those already present in nature.150 For example, they would

cooperate according to social rules for the benefit of the group, each assuming slightly different

roles that make sense only when discussed in the context of group behaviour. These robots

would have some autonomy, but remain under human control much like other tools or

instruments. Brooks does not imagine or desire a future where robots take on agency for

themselves or do very much by way of learning. Robot agency remains with programmers and

operators, and even in his most ambitious moments Brooks’s robots collaborate only for the sake

of efficient household management.151

Situatedness, embodiment, and ecology all led Brooks to a key, inevitable insight: worlds

built for human bodies and human intelligence are best suited for robots that also have

humanlike bodies and intelligences. This means that for humans to understand and interact with

robots as a peer in this world, robots must take on human form.152 So was the genesis of the Cog

149. Ibid., 53.


150. Ibid.
151. Brooks notes, “Our houses will become menageries of little cleaning robots. A symbiosis will develop
between people and the artificial creatures whose only role in life is to keep the house clean.” Ibid., 122.
152. Ibid., 66.

project, one of the first attempts to build a fully humanoid robot.153 Though the project was

eventually abandoned, Cog inspired not only further technical achievement in robotics and AI,

but also fueled first-of-its-kind theological reflection as well. It was Cog that ignited Anne

Foerst’s theological imagination and with it a whole new area of interdisciplinary theological

reflection. In the early 2000s, around the same time Foerst authored God in the Machine, Cynthia

Breazeal was responding to Cog in a completely different way. This student of Brooks followed

his move toward an embodied approach, but developed it further, challenging the status quo not

unlike her mentor. It is to these efforts we now turn our attention.

Cynthia Breazeal

Cynthia Breazeal is Associate Professor of Media Arts and Sciences at MIT and is director of the

Personal Robots Group. As discussed above, her formation in the field was guided by Rodney

Brooks, though her doctoral research marked a significant departure in approaches to robotics

from that of her supervisor.154 At a 2007 Veritas Forum event, Brooks anecdotally summarized

the difference between their two approaches, noting that he is interested in building appliances,

and Breazeal in building friends.155 Diverging on this point led Breazeal to become a pioneer in

social robotics, a subset of robotics and AI research that draws from several other areas to build

153. Development of Cog stopped in 2003 and the robot is now housed in the MIT Museum in Cambridge,
Massachusetts. Note that while Cog was one of the first in America, globally it was part of a trend in this area, and that
other laboratories were also developing similar projects at the same time as Brooks was advancing Cog. See for
example K. Hirai, M. Hirose, Y. Haikawa, et al., “The development of Honda humanoid robot,” in Proceedings.
1998 IEEE International Conference on Robotics and Automation, Leuven, Belgium (1998).
154. Brooks did emphasize incrementality and learning in some of his research, but these were only starting
points that were taken up by Breazeal and much expanded. See for example Rodney Allen Brooks, Cynthia
Breazeal, Matthew Marjanovic, et al., “The Cog Project: Building a Humanoid Robot,” in Computation for
Metaphors, Analogy, and Agents, ed. C. Nehaniv (New York: Springer, 1999): 56.
155. Brooks and Picard, 198.

increasingly “socially intelligent” robots. The goals of social robotics are manifold. Breazeal, for

one, hopes that humans will “interact with [such a robot] as if it were a person, and ultimately a

friend.”156 Like Brooks, she was inspired by popular culture, especially the humanoid robot

C3PO in Star Wars, whom she saw as an example of the kind of relationality she sought to

implement in her own robots.157 These efforts were brought to their earliest expression in

Kismet, a well-known robot developed according to these principles in the 1990s. Until this

point, researchers largely relied on substantive or functional approaches to robotics and AI. With

Kismet, Breazeal developed one of the first examples of a relational approach to AI and changed

the discipline from that moment forward.

In her distinctiveness from both Moravec and Brooks, Breazeal raises another set of

issues important for theological consideration of robots and AI. She works from a relational

approach to AI and, therefore, challenges theological ideas about the human and even the imago

Dei in new ways. In arguing that robots can also be relational, Breazeal challenges theologians to

rearticulate their understanding of relationality, both in terms of human relationships and the

relationality embedded in a trinitarian understanding of God. Breazeal’s work also requires

contextual analysis and closer examination of how her social location contributes to bias in her

work. She implicitly supports an ableist view of the human and excludes many from her

representation of the human. These unintentional gaps in her work provide opportunity for

critique emerging from contextual theological traditions. Such analysis helps to reveal how far

156. Cynthia Breazeal, Designing Sociable Robots (Cambridge, MA: MIT Press, 2002), xi.
157. Rebecca Greenfield, “How ‘Star Wars’ Influenced Jibo, The First Robot For Families,” Fast Company,
July 7, 2014, accessed October 22, 2017, https://www.fastcompany.com/3033167/how-star-wars-influenced-jibo-
the-first-robot-for-families. Breazeal is, of course, among many who find Star Wars so captivating. For example,
Irving Hexham was also inspired in the early 1980s by these fictional robots. Irving Hexham, “Learning to Live with
Robots,” Christian Century 97, no. 19 (1980): 574.

robots are from humans not only in their general intelligence, but also in their lack of social

complexity and representation of human diversity.

Relational interpretation

Breazeal represents a turn toward the relational interpretation of AI. Through this turn she

challenges both the substantive and functional interpretations, especially how they are manifest

in researchers like Moravec and Brooks. Her work draws on developmental psychology,

relationality, and embodiment in developing humanoid robots. This approach stresses the social,

contextual, and dynamic qualities of the human. In contrast to many of her peers, Breazeal turned

her attention toward building robots with humanlike “social characteristics” so people would

easily interact with them in “natural and intuitive” ways.158 This interaction is her measure of

success. She argues that humans are primed by their long evolutionary histories to interact with

robotic others if “a technology behaves in a socially competent manner.”159 As evidence of

success, Breazeal points to people who experience sadness or disappointment when Kismet gets

powered down after a particularly enjoyable playtime. She also writes of experiments with chat

bots where people exhibit politeness or “feel good if the computer compliments them” even

though “after the experiment, they insist that they treated the computer as a machine.”160 This is

a subtle but important inversion of a longstanding assumption in robotics and AI—rather than

training humans to use or interact with these technologies, the technologies should instead be

developed for easy and intuitive engagement with humans.

158. Breazeal, Designing Sociable Robots, xii.


159. Ibid., 15.
160. Ibid.

Breazeal’s latest invention, a robot named Jibo, is the fullest expression of her relational

approach to robotics and AI. While Kismet’s body was very much humanlike, Jibo’s form is

more abstract, resembling a limbless white jelly bean. Breazeal designed Jibo to help people with

day-to-day tasks like cooking, scheduling, and taking pictures. In its role as a household

assistant, it prompts important questions about the value and nature of work, how societies and

labour are organized, and sheds light on the values different people bring to robotics and AI

research. Robots like Jibo could potentially perform all kinds of jobs ranging from piano teachers

to baby sitters. Jibo could also become a bit of a digital Swiss-army knife, providing all manner

of tips and tricks to make a comfortable digital life even more comfortable. This could include

step-by-step instructions for editing a digital book of family photos, troubleshooting digital

climate control systems, or even curating the perfect playlist for a dinner party based on guests’

tastes in music. In this way, social robots both take over jobs from humans and create new ones

altogether, which forces adaptation. Though Breazeal’s work shows the potential of social

robotics and its relational approach to transform human family and social life for the better, she

fails to consider the shadow side of her efforts. Robots like Jibo encourage the rise of a digital

class and prompt a shift toward digital careers, rewarding technological literacy and wealth. As

professions go extinct new ones will rise to take their place, and those most able to adjust will

flourish and profit at the expense of those who cannot. The danger of Jibo is that it makes

comfortable classes even more comfortable while contributing to the exclusion of already

marginalized people. With this in mind, theologians must critique relational approaches to AI

that value only certain types of relationships and only promote the flourishing of a select few.

Weak AI, strong impression

With Kismet, Breazeal wanted to create a programming scaffolding sophisticated enough

to allow humans to interact with the robot as they would a human baby. This interaction

would then give Kismet the kind of social fodder humans receive in their own development and

learning. With sociable robots built like this, every user becomes an AI programmer just by

using their innate social skills. Programming by relying on natural and uninhibited interactions

between robot and human maximizes the possibility of Kismet’s success in settings outside of

the laboratory. The process of learning is as important as the result. For decades, theologians and

others have stressed the importance of learning in consideration of human intelligence. Some

even go beyond what Breazeal proposes in Kismet and other projects. For example, in a mid-

1980s article, Allen Emerson and Cheryl Forbes discussed the importance of human imperfection

in learning and development. This imperfection includes forgetting and learning from mistakes,

which in many dominant approaches to AI are simply shortcomings in programming.161 Other AI

researchers, including Breazeal, typically do not consider these flaws. These two different views

grate against each other, with one holding that these are essential aspects of intelligence and the

other that they are problems to be solved.

Breazeal finds some sympathy in Ian Barbour’s limited work on AI. They share an

interest in the relationship between embodiment and learning, and argue that the two cannot be

uncoupled. Barbour notes that both scientific and theological researchers increasingly argue for

“the role of the body in human learning.”162 Like Breazeal, he points to everyday social

161. Emerson and Forbes, 15.


162. Ian Barbour, “Neuroscience, Artificial Intelligence, and Human Nature: Theological and Philosophical
Reflections,” Zygon 34, no. 3 (September 1999): 375.

experiences (e.g., riding a bicycle, conversation) as the heart of human learning. Where Barbour

and Breazeal diverge, however, is the extent to which robot and human learning overlaps.

Researchers like Breazeal see hope for ever-greater similarities between robots and humans,

especially in terms of embodiment. In contrast, Barbour sees canyons that cannot be bridged,

noting that their “mechanical bodies are of course very different from our biological bodies; what

they learn from their actions will be different from what we learn from ours.”163

Christopher B. Kaiser, an interdisciplinarian with training in theology and astrophysics,

names another important challenge for researchers like Breazeal. Though he wrote a decade or

more before Breazeal established herself in robotics, he makes an important point about the

differences between primitive machine learning and the learning evidenced in humans. AI, as it stood

in its early decades, was rule-based and rather inflexible. Human learning, by contrast, tends

towards generalizations rather than rules, which is clearly distinct from robot experience.

Humans, for example, are taught through varied experience of social norms about politeness, but

almost all these norms will find exception as a person matures and encounters more and more

situations to apply such learning.164 For example, it might usually be polite to shake hands when

introduced to someone, except it may not be permissible to touch a member of a royal family or

for members of opposite sexes to come into contact. Kaiser stresses that this is not something

that can be learned all at once. He remarks, “we must suppose that a lengthy process of

socialization would be required before the ‘intelligence’ of the machine would be formed in a

way that would allow it to function in real-life situations with any degree of success.”165 This

163. Ibid., 376.


164. Kaiser, 67.
165. Ibid., 68.

insight points to an understanding of intelligence and artificial intelligence that is lived,

practised, and developed.

Breazeal engineered for Kismet a synthetic nervous system (SNS) made up of six

subsystems that manage things like processing sensory information, directing a robot’s gaze to

visual stimuli, perceiving when to carry out certain behaviour, and regulating motor skills and

expressions. Some of these are further regulated by sub-subsystems, revealing the complexity of

engineering required for robots to act like a human baby. In this way Breazeal’s work seems like

biomimicry, but it is not. Kismet’s programming or SNS does not parallel the internal

organization of the human or any other organic being. In developing this robot she organized the

SNS according to engineering priorities rather than biological ones. For Breazeal, this departure

from the human as a template for AI does not matter. Robots do not really need to ‘think’ or

‘feel’ as humans do—they only need to convince their human interlocutors that they do. What is most

important is that the robot elicit the same responses and behaviours from humans as a human

baby does. Even in early attempts with primitive robots like Kismet, Breazeal discovered it was

remarkably easy to get people to act naturally with a robot. Its big eyes, red lips, and floppy ears seemed to be all the invitation people needed to interact with it as they would a human baby.

With this approach, Breazeal makes it clear that the strong versus weak AI debate is not one

worth resolving. If people believe that robots are intelligent, then that is all that matters.

Motivations

In her 2002 book Designing Sociable Robots, Cynthia Breazeal reveals a little about her

motivation for pursuing “socially savvy robots.”166 Importantly, Breazeal believes that robots

166. Breazeal, Designing Sociable Robots, xii.



and robotics are a very special way to learn about the human. They allow humans to test the

limits of their own intelligence through technical achievement, while at the same time bringing

into sharp relief distinctly human qualities. As she writes, robots “should not supplant our

need to interact with each other, but rather should support us in our quest to better understand

ourselves so that we might appreciate, enhance, and celebrate our humanity and our social

lives.”167 The robot as process and product for human self-discovery is central to Breazeal’s

research. In building both the mechanical and computational elements of Kismet, researchers

were again and again astonished by the apparent complexity of the human. Every failure and

difficulty revealed more and more about how little researchers know about human behaviour,

cognition, and social lives. Along with many others, Breazeal noted that what is simple for a

human sometimes seems nearly impossible for robots or AI. For example, gripping a coffee cup

or observing a simple task and repeating it are easy for most humans, but very difficult to

achieve in robot bodies with AI.

Breazeal is motivated to build robots for the “average person” in an “average person’s

social world.”168 Unfortunately, she offers little explanation of who these average people are and

what their average social world looks like. Perhaps this is intentionally vague so that readers may

insert their own experiences into her vision of robotics. Perhaps she takes for granted that her

idea of average is universal. Perhaps she has not given it much critical thought because her own

experiences of privilege protect her from deep consideration of those who are not living average

lives. Regardless of the reason, the weight of an unexplained “average person” and their social

167. Ibid.
168. Ibid., 15. For similar remarks, see also B. Adams, Cynthia Breazeal, Rodney A. Brooks, et al., “Humanoid Robots: A New Kind of Tool,” IEEE Intelligent Systems and Their Applications: Special Issue on Humanoid Robotics 15, no. 4 (July/August 2000): 25.

world is problematic given the social goals of Breazeal’s research. Truly sociable robotics would

reflect the diversity of human embodied and contextual experiences. Social worlds vary

immensely not only from one culture or location to the next, but also within these settings based

on dis/ability, race, class, gender expression, sexual orientation and so on. How people behave

and are treated differs markedly according to these factors, which are all but absent in

Breazeal’s work. This unexamined homogeneity is a significant weakness in this approach to

social robotics. Robots and AI developed in this tradition ultimately reflect the social norms of

just a very few people. This means that robots like Kismet and Jibo risk uncritically replicating

patterns of oppression, where the privileged and powerful dictate what counts as normal,

acceptable, and “average.” Breazeal expresses hope that one day social robotics will advance to

the point where humans will pass on their own norms and values to robot friends. Theologians

must hope that these near futures will include those who have traditionally experienced high

degrees of social exclusion. Only in this way can robots truly become the helpful, friendly,

empowering tools that Breazeal imagines them to be.

Heather Knight

Up-and-coming researcher Heather Knight, an assistant professor of robotics at Oregon State

University, completes this brief study of influential roboticists. While she has not yet established

herself in the way of Moravec, Brooks, and Breazeal, her work is a compelling case study for

theologians nonetheless. Her contributions present unique methodological challenges for

interdisciplinary theology and she reveals much about future directions in robotics and AI

research. She seeks to build on the social robotics work of Cynthia Breazeal, who supervised her master’s research at MIT. Knight relies on new and social media, turning to forms like performance art and micro-blogging both to develop her research and to share her results. Here, in contrast to much conventional academic work, robots are

witty and charming, and the atmosphere is lighthearted. Knight develops her public persona

through popular media like her active Twitter presence, features in Wired magazine, a robot-

themed film festival, and TED talks. The products of her research are as diverse as the SyFy

channel’s battlebot show, mesmerizing music video installations, and stand-up comedy

performances by her two-foot-tall Aldebaran Nao robot.169 Data, the aforementioned Aldebaran

humanoid, is of special importance to Knight, as she worked in development for the French

company following her time at MIT and contributed to the design of one of its sensors.170 She

stands out, especially against Moravec and Brooks, for blending entertainment, technology,

pedagogy, and art.

Her presence in such diverse areas is an intentional methodological move on her part. In a

2010 TEDWomen talk entitled “Silicon-based comedy,” Knight reveals a bit about her reasoning for taking to new and social media.171 There she expressed a desire to create

increasingly savvy social actors that function very well outside laboratory settings. This strategy

means robots must be superb social actors in a range of commonplace social situations. For

Knight, acting and stand-up comedy are examples of smart social behaviour. They are dependent

on context and have to be adjusted in response to different situations and audiences. In focusing

on vernacular entertainment like stand-up, cabaret, and dance, Knight says she selects behaviour

169. “Cyborg Cabaret: Passion, Terror & Interdependence,” accessed March 9, 2013,
http://cyborgcabaret.org/program.html; “Robotic Reality,” Carnegie Mellon University, accessed March 9, 2013,
http://www.cmu.edu/homepage/computing/2013/winter/robotic-reality.shtml; OK Go, “This Too Shall Pass” (music
video), directed by James Frost, OK Go, and Syyn Labs, posted March 1, 2010, accessed October 22, 2017,
https://www.youtube.com/watch?v=qybUFnY7Y8w.
170. Rachel Wolff, “A Researcher and a Robot Walk Into a Bar . . .,” The Wall Street Journal, April 20, 2012, accessed March 28, 2018, https://www.wsj.com/articles/SB10001424052702304432704577349860916159278.
171. Heather Knight, “Silicon Based Comedy” (video lecture), TED Talks, December 2010, accessed March 9,
2013, http://www.ted.com/talks/heather_knight_silicon_based_comedy.html.

for her robots that mirrors common daily social interactions. Absent from her robots’ repertoires

are the more specialized or privileged intelligences seen in earlier forms of robotics and AI

research. This includes AI systems that, for instance, beat humans at chess or Jeopardy!

Knight is very much interested in getting robots out of the laboratory and increasingly

into all aspects of human society. Robots, she believes, are potentially very good companions

and colleagues. Further to this, Knight sees the potential for robots to attract others to scientific

and technological research. The more people interact with robots, especially ones like her cute,

joke-telling Data, the more they will be inspired to join the ranks of roboticists and work for the

advancement of the field. Performance and art are the best gateway for advancing an already fast-advancing area. As she summarizes in a 2017 interview, “The endpoint of my research is research, and the endpoint of Marilyn Monrobot is entertainment.”172

Social media

Knight’s research does not fit well with traditional academic venues, in both robotics and

theology. She uses platforms and methods that challenge theologians to think creatively about

the interdisciplinary task of robotics and AI. One major aspect of this challenge is her use of

social media to educate people about her work and to generate interest in it. In particular, Knight

is active on Twitter, which confines users to visual media and short text messages. The

Twitterverse, therefore, very much encourages a kind of expression that is not well represented

in theological reasoning. Communication on Twitter is also fast paced and conversational.

172. Kara Cutruzzula, “Engineering Community With Social Roboticist Heather Knight,” Magenta, June 14, 2017, accessed March 19, 2018, https://magenta.as/engineering-community-with-social-roboticist-heather-knight-
2017, accessed March 19, 2018, https://magenta.as/engineering-community-with-social-roboticist-heather-knight-
725e21a1c107. Marilyn Monrobot is Knight’s robot theatre company that develops events like robot cabaret and an
annual robot film fest.

Trends rise and fall within a few hours. The platform also lends itself to Knight’s trademark cheerfulness and robo-optimism, and leaves little room for the careful and protracted critique that is the hallmark of much academic debate. Analysis of Knight in these spaces, then,

requires not only knowledge of how the platform works, but also of the context of conversations.

Taken out of context, many tweets seem almost nonsensical. Increased use of such spaces puts

pressure on traditional academic discourse, which is dependent on the peer review model. On

Twitter almost everyone is your peer, and the platform is free and accessible, quite unlike

academic journals and monographs. Interdisciplinary theologians must embrace platforms like

Twitter as a legitimate source of reasoning about robots and AI; otherwise, they will fail to fully

understand the methods and goals of robotics and AI research.

Robot optimism

Knight’s work paints a dynamic and vivid portrait of contemporary robotics and AI

research. In her efforts to take robots out of the lab, she gives others a firsthand experience with

their three-dimensionality. Above all else, the robots of Knight’s world are fun. They are cute,

wave at people, tell jokes, and perform kitschy cabaret. Implicit in her work is an argument that

robots are harmless and not to be feared. They are the best of twenty-first century creativity and

serve to fuel human interest in science and technology. Robots are objects of delight.

Knight is hopeful and confident about robots and their place among humans, and is

straightforward with her audiences about this optimism. This positive outlook, however, is an

incomplete picture of robotics and AI research, from both scientific and theological perspectives.

For example, a 2017 essay published on Medium discusses the potential for remote presence

robots to help with parenting.173 This essay, while admittedly geared for a popular audience,

shows Knight’s lighthearted and unsophisticated approach to many of the social issues

undeniably attached to social robotics. Here, she argues that remote presence robots could be

helpful to separated or travelling parents, and might also mitigate children fighting over iPads or

watching too much Netflix. These examples reveal much about Knight’s imagined social worlds

for robots. She, for example, makes no mention of the use of remote presence robots in co-

parenting in the case of chronic illness, extended hospitalization, or disability. From the child’s

perspective, there is no mention of how these same devices could help with learning, developing

computer literacy, or facilitating interaction in the case of autism spectrum disorders or other

developmental differences. Compounding these problems, there is no accounting for the cost of home robots, or for how lack of access to them might deepen existing social inequalities. Her focus is firmly on the kinds of challenges faced by healthy, privileged, relatively wealthy people.

Elsewhere, Knight is dismissive of critics of military applications of robots and AI,

including Human Rights Watch and its Campaign to Stop Killer Robots (discussed further in

Chapter Four). She writes of autonomous weapons systems, “No such robots currently exist, nor

does any military have plans to deploy them, nor is it clear when robotic performance is inferior,

or how it is different from human performance in lethal force situations.”174 Claims like this are

problematic for several reasons. First, they are offered with her authority as a roboticist, coming

from a prestigious heritage of robotics research institutions including MIT, Carnegie Mellon, and

173. Heather Knight, “Co-Parenting with Tele-Presence Robots,” Medium, June 10, 2017, accessed March 19,
2018, https://magenta.as/engineering-community-with-social-roboticist-heather-knight-725e21a1c107.
174. Heather Knight, “How humans respond to robots: Building public policy through good design,” Brookings,
July 29, 2014, accessed March 27, 2018, https://www.brookings.edu/research/how-humans-respond-to-robots-
building-public-policy-through-good-design/.

NASA’s Jet Propulsion Laboratory. Those unversed in issues of military applications of robots

and AI would have no good reason to doubt her claims or question her confidence. Second, it is

truly impossible for Knight to know if any military in the world has plans to deploy autonomous

weapon systems. Intuitively, it seems that the robotics arms race would be mired in secrecy just

as any other global competition to develop military technology. She also does not account for

paramilitary or private-sector interest in these technologies, which may very well undermine even the best-supported robotics arms treaties. There is also other evidence that undermines

Knight’s certainty. P. W. Singer, an expert in robots and warfare who also has experience working

in the Pentagon and the Central Intelligence Agency (CIA), repeatedly stresses that it is

impossible to predict the future of AI and robots in this domain, and that many legal, ethical, and

political challenges are already before us.175 While the “killer robots” of Knight’s imagination

may not yet be a daily reality, Singer clearly illustrates how widespread precursor and related

technologies are in global military application. As of 2016, “86 countries have military drones”

with varying degrees of autonomy and “at least 30 countries have missile and gun systems like

the Aegis or CRAM to defend their warships and bases. These can be set to an autonomous mode

to shoot down incoming airplanes, missiles, or rockets.”176 Even if Knight remains optimistic in

light of these facts, she fails to acknowledge that her industry and her academic community are

largely supported by military interests and funding.

175. Sean Illing, “The rise of AI is sparking an international arms race,” Vox, September 13, 2017, accessed March 27, 2018, https://www.vox.com/world/2017/9/13/16287892/elon-musk-putin-artificial-intelligence-war.
176. P. W. Singer and August Cole, “Humans Can’t Escape Killer Robots, but Humans Can Be Held
Accountable for Them,” Vice News, April 15, 2016, accessed November 18, 2016, https://news.vice.com/
article/killer-robots-autonomous-weapons-systems-and-accountability.

Unfortunately for this theological analysis, Knight does little to publicly situate herself in

relation to other roboticists and to internal debates going on within her research community.

Though her work inspires others to join the robotics community, it is difficult to discern her

long-term goals, her views on the human, and her position on central issues like strong versus

weak AI. It is even difficult to tell how Knight will measure the success of her own research.

Perhaps she is uninterested or has other justification for setting aside these issues. No roboticist

is bound to address all possible concerns; however, Knight—as a leader in new ways of doing

robotics and AI research—truly misses an opportunity to discuss the significance of these

methodologies. Her high profile and talent with creative dissemination of her research yield

excellent circumstances to discuss broader trends in robotics and AI research. Three-dimensional

printing and open source software, for example, will make robots much more affordable and

accessible, especially to those outside traditional research centres. With her media influence and

insight into cutting-edge robotics, Knight has the capacity to spark lively debate about these

futures, but to date has shown little evidence that she is interested in such a role.

Knight’s work is a beacon for theologians interested in robotics and AI. Her creative and

bold use of new and social media show interdisciplinarians the future of robotics and AI

research. She challenges theologians to attend not only to peer-reviewed discourse, but to these

expansive and largely unmediated forums as well. This is a distinctly different kind of work from that carried out by Moravec and Brooks, and even by Breazeal, who introduced Knight to the field of social robotics. Though the first three thinkers discussed here represent the foundation of

humanoid robotics research, Knight represents its future. Increasingly, theologians will have to

interact with researchers like Knight, not only for their contributions to robotics research, but

their upending of traditional methodologies as well.



Conclusion

The differences among Moravec, Brooks, Breazeal, and Knight are clear. Each sought to

advance the field of humanoid robotics in a different way, based in part on historical context, but

more so on individual visions for robotics and AI research. As one progresses from Moravec to

Knight, it is easy to see increasing integration of humanoid robots into daily life. Moving from

the ahistorical, transcendental goals of Mind to the little joke-telling Data, robotics becomes

more mundane, more pragmatic. Their combined research reveals many technical barriers to

achieving their stated goals, which will only continue to spark important discussions about the

future of robotics and AI for the communities most directly involved in their development. Their

work is also critical for theologians for other reasons. The successes and failures of robotics, and

how roboticists talk about them, reveal much about their vision for the future. In these often-

implicit statements, roboticists disclose much about their priorities and values. Unfortunately,

these priorities all too often fail to consider potential negative impact of robotics and AI,

including their role in perpetuating patterns of social exclusion and oppression. Their research

shows that humans are primed to interact with robots and AI as they do with animals, and even

other humans. This readiness to receive robots means that theologians must be thorough in their

analysis of robots and AI, especially given the extensive role of military interests of many

countries in funding this research.177

177. The military apparatus of the United States is the focus of P. W. Singer’s Wired for War, which stands as
one of the most accessible accounts of robotics and AI in contemporary warfare. Singer does include much
exposition on how other nation states are engaging in robotics and AI research for their own military purposes.
Given the difficulty of researching trans-national military interests, research, and development, it is hard to assess the full extent to which any country, even one’s home country, is leveraging robots and AI research in service of military ambitions. International arms shows—much akin to other trade shows—are increasingly featuring robotics and AI arising from and being sold to an array of countries around the world. Leaders apart from the United States include China, the United Kingdom, Russia, Israel, Iran, Japan, Sweden, Turkey, and many others. For more detail on international and trans-national military interests in robotics and AI, see Singer, pp. 237-260.

The research discussed above shows what is at stake in efforts to develop humanoid

robots and AI. This includes redefining intelligence, challenging longstanding ideas about what it means to be human, and rethinking the role of learning and development in our self-understanding.

this, robotics research and broader social trends are always in interplay, even though this is not

well acknowledged by any of the researchers named above. Growing social, military,

biomedical, and industrial applications of these technologies will continue to illustrate the

different ways robots and AI can be used to both oppressive and liberating ends. It is these relationships between roboticists and applications that give rise to essential interdisciplinary

theological work. It is here that converging questions about what it means to be human and our

common future on Earth with robots and AI begin to emerge. Moravec, Brooks, Breazeal, and

Knight lay the groundwork for this crucial and compelling work. They are not, however,

alone in their consideration of robots in the wild. Theologians Anne Foerst and Noreen Herzfeld

realized the significance of robots and AI early on and have developed two major theological

responses as a result. It is to these efforts that we now turn our attention.


Chapter Two

Introduction

Theological discourse about robots and AI is growing in popularity, but as an emerging area of

research it is still finding its bearings. To date only a small number of theologians have directly

addressed issues arising from robotics and AI research. Their work includes reflections on ethics,

understanding the human, and theological method, as well as other related concerns. While this

makes for a good preliminary response, the existing body of English-language theological

literature on robots and AI shows little dialogue among these researchers, and published

engagement with each other’s work is rare. This creates a theological landscape that presents significant opportunities to find points of interdisciplinary convergence and dialogue in this

emerging area of theological inquiry. This chapter and the one that follows describe and analyze

two existing types of theological treatment of robots and AI. The first of these types, which has

Anne Foerst’s work as its core and includes contributions from Peter Heltzel, Theodore Metzler,

Michael Anderson and others, is described and analyzed here.

Anne Foerst, who has ample credentials in both theology and AI research, was one of the

first theologians to publish extensively on the intersection of theology and robotics. Her 2004

work God in the Machine: What Robots Teach Us About Humanity and God still stands as one of

few theological monographs on robots and AI published to date.178 Now at St. Bonaventure

University in New York, Foerst was once simultaneously a researcher in Rodney Brooks’s

laboratory at MIT and a research associate at Harvard Divinity School. This dual heritage is

178. Anne Foerst, God in the Machine: What Robots Teach Us About Humanity and God (New York: Dutton,
2004).


manifest in research concerned with interdisciplinary epistemology and the impact of science on

theological reasoning, and in a keen ability to find religious themes in scientific narratives.

Foerst’s approach to robotics and AI research is inspired by firsthand experiences with robots;

theological reasoning grounded in tradition, with an emphasis on the work of Paul Tillich; Judeo-

Christian scriptural and oral traditions; and the authority of human emotion. Her work is

optimistic about the rise of sophisticated humanoid robots and AI, and she even suggests that it is

“difficult to criticize” robots from any perspective, theological or otherwise.179

Foerst protects her claims from theological critique in part by focussing very much on

laboratory research. She does not typically address applications outside laboratory settings,

which are more easily subject to critique by Christian ethicists and theologians. Foerst rarely

mentions theologians in dialogue with her research, even when addressing theological critiques

of or concerns with robotics and AI, so her rationale for arguing against specific perceived

theological fears remains unclear. Ian Barbour published commentary on her work in advance of

God in the Machine, but was not overly critical. Barbour notes with appreciation her attention to

the major characteristics of contemporary AI research including embodiment, developmental

learning, and social interaction. He also remarks on her efforts to round out scientific images of

the human with complementary stories from theology.180 Foerst, perhaps, takes her cues from

earlier material, which does discuss potential theological objections to robots and AI. This

material includes Alan Turing’s objection, which was discussed in the introduction to this thesis.

Edmund Furse, a noted professor of mathematics who has additional training in computer

179. Anne Foerst, “Cog, a Humanoid Robot, and the Question of the Image of God,” Zygon 33, no. 1 (1998):
103.
180. Barbour, 377.

science and psychology, also addresses theological objections to robotics and AI in his work.

Furse self-identifies as a Christian in his 1986 article “The Theology of Robots,” and provides

one of the first attempts at interdisciplinarity between theology and robotics.181 He writes that

one of the greater theological fears about robotics and AI is that “our place at the pinnacle of

evolution” and of God’s “creation is called into question. We may feel sibling rivalry as a

species, fearing that our younger brother the robot will supplant us, not only in our jobs and

endeavours, but in God’s love.”182 He goes on to draw parallels between the “playing God”

objection that often accompanies biomedical research and the theological “unease about building

robots.”183

Ultimately Anne Foerst hopes theological reflection on humanoid robots will further

human self-understanding. She also hopes that these technologies will cultivate compassion and

acceptance in humans for humans by dismantling empirical criteria for personhood. Humanoid

robots like Cog and Kismet challenge conventional understandings of what it means to be

human, and in doing so cultivate inclusivity and defeat prejudice. By mimicking human

thoughts, emotions, and ways of relating, humanoid robots chip away at any essential

characteristics of the human. Foerst argues that in excluding Cog and Kismet from “the

community of persons” we must avert to empirical criteria that will “also exclude human

beings.”184 She cites, for example, humans in comas or with Alzheimer’s disease as having

atypical human experiences of consciousness and relationships. These criteria cannot really be

181. Furse, 386.


182. Ibid., 379.
183. Ibid.
184. Foerst, God in the Machine, 189.

used to deny robots personhood, because doing so would also deny personhood to these humans. Foerst

does not, however, develop concrete criteria (theological or otherwise) for ascribing personhood

to humanoid robots. For example, she does not question how humans might be accountable to

their robot creations, or how robot identity might evolve into robot societies.185 What remains

essential in Foerst’s view is that the more theologians understand the significance of humanoid

robots among us, the more pathways we have to honour the intrinsic worth and dignity of all

humans.

Foerst’s research advances some themes that are helpful in organizing theological

discourse on robots and AI. These themes are often partially explored in other theological

writing on robotics and AI and are useful in giving the discourse some structure. Four are

detailed in this chapter. The first of these themes is that robots and their AI systems are a

powerful, even necessary, vehicle for human self-discovery.186 Through efforts to build

humanoid robots, humans stand to learn much about human intelligence and emotional

complexity.187 The second is that robots are symbols. This implies that robotics and AI are not

simply technological achievements, but the coming together of multiple spheres of meaning.

Robots are part of a larger narrative about humans’ place in evolutionary history and

cosmogenesis as a whole. The third theme emerging from Foerst’s work is the theological

appropriation of traditionally scientific concepts of embodiment. As evidenced in Chapter One,

185. These, and related issues, are addressed by Peter T. Chattaway in his review of Steven Spielberg’s film A.I. Peter T. Chattaway, “I, Robot: Despite Steven Spielberg’s Reputation for Producing Warm Fuzzies, A.I. Is Bleak,” Christianity Today 45, no. 10 (2001): 67.
186. Other researchers have noted a similar importance for robots and AI and related technologies. Christopher
B. Kaiser, for example, cites AI as an exciting and frightening accelerant for “asking questions about ourselves.”
Kaiser, 63.
187. This goal of AI is echoed by Barbour, but he takes a more substantive view and asserts that AI research is
largely about “understanding how the human brain functions.” Barbour, 375.

roboticists and AI researchers are increasingly attentive to the role of robot bodies in shaping

robot intelligence. Researchers like Foerst laud this consideration of embodiment and apply

many scientific concepts and arguments about embodiment to their theological reasoning. The

final theme important for Foerst’s theological approach to robots comes from her doctoral

research on Paul Tillich. From his work Foerst develops an understanding of God as ultimate

concern, with a strong emphasis on God as Creator. This emphasis helps her draw parallels

between divine creative activity and the creative activity at work when humans build robots.

Together these four themes give shape to her unique work on robots and AI and provide a foil for

further theological reflection in Noreen Herzfeld and others.

Robotics and Robots as Self-Discovery

Several theologians, including Anne Foerst, Theodore Metzler, and Peter Heltzel, have expressed

appreciation for robots and AI and how they shed light on the complexity of the human body and

brain, the difficulty in understanding human intelligence, and the issues at stake in accepting

robot counterparts.188 This is a moderate position between Moravec’s unfettered ambitions for a

robotic successor species and Knight’s argument that robots are but playthings. These

theologians imagine robots and AI neither as a hoped-for evolutionary heir nor as an instrument

for entertainment. Rather, they see encounters with robots as experiences that help articulate anew what it means

188. Christopher James Calo, Nicholas Hunt-Bull, Lundy Lewis, et al., “Ethical Implications of Using the Paro
Robot, with a Focus on Dementia Patient Care,” Workshop, Association for the Advancement of Artificial
Intelligence, San Francisco, 2011; Lundy Lewis, Ted Metzler, and Linda Cook, “Results of a Pilot Study with a
Robot Instructor for Group Exercise at a Senior Living Community,” Workshop, Practical Application of Agents and
Multi-Agent Systems, Seville, 2016; Theodore Metzler, “Can Agent-Based Simulation Improve Dialogue between
Science and Theology?” Journal of Artificial Societies and Social Simulation 5, no. 1 (2002), accessed March 19,
2018, http://jasss.soc.surrey.ac.uk/5/1/5.html; Theodore Metzler, “And the Robot Asked ‘What Do You Say I Am?’
Can Artificial Intelligence Help Theologians and Scientists Understand Free Moral Agency?” Journal of Faith and
Science Exchange 4 (2000): 37-48; Peter Heltzel, “Cog and the Creativity of God,” Journal of Faith and Science
Exchange 2 (1998): 21-29.

to be human, from the perspectives of both faith and science. Robots and AI represent an

outpouring of human creative activity, but never a replacement for it. Others have also stressed

the importance of robots and AI as avenues for self-discovery. For example, Kevin Kelly, the

founder of Wired magazine, writes, “We investigate the nature of intelligence, not by probing

human heads, but by creating artificial intelligences. We seek truth not in what we find, but in

what we can create.”189

This emphasis on self-understanding is critical to Foerst’s interdisciplinary approach to

robotics and theology. She claims that these reasoning strategies “meet when we attempt to

understand ourselves, who we are, and what our role in this world is.”190 From her experience,

this is also directly tied to the two core goals of AI research. The engineering goal is about

application in digital technologies, medicine, and so on. The scientific goal is “the attempt to

understand human intelligence by building smart machines.”191 To her, pursuit of this scientific

goal is a “spiritual enterprise” because it is undertaken only due to God’s creativity at work in

us.192

In one of Foerst’s most memorable anecdotes, she describes how standing face-to-robot-face

with Cog and Kismet gave her insight into human creativity and identity as God’s partners

in creation.193 Foerst’s affection for robots and AI is so strong that she suggests they are

189. Kevin Kelly, “Nerd Theology,” Technology in Society 21, no. 4 (1999): 388.
190. Foerst, God in the Machine, 38.
191. Ibid., 67.
192. Ibid., 68.
193. Ibid., 7. As discussed in Chapter One, Cog and Kismet are early humanoid robots developed by Rodney
Brooks and Cynthia Breazeal respectively. They were also influential in Anne Foerst’s work on robots and AI. Both
Brooks and Breazeal undertook these projects in part to learn more about the human and its unique intelligence and
social worlds. These motivations are echoed by Foerst, who worked with Brooks at MIT.

necessary for human actualization.194 In her estimation, humanoid robots must be made to reveal

the full complexity of what it means to be human. By building robots that can learn, laugh, and

pick up a coffee cup, researchers reveal the depths of human complexity. For example,

researchers are persistently confronted with the mysteries of the human brain when they fail to

build anything that remotely resembles or replicates human cognition in machines. With the full

resources of an MIT research group, Cog and even Kismet did not rival the sociability and

learning of most human infants. These experiences, however, elicited strong responses within her

that prompted critical self-evaluation. Foerst questioned why she was so primed for relationship

with these robots, why she reacted with frustration or affection when she knew the robots only

had artificial intelligence, and so on. These experiences proved so formative for Foerst that she

writes, “While working with Cog and later Kismet, I have learned more about myself than at any

other time.”195

This is also the lens she applies to biblical and other ancient theological sources as an

important part of her theological analysis. For Foerst, Jewish golem narratives are an especially

important historical foundation in her theological response to robots and AI. She argues that

many of these medieval narratives about anthropomorphized clay beings who act as servants for

their human creators are also about human self-understanding. “With the construction of golems,

people felt they learned more about God’s creation of humans and their special capabilities.”196

In drawing on these kinds of stories, Foerst seemingly dismisses theological concerns like

194. Anne Kull makes a similar remark. In her estimation, robots and AI research is “embarrassingly self-serving”
and humans stand to learn more about themselves in the process than about robots or AI. Anne Kull,
“Cyborg Embodiment and the Incarnation,” Currents in Theology and Mission 28, no. 3-4 (2001): 279.
195. Foerst, God in the Machine, 9.
196. Ibid., 35.

Turing’s and Furse’s. Here she finds hints of the same theological concerns brought up by

contemporary humanoid robotics research, and she uses these stories to resolve them.197 For

example, golems become animated only when their human masters place a slip of paper in their

mouths with the name “God” written on it. From this, Foerst infers that the moral of the story is

that “the ultimate power of life is God’s and God’s alone.”198 Though Foerst claims her

interpretation stays close to those found in contemporary Judaism, she does not indicate source

material for original interpretations of these stories emerging from medieval settings, mainly

referencing conversations with Jewish colleagues at MIT. Foerst also notes that Jewish tradition

sees golems as an expression of prayer rather than an expression of hubris. In building them,

humans are exercising their imaginations, reflecting on their needs and desires, and thanking

God for having been made in “God’s image, which includes creativity and intuition as well as

artisanship.”199 Foerst notes that there are some stories of golems running amok when they are

not properly cared for by their human progenitors and uses this ambiguity in support of robotics

and AI research. These misfiring golems, in Foerst’s view, are a warning that humans in general

and Christian theologians in particular cannot reject humanoid robots completely.

Jewish scholar and rabbi Azriel Rosenfeld saw a place for golems within broader Jewish

thought about robots and the human.200 They are a litmus test for Jewish understandings of

197. Alexei Grinbaum also touches on the importance of stories in translating science and technology to broader
audiences. He suggests that even if they are “irrational and unscientific,” such stories become “part and parcel of
the social reading of technology.” He also expresses some concerns with deploying stories like these to shore up
contemporary readings of technologies. Though he agrees that our “moral judgement of such acts is inevitably
placed in the context of folk myths,” he warns that these are too easily manipulated to serve the ends of the narrator. Alexei
Grinbaum, “The Nanotechnological Golem,” NanoEthics 4, no. 3 (2010): 192.
198. Anne Foerst, “Artificial Sociability: From Embodied AI toward New Understandings of Personhood,”
Technology in Society 21, no. 4 (1999): 375.
199. Ibid., 376.
200. Azriel Rosenfeld, “Religion and the Robot,” Tradition 8, no. 3 (1966): 23.

personhood and the limits thereof. In his view golems are “the relevant halakhic precedent” for

determining whether robots “could be legally human.”201 He reflects on the personhood of

golems from several angles, including the parentage of golems and the licitness of “killing” them

in Jewish law. Rosenfeld’s conclusion is prudently inconclusive: “Perhaps it would be wise to

wait and see what artificial intelligence research will accomplish, rather than making a hasty

decision on this point.”202

Writing in a more contemporary setting, Alexei Grinbaum also speaks about the role of

golems in cultural and theological interpretations of technologies, this time in the context of

emerging nanotechnologies. Grinbaum draws on other stories than Rosenfeld and Foerst, and

comes to difference conclusions. For example, he turns to a story from the Rhine valley, where

people had a myth about the prophet Jeremiah and his golem. The prophet created an

astonishingly humanlike golem, whose first words were “What have you done?” The simple, but

penetrating question provoked a crisis for Jeremiah, who had not thought through the

implications of what would happen after it came to life, and so the golem was undone and the

story ends.203 For Grinbaum, this is simply an ancient iteration of an ongoing problem—that

once something has been made it is not so easily unmade. This is a central ethical challenge for

technological research that continues through to today, but one not addressed by Foerst.

In a rare instance of direct published theological dialogue on robotics and AI, Michael

Anderson responds to Foerst’s use of golem narratives. He notes that in arguing for parallels

201. Ibid.
202. Ibid., 26.
203. Grinbaum, 193.

between robots and golems, Foerst is “describing a kind of theologically grounded proto-AI.”204

He goes on to critique Foerst’s appropriation of these stories for the purposes of legitimizing

robots and AI within Christian theology, noting that this “interpretation allows her to make the

reverse claim that modern AI is in fact a quasi-religious practice, a reverent imitation of God’s

creative abilities.”205 As Anderson correctly notes, Foerst fails to acknowledge significant

contextual differences between contemporary robotics research and medieval golems.

Importantly, the mythology arose from a context where Jews were persecuted, framed for

infanticide, and ghettoized. It held a special protective function for the people who generated and

passed along this mythology and was directly connected to divine will for this protection and

God’s concern for God’s chosen people. Robots like Cog and Kismet hold no similarly

significant place, especially in Christian theological tradition. The researchers who developed

them imagined no protective or talismanic function for the robots, nor did they especially require

any such protection. Though perhaps both golems and robots can be correctly conceived as

“reverent imitation[s] of God’s creative abilities,” Foerst ultimately does a disservice to her

argument for contemporary robotics through strong emphasis on an analogy with little historical

support for it.

Peter Heltzel, a theologian and pastor based at New York Theological Seminary, echoes

Foerst on the theme of robots as conduits for self-discovery. He writes that the “wonder factor”

within AI research arises because “the human mind-and-body is a mysterious and complex

frontier.”206 This wondrous space between the human and the state of robotics and AI research is

204. Michael L. Anderson, “Why Is AI So Scary?,” Artificial Intelligence 169, no. 2 (2005): 206.
205. Ibid.
206. Heltzel, 22.

exceptional and unique ground for the pursuit of human self-understanding. He is not alone in

seeing a link between the human experience of wonder and theological investigation of robots

and AI. Margaret A. Boden explores this link in-depth starting from the theological claim that

wonder “is a prime component of the religious sentiment. Anyone devoid of wonder is incapable

of a religious response to the world and our experience in it.”207 It is easy to see this sense of

wonder in Foerst’s work. Take, for instance, her emphasis on the handshake between Cog and

Harvey Cox as a watershed moment for theology. This wonder, however, comes with drawbacks.

For Foerst, as for others including the roboticists discussed in Chapter One, this admiration prevents

her from seeing humanoid robotics and AI research as both wonderful and worthy of dread. Her

work suffers as a result, but it provides excellent groundwork for other theologians to offer a

more critical take on increasingly humanlike robots and AI.

Robots as Stories

For Foerst, robots are stories about humans and reveal much about what it means to be human.

These are stories that are theologically compelling, worthy of discussion alongside scripture and

other great narratives holding influence in our culture.208 As such, theologians cannot fear or

object to robots. Foerst builds this argument by playing with traditionally scientific metaphors

used in biological classification. Here, the human is Homo sapiens—the wise man. This

emphasizes the human brain and the cognitive skills that set it apart from others in the genus

Homo and kingdom Animalia. Foerst finds this metaphor insufficient, so she adds a number of

metaphors to round out her image of the human. Humans are also makers and builders, pleasure

207. Margaret A. Boden, “Wonder and Understanding,” Zygon 20, no. 4 (1985): 391.
208. Foerst, “Cog, a Humanoid Robot, and the Question of the Image of God,” 103.

seekers, religious creatures, self-centred beings, and storytellers. At once Homo sapiens are also

Homo faber, Homo ludens, Homo religiosus, Homo economicus, and Homo narrans.209 The first

of these, Homo faber, reflects the drive to create. This impetus, combined with the desire and

capacity for relationships, gives rise to humanoid robotics research. Homo faber also “addresses

our ability to follow our intentions and wish over long periods of time through many

generations.”210 When humans as Homo faber undertake projects that cannot be brought to

fruition during one lifetime, we are enacting our anthropology as part of a much broader

biological, historical, and cosmic history. Church architecture offers good examples of this,

including the Gothic cathedrals of France or architect Antoni Gaudí’s Basílica i Temple

Expiatori de la Sagrada Família, which is still under construction in Barcelona almost one

hundred years after his death. The playful human, Homo ludens, is a social animal, one who

takes on family and relational roles and “seeks fun and entertainment.”211 This layer of Foerst’s

anthropology primes humans for relationship with robots. Homo religiosus is the human of

theological inquiry and “aims to be spiritual and to act in a meaningful way.”212 This figure also

appears in robotics and AI research insofar as Foerst imagines them to be meaning-making and

spiritual enterprises. The Homo economicus is the “self-centered, pleasure-seeking humanoid.”213

This is the human of economic imagination. It is the human that buys and consumes and takes

care of itself.

209. Foerst is but one theologian who sees the usefulness of diverse metaphors for the human in theological
conversations about robots and AI. Antje Jackelén names these diverse metaphors as well with the hope that it
underscores the relationality and complexity of the human. See Jackelén, 300.
210. Foerst, God in the Machine, 14.
211. Ibid., 15.
212. Ibid.
213. Ibid., 17.

Finally, if robots are stories, then humans are the storytellers. This metaphor, Homo

narrans, is the most important for Foerst as she relates robotics to theology, robots to humans. It

is the Homo narrans that propels human myth- and meaning-making. Humans take part in this

storytelling in many ways, from arts and literature to the natural sciences. It is how we construct

and interpret identities.214 Humanoid robots and AI are a uniquely important part of these efforts.

These robot-stories try to make sense of humanity’s place in evolutionary history, and

cosmogenesis as a whole. Building robots and AI is a process of self-definition unlike any

other.215 Homo narrans is the human at work in the robotics and AI laboratory, the one who

builds Cog and Kismet. The human as storyteller, claims Foerst, is also the one who sees in this

enterprise value rather than threat.

This metaphor speaks volumes about how Foerst sees her own contributions to

interdisciplinary reflection on robots and AI. Throughout God in the Machine, Foerst describes

her task in terms of defusing perceived theological threats arising from robotics and AI research,

and repeatedly draws on the metaphor of humans as storytellers to describe this work.216 At the

conclusion of her seminal work, Foerst expands the Homo narrans to encapsulate the relational

and communal quality of human storytelling and myth-making. Here she speaks of the desire

humans have to find others who understand their stories and tell similar ones. This invariably

leads to the formation of social groups based on these shared stories. It also leads to the

214. Ibid., 49.


215. Ibid., 21.
216. Irving Hexham, for one, was an early theological commentator on the “threats” robots pose not only for
theological thinking, but for humans themselves. He proposed in 1980 that robots “present a serious challenge to
human authority and power” and that humans are “in danger of being replaced by the product of its own ingenuity
and becoming the victim of a very unnatural selection.” Hexham, 576.

intermingling of people who disagree with each other, but remain in community because the

benefits far outweigh the costs.

Foerst is just one researcher who appreciates the narrative aspect of both science and

technology. Thomas Berry, whose thought is influential in developing the theological response in

Chapter Four, also applies this hermeneutic to the so-called natural sciences. He remarks that

science “is providing some of our most powerful poetic references and metaphoric expressions.

Scientists suddenly have become aware of the magic quality of the earth and of the universe

entire.”217 These powerful references draw humans into intimacy with cosmogenesis: “We see it

and hear it and commune with it as never before.”218 A similar awe, however, prevents Foerst from

developing a critical relationship with robots and AI, and the stories they tell. While her

emphasis on their ability to teach humans about themselves is mostly correct, it is incomplete.

The case of military applications of robots and AI illustrates perfectly that robots have other stories

to tell beyond the positive, heuristic ones in Foerst. Robots also lay bare human propensity for

violence and destruction, and our seemingly endless appetite for power and resources at the

expense of others.

Robots as Symbols

As described above, the idea of robots as stories and humans as storytellers is central to Anne Foerst’s

theological reflection on robots and AI. This approach is a springboard for another distinguishing

feature of her work. Not only are robots stories, but symbols as well. In both common sense and

philosophical terms, the word symbol is dense and meaningful. The focus of this thesis does not

217. Thomas Berry, The Dream of the Earth, 16.


218. Ibid.

allow for detailed analysis of this point, but it can be noted that for Foerst, symbols are “a more

advanced form of narratives. A symbol brings two very different spheres together, two realms

that usually have nothing to do with one another.”219 The flag of the United States is one such

symbol, according to Foerst. The flag is both the material object, the arrangement of stripes and

stars, but also, as an identifier of a nation, it embodies ideas about identity and patriotism. Thus

the flag as symbol brings together its own self-definition with the definition of a nation-state.

Similarly, robots are artifacts of metal, silicon, plastic, and so on. Their material sphere is

impressive in terms of engineering and other technical achievement. But robots for Foerst are

also solutions to problems, stories about the human, and a glue that holds communities together.

This is an entirely different sphere from their material one. These two spheres then come

together to form the core symbol of a new interdisciplinary frontier in theological research.

Robots as symbols bring together science and theology in new ways. Their ever-more

human form and intelligence raise new questions about the definition of life, what it means to be

human, and even what it means to be made in the image of God. In such a way, robots bring new

insights from science into direct contact with old questions from theology. Such a symbolic

approach to robots and robotics “opens up levels of reality which otherwise are closed for us.”220

Importantly, this line of thinking is one of the few ways Foerst addresses contextuality in her work.

Symbols, she argues, allow for different and even competing interpretations of the same

phenomena. Since the meaning of symbols is always negotiated through place and time, they are

always context-dependent and at least partially socially constructed. Though some symbols may

219. Foerst, God in the Machine, 18.


220. Foerst’s understanding of symbols is largely drawn from Paul Tillich, who informed this and other aspects
of her theological approach to robots and AI. See Ibid., 18-19. Paul Tillich, Systematic Theology, 3 vols. (Chicago:
University of Chicago Press, 1951-1963), 42.

tend toward universal significance (e.g., Foerst names scientific theories as symbols), most are

rooted in far more local contexts.

Anne Kull also proposes the idea of robots as symbols. Her historical analysis, however,

runs much deeper than Foerst’s. She points, for example, to Romantic thinkers who protested

the elevation of technology into a symbol. As technology took up more and more space in society, there was less

room for the symbols that had historically given humans meaning in their lives. While Cog and

Kismet come centuries later, Kull’s work shows that thinking like Foerst’s has significant

historical precedent. In contrast to Foerst’s optimism, however, Kull finds historical problems

with too close an association of human and their technologies. “What concerned Carlyle, Goethe,

and Schelling,” she writes, “was not so much the proliferation of technologies but the emergence

of the machine as the root symbol for all reality.”221 Importantly, Kull also introduces Donna

Haraway to this conversation. When it comes to discussions about humans, technology, and

symbols, her contribution is unavoidable. For Haraway, the cyborg, a creature of politicized

hybridity, is a symbol for our age. In bringing together achievements in engineering and related

technologies with important debates about intelligence and what it means to be human, robots

are cyborgs par excellence. Here Foerst and Haraway have their strongest agreement when

Haraway writes, “The cyborg as a cultural symbol provides a heuristic device for studying

contemporary humanness as it is technologically crafted simultaneous from the matter of

material bodies and cultural (scientific) fictions and facts.”222 This overlapping of spheres, the

221. Kull, “Speaking Cyborg: Technoculture and Technonature,” 280.


222. Ibid., 282. The term cyborg has its origins in the 1960s as shorthand for the “exogenously extended
organizational complex functioning as an integrated homeostatic system unconsciously.” The first users of the term
had adaptation in mind, especially in extreme environments like outer space and the changes that the human would
have to undergo to survive. Manfred Clynes and Nathan Kline, “Cyborgs and Space,” Astronautics (September
1960): 26.

hybridity between science and theology, encourages Foerst to rely on scientific reasoning for her

understanding of human and robot embodiment. It is to this next theme that we now turn our

attention.

Embodiment

For decades now, robotics and AI researchers have been aware of the importance of embodiment

for success in their line of research. As described in Chapter One, Rodney Brooks was especially

influential in bringing debates about embodiment to the fore of robotics research. He challenged

the conventional idea that success in robotics and AI required the replication of humanlike

cognition, and instead turned to other creatures for cues. After Brooks, Breazeal added a

developmental aspect to his insights and insisted that embodied, social learning be central to

robotics and AI. From their time forward, roboticists and AI researchers have continued to wrestle

with the relationship between robot brain, body, and environment. These conversations have not

escaped Foerst, and she draws on a number of sources to develop her reflections on embodiment

vis-à-vis humanoid robots.

Foerst is concerned with defeating mind-body dualism in theological responses to robots

and AI. She argues that most of those with a western worldview imagine themselves as having a

body rather than being one.223 From both scientific and theological perspectives, this “makes it

difficult to grasp our own embodiment.”224 She offers a corrective to this problem by pointing to

the rich metaphors of Luther’s German translation of the Bible. This sixteenth-century

translation has many body-related metaphors that have since been stripped out of contemporary

223. Foerst, God in the Machine, 79.


224. Ibid., 81.

English translations. For example, Proverbs 20:27 often reads “innermost part” in English,

whereas Luther uses “belly.”225 Foerst argues that losing these scriptural roots contributes to the

neglect of the body and embodied experience in contemporary theological discourse.

Incarnational embodiment also informs Foerst’s reflections on human embodiment in

light of robots and AI. She suggests that in Jesus, by “becoming embodied like us, God can

understand us, empathize with our estrangement, and show us ways out of it.”226 Foerst adds

some new elements to traditional understandings of the full humanity of Jesus Christ, especially

in her focus on the bodily functions and sexuality of the human God.227 She critiques artistic

portrayals of the Christ child as perfect and pristine. Instead, she wants her readers to imagine

Jesus like other human children—fussy and dirty. She also rejects antiseptic images of the adult

Jesus and refutes notions that “Jesus as superhuman should be free from ugly bodily actions.”228

Beyond recalling these mundane aspects of Jesus’s embodiment, Foerst mentions something

more provocative. She raises the possibility that “Mary Magdalene was more than a friend to

Jesus” and that it “makes sense theologically that Jesus was sexually active, as sex is, after all, an

integral part of human life so that Jesus as true man must have encountered it.”229 Such

speculation does little to advance arguments for theological reception of robots or for defeating

225. Ibid., 78. Bible translations provided by Foerst do not match published translations, suggesting these are
translations she has prepared herself, though this is unspecified.
226. Ibid., 106. When Foerst speaks about sin she uses the language of estrangement, which she adopts from
Paul Tillich.
227. A growing field of research and popular discourse focusses on robots and sexuality, which of course is
important for theological discourse about embodiment, relationality, and anthropology in general. See for example,
Rémi Lach, “Les humains pourraient épouser des robots en 2050,” Geeko, December 22, 2016, accessed October 11,
2017, http://geeko.lesoir.be/2016/12/22/les-humains-pourraient-epouser-des-robots-en-2050/; Barbara Debusschere
and Sara Vanderkerkhove, “Op komst seksrobots en heel veel vragen,” DeMorgen, December 23, 2016, accessed
October 11, 2017, https://www.demorgen.be/wetenschap/op-komst-seksrobots-en-heel-veel-vragen-b08d93b0/;
David Levy, Love and Sex with Robots: The Evolution of Human-Robot Relationships (New York: Harper, 2008).
228. Foerst, God in the Machine, 107.
229. Ibid., 108.

problematic dualism. There are several flaws with her approach, including that she sees a

necessary link between sexuality and sexual activity. It is one thing to assume Jesus had a sexual

orientation, but another to assume this means he acted on it. Also, she does not clarify what she

means by a “true man,” but it is difficult to accept that there are any necessary or sufficient

criteria for defining a man. Even biological sex distinctions start to dissolve when one considers

disorders of sexual development in humans, and instances of other animals that change sex

throughout their lives or play either role in mating.

Foerst agrees with Breazeal that having a humanlike body improves the chances that

humans will interact with robots as they do each other. This humanoid structure clearly helps

humans use body language and patterns of speech that resemble those used in regular human-to-human

interaction. Cog and Kismet have bodies organized like human bodies for this reason.

Though no one would confuse Kismet’s rabbit-like ears and toothless mouth for human, its

form—two ears, two eyes, one mouth, and so on—is clearly analogous to a human. Cog is more

typically human looking, but is still unambiguously robot in appearance with exposed wires and

bolts lending it a certain resemblance to the Terminator’s endoskeleton. Their embodied form is

important to Foerst for reasons echoing Breazeal and Brooks. First, as just noted, the more a

robot looks, moves, and behaves like a human, the more likely people are to interact with it as

they would another human. If humans are uneasy and interact awkwardly with the robot, then its

AI has little chance to acquire a repertoire of authentic human behaviour. Second, Foerst hopes

for a future with widespread robot presence outside the laboratory. More robots mean more

potential for bonding and partnership with robots. This also allows for more creativity with

robots and AI than allowed by restricted laboratory environments. To advance such goals, robots

will have to be built to work in a world organized by and for humans. Human tools, furniture,

homes, places of work, language, and art are all imprinted with the human embodied form. This

observation leads to Foerst’s support for the embodied AI hypothesis that “humanlike

intelligence can only emerge in a body which is as humanlike as possible.”230 As a criterion for

robotics and AI success, this view opposes the assumptions of the Turing test and Loebner prize

competition, both of which defend a decidedly disembodied understanding of AI.

Along with interaction, development, and integrated architectures, embodiment is one of

Foerst’s four defining features of embodied AI. Taken together, these four criteria mean

intelligence is always directly and inextricably related to the way the body functions, how it is

organized, and its placement in ecosystems and societies. Briefly, interaction in the context of

embodied AI means that robots react to changes in environments through sensors and actuators.

It also means that robots, or more aptly their AI, seek out opportunity for interaction with similar

intelligences. Development references the kind of learning Cynthia Breazeal designed for

Kismet, whereby AI is built to mirror the kind of organic, cumulative learning seen in humans.

This approach means that the intelligence unfolds in a way that is responsive to feedback, from

others and its environment. Finally, integration simply means the subsystems of a robot’s

synthetic nervous system “learn to interact and develop emergent higher-level behaviors.”231

This is essentially a holistic approach, where the parts of a system can coordinate with other

parts to yield more sophisticated and complex behaviour than on their own. From Cog and

Kismet, and these four criteria, Foerst takes her cues for her theological concept of embodiment.

This approach to embodiment raises far more theological questions than it resolves. In

one way, it represents a postfoundationalist approach and honours the successes of robotics and

230. Foerst, “Artificial Sociability: From Embodied AI toward New Understandings of Personhood,” 377.
231. Ibid., 378-379.

AI in addressing the challenges associated with robot and human embodiment. In another way, it

is asymmetrical in its reliance on science and ultimately does a disservice to theological

reasoning about what it means to be embodied. The importance, however, of these extra-theological sources is clear for Foerst: “Research areas like psychology, anthropology, and

ethology can tell us about common human traits, common cultural traits, common primate traits,

and common mammal traits.”232 Throughout God in the Machine she turns to researchers in

these fields to develop her understanding of embodiment. For example, she draws on Stanford

sociologist Cliff Nass to argue that humans easily bond with digital technologies and will mimic

human-to-human relational patterns even with simple computers.233 His research indicated that

simply matching a coloured armband to a same-colour marker on a computer during an

experiment increased loyalty to, and reduced criticism of, the computer by human subjects. She

uses this to show how willing and able humans are to receive robots. Foerst also includes the

familiar story of how female primatologists reinterpreted primate behaviour to illustrate the

importance of the observer.234 Foerst takes this to show that human embodiment is close to that of other

primates and that we should understand ourselves increasingly as part of Kingdom Animalia,

rather than distinct from it. Foerst continues to piece together her understanding of embodiment

by turning to her computer scientist and roboticist colleagues at MIT who “understood

intelligence as an embodied and social phenomenon.”235 She mentions, for example, the love of

laughter, eating, and sport as evidence of this community’s implicit valuing of embodiment. In

all these references, as discussed further below, there is hardly a critical word against these

232. Foerst, God in the Machine, 68.


233. Ibid., 28.
234. Ibid., 60.
235. Ibid., 87-88.

approaches to embodiment, nor does she challenge them with theological reasoning as is her

interdisciplinary obligation.

The opening pages of God in the Machine include an anecdote that reveals much about

Foerst’s engagement with issues relating to robot and human bodies. Here she speaks of a

handshake between theologian Harvey Cox and Cog, the meeting of two spheres in one gesture.

In this socially significant moment between human and robot, Foerst sees the building of

interdisciplinary bridges. It is an act of friendship between two disciplines which, by some

accounts, have shared a past of animosity. As elsewhere in her work, Foerst sees this as a

positive moment, one imbued with hope and interdisciplinary promise. She does not, however,

critically evaluate the assumptions she brings to this moment. For example, a simple handshake

comes with a number of power relationships that are contextually dependent. Not all cultures

shake hands as a form of greeting or agreement, and not all people are free to shake the hands of

others. Take for instance religious prohibitions on touching members of the opposite sex, or the

place of bowing in Japanese society or kissing on the cheek in many European countries. Also,

she assumes Cog behaves intelligently here because the handshake looks and feels like one

shared between humans. She fails to address whether Cog understands the significance of the

interaction, and the social currency being traded in this moment. While Foerst takes this moment

to be the height of embodied robot intelligence, it is only the beginning of a far broader

conversation.

Foerst continues to take cues from her roboticist colleagues in addressing embodiment in

her own theological writing. For example, she writes “the team at MIT set out to build a body

that, through interaction, would develop motor control and coordination and thus would build up

the capability for intelligent thought.”236 This shows an appreciation for the mutual dependency

of intelligence and environment, but she does not question the roboticists’ assumptions about this

body. Human bodies always come with different kinds of privilege and power. Whether has a

disability or is tall or racialized dramatically affects how one experiences the world and learns

through it. How people interact with each other depends very much on these kinds of biases. If

robots are somehow exempt from this, then they do not truly demonstrate humanlike

intelligence, but if they are not then Foerst needs to discuss these socially conditioned biases. At

one point Foerst calls Cog “Schwarzeneggerian,” but apparently does not consider what this

means in her social location.237 Certainly Cog is square-shouldered and calls to mind Arnold

Schwarzenegger’s signature character, the Terminator. The adjective, however, also implies

maleness, whiteness, and able-bodiedness, all of which carry unearned privilege and change how

people interact with each other. These are the kinds of critiques contextual thinkers must bring to

theological conversations about robots and AI; they are further developed in Chapter Four.

The handshake between Cox and Cog is representative of a more significant overlapping

of robotics and theology. In Foerst’s terms this is a symbol. In postfoundationalist terms, this is a

transversal moment.238 According to Wentzel van Huyssteen, a transversal moment is “a lying

across, an extending over, a linking together, and an intersecting of various forms of discourse,

modes of thought, and action.”239 These moments bring together the claims and methods of two

reasoning strategies based on shared concerns and questions. This is a robust form of

236. Ibid., 93.


237. Ibid., 92.
238. Like Foerst, Peter Heltzel also identifies and affirms the intersection of embodied AI and theology as an
interdisciplinary transversal moment, noting that, “Embodied intelligence is a point of contact in the dialogue
between AI and Christian theology, which affirms the psychosomatic unity of the human person.” Heltzel, 22.
239. van Huyssteen, The Shaping of Rationality, 136.

interdisciplinarity that looks at issues from as many vantage points as possible. Foerst seems to

take this challenge seriously and draw on many sources outside of conventional theological

reasoning. Van Huyssteen, however, cautions against finding easy agreement in these transversal

moments like humanoid robotics and AI research. He stresses that when researchers from

different reasoning strategies find shared concerns, their terminology and methodology do not

necessarily also overlap.240 In a postfoundationalist view, these nuances must not be glossed over

for the sake of harmony. These differences, however fine they may be, bring into relief points

where roboticists and theologians start to diverge on claims about the human, robots, and their

relationship to the world going forward. In Foerst’s work there are a number of places where she

too easily embraces the claims of roboticists without applying a critical theological lens.

Theologians and roboticists do share a common interest in robots, embodiment, and the human,

but the two reasoning strategies draw on different resources and methods in developing their

respective responses. Foerst does little to honour these differences and gives her audience the

impression that the relationship between theology and robotics is far more homogenous and

harmonious than it really is.

According to postfoundationalist critique, theological reflection on embodiment must rest

on more than easy appropriation of scientific stories and arguments. Postfoundationalism draws

theologians into the complexity of interdisciplinary theology, and in the case of robots and AI,

this implies the complexity of the human. Embodiment in this view is very little like the vision

found in Foerst. Here, embodiment is contextual and experiential. It is also far from the perfected

forms of robot bodies. Foerst does acknowledge that this necessarily includes “suffering and pain

240. van Huyssteen, Alone in the World?, 9.



due to the limitations of our very embodied physicality,” and the ways this informs how we

interact with and understand the world.241 She fails, however, to articulate clearly that there is also

a social dimension of embodiment that relates to experiences of marginalization, violence, and

oppression. There is little mention of race, expression of gender and sexual orientation, or

disability as experiences of embodiment that radically shape one’s experience of the world. By

extension, embodiment includes all the ways bodies are judged, and rejected, and oppressed in

human societies. It also includes all the ways bodies are revered, and celebrated, and privileged.

This is radically different than the robot-inspired embodiment found in Foerst, where bodies

experience little suffering or limitation. Robots sense and perceive the world consistently,

uninfluenced by racism, or sexism, or ableism. Further to this, they do not experience changes in

embodiment through injury, illness, and aging. They experience no growth, or development, or

decline, no estrangement, or oppression, or pain. Despite the developmental claims of Breazeal

and others, robots do exist in a certain homeostasis that has no real parallel in human experience.

Contemporary contextual theologies bring important critique to Foerst’s work at this point.

Theological expertise in this area includes contributions from all manner of feminist thinkers,

those working on all aspects of race and theology, and those advocating for the place of people

with disabilities in the church and academy. Here race, class, gender expression, age, sexual

orientation, ability, and so on—features that are sorely missing from Foerst’s account—all give

texture to the experience of human embodiment. These features are only further complicated

when moving from one context to the next. For example, what is beautiful or polite in one

setting, may be ugly or rude in another. Foerst names eye contact as an important social signal

but does not acknowledge that this might be aggressive in some cultures or that people on the

241. Foerst, God in the Machine, 96.



Autism spectrum may have difficulty with it. Such brief examples show that there is impressive

diversity in human embodied experience, and theologians must take care to honour it when

responding to embodied AI and robotics research. While some of these concerns are only

anachronistically applied to her research, others—like a feminist critique—were accessible to

Foerst as she wrote God in the Machine. These concerns are discussed in further detail in

Chapter Four as part of renewed understanding of the human vis-à-vis humanoid robots and AI.

Embodiment in community

In her theological response to robots and AI, Foerst tries to diminish any sense of human

uniqueness. This helps her place humans and robots in parallel, by focussing on the features that

they share, rather than the things that can set them apart. To help show how humans are not

meaningfully unique, Foerst often compares humans to other animals, especially in terms of

bodily functions and sexual activity. In focusing on these aspects of lived experience, humans are

but one animal among all animals. She suggests that “we can recognize how much indeed we are

part of nature, not distinct from it, not special.”242 While Foerst turns to science to argue that

humans are not unique, others draw on the same sources and reach a different conclusion.

Barbara King, for example, sets the human in context with other species. “Scientists accept that

modern humans are unique in the way that biological definition requires each species to be

unique. The gelada baboon Theropithecus gelada that forages on the grasslands of Ethiopia, and

the common house spider Achaearanea tepidariorum . . . are each adapted to life on earth in

specific ways not found in any other species.”243 What Foerst fails to see in such an approach is

242. Ibid., 110.


243. Barbara J. King, “Primates and Religion: A Biological Anthropologist’s Response to J. Wentzel van
Huyssteen’s Alone in the World?,” Zygon 43, no. 2 (June 2008): 452.

that she is in fact stripping the human of many important characteristics—the ones relating to the

human as a social, communal animal. This puts the anthropological emphasis on the human as an

individual, which ultimately makes for far easier comparison with humanoid robots. The traits

and behaviours of a single human are far easier to replicate than the unpredictable and relational

aspects of human social experience. Such an approach artificially narrows the gap between

humans and humanlike robots.

In understanding the human in individual terms, Foerst cuts off the human story from the

story of the universe. A broader view of the human—one that accounts for social structures,

relationships, and ecosystems—challenges this approach. In this more complex narrative,

humans are intimately bound to many evolutionary pathways, a deep history that draws us into

relationship with the rest of cosmogenesis. Robots do not share this heritage, which links humans to the gelada baboon and the common house spider alike. Their evolutionary roots are shallower. They lack the

intrinsic relationality seen in all animal species that allows for cooperation, survival, and even

flourishing. Using the language of AI, robots—as of yet—have no ability to synchronize

meaning among themselves, something humans and other animals do easily, regularly, and

inevitably. Human ability to coordinate and build upon these social structures is something

Foerst misses in her comparison of robots and humans.

Foerst does take some steps to address the implications of embodiment beyond the

individual. For example, she uses the language of robotics and AI to refer to humans “and all

other animals as embedded systems.”244 This means that humans are a system among

interdependent systems bound by real-time constraints, including those emerging from the

244. Foerst, God in the Machine, 113.



environment. She continues, this time using the language of cybernetics, “an organism and its

environment must be modeled together in order to understand the behavior produced by the

organism.”245 While this approach to embodiment does acknowledge the importance of factors

outside the body in shaping the human, it falls short of the deep relationality and holism found in

ecotheological approaches. In Foerst the emphasis is still very much on the individual human in

relationship to its environment. In ecotheology, the individual and environment shape each other,

but always within the context of much broader and more complex identities and stories. This is

discussed in greater detail in the approach to robotics and AI outlined in Chapter Four.

Foerst also speaks of sexuality as an indicator of the strong connection between

embodiment and community. This is a promising step away from the problematic image of the

human as individual found throughout her work. The example of sexuality points to the

relational and contingent quality of human life. It is a significant motivating factor in bringing

people together in relationship, the forging of bonds and formation of social groups, and in some

instances reproduction. Since sexuality can be expressed in relationship, it points beyond an

individual body and toward others in community. Sexual taboos also help congeal community

around deviant and non-deviant behaviour. Unfortunately, in Foerst’s work this is explored only

in heterosexual terms, relying on a conventional gender binary, and parsed in terms of

procreation: “The interactivity of sexuality creates a community between man and woman that

provides an environment in which offspring can grow.”246 This leaves out much of the diversity

of sexual orientation and gender expression, and also prizes heterosexuality and the nuclear

family over other arrangements. While the proliferation of queer and other voices in contextual

245. Ibid.
246. Ibid., 117.

theology perhaps postdates Foerst, her linking of the human with heterosexual reproduction does

not add much to her argument that sexuality fosters community.

Foerst seeks to correct a fear of embodiment she sees in theological thinking about the

human. She does so through revisiting biblical metaphors, emphasizing Jesus’s own humanity,

and drawing on a number of sources from outside theological traditions. Her efforts are creative,

but yield an incomplete image of the human that allows for too easy comparison with robots and

AI. While some ideas about embodiment overlap between robotics and theology, they are not

altogether convergent. Emphasizing the relational and holistic quality of the human, as in

Chapter Four of this thesis, helps flesh out an image of the human that stands more

distinctively—and authentically—next to robot counterparts.

God

The understanding of God found in Foerst is strongly linked to the themes of self-discovery,

creativity, and embodiment discussed above. God-given creativity helps humans understand

themselves through humanoid robotics. This creative expression then helps humans better

understand God as Creator. Foerst is among several theologians who see the connection between

human self-discovery through robotics and AI and understanding God. Peter Heltzel, for

instance, echoes this and advocates for an understanding of the human that recalls Philip

Hefner’s “created co-creators.”247 In Hefner, theological understandings of the human are

247. Heltzel, 22. For Hefner, the human is one method by which God makes divine intentions known and
continues the fulfilment of God’s own ends. The human role in this is to discern and work in harmony with these
ends, including through all manner of human technologies. See also Philip Hefner, “The Evolution of the Created
Co-Creator,” Currents in Theology and Mission 15, no. 6 (1988): 512-525; Philip Hefner, The Human Factor:
Evolution, Culture, and Religion (Minneapolis, MN: Fortress Press, 1993); Philip Hefner, “Technology and Human
Becoming,” Zygon 37, no. 3 (2002): 655-665; Philip Hefner, Technology and Human Becoming (Minneapolis, MN:
Fortress Press, 2003). Unlike Hefner and Herzfeld, Sallie McFague has a bleaker view of humans in creation. In

grounded in evolutionary processes, which speak to the freedom God gives the world and the

participation of humans in the fulfilment of God’s purposes. In Heltzel’s reformulation, there is a

correlation of “human creation of robots with the divine creation of the world.”248 Even secular

thinkers have weighed in. Kevin Kelly, for one, writes, “We tend to see God reflected in nature,

but my bet is that technology is the better mirror of God.”249 He goes further and suggests that

through increasingly impressive technical achievements humans themselves become gods.

God as creator is central to this treatment of robotics and AI. Foerst likens the creative

processes in building humanoid robots to the ones God uses in the creating of the human. She

writes, “God has created us in God’s image and we use the same process in humanoid

construction as we create them in our image.”250 She also claims that many “feel that robot-

building can be spiritual, as it taps into God’s creative powers in us.”251 This claim has an

important function. It builds sympathy for the robotics enterprise. If divine creative activity is

deemed “good,” then what humans carry out in the robotics enterprise must also be so. This view

insulates robots from theological critique and leaves some theological questions unresolved. For

example, it does not emphasize the humility of the human in relationship to God’s omnipotence,

the relationship between divine and human will, or the place of humans in relationship to other

animals. Such a view also emphasizes God as creator at the expense of other characteristics of

God. For example, trinitarian language is all but absent in such perspectives. With this absence

comes the loss of other metaphors for God, including redeemer and sustainer. Strengthening

view of the destructive power of nuclear technologies, she wonders instead if humans are “uncreators.” McFague,
“The World as God’s Body,” 671.
248. Heltzel, 23.
249. Kelly, 392.
250. Foerst, God in the Machine, 39.
251. Ibid., 41.

trinitarian theology within theological discourse about robots and AI challenges the optimism

found in Foerst. Such a move demands consideration of how robots relate to the brokenness and

healing of the world. It challenges theologians to see how robots contribute to sin, estrangement,

and oppression. It also challenges theologians to consider how robots might participate in

goodness, reconciliation, and liberation.

Despite the strong parallels between God’s creative activity and human building of

robots, Foerst insists that there is room for humility in this understanding of the human. For

example, she agrees with the kabbalists of the golem traditions who see “the construction of

humanoids as a worship of God.”252 These creative acts make the complexity and love involved

in the creating of the world clearer. It also underscores the relational bond between creator and

created. Human limitations only further emphasize the difference between God and not-God.

“Modesty and awe come out of humanoid construction, as we can never be as successful as God.

We are a derivation of God and our creatures will be the next derivation, our images.”253 This

mirroring of God with human, and of human with robot, points to another important aspect of

Foerst’s understanding of God—the imago Dei.

Imago Dei

The imago Dei is essential in Foerst’s understanding of Homo religiosus. It is the

theological thread that unites human and the divine in humanoid robotics research. For Foerst,

what it means to be made in the image of God is the core of her theological reasoning and gives

all her other reflections their shape. Other theologians share this emphasis. Gregory Peterson, for

252. Ibid., 37.


253. Ibid., 39.

one, calls the imago Dei the “locus of human nature” and notes that, although it is a “turn of phrase found surprisingly sparingly in scripture, it nevertheless has become the central way of thinking of the

place and purpose of humanity in the Christian faith.”254 Foerst agrees with this assessment,

remarking on biblical sparseness in contrast with rich and still-unfolding theological tradition.

Foerst sees the biblical record’s few evocative words on the imago Dei and tradition developing

around it as the opening words in an ongoing story about what it means to be created in the

image of God.255 Here, in this ongoing interpretation and reinterpretation, Foerst finds hope for

theology receptive to humanoid robots. Just as God is creative and creating, so are humans in

their efforts to understand themselves and their relationship with the divine through robots and

AI. The imago Dei is the symbol that unites these parallel, yet distinct activities.

The image of God is the bringing together of “two different spheres.”256 In this doctrine

the story of God overlaps and overlaps again with the story of the human. Foerst again relies on

Tillich to forge her theological way forward. Quoting her mentor, she remarks that each

symbolic moment “opens up levels of reality” that in turn allow for new and creative symbolic

contact between these two cosmic stories.257 Robots and AI, insofar as they are part of the human

creative narrative, participate in this human-divine contact. Others see the image of God as an

important analogy for illuminating both human and divine activity. The scientific narrative of

robot-building parallels the theological narrative of God’s cosmos-building. These two narratives

254. Peterson, 342.


255. Noreen Herzfeld echoes Foerst’s observation on the thinness of biblical commentary on the imago Dei.
See “The Idea of Creation” in Technology and Religion, pp. 10-17. Herzfeld also notes that the idea that human
beings are created in the image or likeness of God “is stated explicitly in only three passages in the Old Testament”:
Genesis 1:26-28, 5:3, and 9:6. Herzfeld, In Our Image, 11.
256. Foerst, “Cog, A Humanoid Robot, and the Question of the Image of God,” 97.
257. Paul Tillich, Dynamics of Faith, (New York: Harper, 1957), 42.

shed light on each other. Though this may seem hubristic, especially in that robots and AI are

still so primitive, theologians insist this is a study in contrast. Peter Heltzel, for one, argues that

analogies—including the imago Dei—“show similarity in difference.” The differences, rather than

the similarities, are the starting point. This does not mean that human creative activity is identical with

God’s, or that robots and humans are equal creations. Rather, the imago Dei helps find the

similarities and differences between creator and created. Heltzel goes beyond Foerst and

suggests that the building of humanoid robots is a different kind of reflection altogether. Instead

of the imago Dei at work in robots like Cog and Kismet it is in fact the imago humanitatis—

made in the image of the human.258

Tillich’s influence

Finally, Foerst’s understanding of God is influenced by Paul Tillich, the subject of her

doctoral thesis and ongoing source of inspiration in her later work. In particular, she names

Systematic Theology, Volume 1, Dynamics of Faith, and The Shaking of the Foundations as

helpful in developing an understanding of God that fits with her theological response to robots

and AI. She finds Tillich’s idea of “ultimate concern” useful in defining robotics and AI as a

spiritual question. As Foerst notes, Tillich defines it as “being unconditionally concerned about

the meaning of existence, taking something absolutely seriously, being grasped by life toward an

ultimately sublime or holy.”259 This is, of course, the theological quest. It is equally—perhaps

even more so in Foerst’s estimation—what researchers are doing in building humanoid robots

258. Heltzel, 23.


259. Foerst, God in the Machine, 57.

and AI. Such a yearning unites theology and science, because “everyone asks existential

questions . . . everyone has an ultimate concern.”260

Tillich also helps Foerst understand “sin as estrangement” and the “necessity of our

imperfections” in the context of God-human relationships and our self-understanding vis-à-vis

Cog and Kismet.261 While such a concept is important for Foerst, she does not consider how

humanlike robots and AI can promote or ameliorate existing estrangement, nor does she consider

how they might create new antagonism among humans. Foerst simply sees this as an

unavoidable fact of being human: “Thus, sin as estrangement from ourselves, from others, and

from God is a tragedy and yet also makes us the beings we are.”262 Understanding sin as

estrangement or alienation is common in theological writing about technology. Writing around

the same time as Foerst, Bill Wylie-Kellerman and David Batstone expressed a similar

sentiment, arguing that technological ‘tools’ often end up alienating humans from God and

themselves. Bruce Mazlish also picks up on a similar idea, but in reference to human

relationships with machines. He notes that humans experience alienation from machines when

they uphold discontinuity rather than embrace continuity between humans and the technologies

we have created.263 In his work on AI, Ian Barbour also speaks of the fallen and redeemed

human. For Barbour, these are always parsed in relationship. Redemption is the “restoration of

relationships—with God, with other people, and with other creatures.”264 Interestingly, Foerst

does not emphasize these aspects of the human when she is comparing human and divine

260. Ibid.
261. Ibid., 178.
262. Ibid., 25.
263. Bill Wylie-Kellerman and David Batstone, “God is My Palm Pilot,” Sojourners Magazine 30, no. 4 (200):
20; Mazlish, 8.
264. Barbour, 365.

creative activity. This leads to an uncomfortably close comparison of divine and human activity,

and an uncritical theological assessment of robots. Such less-than-perfect features of human life,

however, invariably inform robotics and AI research. If humans are inescapably imperfect and

estranged, then how can robot and AI creations be otherwise?

Conclusion

Anne Foerst’s efforts are an essential early theological response to robots and AI. Combined

with smaller contributions by others, her work forms one major approach to this emerging area

of theological research. Here, theologians are mostly sympathetic and even optimistic about

robotics and AI research. They find commonalities between theological and scientific reasoning,

and emphasize these commonalities to defuse fear about robots and robotics. Absent, however,

is a keener critical consideration of robots and AI that draws on theological resources. Even when

Foerst invokes such sources, she does so in ways that do not capture the complexity of Christian

tradition.

The theological voices in this chapter argue successfully that robots and AI—and the

process of developing them—reveal much about the human. Where these researchers fail,

however, is in appreciating the ways robots also represent the shadow side of humans and their

societies. As is well evidenced in Singer’s Wired for War, robots and AI stand to amplify human

greed and astonishing propensity for violence and destruction.265 Attention to historical factors,

like the military application of robots and AI, should alert theologians that robots may end up telling

a different story than the one Foerst so hopefully sees in Cog and Kismet.

265. See Singer, Wired for War, 10, 99-101. See also Armin Krishnan, “Dangerous Futures and Arms Control,”
in Killer Robots: Legality and Ethicality of Autonomous Weapons (Burlington, VT: Ashgate, 2009): 145-168.

This sympathetic discourse parallels much secular discourse about robots and AI.

Michael Anderson responds to Foerst from this perspective and works to dismiss fears about

robots and AI. He judges many other threats to be far greater than robots and AI, including

nuclear weapons, and unstable and overtaxed power grids.266 Anderson is less concerned about

application and more concerned with anthropology. He claims that it is not how people use

robots and AI, but how we will relate to them that will cause the greatest historical shifts: “AI

represents a threat not as a technology, but as a social movement.”267 Robots will upend human

self-understanding, including our place in evolutionary and cosmic histories. Insecurity about our

place in these histories fuels fear toward robots. This is rooted in the possibility that “they will

interact with us in such a way that we will feel connected to them, and this connection will

obligate us in specific ways that, thinking about it now, makes us uncomfortable.”268

A postfoundationalist approach to interdisciplinarity demands much stronger interaction

between ideas and historical context. Foerst evaluates robots and AI against her positive

experiences at MIT, but does not go much further. She does mention the importance of context

and values in shaping robotics and AI research, but then fails to carry out this analysis in her own

work. For example, she notes that the “background people come from, the values they have

grown to accept for their lives, their field of interest, their friends and communities, all these

elements will shape the way they see and interpret various inputs.”269 This is a good contextual

step, but there is little evidence that she has integrated her own experiences of whiteness or able-bodiedness, for example, into her work.

266. Anderson, 201.
267. Ibid., 203.
268. Ibid., 205.
269. Foerst, God in the Machine, 53.

She goes on to remark that features of human social

existence like intuition and intelligence are dependent on “personal worldview and the cultural

and societal setting in which they are defined.”270 The difficulty with Foerst’s work on this front,

then, is that she does not share with her reader what this looks like for her and how it influences

her theological analysis of robotics and AI. She contrasts American and German academic

environments and alludes to the differences between theological and technical research, but she

does not speak directly to the values and contexts that manifest in her enormous sympathy for

robotics and AI research.

Postfoundationalism calls theologians to scrutinize the political and social situations in

which robotics and AI research is immersed. As the influence of these technologies grows, so will their impact on history. This impact is not something theologians can ignore. In this broader historical setting, theological responses must face the complexity inherent in the task. This

means examining robots and AI as objects of both despair and hope, estrangement and

reconciliation, oppression and liberation. Only then will theological responses to robots and AI

meet the criteria of experiential adequacy demanded by postfoundationalism.271

270. Ibid., 66.


271. van Huyssteen, Shaping of Rationality, 223.
Chapter Three
Introduction

The approach outlined and discussed in Chapter Two represents one important theological

response to robots and AI. With Anne Foerst at its core, the approach yields much fruit for

interdisciplinary reflection, but ultimately is unsatisfying from a postfoundationalist and

contextual theological perspective. The approach discussed in this chapter shows a deeper

appreciation for context, history, and insights from Christian ethics. Theologians who make up

this group, most importantly Noreen Herzfeld, focus on the impact of robots and AI on

relationships among humans and with God. Their efforts represent both a continuity and

discontinuity with Foerst and similarly minded thinkers. Herzfeld currently works in a cultural and academic environment similar to Foerst's, as professor of theology and computer science at

St. John’s University. Like Foerst, Herzfeld is a true interdisciplinarian, with training in

mathematics, music, computer science, and theology. Both are influenced by their contact with

non-theological reasoning strategies. In Herzfeld’s case, early studies in computer science and

mathematics prompted questions that encouraged her later theological research. When studying

AI, she became curious about the “human motivations” for replicating human intelligence in

machines.272 As she pursued this curiosity, she increasingly found compelling answers outside

conventional AI research. Ultimately, she settled on an interdisciplinary approach to

investigating why humans “continue to embrace the prospect of creating a machine in our own

image.”273 As she explored these questions as an interdisciplinary theologian, Herzfeld refined

272. Noreen Herzfeld, In Our Image, x.


273. Ibid., 5.


her research to focus on “what it means for humans to be created in God’s image” in a world

changed by new technologies like AI and robotics.274 She grounds her work in the Christian

tradition and identifies the Quaker movement as a significant influence on her theological

thought. She also attempts to relate Christianity to the other Abrahamic faiths, and

displays even broader interfaith sensibilities through some attention to Buddhism. She also gives

much more attention to context, history, and ethics in her work, including the relationship of AI

research to American military interests. These features come together to form an approach that is

continuous with Foerst, but also distinct.

Intelligence and the Imago Dei

The central organizing principle in Herzfeld’s work is the evolving understanding of the imago

Dei and how it relates to trends in AI research. Importantly, this approach also helps her examine

related contextual, historical, and ethical issues. Herzfeld sees strong parallels between how AI

researchers understand intelligence and how theologians interpret what it means to be made in

the image of God. This helps her form a link between the two reasoning strategies. Intelligence is

theologically important for Herzfeld, as it is a foundational aspect of a human made in divine

likeness. This inseparability of the human from its intelligence means that theological dialogue

about the imago Dei can be enhanced through interdisciplinary study of human intelligence,

especially via AI research. Herzfeld and some of her contemporaries share this curiosity about

the relationship between robots and AI and the imago Dei. A few years after the publication of In

Our Image, Russell C. Bjork, a computer scientist with theological training, also reflected on this

link. For Bjork, however, the theological dilemma is different. He wonders if it is "necessarily the case that creating a technological artifact that deserves to be called a person is tantamount to creating [an] artifact that is in the image of God?"275

274. Ibid., 303.

While Herzfeld probes the similarities between

AI and the imago Dei, Bjork’s instincts take him toward the dissimilarities. Though their

approaches differ, Bjork and Herzfeld show that the imago Dei can be critically important for

interdisciplinary dialogue on this front. In his 2004 Gifford Lectures, Wentzel van Huyssteen

also addresses this cornerstone of interdisciplinary reflection on what it means to be human. He

asserts that the distinctiveness of the human lies in this very doctrine, remarking that “we are still

for some reason theologically designated to exemplify the image of God and represent God’s

presence in the world.”276 Though his work is oriented to cave-dwelling pasts, rather than robotic

futures, it presents an important challenge to contemporary theologians to reconcile the

distinctiveness of the human assured through the imago Dei in a world where it is increasingly

called into question. In this vein, fresh consideration of the imago Dei is an essential aspect of

theological consideration of humanoid robots and AI.

Like Foerst, Herzfeld also notes that the scriptural record offers little clear definition of

the imago Dei: “The problem with Genesis 1 is that it does not describe what this image is. Nor

is the image described anywhere else in the Christian scriptures.”277 This task of describing—and

describing again—is left to the post-scriptural tradition, which is in fact very rich. Herzfeld draws on this richness, but acknowledges that it could only emerge from the fecundity of these

first few biblical references.278 Even though ancient scriptures offered little explicit mention of

275. Bjork, 99.


276. van Huyssteen, Alone in the World?, 156.
277. Noreen Herzfeld, “A New Member of the Family? The Continuum of Being, Artificial Intelligence, and the
Image of God,” Theology and Science 5, no. 3 (2007): 238.
278. Herzfeld, In Our Image, 10.

what it means to be made in the image of God, Herzfeld remarks that the “image of God in

humankind has become one cornerstone of Christian anthropology, a locus for understanding

who we are in relation to both God and to the world.”279

Past and present contexts inform Herzfeld's interpretation of the imago Dei in light of

advances in AI. She attends to the setting of the biblical texts, and notes that many influential

pieces of scripture that describe what it means to be human are “attributed to the Priestly

writer,”280 including the famed passage from Genesis that describes the imago Dei as

“specifically human and thus distinguishes humans within the animal world.”281 This source also

emphasizes the fidelity of God to humans and the rest of creation, and the transcendent,

omnipotent character of God. This attention to early sources reminds Herzfeld’s reader that all

interpretations of the imago Dei are historically conditioned. Even those who contributed to the

writing of the biblical record are bound to a unique time and place. Her attention to the Priestly

source also helps illustrate the deep roots of theological discourse about robots and AI. These

ancient and sacred sources show that humans have wondered about the meaning of their own

bodies, cultures, and lives for thousands of years. For at least as long as humans could think and

discern and write in community, we have wondered about our relationship to God and to the rest

of creation. This focus on the human as the object of the imago Dei gave rise to a strong

anthropocentrism with an emphasis on intelligence as the defining feature of the human.

Developments in science and elsewhere have variously affirmed and challenged this view. Now,

with robots like Cog and Kismet, Herzfeld sees the need to reinterpret the imago Dei anew, this

279. Ibid., 6.
280. Ibid., 11.
281. Ibid., 12.

time considering not only the status of the created human but also the creative energy that seems

so central to human life on Earth.

Herzfeld presents three ways of interpreting the imago Dei commonly found in modern

theology. These are the substantive, functional, and relational interpretations, which she pairs

with corresponding movements in AI research.282 According to Herzfeld, these three ways of

understanding the imago Dei do not hold equal promise for dealing with the challenges raised by

robotics and AI. Each offers only an incomplete understanding of human intelligence, and the

three types suffer limitations as all typologies do. Such an approach, however, underscores a

strong link between science and theology, and clearly shows that debates about intelligence—in

both robots and humans—are a common concern for theologians, roboticists, and AI

researchers. Her efforts to trace parallel trends in theology reinforce the efforts of Chapter One,

which emphasized the important internal debates within robotics that shape how researchers

understand the human. These differences, even emerging from a small research community, help

give texture to this emerging area of interdisciplinary theology.283

Substantive interpretation

The substantive interpretation of the imago Dei and its counterpart in robotics and AI

research are the earliest of the approaches described by Herzfeld. In both theology and science,

this hermeneutic emphasized the human mind as constitutive of human identity and supported

282. For Herzfeld’s succinct description of these three interpretations see Herzfeld, Technology and Religion,
58-64. For a fuller treatment of the same, see Herzfeld, In Our Image, 10-32.
283. John H. Robertson helpfully crafts a short survey of some of the ways people have tried to define,
categorize, and understand human intelligence. For his complete efforts, see John H. Robertson, “From Artificial
Intelligence to Human Consciousness,” Zygon 20, no. 4 (1985): 375-444.

strong mind-body dualism. In the 1950s, theological ethicist Paul Ramsey described the

substantive interpretation as such because it is “something within the substantial form of human

nature, some faculty or capacity man possesses" that distinguishes humans from other animals.284

In this view, what it means to be made in the image of God is exclusively linked to a narrow

view of human intellectual ability. Human relationality, developmental learning, and creativity

are not important in the substantive view. In Herzfeld’s estimation, this view is inadequate

because it does not speak to human intelligence as presently understood. It does not, for example,

consider the relationship between human embodiment and intelligence, the importance of

relationship in shaping human intelligence, or different kinds of intelligences.285

Herzfeld argues that this approach is anti-Christian because it does not honour the

importance of the body as revealed in the corporeal resurrection of Jesus Christ. In raising Jesus

Christ from the dead, God affirms the importance of the whole integrated human—it is not just

disembodied intelligence that matters. In Herzfeld’s estimation, intelligence without embodiment

fails to capture the complexity of human intelligence and the fullness of what it means to be

made in the image of God. Herzfeld argues that substantive interpretations of the imago Dei

represent an effort to “escape from the messiness of human physicality,” which she takes as an

indispensable source for both religious and scientific reasoning.286

Herzfeld also finds this interpretation of the imago Dei unsatisfying because it makes

God too much like humans. In focussing on a uniquely human attribute, it projects our own

284. Paul Ramsey, Basic Christian Ethics (New York: Charles Scribner and Sons, 1950), 250.
285. Herzfeld, “Creating in Our Own Image,” 305.
286. Herzfeld, Technology and Religion, 69. See also Herzfeld, In Our Image, 19.

image back onto God. She also notes that the substantive approach is subject to critique from

ecotheology and paraphrases Lynn White, who rejected such an interpretation of the imago Dei

because it “separates human beings from the rest of the creation, giving us a connection to the

divine all else lacks.”287 Other theologians have expressed similar worries, even in the first

theological commentaries on robotics and AI research. Allen Emerson and Cheryl Forbes, for

example, saw a potentially clear parallel between humans made in the image of God, and robots

made in the image of humans. They write of robots as a new Adam, created with deity-like

powers: "We will stand to this creation as God stands to us."288

In the language of robotics and AI, the substantive interpretation is “classical or symbolic

AI.”289 The researchers who advanced such an understanding of human and artificial intelligence

were mostly mathematicians and computer scientists. Unsurprisingly, the substantive approach

favours the kind of intelligence these researchers valued in themselves. In the early days of AI

research, intelligence had little to do with embodiment or relationality, and focussed on specific

kinds of problem solving and algorithmic computation. Herzfeld remarks that for "a symbolicist,

any patternable kind of matter can represent intelligence by representing the basic ideas and

using programs to embody the rules that build these basic ideas into more complex thought.”290

Researchers measured AI success against these criteria, which were especially suited to the

algorithms and patterns endemic to foundational efforts in robotics and AI.

287. Herzfeld, “A New Member of the Family? The Continuum of Being, Artificial Intelligence, and the Image
of God,” 239.
288. Emerson and Forbes, 14.
289. Herzfeld, “Creating in Our Own Image,” 305.
290. Ibid.

Herzfeld notes that this approach “faltered” not against its own criteria for success, but

against challenges from small, pre-verbal humans. Babies began to frustrate researchers by consistently and easily doing things that AI could not. Failure to create even infant-like abilities for facial recognition or differentiating between an apple and a tomato prompted second-generation AI researchers to find a new approach and re-evaluate their understanding of human

intelligence.291 This shift generated a turn toward task-oriented intelligence, abandoning many of

the suppositions of the substantive approach.

Functional approach

Though the transition from one approach to the next is not altogether tidy, the functional

interpretation emerged largely after the substantive interpretation faltered. Herzfeld traces the first reference to a functional interpretation of the imago Dei to a 1915 article by Johannes Hehn,

who “suggested that the image of God be understood as a title or designation rather than as an

attribute of human nature.”292 Such an approach, therefore, is a twentieth-century phenomenon

with many well-known theologians among its proponents, including Reinhold Niebuhr, Gerhard

von Rad, and a number of evangelical theologians, who expanded upon Hehn’s early

contribution.293 In contrast to the focus on what humans possess (i.e., intellect), functional

interpretations of the imago Dei focus on what humans do. This shifted theological focus to the

relationship between human and divine activity.294 The most important aspect of this focus, as

291. Ibid., 306.


292. Herzfeld, “A New Member of the Family? The Continuum of Being, Artificial Intelligence, and the Image
of God,” 239. See also Johannes Hehn “Zum terminus ‘Bild Gottes,’” in Festschrift Eduard Sachau zum siebzigsten
Geburtstag, eds. Gotthold Weil and Eduard Sachau (Berlin: G. Reimer, 1915): 36-52.
293. Herzfeld, “Creating in Our Own Image,” 306. See also Herzfeld, Technology and Religion, 11.
294. Herzfeld, “Creating in Our Own Image,” 306.

Herzfeld notes, is the human “acting as God’s deputy on earth” or “exercising dominion” over

the created order.295 Humans are the trusted agents of divine will, thus assuring their place of

primacy among all other animals.

The corollary of functional interpretations of the imago Dei in robotics and AI emerged

in the 1980s when researchers set their sights on having robots and AI perform human tasks.296

In other words, robots and AI will act increasingly “as a human deputy.”297 Such tasks included

working industrial assembly lines, bookkeeping, telecommunications, transportation

technologies, and so on. Scientists even sought to hand over scientific research to robots and AI (e.g., sophisticated modelling used in computational chemistry to circumvent laboratory research, robots that defuse improvised explosive devices in military settings, or robots that can extract

natural laws from sets of data).298 Today, interaction with functional AI systems is a daily and

seamless occurrence (e.g., Google search engine, traffic control systems in major cities),

highlighting an important challenge for AI researchers and theologians alike. As people grow accustomed to AI applications, those applications are often no longer thought of as AI, which makes it

difficult under a functional interpretation to “determine what falls into the category of A.I. and

what is simply a normal computer application.”299

295. Ibid., 306.


296. Ibid., 307.
297. Ibid.
298. See Michael Schmidt and Hod Lipson, “Distilling Free-Form Natural Laws from Experimental
Data,” Science 323, no. 5924 (2009): 81-85.
299. Herzfeld, “A New Member of the Family? The Continuum of Being, Artificial Intelligence, and the Image
of God,” 240.

Hans Moravec and Rodney Brooks are among the influential roboticists who rely on a

functional approach. Moravec’s focus on robot vision and Brooks’s turn to insect embodiment

both indicate movement away from the brain- or cognition-centred approach of substantive AI to

a task-centred approach indicative of a functional understanding. Moravec’s robots serve as

human deputies in warehouses, whereas Brooks’s serve as deputies on the battlefield. Herzfeld

agrees that Moravec relies on a functional approach to robotics and AI,300 but argues instead that

Cog (and consequently Brooks) falls into the relational category, which is discussed below.301

Herzfeld’s analysis of Brooks focusses on his early efforts with Cog, which does bear some

similarities to a relational understanding of AI. For example, the robot was designed to converse

with humans rather than to take over a human task. A broader view of Brooks’s work, however,

including both the arguments of Cambrian Intelligence and his later success in military

applications, contradicts Herzfeld’s claim.302

At first, this task-oriented understanding of AI seems more promising for developing a

theological response to robots and AI. The functional approach develops a more expansive

understanding of the human, emphasizing both thought and behaviour. When viewed narrowly in

the context of robotics and AI research, the functional approach seems to rival human

intelligence in ways that substantive approaches to AI cannot. For example, a given AI system

can do a small number of things a human can (e.g., calculations, grasping a cup, identifying

voice commands, playing chess, and so on). Performing one such task well, especially in direct

comparison with a human, makes the AI seem successful. Evaluating task-oriented AI in a

300. Herzfeld, “Creating in Our Own Image,” 307.


301. Ibid., 311.
302. Rodney Brooks, Cambrian Intelligence (Cambridge, MA: MIT Press, 1999).

broader, holistic context, however, reveals its deficiencies and how little it resembles human

intelligence. For example, while an AI program might be better at chess than all humans, it does

not appreciate the global cultural significance of the game, the psychological and emotional

aspects of play, and controversies emerging from the modern competitive era.303

Herzfeld also has theological misgivings about functional interpretations of the imago

Dei. In one way, she appreciates that the dominion narratives in Genesis giving rise to the

functional interpretation are often interpreted in terms of responsibility to God. This defuses some of the "making God in our own image" tendency seen in the substantive view. Herzfeld, however,

ultimately rejects this approach both for interpreting the imago Dei and for measuring success in robotics and AI. It reduces the human to what it can do and pays no attention to

the integration of these tasks into a whole, multivalent human. It also prioritizes one kind of

intelligence above others, which easily privileges the already privileged. It misses the more

complex views of intelligence that consider development and learning, like the one advanced by

Cynthia Breazeal. Finally, the functional approach isolates the human from its context and

communities. It leaves no room for discernment, the influence of social forces, or differences

between one society and the next. It is robustly individualistic, reduces the human to that which

is computable, and ultimately leaves theologians looking for a better response to the challenges

of robotics and AI.

303. See for example Des Bieler, “Chess Champion Refuses to Defend Titles in Saudi Arabia to Protest
Treatment of Women,” The Washington Post, December 28, 2017, accessed March 15, 2018,
https://www.washingtonpost.com/news/early-lead/wp/2017/12/28/chess-champion-refuses-to-defend-titles-in-saudi-
arabia-to-protest-treatment-of-women/.

Relational approach

The substantive, functional, and relational approaches share a quest to find and articulate

“that which we share with God.”304 In this third approach, intelligence and the imago Dei are

rooted in relationships. It is the distinct capacity for relationship with each other and with God

that makes us human. Herzfeld sees this approach as the most theologically fruitful way to

understand humans and human intelligence in an age of robots and AI. Here, the Christian

tradition is rich with insight, especially in existing theological reflection on relationality and a

triune God. Herzfeld, along with others who subscribe to the relational approach, sees the trinity

as evidence of the intrinsic relationality of the Christian God.305 By extrapolation, taking this

intrinsic relationality of God seriously means that humans—creatures made in the image of

God—must also take seriously their own relationality. In Herzfeld’s view, the relational

approach to the imago Dei and to robotics and AI is very promising. It focusses on human

connection with God and other humans; it is inclusive of all humans; and it helps preserve our status as beings uniquely made in the image of God. It also defuses some of the fears she sees in

light of increasingly humanlike robots and AI: “If our center is in our relationships, then we need

not fear replacement.”306

304. Herzfeld, “Creating in Our Own Image,” 241.


305. Though Herzfeld aligns von Rad with the functional interpretation of the imago Dei, she also finds his
research helpful in supporting a relational approach to intelligence and the imago Dei. She notes that von Rad
emphasizes the integration of the physical and spiritual in the human being, and that the imago Dei calls us to be
“invested with might” in the world. Herzfeld, Technology and Religion, 11-12. While Herzfeld relies on Karl Barth
for her exegesis on the relational interpretation of the imago Dei she also includes Emil Brunner, Dietrich
Bonhoeffer, Gerrit C. Berkouwer, Wolfhart Pannenberg, Tryggve Mettinger, Oswald Loretz, and Hans Küng.
Herzfeld, In Our Image, 30.
306. Herzfeld, “Creating in Our Own Image,” 312.

Herzfeld names Karl Barth as a leading exponent of the “relationship as image”

interpretation of the imago Dei.307 He takes as his starting point that “the human being is a

‘counterpart to God.’”308 This places God and humans in correspondence, worthy of comparison

and contrast with each other. The capacity for relationship is what permits such parallels with

God, the source of all relationship. Herzfeld distills Barth’s extensive work on relationality to

four basic features of relationality: openness and vulnerability to be known and seen by the other,

effort to truly understand and empathize with the other, offering assistance, and doing so from a

place of love and compassion.309 Herzfeld agrees with Barth about the centrality of relationality

in interpreting the imago Dei, but notes that subsequent scholarship is fractured in its

interpretations of Barth. There is disagreement about the “details of what constitutes authentic

relationship” and on the primacy of male-female relationships in Barth’s theology.310 She also

draws on Barth’s understanding of the trinity to develop her theological interpretation of the

relational approach. As in the work of his contemporary Martin Buber, the I-Thou relationship

was an important theological concept in Barth’s writing about relationality. In Barth, this concept

was woven into discourse on the triune God, namely that relationality exists both within God and

between God and humans. Herzfeld says that Barth finds evidence of this in scripture, where

plural pronouns refer “not to a heavenly court but to the nature of God himself.”311 Insofar as

307. Ibid., 308.


308. Karl Barth, Church Dogmatics III/2, trans. J. W. Edwards, O. Bussey, and Harold Knight (Edinburgh: T.
and T. Clark, 1958), 249.
309. Herzfeld, “Creating in Our Own Image,” 309.
310. Herzfeld, In Our Image, 25.
311. Herzfeld, In Our Image, 25-26. Herzfeld’s endnotes on Barth’s understanding of I-Thou relationality point
to Martin Buber’s foundational efforts in articulating “dialogical personalism in the 1920s and 1930s.” Herzfeld, In
Our Image, 105. Buber’s 1923 I and Thou introduced his work on this subject. In this volume, Buber, who
contributed to both Jewish and secular philosophy, and also found acclaim among Protestant theologians, describes
I-Thouness as all encompassing. The I-Thou relationship between God and human is only one kind of relation. For
Buber, this linking of words is a way to frame all existence, not just the particular existence of the Christian

humans are made in this image, they have unique relational capability that can never be rivalled

by robots and AI. Unlike humans, robots and AI do not have the freedom of will necessary to

participate in this kind of relationality, nor can they understand the significance of it. Herzfeld

seems to assume that this will forever be the case. Even as robots and AI tend asymptotically

toward humanness, I-Thou relationality manifest in the triune God secures our place as uniquely

made in the image of God.

Herzfeld finds a counterpart to the relational interpretation of the imago Dei in

contemporary robotics and AI research. In their commitment to social robotics, researchers like

Breazeal and Knight employ a relational approach to AI. Herzfeld also uses another well-known

case study from AI research to illustrate the relational approach—the Turing Test. Though the

test has its origins in 1950s computer science, it is still a popular (albeit controversial) litmus test

for AI. It is relational in that it tries to bridge the gap between human and artificial intelligence

through language skills and conversation. Since 1990, the Loebner Prize has challenged the

Turing Test with a chatbot-based competition. In the competition, pairings of human judges with both human and AI players use computers to type back and forth. Judges then decide which

actors are human and which are AI programs. The annual competition will end when a chatbot

successfully convinces the judges that it is human. Though the AI programs competing for the

Loebner Prize are increasingly successful, to date none has won the grand prize.

According to Herzfeld the Turing Test "uses relationality to determine intelligence."312 In mostly mundane exchanges about travel or the weather, the human and AI build connections.

Godhead depicted by Barth. See Martin Buber, I and Thou, trans. Walter Kaufmann (New York: Charles Scribner's Sons, 1970), 53.
312. The Turing Test as a measure of success for AI is controversial for several reasons. For example, the definition of intelligence is subject to much debate, including within AI research. The Turing Test is also subject to manipulation. A chatbot recently 'succeeded' in passing the Turing Test. These claims of success were met with objections, including grievances that the chatbot's poor English skills were masked with a narrative that it was a 13-

These short conversations help assess the “machine’s ability to relate to a human being in

conversation.” The emphasis is on the transaction between AI and human, and there is no AI

success without this test. Herzfeld also argues that this is a relational approach because it values

imperfect intelligence, just like a human. For example, the Turing Test does not succeed or fail

based on knowing answers to questions or spelling perfectly or knowing every word in a

language. Rather, the better it mirrors human patterns complete with inconsistency and

imperfection, the more successful it will be. This means getting things wrong, playing with

grammar and language, and modifying patterns of speech based on the interlocutors. This

intelligence, she says, has the “mistakes or hesitancy” that “are hallmarks of human

functioning.”313 It is far more representative of the kind of intelligence humans use on a daily

basis than either the substantive or functional approach. Her affinity for the relational approach is

clear. By her own account, it positively influences the way humans understand themselves, each

other, and their relationship with God. In her broader work on technology and religion, Herzfeld

presents Amish communities as a good example of this approach. She describes their method for

assessing and living with technology as valuing relationship in community above all else. This

stance is quite distinctive of these communities. She points to the questions the Amish

ask themselves before adopting technological change, all of which are rooted in relationship and

focus on the bonds formed in community: “First, does the technology provide tangible benefits

to the community or individuals within that community? . . . Second, does the technology change

year-old Ukrainian boy. Ian Sample and Max Hern, “Scientists dispute whether ‘Eugene Goostman’ passed Turing
test,” The Guardian, June 9, 2014, accessed October 23, 2017, https://www.theguardian.com/technology/2014/jun/
09/scientists-disagree-over-whether-turing-test-has-been-passed.
313. Herzfeld, In Our Image, 46.

the relationship of the individual to the community? . . . Third, does the technology change the

nature of the community itself?”314 Herzfeld sees this as a powerful way to discern in community

relationships with technologies, including robots and AI. She also cites Jacques Ellul’s “76

Reasonable Questions to Ask about Any Technology,” but notes that while the list is

comprehensive “few are likely to take the time to consider such an extensive list.”315 More

importantly, for Herzfeld, “we can find in our religious traditions a shorter list of more general

concerns, a list that subsumes many of Ellul’s categories.”316

Herzfeld’s argument for a relational approach is largely convincing. It points to a more

universal experience of intelligence, which also leaves room for differences among humans and

from one context to the next. It also stays close to Christian tradition and constantly turns to

scripture, doctrine, and other theological sources. Such an understanding of intelligence is also

more inclusive and democratic, incorporating a much broader range of human experience,

including from those who are left out of the substantive and functional approaches. Herzfeld’s

categorization of the Turing Test as relational, however, is problematic. The Turing Test gives

attention only to disembodied intelligence, leaving no room for debate about the relationship

between body, environment, and intelligence. In the competition for the Loebner Prize, judges

interact only with computer screens and keyboards, and the AI tested does not have a body like

Cog or Kismet. This makes it impossible to unpack the meaning of human embodiment in terms

of relationality and the imago Dei. Though AI is changing how humans relate, including through

314. Herzfeld, Technology and Religion, 18.


315. Ibid., 10. For the complete list from Ellul see Herzfeld, Technology and Religion, 141-144. Herzfeld only
cites an internet source for Ellul’s list. Biographers cast doubt on the authorship of the list with one noted book
indicating that the work could plausibly have been inspired by Ellul, though it is far from certain that he is the
author. See Jeffrey P. Greenman, Noah Toly, and Read Mercer Schuchardt, Understanding Jacques Ellul (Eugene,
Oregon: Cascade Books, 2012), 37.
316. Herzfeld, Technology and Religion, 10.

social media and other digital technologies, human relationality remains inextricably a question

of embodiment. If, as Barth and Herzfeld posit, the imago Dei points to the intrinsic relationality

of the Trinity, then this implicates Christ and God’s embodiment in Jesus of Nazareth. This link

between relationality and embodiment, therefore, cannot be so easily set aside. Contextual and

ecological theologians have long argued for the importance of this connection, stressing that the

body one inhabits radically shapes one’s experience of the world.317

Herzfeld’s ambiguity about embodiment continues in her appreciation of Barth and his

relational approach to the imago Dei. She fails, however, to work through the implications of

adopting this approach in her own work. For example, she notes that he has a “highly embodied

view of what constitutes authentic relationship,” but then fails to observe that the Turing Test

involves no embodied relationality at all.318 She goes on to note that, according to Barth, some important

aspects of relationality in humans include “the ability to look the other in the eye, to speak and

hear, and to give aid,”319 all of which are absent when a human chats with a machine through

keyboard and screen. Barth’s instincts about relationality remain important for theological

conversations about distinctly twenty-first century technologies. His theological reasoning

foreshadows important questions about relationality brought about by robotics and AI research.

For example, what does embodiment mean when mediated through implants or prostheses? Such

are the important questions lost when Herzfeld insists that the Turing Test is relational.

For Herzfeld, the relational approach “places the centre of our humanity in a corporate

317. The link between the human and embodiment in response to robots and AI is developed further in Chapter
Four.
318. Noreen Herzfeld, “Terminator or Super Mario: Human/Computer Hybrids, Actual and Virtual,” Dialog 44,
no. 4 (2005): 351. See also Barth, 250-253.
319. Herzfeld, “Terminator or Super Mario,” 351.

rather than an individual context.”320 This points to the kind of relationality espoused by

ecotheologians, one that focusses on systems of relationships, a holistic view of the human, and

the dynamic quality of life on Earth. Here are the seeds of promise for a view of intelligence, the

human, and the imago Dei suited to the challenges of robotics and AI. She does not pursue this

much further, however, and so others are left to develop her understanding of the centre of our

humanity. As it stands, this claim exists in tension with her insistence that the Turing Test is

relational. The Turing Test very much focusses on individuals and is in fact severed from our

corporate humanity. The test is one human interacting with one AI or one other human. The

conversation and judgement are embodied in a single human isolated from the influence of other

individuals or communities. This supports an individualistic, anthropocentric view of the human

that many theologians find problematic. Such tension in Herzfeld’s work remains unresolved, but

points concretely to an area for further development as interdisciplinary dialogue between

theology and robotics continues to unfurl.

Herzfeld does expand her reflection on the relational approach beyond the Turing Test.

Importantly, she also discusses Cog and Kismet as representative of the relational approach in

robotics and AI research, and thus forges links with many of the other researchers discussed in

this thesis. She notes in particular Cog and Kismet’s humanlike ability to learn and develop as

promising for building ‘successful’ AI systems. Their ability to grasp objects, make eye contact,

and talk with researchers points to the embodied relationality described by Barth and adopted by

Herzfeld. In this view, machines are given identity and status through relationship with humans

and human researchers find a “co-respondent with which to relate.”321 Compared with the Turing

320. Ibid.
321. Herzfeld, In Our Image, 82.

Test, these robots are a clear step toward the kind of relationality experienced by humans. There

is still room for further theological reflection, however. Cog and Kismet are not beyond

critique. They approximate a perfected human form and leave no room for atypical human

experiences or behaviour. They also are without peers, families, or other humanlike social

systems, leaving their relationality stilted compared with that experienced by humans. These

shortcomings leave Herzfeld’s reader looking for an example of relational AI, which may

be an ever-moving target in light of evolving understandings of both human and

artificial intelligence.

Herzfeld’s treatment of the imago Dei differs from that discussed in the previous chapter.

Foerst, for example, saw both the human and God as mysteries not yet fully understood.

Humanlike robots and AI help resolve these puzzles and help explain what it means to be made

in the image of God and, therefore, what it means to be human. In approaches like Foerst’s there

is strong emphasis on the enigmatic quality of this intersection among robots, humans, and God,

leading to less attention to how people and even God are changed through these relationships.

The incompleteness of human knowledge, instead of the incompleteness of the human, is a

central feature of these interpretations of the imago Dei. Herzfeld points her attention elsewhere.

In her treatment, she stresses that robots and AI have the power to change humans and their

relationship with God and each other. In such a view, robots and AI as a counterpart to the

human impacts thoughts, beliefs, and behaviours. This includes how humans understand

themselves, their place in the world, and their relationship to God.

To explore how these advances in technology might change humans and their

relationship with God, Herzfeld recasts the pursuit of AI in theological language. She notes, for

example, that “one goal of AI is to create an ‘other’ in our own image.”322 To make such an

image, humans will have to decide first what to pass on to their robot creations. In this choosing,

Herzfeld sees “implications for both our self-understanding and for our future coexistence with

our own creation.”323 Herzfeld does see a parallel between the creative work of the imago Dei

and that which takes place in robotics and AI research, but her argument is tempered. There are

differences between the two processes and these are largely interpreted through the three—

partial and historically conditioned—approaches to the imago Dei described above. Importantly,

Herzfeld also focusses more on the analogy between human and robot as created other, rather

than bringing the human into direct comparison with God. She argues that the relational

approach preserves distinctiveness for the human, especially when drawing on the theology of

Barth as described above. While she is open to the possibility of robots and AI fulfilling the

criteria for relationality, she cautions against finding too much meaning in this connection

between robot and human. “If we hope to find in AI that other with whom we can share our

being and our responsibilities, then we will have created a stand-in for God in our own image.”

This, Herzfeld concludes, is “bound to be a disappointment.”324

Other contributions

The three interpretations of the imago Dei are of course influential in some streams of

Christian tradition, especially in their pairing with related transitions in AI research. These three

views, however, are not the only ones available for interdisciplinary discourse on robotics and

AI. Notably, van Huyssteen’s Gifford Lectures detail at least two other interpretations—

322. Herzfeld, “Creating in Our Own Image,” 304.


323. Ibid., 304.
324. Ibid., 313.

eschatological and existential—that help parse the meaning of human uniqueness in terms of

interdisciplinary dialogue between science and theology. The eschatological interpretation of the

imago Dei is unsurprisingly oriented toward the future and the fulfilment of the Reign of God.

Major proponents include Wolfhart Pannenberg and Jürgen Moltmann. Some commentators,

including F. LeRon Shults, argue that this is not an entirely different approach to the imago Dei,

but rather a comprehensive one that envelops the substantive, functional, and relational

approaches.325 Contra Herzfeld, van Huyssteen proposes that Reinhold Niebuhr in fact

represents an existentialist reading of the imago Dei, especially in his own Gifford lectures,

delivered just as World War II broke out. In van Huyssteen’s interpretation, Niebuhr

warrants another classification because he is not entirely focussed on human cognition. Rather, it

is an “existential longing for a God who transcends the world that really sets human beings

apart.”326 These brief characterizations do little to reveal the complexity of any of these five

interpretations of the imago Dei. Each is enriched by multiple contributions from theologians

coming from different traditions, and each has evolved throughout history, with roots in

patristic and medieval scholarship. These short summaries, however, do help fuel creative

theological responses to robots and AI. Importantly, understanding the diversity of

interpretations of the imago Dei helps critique the work of researchers like Anne Foerst, who are

overly committed to one approach at the expense of others. Including analysis like Herzfeld’s

and van Huyssteen’s prepares interdisciplinary theology for further developments in the

interpretation of the doctrine, especially as robots and AI continue to challenge what it means to

be human.

325. See van Huyssteen, Alone in the World?, 139-143.


326. Ibid., 133.

Ethics and Context

The approach to robotics and AI discussed here shows a much deeper appreciation for ethical

questions than the one developed in Chapter Two. This concern for ethics emerges from

attention to context. Herzfeld’s interest in how and why robots and AI are used outside the

laboratory inextricably binds their historical setting to their moral use. This interest means that

for Herzfeld robots are not merely the objects of entertainment or awe as in Knight and Foerst.

She appreciates that the entire world—and all humans—are a laboratory for robotics and AI and

that many consequences remain largely unimagined. Ethical questions emerging from AI

research help shape Herzfeld’s understanding of the relationship between science and theology.

Consistent with a postfoundationalist spirit, Herzfeld understands science and theology as

mutually enriching reasoning strategies. She does, however, reserve special privilege for

theology in offering a critical and thoughtful response to robots and AI. Quoting Albert Einstein,

she makes a distinction between the task of science and the task of theology: “Science can only

ascertain what is, but not what should be.”327 It is this should that pushes Herzfeld toward

critical examination of robotics and AI, something that is largely missing in the approach

anchored by Foerst.

The distinction proposed by Einstein and endorsed by Herzfeld is, of course, not without

controversy. Many contributions to posthumanism and speculative science (e.g., Hans Moravec)

are certainly invested in describing what should be and how it should come about. Herzfeld has

little sympathy for these efforts, however, and says that theology is in a better position to reflect

327. Albert Einstein, “Religion and Science,” in The World as I See It, trans. Alan Harris (San Diego, CA: The
Book Tree, 2007), 25.

on what it means to be human—even in relationship to robots and AI.328 In the interdisciplinary

space between robotics and theology, theologians have a special role. According to Herzfeld, our

role here is one of resistance, including speaking out against a research culture where war and

profit subsume all other goals.329

Herzfeld approaches ethics in robotics and AI through a number of case studies, most

importantly in biomedical and military applications. These intertwined areas receive substantial

funding and research attention, especially in the United States, where military budgets outpace those of all

other countries. In Herzfeld’s broader work on embodiment, she discusses the integration of

biomedical technologies with the human body. She is curious about how these technologies

might change experiences of embodiment, and the extent to which these changes should or will

be permanent. These technologies also raise important questions about autonomy and the rights

of an individual over their own body. Video games are another important case study for

Herzfeld, especially the link between virtual worlds and ‘real’ world war and violence. She

argues this is an area for theological concern in that the video game industry is often complicit

in acts of violence and subordinate to the ambitions of the military.

Actual hybridity

To unpack the web of relationships formed by ethics, context, and robotics, Herzfeld

draws on the concept of hybridities. This resembles in some ways Foerst’s understanding of

symbols as it points to the overlapping of two distinct spheres of meaning. When the world of

robots extends over the world of humans, humans become something new, something different.

328. Herzfeld, In Our Image, 5.


329. Ibid.

The fusing of identities, bodies, and intelligences creates two distinct kinds of hybridity: actual

and virtual.330 Actual hybridity takes place when technologies merge with the human body. Good

examples include brain-computer interfaces (BCIs), robotic exoskeletons, and older

technologies like pacemakers and cochlear implants. Such examples are directly integrated with

the physical matter of the human. Researchers hope that they will merge seamlessly with the

human body and provide a restored or enhanced experience of one’s environment.331

The theologian Antje Jackelén, now archbishop of the Church of Sweden, also addresses these

kinds of hybridities in her work on the image of God. She is skeptical of those who are overly

enthusiastic about the transformative power of merging ‘man and machine,’ including Coventry

University researcher Kevin Warwick, who says that “although I was born a human, I will die a

cyborg, a very, very enhanced being.”332 Jackelén is less convinced about these ontological

claims, even in the face of radical hybridities. She remarks that where “silicon and carbon merge,

Homo sapiens evolves into techno sapiens and we face a new dawn of human splendor—at least

if we choose to listen to some popular scientific writing.”333

330. Hybridities in Herzfeld recall the concept of cyborg famously advanced by Donna Haraway. In its original
context, the term cyborg was used in reference to adaptations and modifications for survival in new environments,
like outer space. In Haraway, the cyborg takes on a political tone and seeks to dissolve—if not obliterate—
conventional boundaries between human and machine, and human from other animals. Anne Kull has done good
work in developing these concepts further, especially in view of technology and nature. She points to the cyborg as a
positive concept that can help disturb old ways of thinking, especially ones that are rooted in so-called western ways
of thinking. Donna Haraway, “A Cyborg Manifesto: Science, Technology, and Socialist Feminism in the Late
Twentieth Century,” in Simians, Cyborgs and Women: The Reinvention of Nature (New York: Routledge, 1991),
149-181; Kull, “The Cyborg as an Interpretation of Culture-Nature,” “Cyborg Embodiment and the Incarnation,”
and “Speaking Cyborg: Technoculture and Technonature.”
331. Andrew Silver, “Brain Implant Allows Man to Feel Touch on Robotic Hand,” IEEE Spectrum, October 13,
2016, accessed October 26, 2016, http://spectrum.ieee.org/the-human-os/biomedical/devices/brainimplant-allows-
man-to-feel-touch-on-robotic-hand.
332. Andrew Smith, “Science 2001: Net Prophets,” Observer, December 31, 2000, 18, cited by Singer, 71.
333. Jackelén, 290.

Actual hybridity raises difficulties for an embodied view of intelligence. If one takes

seriously the embodied quality of human intelligence, then manipulating bodies changes

something fundamental about the human. It takes something essential for both the definition of

intelligence and the definition of the human and places it outside the body, yet somehow still

deeply connected with it. Though Herzfeld does not reject such extensive entanglement, she is

cautious and says it must be discussed from a number of perspectives. For example, about BCIs

Herzfeld remarks: “Sensory implants thus raise a caution not raised by neuromuscular

biomechatronics, namely, the potential of these devices to literally change our minds, to alter our

cognitive functionality, that which we most closely identify as ourselves.”334 In the rush to

enhance the human, Herzfeld urges her reader to first seek value in what already is. She also

considers the case of enhancing eyesight through sensory implants in the brain. This might make

an excellent therapeutic option for those who have compromised vision, but it might also be used

for enhancement and to alter the status quo. Such implants could overcome evolutionary shortcomings.

For example, humans have terrible night vision, especially when compared with other animals,

and sensory implants might turn this evolutionary ‘disadvantage’ into a competitive advantage

on the battlefield and elsewhere.

Herzfeld expresses reluctance about these technologies that bind human with machine

and raises a number of objections. First, she questions ownership and permanence, noting that

although “the army might find it advantageous to equip its soldiers with night vision, one must

recognize that this feature would not go away on a soldier’s decommissioning.”335 The body

modification benefits the employer, and makes one’s body inextricable from one’s profession. In

334. Herzfeld, “Terminator or Super Mario,” 348.


335. Ibid.

this example, the hybridity gives the military limited control over the embodiment of military

personnel. Of course, some modifications could be reversed, like the removal of a brain-

computer interface, but long-term effects are often unknown. Relying on devices like these,

especially if externally controlled or monitored, challenges bodily autonomy. As Herzfeld notes,

humans would then have to deal with a fear of a “loss of a sense of self or loss of control.”336 She

also notes that there is a possibility of malfunction or hijacking by others as an important ethical

consideration in developing such technologies. Increased reliance on digital technologies, in

combination with the rise of open source culture and cloud-based computing, exposes the human

to new kinds of risk. Malicious hacking or identity theft take on new meaning when the

computers attacked are fused with the human brain. While the American military invests

significant resources to counter hacking of new AI-based war technologies, allegiances change

quickly in history and many others are equally invested in such efforts.337

Another important example of actual hybridity in Herzfeld’s work is robotic

exoskeletons, including the Berkeley Lower Extremity Exoskeleton (BLEEX). This device

represents not only the merging of humans with robotics-related technologies, but also the

convergence of academic, military, and biomedical interests. The device was developed with

funding from the Defense Advanced Research Projects Agency (DARPA, an agency of the

United States Department of Defense) and can be used by people with a severely compromised

ability to walk. This might include people with spinal cord injuries or with degenerative neuro-

muscular conditions. Potential military application of a wearable robotic skeleton includes

336. Ibid.
337. Singer notes for example that “the Chinese army has set up a ‘cyberwarfare’ program staffed by some six
thousand paid hackers.” Singer, 245.

dramatically increasing a soldier’s ability to carry heavy loads by multiplying the force available

through leveraging human joints.

In the BLEEX exoskeleton, Herzfeld finds many ethical and theological challenges. First,

she highlights the close relationship between medical therapies and military interests. At this

moment, it is difficult to know how the future of exoskeletons will unfold as these kinds of

devices are not widely used in military or medical settings. Further to this, research companies

using DARPA funds are often secretive about their research, and the military is reluctant to make

public its robotics and AI capabilities for fear of competition and hijacking. Perhaps

exoskeletons will become the new norm, and facilitate military advancement into ever more

rugged and remote terrain. Perhaps military use of exoskeletons will cultivate sympathy for fully

humanoid robot soldiers. Perhaps exoskeleton technology will shift criteria for military medical

discharge when soldiers are injured or even paralysed in training or combat.338 All that is certain

is that robot exoskeletons will have applications as yet unimagined even by their developers.

The case of BLEEX and other robotic exoskeletons brings to light another constellation

of ethical concerns. Like the majority of robotics and AI technologies discussed in this thesis,

robotic exoskeletons are the result of expensive and time-consuming research. They often require

many years of development and many millions of dollars to bring them to market. Their high

expense and military use call into question their overall benefit, especially when one considers

the politics of accessing these technologies. Undoubtedly, BLEEX and related devices have the

potential to improve the quality of the lives of their users, but their extraordinary expense means

they are reserved for the privileged. Access to state-of-the-art prosthetics is largely limited to

338. For more information on exoskeletons, their research and development, and potential applications, see
“Ekso Bionics,” Ekso Bionics, accessed August 8, 2015, http://intl.eksobionics.com/.

those who can afford devices that cost at least a few thousand dollars, or who have coverage

under private or public health insurance. The growing open source movement and advent of

three-dimensional printing will likely democratize the use of prosthetics and exoskeletons

worldwide; however, these developments are only now emerging.339 Despite these obvious

inequalities, mention of the micro- and macro-allocation of resources is conspicuously absent

from theological reflections on robotics and AI. In response to this absence, the

postfoundationalist approach to interdisciplinary theology holds in tension local and global

contexts and concerns. While one’s personal experience and immediate surroundings matter, it is

important to yearn for that which connects this to the contextualized experience of others.

Postfoundationalism honours the particular while yearning for the universal. With these

considerations in mind, technologies like BLEEX become less cause for celebration and more

cause for ethical confusion in their complicated relationship with global justice issues.

Virtual hybridity

Instances of virtual hybridity look much different than actual hybridity, but the effects

remain quite similar. In virtual situations, humans and digital technologies, including robotics and

AI, merge in a way that does not affect their material existence. The hybridity takes place in

thought and patterns of behaviour, self-understanding and cultural shifts. Herzfeld argues that

even if physical boundaries between humans and robots and AI are distinct (e.g., as they are with

Cog and Kismet), the fusing of human and machine identities can take place. These virtual

hybridities are as influential and transformative as actual hybridities. According to

339. Ian Birrell, “3D-printed prosthetic limbs: the next revolution in medicine,” The Guardian, February 19,
2017, accessed November 24, 2017, https://www.theguardian.com/technology/2017/feb/19/3d-printed-prosthetic-
limbs-revolution-in-medicine.

Herzfeld, experiences in and with virtual realities are powerful opportunities to try on new—

even dangerous—identities. In these worlds, cyberspace “gives an illusion of human

enhancement. Gamers report feeling empowered, freed from the structures of normal life.”340 In

these virtual realities, including social media, people become more aggressive, indulge risk-

taking behaviour, and act and speak in ways that are incongruous with offline personalities and

personas. These spaces and their associated technologies change human behaviour. As Nicole

Stenger of the Human Interface Technology Laboratory remarks, “cyberspace grafts a new

nature of reality on our everyday life.”341 In a 1994 article largely about personhood and AI,

Norman Lillegard raises another important point related to virtual hybridities. Humans easily

mix metaphors to bring themselves and technology into closer relationship (e.g., “I’m hardwired

for that,” “My cellphone died”). In this way, through the increasing technologization of our self-

understanding and the anthropomorphizing of technology, a hybridity forms.342

Herzfeld notes that there is only slow adoption of actual hybridities given the health risks

associated with them and the fear that use of such technologies might lead to loss of control over

human minds and bodies. In contrast to this reluctance, there is a widespread voracious appetite

for virtual hybridization. Humans are increasingly engrossed in cyberspaces, video games, and

digital devices that change the ways they interact with each other and understand themselves. For

Herzfeld, video game use is one of the best examples of this kind of hybridity, and represents

well the interplay among biomedical, military, and commercial interests. In particular, she is

340. Herzfeld, “Terminator or Super Mario,” 349.


341. Nicole Stenger, “Mind is a Leaking Rainbow,” in Cyberspace: First Steps, ed. Michael Benedikt
(Cambridge: MIT, 1991), 58.
342. Norman Lillegard, “No Good News for DATA,” Cross Currents 44, no. 1 (1994): 28.

critical of the integration of video games with twenty-first century warfare.343 Young,

marginalized men who do not perform well at school (often underfunded public schools)

frequently play video games. Popular games often employ war culture and help turn these young

men into superb candidates for military service following high school. With few options and the

promise of a career and steady income, they enlist. Video games help these young people

improve manual dexterity and lose sensitivity to violence, traits that are ultimately valuable in

combat. So close is this link that researchers develop drones using Xbox gaming controllers

without any modification, making the transition between online and offline life ever more

seamless.344 This entanglement of worlds, between young men and games, between their

experiences of virtual and actual worlds, leads to a changing relationship with death and

destruction. They are desensitized and trained for combat through entertainment, and then slide

easily into settings where people rather than pixels are at stake. Similarly, Herzfeld points to a

school shooting in the United States where a child had trained to kill in virtual reality. The shooter

“clipped off nine shots in about a 10-second period. Eight of those shots were hits. Three were

head and neck shots and were kills. That is way beyond the military standard for expert

marksmanship. This was a kid who had never fired a pistol in his life.”345 In her discussion of

343. Herzfeld also comments on the high price of this research, noting that in “2000, the Pentagon invested
$45 million in a partnership with the video game industry.” Herzfeld, “Terminator or Super Mario,” 352.
344. Colin Schultz, “A Military Contractor Just Went Ahead and Used an Xbox Controller for Their New Giant
Laser Cannon,” Smithsonian.com, September 9, 2014, accessed January 21, 2018,
https://www.smithsonianmag.com/smart-news/military-contractor-just-went-ahead-and-used-xbox-controller-their-
new-giant-laser-cannon-180952647/. See also Hamza Shaban, “Playing War: How the Military Uses Video Games,”
The Atlantic, October 10, 2013, accessed March 28, 2018, https://www.theatlantic.com/technology/archive/2013/10/
playing-war-how-the-military-uses-video-games/280486/.
345. Gayle Hanson, “The Violent World of Video Games,” Insight on the News (June 28, 1999): 15. Cited in
Herzfeld, “Terminator or Super Mario,” 349.

this symbiotic relationship between game and war, Herzfeld notes a cyclical pattern, “We seduce

and train soldiers with one form of hybridization then fit them out with the other.”346

Herzfeld is critical of a culture that encourages virtual hybridization. The instance of

video games and their appropriation as a military training platform is only one way virtual

hybridities have a negative impact on the twenty-first century human. She also remarks on the

highly connected, highly wired lives of the undergraduate students she teaches at St. John’s

University in Minnesota, USA. These students spend more time than any previous generation in

front of a screen. Smart phones, laptops, tablets, televisions, and digital marketing encroach more

and more into their waking hours. All this screen time invariably leads to another problem

distinctive of the Digital Age—multitasking.347 Herzfeld is concerned with how humans are

changing themselves through these technologies, and fears for the quality of our relationships

and our spirituality. She notes studies that indicate that human brains have not yet evolved to

cope efficiently with frenetic toggling between media. “Over time, multitasking erodes our

ability to pay focused, close attention, and this eventually eats away at traits such as patience,

tenacity, judgement, and problem solving.”348 Herzfeld admits that some aspects of digital

culture can contribute to the learning of patience, tenacity, and problem solving. Their costs,

however, outweigh potential benefits. Video games may also cultivate violence and aggression,

and serve military ends. Children and youth are especially vulnerable to these influences given that

they are the majority consumers of video games and other forms of new media. These activities,

346. Herzfeld, “Terminator or Mario,” 352.


347. Noreen Herzfeld, “‘Your Cell Will Teach You Everything’: Old Wisdom, Modern Science, and the Art of
Attention,” Buddhist - Christian Studies 29 (2009): 83.
348. Ibid., 84.

perhaps harmless in small doses or in isolation, quickly become patterns of behaviour and even

addiction. This move toward deeply entrenched hybridities concerns Herzfeld the most, and she

cautions that those enmeshed in these worlds are “being formed by their practice.”349

In a 2009 article, Herzfeld searches for antidotes to virtual hybridity and its symptoms.

She identifies a Buddhist monk’s cell as one such place of reprieve. It is stripped of stimuli,

encouraging focus and contemplation. In the context of environments saturated in virtual

hybridities, such literal and metaphorical cells are rare, if not sacred. Herzfeld sees another

promising way forward in a turn toward relationality as a balm for virtual hybridization.

Relationality, she argues, is rooted in material—rather than virtual—encounter. This turn toward

human connection is less wired, less saturated by AI. It is also a turn away from mainstream

cultural practices. On this point, she finds Buddhist scholar Alan Wallace helpful: “We’re giving

our attention to what seems worthy of our life from moment to moment. Attention, the

cultivation of attention is absolutely core.”350 The kind of attentiveness modelled by Buddhist

mindfulness and meditation brings about positive, non-violent changes in the human. In this

view, virtual war in preparation for actual war is not worthy of human attention. Mindfulness is

the opposite of what takes place in virtual hybridization. Brain scans of experienced Tibetan

Buddhist meditators reveal that this radical, continuous attentiveness changes the human brain to

become better at “detecting emotion and in regulating bodily responses to emotion.”351 In these

insights from Buddhism, Herzfeld finds commonality with Christianity, especially historic peace

349. Ibid., 85.


350. Interviews with Alan Wallace cited in Maggie Jackson, Distracted: The Erosion of Attention and the
Coming Dark Age (New York: Prometheus, 2008), 259. Cited in Ibid., 85.
351. Ibid., 86.

churches in their peaceful resistance to war and violence. She also finds affirmation in the words

of Christ, who “calls it the greatest love to give up one’s life for one’s friends.”352 This she sees

as a scriptural invitation into deep relationship, one that values embodied humanity above all

else. In the context of her work, this means disentangling ourselves from destructive

hybridities—both actual and virtual—so that we may become more fully entangled in the

wellbeing of the world and God’s will for it.

Human Distinctiveness

Herzfeld’s approach to robotics and AI is more critical, with keener contextual awareness than

that discussed in Chapter Two. These differences continue when considering human

distinctiveness vis-à-vis robots and AI. Where Foerst downplays that which distinguishes robots

from humans, Herzfeld examines these distinctions with much greater scrutiny. In God in the Machine, Foerst

encourages her reader to rethink personhood by incorporating insights from robotics. She holds a

favourable view of the personhood of robots and argues that any criteria used to deny robots

personhood can equally be applied to humans. Such a refusal, therefore, discriminates against

humans who lack the stipulated qualities and conditions of personhood. For example, if robots are not

persons because they cannot reproduce, then this excludes people who are also unable to

reproduce or choose not to. In Foerst’s view, this reasoning helps defuse fear and

misunderstanding of humanlike robots and AI, while simultaneously developing a generous

understanding of personhood that includes all people.

While Foerst sees blurry lines between humans and robots, Herzfeld wants to sharpen the

distinction. For Herzfeld, robots and AI may be peers or counterparts to humans, but they remain

352. Ibid., 85.



decidedly different from them. In Herzfeld’s own words, AI is an “attempt to create a new

member of the cognitive family.”353 This newness opens up relational possibilities for humans,

but it is not a move to ascribe personhood to robots. In this way, humans can relate to and assess

robots as external observers. While robots and AI may change humans, discrete boundaries

between us and them mean they cannot be humans. On this issue, Herzfeld finds a relational

approach very helpful. It preserves robots as counterparts to humans and protects us

from too closely integrating robot and human identities. Barth again inspires her thinking in this

area, especially his “I-Thou confrontation.”354 In the Barthian sense, confrontation takes on its

historical nuance, implying the bringing together of God and human, or human and human, into

each other’s presence. This kind of encounter takes place within the triune Godhead and among

humans through the imago Dei. In all such I-Thou relationships self-differentiation is preserved:

the difference between you and me, self and other, human and God cannot be obliterated. Adam

and Eve provide the first biblical example of this differentiation, being set apart in relationship

from each other and from God. From this all other relationality flows. Such a theological

approach excludes robots insofar as they cannot authentically participate in I-Thouness, a

confrontation reserved only for God and humans, and mirrored in human relationality.

In the end, Herzfeld finds that all evidence points to the ultimate deficiency of AI: “We

should recognize that relationship with either an artificial intelligence or with other created

beings or things are no substitution for relationships with other men and women, nor for our

relationship with God.”355 The imago Dei helps Herzfeld argue for the distinctiveness of

353. Herzfeld, “A New Member of the Family? The Continuum of Being, Artificial Intelligence, and the Image
of God,” 235. Emphasis added.
354. Ibid., 241.
355. Ibid., 243.

humans, and to decouple robots from human self-understanding. She brings robots and AI into line

with other human pursuits carried out for the sake of self-understanding. Though robots and AI

provoke new debate, they are not entirely unlike other forms of representation—or imaging—

that humans have carried out for millennia.356 Technology, literature, and art have all been

powerful vehicles for humans to create in their own image—robots are but a new, unusual, and

even scary extension of this human endeavour.357 Finally, just as humans are imperfect and

incomplete, so too are our efforts to build robots and AI reflecting human values and the social

worlds in which they are embedded.

Herzfeld’s efforts to preserve human distinctiveness are sometimes ambiguous. At once

she relies on a relational approach to argue that humans are recognizably different from robots

and AI, while at the same time arguing for a continuum that makes such distinctions very difficult to draw.

For example, she draws on Wendell Berry to argue that there is no meaningful difference

between “creature and artifice, birth and manufacture, thought and computation.”358 Such a

proposal brings her more into line with Anne Foerst’s thinking than with her own arguments about the

I-Thou confrontation described above. She also goes on to argue that human life exists in continuum

with all other life, and references ecotheological critique from Paul Santmire and “traditions of

the Eastern world” to support this claim.359 Again, this proposal appears incompatible with the

specific relationality she preserves for humans as made in the image of God.

356. Herzfeld, In Our Image, 1.


357. Ibid.
358. Wendell Berry, Life is a Miracle (Washington, DC: Counterpoint, 2000), 6.
359. Herzfeld, In Our Image, 237.

Conclusion

Herzfeld makes a strong case for the relational interpretation of the imago Dei in her response to

robotics and AI. In her view, this approach is the most theologically fruitful way forward

considering the historical and theological challenges brought about by increasingly humanlike

robots and AI. It grounds human identity in relationship, including with God as creator. Herzfeld

claims that emphasizing the relational quality of the human helps buffer against making robots

into false idols, and draws on Barth to support these efforts.360 She also argues for an embodied

understanding of relationality, and consequently of intelligence and the imago Dei. This

understanding is clear in many places throughout her research, and she notes that the “move

away from a substantive approach . . . implies that the center of our being is dynamic and cannot

be isolated from the bodies, societies, and natural world in which we are embedded.”361

Elsewhere Herzfeld reformulates this position in equally clear terms: “AI simply replicates or

reproduces one function or a set of functions that are part of what makes a human. Humans,

however, in contrast bring their whole selves into relationship, not just the things that have

corollaries in AI. Embodiment is hugely and inextricably important for relationality. In fact,

excising embodiment from relationality destroys the concept of relationality altogether.”362

360. Concerns about idolatry or making gods for or out of ourselves are not unique to Herzfeld. Bainbridge, for
one, gives an account of a common theme in science fiction where “scientists build a huge computer and feed it vast
amounts of information in order to answer the most profound questions about existence. As soon as it is finished,
they turn it on and ask, ‘Does God exist?’ The computer smugly answers, ‘I do now!’” Bainbridge, 1. These
concerns are increasingly actualized in robotics and AI research. One recent example includes an attempt to make an
AI deity for the “betterment of society.” Olivia Solon, “Deus ex machina: former Google engineer is developing an
AI god,” The Guardian, September 28, 2017, accessed October 10, 2017, https://www.theguardian.com/technology/
2017/sep/28/artificial-intelligence-god-anthony-levandowski.
361. Herzfeld, “A New Member of the Family?,” 243.
362. Herzfeld, In Our Image, 51.

Underneath her seemingly clear claims, however, lies a preference for an approach that is

not as robustly relational as she imagines. Instead, according to her own criteria, Herzfeld’s

approach resembles a modified substantive approach. Recall that most of Herzfeld’s research

focusses on AI, with occasional consideration of robots. In her work on the relational approach,

she suggests that AI (e.g., the Turing Test) can be relational.363 As discussed above, the Turing

Test and its contemporary corollary in the Loebner Prize contest are not relational in their

approach. Their underlying assumption about intelligence diminishes the importance of

embodiment in human relationships and runs counter to the well-developed arguments made by

feminist and ecotheologians, which are discussed in more detail in Chapter Four. Herzfeld also

emphasises the role of the human brain in shaping human identity and meaning. Phenomena like thinking,

reasoning, imagining, creativity, and intelligence form the central part of what it means to be human,

and consequently what it means to be made in the image of God. She even suggests that “it is in

our thoughts, memories, and actions that many of us find the greatest sense of our identity as

persons.”364 What Herzfeld neglects to consider, however, is how all this thinking, remembering,

and acting are undeniably shaped by radically different experiences of embodiment. Even

beyond these differences, the balances of privilege change from one context to the next, so very

little about embodiment is universally applicable. Embodied experiences of race, gender, sexual

orientation, ability, age, and so on dramatically affect relational social structures. In this way, her

thinking on the link between relationality and embodiment requires further development.

Herzfeld’s contributions are well aligned with many aspects of postfoundationalist

interdisciplinarity. For example, her views on the relationship between science and theology are

363. Herzfeld, “A New Member of the Family?,” 242.


364. Herzfeld, In Our Image, 1.

congruent with much of van Huyssteen’s efforts on this front. Both have a deep appreciation for

scientific contributions to the world and to human self-understanding. Both also look to claims

and methods emerging from scientific reasoning to strengthen their theological efforts. Herzfeld

sees science as something that can invigorate theology and improve the quality of her own

investigation into AI, technology, and more. Like van Huyssteen, Herzfeld tempers this

enthusiasm, and advocates for interdisciplinary caution. She does not give science unchecked

authority over theological reasoning and does not accept it as a ‘spoiler’ to the ‘generally

erroneous’ knowledge of theology.365 This preserves space for theological claims and methods to

diverge from—and even conflict with—those emerging from scientific reasoning strategies.366

Though Herzfeld is clearly enthusiastic about science itself and its ability to contribute to

theological discourse, she also speaks critically about science and scientific interventions in this

world. This mix of enthusiasm and careful critique very much mirrors van Huyssteen’s

postfoundationalism. In her broader research on the relationship between theology and

technology, Herzfeld notes that even apparently value-neutral technologies can bring about

mixed relationships and conflicting historical results.367 For example, she notes that social media

can both enhance and destroy human relationships. She also discusses the Three Gorges Dam on

the Yangtze River. The massive dam is expected to reduce China’s significant dependence on

coal; however, it will also displace and destroy communities and significant archeological sites.

The ecological impact of this project is not yet fully understood. It is likely that the dam will

have profound destabilizing effects on the local environment.368 These examples help highlight

365. Herzfeld, “‘The End of Faith?’ Science and Theology as Process,” 288.
366. Ibid.
367. See for example Herzfeld, Technology and Religion, 3-20.
368. Ibid., 4.

the complicated relationships embedded in interdisciplinary theological responses to advances in

science and technology.

Herzfeld favours a constructive interdisciplinary approach. She seeks congruence

between science and theology, and often highlights similarities between the two. Both are

incomplete, ongoing processes, passed from one generation to the next for further development.

Both are equally subject to correction, discovery, and growth. Science and theology are verbs,

rather than nouns. Herzfeld claims that this understanding—human reasoning as process—is

better represented in scientific communities than theological ones. She wants theologians to take

cues from their interlocutors on this point, and develop theological methodology “as fluid as

science” and “to adopt many of the same processes that characterize the scientific method.”369

Ultimately, she hopes that through interdisciplinary engagement, theology will come to rely on

“experiment, questions, and useful results” in the same way as scientific reasoning.370 According

to Herzfeld the Quakers embody many of these methodological commitments. They traditionally

practice theology in community, with communal discernment and the needs of the group

prioritized over the individual. They also ground their theological reflection in local contexts,

and prompt critical examination of the role of the expert.

Herzfeld’s analysis of the substantive, functional, and relational approaches shows close

attention to the impact of context, history, and other reasoning strategies on theological

thinking. Her work on this issue illustrates the ways that both science and theology grew in their

understanding of the human and human intelligence. It also points to convergence between the

two reasoning strategies. This close comparison helps reveal important differences as well,

369. Herzfeld, “‘The End of Faith?’ Science and Theology as Process,” 289.
370. Ibid.

including the ethical challenges brought about in biomedical and military applications of robotics

and AI. Combined, these efforts represent a marked improvement over Foerst in engaging with

context, history, and ethics. Herzfeld, however, still leaves room for further theological work in

this area. Often her work only briefly mentions important issues like race or class, without

probing more deeply their significance for interdisciplinary theology. Similarly, Herzfeld

appreciates the importance of relationality and embodiment for theological responses to robotics

and AI, but stops short of connecting these insights to other well-developed areas of theology,

including ecological, feminist, and contextual theologies. The approach developed in the

following chapter continues this expansion of concerns, relying on the work ably accomplished

by Foerst and Herzfeld and advancing it for a contemporary, contextual theological response to

increasingly humanlike robots and AI.


Chapter Four

Introduction

The analysis of Moravec, Brooks, Breazeal, Knight, Foerst, and Herzfeld found in the preceding

chapters was largely deconstructive. This detailed examination included a comparison of their

work, illustrations from relevant robotics and AI research, and a critique emerging from

postfoundationalist and contextual theology. Each of these six key figures contributed a partial

response to the challenges facing theological responses to increasingly humanlike robots and AI.

This deconstructive work, however, is only part of the theological task. Postfoundationalist

interdisciplinarity, especially as articulated by Wentzel van Huyssteen, also demands

constructive work. Indeed, such constructive efforts embody the ethos of this approach. While

postfoundationalism always allows for questioning the aims and claims of a reasoning strategy, it

pushes toward common ground and creative responses to the challenges posed, in this case, by

robots and AI. Such creative constructive work is the task of this final chapter.

The analysis undertaken so far reveals four important areas for further development. The

first of these—the human—addresses questions at the core of both scientific and theological

inquiry. In this area, roboticists and theologians raise many important questions about the human

and our place in the world. Even the limited scope of this project shows how little consensus

exists in this area, and how much work lies ahead. Existing theological resources, especially

from ecological and contextual theology, help develop an understanding of the human fit for

interdisciplinary interaction with robotics and AI. A second area—contextual awareness—is a

growing concern for contemporary, constructive theological responses. This encourages

theologians to give ever greater attention to the historical and social forces that influence the


development of robotics and AI and theological responses to it. Importantly, it includes

consideration of how diverse experiences of race, gender, class, ability, sexual orientation, social

location, culture, and so on influence how researchers program, build, and reflect on robots and

AI. Intersectional approaches, emerging partially as a response to the shortcomings of white,

middle-class feminism, challenge theologians to consider how these various experiences interact

to compound already existing imbalances of power and privilege. Developing such critical

contextual awareness will fuel interdisciplinary theology and enable it to respond more wholly to

robotics and AI research.

Third, biomedical, industrial, and military applications raise ongoing ethical and

theological questions. Intersecting political and economic interests fuel expanding application in

these areas and contribute to growing daily contact with robots and AI. Closer examination of the

use of robots and AI in such settings reveals ethical and theological complexity, only somewhat

anticipated by Herzfeld in the approach outlined in Chapter Three. From Amazon to brain-

computer interfaces to campaigns to stop killer robots, relevant case studies illustrate how the

challenges of robots and AI exist already in the here and now. These examples also underscore

the robot dependency of day-to-day life for many of the world’s wealthy and privileged, and the

very social structures in which this is all embedded.

A fourth area for further development relates to methodological issues that impact

theological responses to robots and AI, especially the selection and use of sources and

intradisciplinary diversity. Attention to how these conversations take place contributes to the

quality of interdisciplinary dialogue in drawing out values and assumptions embedded in robotics

and AI research. This engagement will help theologians work toward deeper and more detailed

responses to robots and AI. Further to this, as robotics and AI research advances, theologians

will have to deal with an increasing range of objects of inquiry and their application from around

the world. Understanding how different kinds of robots relate, and how theologians might

narrow unmanageable amounts of information, is an indispensable part of developing this

research further.

Looking at these four areas helps create space to discuss the ethical and theological

questions that are always present—but rarely extensively discussed—in theological discourse

about robots and AI. Importantly, this includes consideration of military interests, commercial

applications, and emerging biotechnologies. Together these dimensions in robotics and AI

continually challenge theology to articulate anew its understanding in these areas, combined with

renewed attention to ethics, form the core of a fresh and much needed constructive approach to

theological study of robots and AI.

What follows takes these four key areas—the human, contextual awareness, application,

and methodology—and shapes a new theological way of responding to robots and AI. It

simultaneously builds on the first three chapters of this thesis, while pointing to new avenues of

inquiry that go beyond them. This work also articulates more clearly the ability of robots and

AI to influence how theologians understand the human as made in the image of God, and how

this might affect human-human and human-divine relationships. Importantly, this chapter

addresses the shortcomings of the approaches described and analyzed in Chapters Two and

Three, especially as relates to the human, contextual awareness, application of robots and AI, and

interdisciplinary methodology. The view of the human proposed here emerges from contextual

and eco-theology, especially inspired by thinkers like Thomas Berry and Sallie McFague. Such

an approach counters the superficial understanding of the human often found in both theological

and scientific discourse about robots and AI. Attention to contextuality supports these efforts,

and is essential for carrying out interdisciplinary theological reflection that contributes to social

justice and the dissolving of oppressive ways of being. Attention to biomedical, industrial, and

military applications of robots and AI illustrates these concerns for justice and peace in practical

ways, emphasizing once again the need for theological responses to a growing global force.

Finally, the return to methodological issues concludes the outline of this novel theological

approach to robots and AI. This move helps propel the discourse further, showing how the

selection and use of sources and attending to intradisciplinary diversity are essential in all further

theological work in this area. Such a contribution also honours the spirit of postfoundationalist

interdisciplinarity, which works on methodological development alongside traditional meeting

points of science and theology.

The Human

The intersection of theology and humanoid robots is inescapably an exploration of the identity,

purpose, and future of the human. From a theological perspective, this meeting of human and

robot raises important and multivalent questions: Will robots and AI replace humans in the

workforce? In evolutionary history? How do robots change humans and their relationship with

each other? With God? What are the moral and ethical implications of robotics and AI research?

How do robots promote the oppression—or flourishing—of humans? And so on. These and

related questions require fresh consideration of the human, drawing on existing theological

tradition.

The historical overview in the Introduction foreshadowed many of these questions. The

myth of Prometheus, for example, is also a thought experiment about the limits of human

activity. Scientific and theological researchers alike, including Anne Foerst, continue to consider

these limits millennia later. Another example from Classical Greece also raises questions

relevant for contemporary settings. The story of Pygmalion, a sculptor who crafted an artifact so

beautiful that he fell in love with it, warns of creating new idols made in our own image. Such

reflections on human-robot relationships are later picked up by Noreen Herzfeld and others. The

examples go on—from Frankenstein and Vaucanson to Asimov and Čapek, robots and proto-

robots have prompted human self-reflection for centuries.

The roboticists featured in Chapter One take these questions further. Some of their claims

about the human were clear and bold, while others were hidden or only implicit in their work.

Regardless, each in their own way argued for new understandings of the human, relationships,

and societies. For Moravec, this means dispensing with temporal and embodied experiences. In

his view, the human as it exists now is transitional, merely another step in evolutionary progress

toward improved forms of existence. Here the human is dispensable and imperfect. This

“contingent and finite” quality of the human is consistent with centuries of Christian teaching,

but Moravec diverges from it in his insistence that redemption comes from within as the result of

individual human effort.371 He also considers the human only as an individual, rather than as a

relational creature embedded in complex ecosystems and webs of social connections. Views like

Moravec’s are not rare in robotics and AI research, and spill over into the world of

transhumanism. Such approaches leave theologians with the important task of affirming

embodiment, temporality, and even frailty when discussing humans and their species. As

371. Elizabeth Barnes, “Gattaca and A.I.: Artificial Intelligence: Views of Salvation in an Age of Genetic
Engineering,” Review & Expositor 99, no. 1 (2002): 66.

discussed below, ecotheology is especially useful in developing such a response in its

simultaneous appreciation of the sacredness and humility of the human.372

Rodney Brooks also raises important questions about the human. Before his ground-

breaking work, researchers generally held a highly anthropocentric view of the human. We were

the only organism worthy of replicating, the only template for robotics and AI research. This

significance was due to human cognitional ability permitted only by energetically expensive

brains. As described in Chapter One, Brooks upset this scheme and displaced humans from its

centre. In his view, the human brain—and its ability to do things like math and play chess—was

not the Holy Grail of robotics and AI research. In contrast with Moravec, who found the fleshy

and fragile body cumbersome, Brooks saw it as a source of insight and inspiration. The centrality

of embodiment in Brooks, however, does not quite parallel that which is found in contemporary

contextual theology. His is a ‘thin’ understanding of embodiment, and misses all consideration of

power dynamics that affect lived human embodied experience. These are aspects ably developed

in feminist theological thinking, and others who attend to the social complexities of human

embodiment. These contributions are discussed in greater detail below.

Theologians may also find Brooks sympathetic in another important way. Unlike many

roboticists—but like many theologians—he stresses the intrinsic relational quality of all

organisms. He argues that organisms are necessarily inseparable from the situations,

environments, and the myriad other organisms that surround them. This concern for the webs of

372. These ideas are not exclusive to ecotheology as some theologians who are not typically considered
ecotheologians have also pointed to a similar direction for framing human self-understanding. Gordon Kaufman, for
example, wrote about imagining the human in its “biohistorical” context, which is the “wider context within which
humans have appeared on Earth.” Gordon Kaufman, “Re-Conceiving God and Humanity in Light of Today’s
Evolutionary-Ecological Consciousness,” Zygon 36, no. 1 (June 2001): 338.

connection that characterize life sets him apart from his predecessors and contemporaries,

including Hans Moravec. Brooks describes the importance of a relational view in terms of

situatedness and environment, which is discussed more fully in Chapter One. Though developed

differently in theological reasoning, this is an important first word from roboticists that neither

humans nor humanoid robots can be considered in the abstract.

Cynthia Breazeal offers a distinct take on relationality. While Brooks turned to non-

human, non-primate animals for inspiration, Breazeal reversed that turn and went back to the

human. Her research sought out that which is distinctly developed in humans, including emotion

and language. For Breazeal, these are the core of human identity and are the most important

aspects in developing humanoid robots.373 It is not just language and emotion as such that

Breazeal finds interesting, but how they are acquired and develop from birth and throughout

adulthood. Unlike Moravec and Brooks, who describe the human as static, Breazeal pays close

attention to the significance of human development. Though this picture of the human adds a

layer of nuance not found in earlier approaches to robotics, it is still lacking from a theological

perspective. As noted in Chapter One, her work has biases toward neurotypical humans and she

offers essentially no analysis of the privilege at work in her research. Her insistence on an

“average person’s world” is emblematic of these shortcomings. Theologians must take care not

to replicate these oversights in theological discourse about the human.

Theologians encounter yet another understanding of the human in Heather Knight. In

some ways, she is well-aligned with theological reasoning about robots and AI. For example, she

373. Breazeal does not emphasize proto-language and similar capabilities in non-human primates and other
animals. Her interest is more strictly in human development and psychology. This focus functions to give her
research a reasonable area of concentration rather than reflecting a strong anthropocentric bias. See for example Cynthia
Breazeal, “Insights from Developmental Psychology,” in Designing Sociable Robots (Cambridge, MA: MIT Press,
2002): 27-38.

finds much in common with Foerst’s metaphor of the playful human, or Homo ludens. Here, the

human is a social animal whose identity is shaped through relationship and motivated by fun and

play. This emphasis dominates Knight’s work. She sees the human as primarily a creature of joy

and creativity. Hers is an uneven view with a strong emphasis on the human capacity for humour

and frivolity, but no mention of negative attributes of individual humans or societies. In giving

special importance to these aspects of the human, Knight implicitly downplays the potential for

robots to be a destructive force in history.

Knight’s work offers a few important lessons for theological consideration. Her

optimistic view of humanoid robots is well placed in her own context and for her own purposes,

but is theologically problematic. She works with and develops robots without apparent serious

critical reflection, camouflaging abundant ethical problems associated with robotics and AI

research and its resultant products. Her vision of the human is also incomplete in her inattention

to social context and human history. The theological approach developed in this chapter, and

throughout this research project, seeks to take seriously the different experiences of people and

the imbalances of power that lead to oppression and marginalisation.

The theologians discussed in Chapters Two and Three take the first steps toward a vision

of the human suited for the age of robots and AI. Their work, however, is limited by their own

historical settings. It also fails to address robustly the concerns of contextual theology,

including privilege, power, and social justice. These more substantial considerations of age, race,

gender, disability, class, and sexual orientation, among other aspects of human experience, are developed

below. Nevertheless, ultimately the theologians discussed above have prepared the way for such

contextual analysis and are indispensable for research projects like this one.

Existing theological resources easily act as a corrective to Foerst, Herzfeld, and others.

Of special importance are contributions from ecological and contextual theologians, including

Thomas Berry and Sallie McFague.374 In applying insights from these areas of theology, the

approach to robotics and AI developed below simultaneously builds upon theological tradition

and responds to new challenges in history. For example, the holistic view of the human so well

articulated in ecotheology addresses many of the shortcomings named above.375 Such a view

rejects partial consideration of the human, and replaces it with an understanding of the human—

and consequently of robots and AI—that is much more theologically satisfying. This holism

decentralizes the brain as the defining feature of the human and puts the human into relationship

with all aspects of its self, its communities, ecosystems, and even cosmogenesis. McFague’s

contributions have the additional benefit of feminist critique, essential for understanding human

embodiment and relationality. Drawing creatively on such sources, therefore, is an essential

theological move for this contextual, postfoundationalist approach to robots and AI, and is the

task carried out in the following section.

374. Given his historic contributions to the field, discussions about ecotheology inescapably draw on Thomas
Berry. While his contributions remain useful even in the context of this thesis, they are not without problems. For
example, his rhetorical style often leads to assumptions that contextual theologians should challenge. His evocative
and poetic style encourages fresh interpretation in light of new contextual challenges, including this present work on
robots and AI. A drawback of this style, however, is its reliance on generalizations that can be unhelpful or even
problematic. He speaks of the ‘Chinese world’ or an ‘Indian concept’ as though those exist in the singular and
therefore glosses over the sophistication and complexity of cultures other than his own. See for example Thomas
Berry, The Dream of the Earth (San Francisco: Sierra Club Books, 1988), 20.
375. A holistic view of the human is not exclusive to ecotheology. Some theologians working on robots and AI
have also advanced such a view, including Antje Jackelén, who emphasized the importance of the systems that make
up the human as both a social and physical animal. Jackelén, 291.

A theological response

The image of the human found in ecotheological discourse meets many of the needs for

interdisciplinary theological responses to robots and AI. These views emerged historically as a

partial response to the failure of both humans and theology to grapple adequately with

climate change and the destruction of the planet. Ecotheologians have long articulated the failure

of humans in light of global climate catastrophes, which bear some parallels to the destructive

use of robots in consumer culture and industrial applications. Thomas Berry, for one, notes: “The

human has become not the crowning glory of Earth, but its most destructive presence.”376 With

this as a motivation, ecotheology addresses the anthropocentrism, patriarchy, and estrangement

from each other and the world that brought us to this historical moment. These, too, are the

problems facing the application of robotics and AI, and theological responses to it. As evidenced so

far in this research project (and continuing later in this chapter), robots and AI reveal much about

humans’ propensity to place themselves at the centre of the universe, marginalize and oppress

other humans, and consume in unsustainable ways. With these parallels in mind, the

ecotheological view of the human—as a relational, holistic, contextual, and embodied animal—is

an indispensable resource for these distinctly twenty-first century challenges.

Holism and relationality

The human is an intrinsically relational animal, connected to others for survival. In the

womb, an umbilical cord connects the developing fetus to the life-sustaining placenta. As a baby

and young child, developmental learning promises connection with others through language,

376. Thomas Berry, “Christian Cosmology,” in The Christian Future and the Fate of Earth, eds. Mary Evelyn
Tucker and John Grim (Maryknoll, NY: Orbis Books, 2009): 28.

gestures, and other forms of expression. Growing into adulthood, we become aware of how

much we depend on all manner of people for our own security and comfort. From farmers to

plumbers to teachers, we very concretely depend on each other at every moment. At the social

and communal level, humans—and the species from which we directly descended—seek

relationship in still other ways. Humans also desire romantic and sexual intimacy, friendship and

companionship, and yearn for a connection with the natural world and the transcendent.377

Ecotheologians encourage a global view of this relationality, one that appreciates that all

these expressions of connection are held together in infinitely complex webs. All aspects of this

intrinsic relationality are bound up with all other humans, animals, social and ecological systems,

and even the unfolding of the universe. As Brian Swimme and Thomas Berry extensively argue,

the human story is only properly understood within the context of this broader perspective. This

awakening is recent for many of us in the west, including those who follow western Christian

traditions. “Only now do we begin to understand that this story of the Earth is also the story of

the human, as well as the story of every being of the Earth.”378

This emphasis on relationality means that human experiences are not isolated, nor are

they global. Individuality and universality live in dynamic relationship, with lives and

experiences existing in equilibrium, where shifts are informed by both choice and chance. In

terms of the human, these webs of relationship are radically shaped by social forces like race,

gender, and class (as discussed further below) and balances of power and privilege. Ecotheology

calls on all of us to honour the complexity of these webs of relationality as essential aspects of

377. See Berry, The Dream of the Earth, 13-15.


378. Swimme and Berry, The Universe Story, 3.

understanding the human. As Sallie McFague wrote in response to the nuclear technology crisis,

“we are part and parcel of the web of life and exist in interdependence with all other beings, both

human and nonhuman.”379 Berry’s work on retelling the universe story is partly motivated by the

need for this more dynamic, less individualistic view. He remarks, “Each human personality

must, in accord with personal abilities and opportunity, assume the global human heritage as his

or her own individual heritage if we are to fulfill the historical demands of our existence.”380

A relational view emphasizes the human place in society, ecosystems, and cosmogenesis,

and deemphasizes the comfort and status of the individual. The flourishing of one must be held

in balance with the flourishing of others and the resources available to sustain life and

relationship. This casts a critical eye on robotics and AI research, which so often calls attention

to the value of the individual at the expense of the community. In tandem with this, the majority

of applications of robots and AI are designed to bring comfort and power to the already

comfortable and powerful. Implicit in all of this is support for the modern myth of progress and

its insistence that quality of life and the wellbeing of the human are intrinsically linked to the

success of western science, technology, and industry. In commentary on AI in science fiction

films, Elizabeth Barnes notes that this emphasis is the partial legacy of “Enlightenment

individualism,” which stresses the “power of a self-made man to shape a bright future for

himself.”381 She goes on to say that Gattaca, a 1997 film about genetic perfection and human

worth, teaches the viewer that salvation rests on these foundational assumptions. Against this,

379. McFague, “The World as God’s Body,” 671.


380. Thomas Berry, “Contemporary Spirituality: The Journey of the Human Community,” Cross Currents 24,
nos. 2-3 (Summer-Fall 1974): 172.
381. Barnes, 61.

she argues: “Sacrifice born in the womb of community and communal values is required to

complete a story which modernity and solipsistic individualism cannot.”382

Sallie McFague offers another perspective on relationality that is helpful for theological

understandings of the human vis-à-vis robots and AI. One of the ways she frames this is in terms

of kenosis—self-emptying—recalling Christ’s own humble life and death. For McFague, the

defining features of human kenosis include “restraint, self-sacrifice, give-and-take, limitation.”383

In addition, it means that to continue in the way of Christ, the way of kenosis, “life only comes

through death.”384 In the context of ecological crisis, kenosis is the death of expensive, rapacious,

and oppressive lifestyles and social structures that support and encourage environmental

destruction. As McFague articulates it, kenosis is the pathway to responsibility, accountability,

and liberation.385 In terms of robots and AI, kenosis looks much the same, especially in terms of

the industrial, biomedical, and military applications discussed below. Robotics and AI as a

reasoning strategy will also have to change and allow for the death of unchecked ambition,

creating for the sake of creating, and unexamined biases.

Restoring relationality to theological understandings of the human translates easily into

holistic approaches to interdisciplinary dialogue. Brian Swimme and Thomas Berry try to

achieve such restoration in their retelling of the universe story. They note that for too long

humans, particularly in western academic streams of thought, have emphasized one aspect of our

382. Ibid., 62.


383. Sallie McFague, Blessed are the Consumers: Climate Change and the Practice of Restraint (Minneapolis,
MN: Fortress Press, 2013), 141.
384. Ibid., 143.
385. Ibid., 166-168.

story at the expense of others. They note, for example, that historians focus on humans, and

scientists on physical phenomena, and so on.386

down these silos and bring together a diversity of insights to tell a “comprehensive” narrative.387

McFague also shows an appreciation for understanding the human in reference to our broader

context in evolutionary history. She is careful, however, to make a distinction between

individualism (as valuing the individual over community) and individuality (a necessary feature

of evolutionary history). Individuals, she argues, are essential components even in holistic

worldviews and methodologies: “each part, no matter how small or large, makes its contribution

to the creation of more and more complex and diverse forms of life.”388 From an interdisciplinary

methodological perspective, van Huyssteen is largely in accord with these ecotheologians at this

point. Though he typically speaks of holism in terms of debates around postmodern

epistemology, his interpretation of postfoundationalism espouses similar ambitions for a

comprehensive account of the human. His Gifford lectures, an interdisciplinary study of human

uniqueness, are an excellent example. Here he remarks: “Interdisciplinary discourse, then, is an

attempt to bring together disciplines or reasoning strategies that may have widely different points

of reference, different epistemological foci, and different experiential resources.”389 Such

discourse expands the traditional scope of theological inquiry and makes boundaries between

reasoning strategies permeable. Relying on more traditions and more reasoning strategies makes

clearer the gaps in the stories we tell about ourselves and the universe. This move toward holism

also impacts how theologians understand the place of the human in cosmic history. Importantly,

386. Swimme and Berry, The Universe Story, 1.


387. Ibid.
388. McFague, Blessed are the Consumers, 147.
389. van Huyssteen, Alone in the World?, 9.
172

it dismantles anthropocentrism and positions the human as but one species among species. This

move is a humbling one. No longer is the human the apex of creation, but instead a late-

appearing animal relying on all that preceded it for its very existence. “Fifteen billion years ago,

shape-shifting matter appeared as primeval fire . . . when shape-shifting matter suddenly

appeared in human form a great surprise took place.”390

This emphasis on relationality and holism is largely absent, or underdeveloped, in

theological discourse on robots and AI. Bringing relationality and holism into interdisciplinary

dialogue encourages theologians to look at robots and AI through a new critical lens. In this

view, we must continually ask not only how these technologies will impact their targeted end-

users, but also how they upset equilibria throughout social and ecological systems. It means

theologians must keep in mind those who are removed from the direct impact of these

technologies, and be ever-mindful of global implications. Honouring the relational quality of the

human also helps underscore differences between us and robots. As it stands now, robots and AI

are at best superficially relational and function independently of social and ecological contexts.

These technologies mimic some of our social attributes, but do not participate in the same web

within the context of the same cosmic history. Though this is likely to change as social robotics

advances, it is important to recall the intrinsic relationality of the human so that we do not over-

identify with robots and AI at this moment.

In terms of robotics and AI research, emphasis on relationality and holism also brings

into sharper relief how roboticists perpetuate these gaps and shortcomings in their own work, and

then reflect this incomplete picture of the human in their so-called humanoid creations. Against

390. Swimme and Berry, 143.



this relational and holistic view, even the most sophisticated robots and AI only bear pale

resemblance to the human. Though this is likely to change as robotics and AI research

progresses, these differences must not be set aside while they still exist. For example, researchers

like Noreen Herzfeld and Antje Jackelén discuss the imago Dei in light of humanoid robots and

AI. In examining what it means to be made in the image of God, they struggle to fully grasp the

distinctiveness of the human made evident by this aspect of ecotheology.391 For example,

Jackelén writes of AI that a “core goal is to build fully intelligent artifacts with cognition,

perception, action, learning, creativity, and emotion—all the characteristics of what might be

called a human being created in the image of God.”392 While she notes that AI researchers have

achieved only mixed results, she misses the opportunity to point to even bigger differences

between robots and humans—those that rest on humanity’s intrinsic relational and holistic existence.

Such an emphasis would reveal that successes in the area of cognition, perception, or action (for

example), are false equivalencies between robots and humans. Serious consideration of human

relationality and a holistic approach to understanding our ‘global human heritage,’ makes robots

like Cog and Kismet, Baxter and Jibo, seem hardly humanlike at all. These robots have no

connection to each other, no robot culture or history, and no sense of the evolutionary or social

settings that led to their creation.

While this analysis makes it seem like science and theology are incompatible on this

point, there is good evidence that roboticists are moving toward a more relational and holistic

understanding of the human. For example, though the results are ultimately unsatisfying, Breazeal and Knight do

391. See for example Herzfeld’s understanding of the relational approach and my critique of it in Chapter
Three.
392. Jackelén, 291.

make some efforts to approach their programming from a relational rather than computational

perspective. Elsewhere, Brooks’s appreciation for other animals in developing his approach to

robots and AI shows a willingness to understand human intelligence and identity in gradation

with the rest of creation. Flowing from this, biomimicry and swarm behaviour are now major

areas of inquiry for robotics and AI research.393 These efforts honour the intelligence of animals

other than the human, and to an extent recognize the interconnectedness of all life, whether it be

in groups or ecosystems. This movement parallels shifts in evolutionary biology, where

increasingly ‘survival of the fittest’ narratives are set aside in favour of hermeneutics that

emphasize cooperation, altruism, and the importance of groups and colonies.394 Such reciprocity

of relationship helps strengthen animal groups, and humans express this through a complexity of

arrangements.395 These brief examples show that scientific reasoning is also open to the

possibilities put forth by ecotheology, including growing appreciation for the intrinsic relational

quality of the human and appreciation for holism as an important element of human reasoning.

Context in cosmogenesis

The ecotheological vision of the human is rooted in the broader context of cosmogenesis.

As an alternative to the more static cosmos or universe, Thomas Berry uses the term

cosmogenesis to better capture the ongoing unfolding quality of all that is known and unknown.

“In cosmogenesis each fundamental breakthrough evokes a multiplicity of possibilities. In each

new era, creativity explodes to populate the realm with an abundance of novelties, many of

393. “Programmable Robot Swarms,” Wyss Institute (Harvard University), accessed August 3, 2017,
https://wyss.harvard.edu/technology/programmable-robot-swarms/.
394. Matt J. Rossano, “Artificial Intelligence, Religion, and Community Concern,” Zygon 36, no. 1 (March
2001): 58.
395. Ibid., 59.

which do not survive.”396 In this way, cosmogenesis encapsulates not only the matter and energy

of the universe, but the processes and mysteries that propel it through time and space. In The

Great Work, Berry gives another account of the unfolding and dynamic quality of cosmogenesis:

“We now live not so much in a cosmos as in a cosmogenesis; that is, a universe ever coming into

being through an irreversible sequence of transformations moving, in the larger arc of its

development, from a lesser to a great order of complexity and from a lesser to great

consciousness.”397 On this point, Berry takes up the tradition of Pierre Teilhard de Chardin, who

developed his understanding of cosmogenesis as early as 1959, writing that the “universe is no

longer a State but a Process. The cosmos has become a Cosmogenesis.”398

The turn to cosmogenesis reimagines the universe story and the human place within it. It

reinforces the relational quality and holistic methodologies described above, making it clearer

that it is impossible to abstract the human from its relationship with all creatures, ecosystems,

processes, and matter. These relationships bind the human to all that gave rise to and sustain our

species, and all that will flow from it. Cosmogenesis is also a revelatory phenomenon as it speaks

to the still-in-progress quality of all that is and points to all that will be. Thomas Berry expands

on this, saying:

We must feel that we are supported by that same power that brought the Earth into being,
that power that spun the galaxies into space, that lit the sun and brought the moon into its
orbit. . . Those same forces are still present; indeed, we might feel their impact at this
time and understand that we are not isolated in the chill of space with the burden of the
future upon us.399

396. Swimme and Berry, The Universe Story, 114.


397. Thomas Berry, The Great Work: Our Way Into the Future (New York: Bell Tower, 1999), 26.
398. Pierre Teilhard de Chardin, The Future of Man, trans. Norman Denny (London: Collins, 1964), 261.
399. Berry, The Great Work, 174.

These unknown—and potentially unknowable—forces point to unpredictable formation,

disintegration, and reformation of matter, relationships, and systems. Cosmogenesis is a constant

process of undoing and redoing, and always points toward dynamic understandings of

relationships. Everything—from subatomic vibration to the expanding edges of the universe—is

in flux. By extrapolation, this includes human biological and communal life, which persists only

through a sensitive balance of ecological and social relationships. Need and desire for finite

resources makes survival and flourishing not about the human, but about our place within this

much broader cosmogenic context.

In an orientation toward cosmogenesis, the human becomes far more complex, far more

relational, and far more indeterminate. The human cannot simply be reduced to cognition,

learning, language, or any other common criteria for uniqueness. Instead, the human must be

viewed in terms of the whole, where its constitutive systems and parts are evolutionarily

surprising, but essential aspects of cosmogenesis as we know it. The human is not only a species

in process, but a sensitive one that responds to the biological demands of body and ecosystem,

the making and breaking of bonds with other humans, customs and beliefs, and even our genetic

material. A cosmocentric view of the human encourages attention to all these factors, and their

relation to competing influences throughout the web of all relationships.

These reflections on our context in cosmogenesis make clearer the differences between

the human and humanoid robots and AI. This analysis shows the complexity of the human and

the comparable simplicity of our android analogues. Through this lens, even the most

sophisticated robots and AI are but caricatures of humans, drawn without consideration of the

billions of years of cosmic processes that led to this historic moment. Further, emphasis on

cosmogenesis helps give greater definition to the impact of robots and AI in history. Theologians

like Foerst and Herzfeld describe the threat of these technologies in terms of personhood, making

idols for and of ourselves, and loss of human life. Considering cosmogenesis expands these

circles of concern and brings new theological and ethical concerns to light. These include, for

example, the role of robots and AI in patterns of consumption and exploitation, and military

interests in using these technologies for violent or destructive purposes—ambitions which are

incongruent with an ecotheological commitment to the integrity of all life, and cosmogenesis as a

whole.

Embodiment

In the setting of this research project, questions about the human and humanoid robots

and AI are necessarily also questions about embodiment. Researchers experiment with the limits

of human embodiment, make claims about human embodied experience, and perpetuate these

ideas in robots and AI. The views of Moravec, Brooks, Breazeal, and Knight—while often

divergent—present many challenges in this area worthy of critical analysis and correction in

interdisciplinary theological responses. The understanding of human embodiment represented in

their research often reflects unchecked social privilege and little consideration of the global

impact of their work. For example, the preference for maleness and whiteness is evident

throughout robotics and AI research, and related science fiction representations. In addition,

industrial, biomedical, and military applications raise new global ethical challenges that go

largely unaddressed by roboticists and AI researchers. These challenges are discussed more fully

below, but at this moment existing theological resources on embodiment also begin to correct

these shortcomings of robotics and AI research.



Though better known for his four-fold typology relating science and theology, Ian

Barbour also contributes to an understanding of embodiment helpful for theological

consideration of robots and AI. Through attention to both the methods and claims of science,

Barbour’s work gives an interdisciplinary perspective that enhances Foerst’s and Herzfeld’s

efforts in this area. Against the backdrop of AI research, he argues for a holistic view of the

human that takes historical theological understandings of embodiment seriously. He also

advocates for a holistic approach that binds together the “religious, aesthetic, moral, and cultural

as well as scientific features of human life.”400 This approach helps avoid reducing the human to

a finite set of features or metaphors, and embraces the complexity of human social and embodied

life. It also holds together the material and spiritual aspects of human existence, and rejects the

materialism at work in mainstream robotics and AI research. He is careful, however, to also find

a middle ground between this rejection of materialism and an equally unsatisfying body-soul

dualism.401 Through these efforts he hopes to develop an understanding of the human that takes

robotics and AI seriously, but also honours the wisdom of the Christian tradition.

Barbour’s constructive work begins with his reclaiming of historical views of the human

relevant for interdisciplinary discourse about AI. He starts with scripture and a re-examination of

psyche and nephesh.402 Here, he argues, the Hebrew term nephesh and its New Testament Greek

counterpart, psyche, point not to the immaterial aspects of the human, but to the totality of

human experience and an affirmation of embodiment. The Pauline “spiritual body,” according to

400. Barbour, 362.


401. Ibid.
402. Barbour is not alone in drawing on these biblical concepts in a theological response to AI. Likewise,
Russell C. Bjork turns to the roots of nephesh to develop his understanding of biblical personhood and its
relationship to theological anthropology amid AI. Bjork, 95-102.

Barbour, only further underscores this. It transforms these essential aspects of the human rather

than rejecting them. Such an approach leaves room for both transcendent and immanent

experiences within the singular embodied self.403 These scriptural concepts describe an

integrated human where temporality, materiality, and embodiment are an essential part of what it

means to be human. Other theologians have also worked to reclaim a “positive biblical value of

the human body along with the physical world” and argue for such a multivalent, integrated view

of the human. These include Ted Peters, who also espouses many of the ecological views about

the human discussed above.404 Peters also notes that “Christian creeds affirm ‘resurrection of the

body,’ not immortality of the soul.”405 This affirmation, for Peters, is clear indication of the

importance of embodiment even from the earliest days of Christianity. Reclaiming this aspect of

the tradition helps support contemporary discussions about the meaning of embodiment,

especially in parallel with similar conversations in robotics and AI research.

Sallie McFague brings together many of the concerns described here and treats

embodiment as central to her feminist ecotheology. It is the primary category for understanding

the human, our experiences in the world, and our relationship with God. It is also her starting

point for understanding the divine and even cosmogenesis. Embodiment is her pathway for

dealing with an array of historical issues, ranging from nuclear technologies to global

catastrophic climate change. While she is clear about Christianity’s shortcomings in dealing with

both human bodies and the world as God’s body, she insists on a constructive approach.

Importantly, she finds value in the scriptural record and the life of Jesus of Nazareth in telling

403. Barbour, 363ff.


404. Ted Peters, “The Soul of Trans-Humanism,” Dialog 44, no. 4 (2005): 381.
405. Ibid., 388.

new stories about embodiment that forge more faithful and just relationships with the Earth.406

So important is the arc of incarnation, life, death, and resurrection that she claims Christianity is

a religion “about nothing but embodiment.”407 The centrality of embodiment is further evidenced

in creation stories, the emergence of animal bodies, and resulting sacramental traditions, all of

which have an intimate, physical character. Despite this abundance of theological evidence,

Christianity has a difficult history with human bodies, especially those marginalized because of

gender, race, disability, or sexual orientation. McFague, however, remains hopeful that a feminist

ecotheological understanding of embodiment can correct this course.408 She accomplishes this

correction in two key ways. First, as noted above, she returns to biblical roots, lifting up

examples of God’s affirmation of embodied existence.409 Divine embodiment in Jesus of

Nazareth is the ultimate evidence of this. Scripture clearly describes his embodied life with

strong emphasis on the desire to be with the poor and the outcast—people often excluded for

reasons relating to their own embodied experience.410 Such examples speak not only to the

tolerance of embodied experiences, but their importance to God as part of the collective human

story. Further, Jesus’s own ministry was so often related to earthly matters, underscoring the

importance of embodied life. For example, the miracle of loaves and fishes, water into wine, and

most powerfully in the Last Supper, speak to God’s entry into the most mundane—but

essential—experiences for human bodies.

406. See Sallie McFague, The Body of God: An Ecological Theology (Minneapolis, MN: Fortress Press, 1993),
114, 135.
407. McFague, The Body of God, 163.
408. Ibid., 14-15.
409. Ibid., 131-136.
410. McFague, Blessed are the Consumers, 173; McFague, The Body of God, 170-174.

So important is embodiment for McFague that it prompts her to reimagine the language

she uses to describe God. She notes that the words and metaphors people use to describe the

divine are always evolving.411 At her moment in history, ecological crises demand new ways to

talk about experiences of God and God’s presence in the world. She proposes understanding the

world as God’s body as a way forward in response to this context. This metaphor is one she has

been developing for decades and has as its central motifs “became flesh” and “lived among us”

inspired by the famed Johannine passage.412 It strengthens connection to the embodiment of

Christ, the physicality of the resurrection, and ongoing presence in the Eucharist.413 Through

these experiences, God is immanent, close to humanity, and enmeshed in all flesh and matter.

The world as God’s body has implications for theological responses to robots and AI.

Just as in the instance of ecological crisis, this metaphor expands appreciation for the finitude

and sacredness of natural resources, the interconnectedness of all life, and the importance of

diversity for the flourishing of all life. This return to embodiment and emphasis on the

physicality of experiences of the divine contradicts the escapist, transcendental vision some

researchers (e.g., Moravec) have for humans and robots. The world as God’s body compels

theologians to deal with the human body as is, in the here and now.

Encounter with the divine, with the world as God’s body, takes place amid brokenness. In

McFague’s words, “We meet God in the world and especially in the flesh of the world: in

411. McFague, “The World as God’s Body,” 671.


412. McFague, The Body of God, 160. See also McFague, “The World as God’s Body,” 671-672, where she
discusses the historical theological roots of this concept and its usefulness in responding to contextual crises,
including the development of nuclear technologies.
413. McFague, “The World as God’s Body,” 672.

feeding the hungry, healing the sick.”414 With this in mind, theologians must reconsider how

robots are developed and used in the world. This work includes advocating for a more complex

understanding of the human and embodied experience, tempering ambitions for perfected or

enhanced bodies, and directing applications toward justice and relationship-building efforts—

i.e., changes in both methodology and application. According to postfoundationalist

interdisciplinarity as articulated by Wentzel van Huyssteen, these changes are a move toward

“embodied epistemology” that emphasizes lived experience, especially in community, over

abstract generalities.415 Movement away from generalities toward particularities requires much

better attention to the diversity of human embodied experiences. Major factors that shape these

experiences include gender, race, sexual orientation, disability, and class, which are discussed

more fully below.

The vision of the human outlined above contributes to a new interdisciplinary theological

approach to robots and AI that responds both to the shortcomings of the efforts outlined in

Chapters Two and Three, and also to the deficiencies of scientific reflections in this area.

Contextual and ecological theology are a tremendous resource at this point and form the

backbone of a renewed understanding of the human that propels interdisciplinary discourse

forward in an age of robots and AI. These resources dismantle the anthropocentrism pervasive

throughout robotics and AI research, and offer an alternative vision that is more faithful to

Christian theological tradition. This vision includes an appreciation of the human as an

intrinsically relational creature, whose lived embodied experience shapes all reasoning and

creation. The holistic view of the human proposed here underscores that the human may never be

414. Sallie McFague, “Intimate Creation: God’s Body, Our Home,” Christian Century 119, no.6 (2002): 43.
415. van Huyssteen, Alone in the World?, 79-86.

abstracted from its context by either science or theology. Consideration of human relationships is

essential, including how we understand ourselves alongside all other creatures, the communities

and social structures we develop, and how this fits together in an infinitely complex global web.

Contextual Awareness

Interdisciplinary discourse about robots and AI shows a partial, yet developing, contextual

awareness. In their treatment of the subject, both scientists and theologians give only peripheral

treatment to social settings, historical factors, and ethical issues affecting their work. The

researchers discussed in Chapter One very often fail to connect their research to broader trends in

society, and how power and privilege inform their work. Though Moravec, Brooks, Breazeal,

and Knight represent impressive diversity in their approaches to robotics and AI, they do little to

unearth biases that limit their own research. Foerst and Herzfeld fare somewhat better, with

Herzfeld especially interested in a broader array of historical and social influences in robotics

and AI research. Their work, however, still misses a more robust consideration of how robots

and AI participate in oppression and liberation across local and global settings. Closer

examination of these relationships helps unearth biases and contributes to more just robotics and

AI research, and theological reflection on it.

Though critiques and insights from contextual theology and other relevant reasoning

strategies are essential for advancing interdisciplinary dialogue, one must proceed with care at

this point for a few reasons. First, the roboticists and AI researchers named in Chapter One did

not anticipate such contextual critiques when carrying out their work. Viewed from within their

reasoning strategy, their work is more-or-less methodologically complete. Bracketing social and

ethical questions is the norm, especially in the era of Moravec and Brooks, and they would not

be faulted for doing so by other roboticists and AI researchers. Second, Moravec, Brooks, and

Breazeal all made major contributions to robotics and AI before schools of thought like third-

wave feminist theology and ecotheology emerged. It is therefore difficult to apply such critique

to their work without being unfairly anachronistic. The objective, however, remains: to use

insights from contextual and ecological theologies, and other relevant discourse, to reimagine

interdisciplinary theology about robots and AI with a view on ever-more just and inclusive

dialogue.

The researchers discussed in Chapter One display an ambiguous relationship with their

context. Often their work is simultaneously very concerned with the world outside the laboratory

and yet also indifferent to it. Moravec represents this tension throughout his work. For example,

he turns to the world around him in developing his vision of a disembodied, mind-based future.

He sees environments where genes have outlasted their purpose and enslave humans to frail

embodiment and inescapable mortality. There is no greater impetus for his efforts than this

genetically determined finitude. He carries out his research, however, without any serious

reflection on his social location, or on how his privilege shapes those affected by his efforts. Even

when Moravec turns to other reasoning strategies to support his work he does so without

attention to the internal debates and diversity therein. This approach leads to a skewed image of

the human, one overly preoccupied with the brain’s computational power. Similarly, his suggestion

that robots and AI can be regulated through democracy is unsatisfying. He is optimistic that

humans can agree on regulating robot and AI behaviour, but offers nothing by way of support for

this enthusiasm. He also fails to mention that democracy as exercised by humans is invariably

imperfect, and that humans would disagree about the values and ethics to instill in robots and AI.

Even the example he names—Asimov’s Three Laws of Robotics—is problematic by Asimov’s own

admission. The stories collected in I, Robot illustrate ways the laws can be bent or interpreted

differently, leading to chaos and undermining the original spirit of the laws. Further, Moravec

makes no effort to articulate how these processes will include those who experience different

degrees of social exclusion and are not well represented in existing democratic processes (e.g.,

women, queer people, racialized minorities, and so on).

Cynthia Breazeal and Rodney Brooks have similarly ambiguous relationships with their

historical and social settings. Breazeal’s appreciation of developmental learning indicates clear

interest in broadening the conventional scope of academic robotics. It shows that she wants to

relate her robots with the humans that inspire their development, making the parallels between

them stronger by paying attention to life outside the laboratory. Similarly, Brooks is interested in

inspiration from non-traditional sources, but shows this in markedly different ways. Most

notably, as described in Chapter One, he seeks connection with military and industrial interests.

This requires intimate knowledge of certain social settings, including the nature of military

operations and the challenges facing different types of missions. Many of his products have also

been adapted for home and commercial use.416 Like Moravec, however, both Brooks and

Breazeal leave theological audiences wanting more by way of contextuality. Ultimately, their

work shows little interest in challenging conventional ideas about race, gender, sexual

orientation, disability, and class in robotics and AI research. While Breazeal does try to break

down some patriarchal ideas from early robotics research, this critique is often implicit in her

work, and still leaves many other problematic assumptions intact. For example, with Kismet, she

416. Robots developed at least in part by Brooks through DARPA funding include ones designed for mine-
removal, reconnaissance, bomb disposal, surveillance, unmanned military excursions, retrieving human remains
from the battlefield, as well as robots like the Roomba vacuum cleaner and those deployed in the Deepwater
Horizon oil spill in 2010. See for example, “iRobot Seaglider Collecting Valuable Data in the Gulf of Mexico (press
release),” iRobot, May 25, 2010, accessed October 25, 2017, http://media.irobot.com/press-releases?item=122479.

assumes a neurotypical, able-bodied human without any consideration of differences in learning

and development. With Jibo, promotional material refers to the robot as a ‘he’, emphasizing

maleness as normative. Jibo is also shown helping in the kitchen and taking photos in what are

clearly middle-class or upper middle-class settings. These basic examples reveal much about the

assumptions embedded in Breazeal’s work, and ultimately undermine her efforts to transform

robotics into a more accurate mirror of the human. For Brooks, the lack of contextual awareness

comes through asymmetrical valuation of his robots. For example, he sees military robots as

saving lives and protecting soldiers in combat settings, making no remark on their potential to

harm others, fall into ‘enemy’ hands, or perpetuate global conflict. In terms of his industrial

robots, most notably Baxter, he makes no remark about the devaluing of human labour, potential

unemployment, or the role of robotics in accelerating destructive patterns of consumption.417

Heather Knight shows a different relationship with her context than the other roboticists

discussed in Chapter One. Though like Moravec, Brooks, and Breazeal she is very much

interested in the application of her robots outside the laboratory, she spends a great deal of

energy interacting with non-scientists on platforms like Twitter and television. Through these

means, Knight shows a keen understanding of how to disseminate her research beyond the

robotics community. Theologians can learn much from her efforts to translate her research and

captivate people from outside her own reasoning strategy. These methodological moves are

democratizing for robotics and AI research and help make an otherwise elite community

accessible to those who have traditionally been left out (e.g., women, racialized minorities).418

417. “Baxter: Collaborative Robots for Industrial Automation,” Rethink Robotics, accessed April 4, 2018,
http://www.rethinkrobotics.com/baxter/.
418. Knight, “Silicon-based comedy” (video).

Overall, however, Knight does not challenge any of the intrinsic biases that are so pervasive in

robotics and AI research and fictional representation of the same. She even reinforces stereotypes

and essentializes an entire nation: “Pittsburgh loves robots so much that I like to think of it as a

little island of Japan in the center of our country.”419

Popular culture is also culpable in portraying robots and AI in ways that often uncritically

repeat human biases and shortcomings. For example, in Steven Spielberg’s A.I. Artificial

Intelligence, the humanoid robot is the prototype of ‘perfection’: white, blue-eyed, able-bodied.

Perhaps it is no coincidence that the robot shares a name—David—with Michelangelo’s famed

Renaissance symbol. Humanoid robots and AI also figure prominently in comic books, where they

repeat similar prejudices. In Alex + Ada, for example, there are clear ideas about what it means

to be human.420 Ada—a humanoid robot so advanced ‘she’ can pass for human—is thin,

conventionally attractive, white-passing, and without any apparent disability. The other

humanoid robots in the series are similarly depicted, except for a single, short and racialized

character who illegally tampers with humanoid robots to give them full sentience and autonomy.

Similarly, in How to Pass as a Human the robot protagonist, ‘his’ human object of affection, and

all other actors appear to be white and able-bodied.421 In both Alex + Ada and How to Pass as a

Human, all evidence suggests that the relationships between human and robot replicate cisgender

heterosexual norms. These few examples point to assumptions about the human at work in popular

culture featuring robots and AI. Here, the average human is white, male, heterosexual, and able-

419. Heather Knight, “Pittsburgh’s Robot Film Festival: Preview,” Robohub, November 6, 2014, accessed July
31, 2017, http://robohub.org/pittsburghs-robot-film-festival-preview/.
420. Jonathan Luna and Sarah Vaughn, Alex + Ada, 3 vols., (Berkeley, CA: Image Comics, 2014-2015).
421. Nic Kelman, How to Pass as a Human: A guide to assimilation for future androids (Milwaukie, OR: Dark
Horse Books, 2005).

bodied. These stories collapse the diversity of human experience into narrow narratives that have

been told time and time again. In leaving out most people from these stories, authors and

filmmakers simply replicate patterns of oppression and do little to advance important

conversations about what it means to be human in an age of robots and AI.

Attending to Diversity

As outlined above, academic and public discourse about robots and AI bears the imprints

of its human origins. Where humans do well, robots follow. Similarly, where humans often fail,

robots are not far behind. In terms of developing robots and AI based on the human—or even as

a substitute for them—this means that all-too-human biases continue to show up in even the most

cutting-edge technologies. Practically, this means that robots and AI become yet another

pathway to marginalization, oppression, and injustice. Fortunately, resources exist to help

interdisciplinary theology shape responses to robots and AI. Insights from these resources

immeasurably enhance interdisciplinary theological responses to robots and AI, and allow for

necessary critique of the image of the human that dominates robotics and AI research. These

concerns are taken up here as part of a new theological approach to robots and AI, with attention

given to race, gender, sexual orientation, disability, and class. This approach is taken, however,

with an appreciation that this list is not exhaustive, nor can these factors be properly discussed as

separate entities. Intersectionality recognizes that these aspects of human social life invariably

overlap and compound both privilege and oppression. Scholars and activists who experience this

overlap have long advocated for such methods, including Kimberlé Crenshaw, who coined the

term ‘intersectionality.’422

422. Kimberlé Crenshaw, “Demarginalizing the intersection of race and sex: A Black feminist critique of

Contextual awareness is important for theological discourse on robots and AI for two

straightforward but very important reasons. First, it contributes to a more complete account of

what it means to be human, honouring the complexity and diversity of human experiences. If all

humans are made in the image of God, then researchers must look equally to the experiences of

the marginalized and excluded in developing their understanding of the human. Second, paying

close attention to historical settings and social factors reveals critical shortcomings in the work of

roboticists and AI researchers. This work has proceeded to date largely as though the social

factors named above, and even the entire history of colonialism, do not exist. In turn, this neglect

facilitates research that risks continuing destructive historical patterns. Here, existing theological

resources provide an important corrective to the methodological deficiencies of robotics and AI

research.

The robotics and AI research discussed in this thesis bears the imprints of systemic

racism and white privilege. Though laboratory research is increasingly diversified, the reasoning

strategy still suffers from a lack of critical conversation about the relationship between race and

robots. This lack is a vestige from the early days of robotics and AI research, which was

dominated by researchers who enjoyed white privilege in a still-segregated United States. It is

only further entrenched by popular culture representations of robots and AI, as noted above, that

overwhelmingly replicate privileged, and even racist, ideas about humans. Today, robotics and

AI often unintentionally amplify these patterns leading to disastrous outcomes. For example, a

recent beauty contest judged only by AI revealed a clear bias against dark skin among the 6,000

antidiscrimination doctrine, feminist theory and antiracist politics,” in University of Chicago Legal Forum, special
issue: Feminism in the Law: Theory, Practice and Criticism (Chicago: University of Chicago Law School, 1989),
139–168.

entries from about 100 countries.423 The results indicated that the AI, despite being programmed

to look for so-called universal beauty standards like wrinkle-free skin and symmetry in facial

features, still favoured white people. In an even more disturbing example, in 2015 a Google

photo application errantly tagged black people as gorillas.424 These examples show how easily

human bias slides into the most sophisticated—and even ‘autonomous’—AI.

Theological responses to race and the place of racialized people within the church and

broader society are, of course, as diverse as the experience of race itself. In the context of

Canadian Christianity, given the history of colonization, genocide, and the Indian residential

school system, the contributions of Indigenous theologians are essential. Indigenous followers of

Jesus have little sympathy for the lopsided view of the human advanced by robotics and AI

research and in related theological responses. The North American Institute for Indigenous

Theological Studies (NAIITS) is one notable movement to indigenize theological study and

discourse in Canada and the United States, and theologians affiliated with the network offer

much for the current conversation about robots and AI.425 NAIITS was born out of a recognition

that mainstream theological education and academia fail Indigenous followers of Jesus and

423. Sam Levin, “A beauty contest was judged by AI and the robots didn’t like dark skin,” The Guardian,
September 8, 2016, accessed June 6, 2017, https://www.theguardian.com/technology/2016/sep/08/artificial-
intelligence-beauty-contest-doesnt-like-black-people. See also Laurie Penny, “Robots are racist and sexist. Just like
the people who created them,” The Guardian, April 20, 2017, accessed July 31, 2017, https://www.theguardian.com/
commentisfree/2017/apr/20/robots-racist-sexist-people-machines-ai-language.
424. Alistair Barr, “Google Mistakenly Tags Black People as ‘Gorillas,’ Showing Limits of Algorithms,” The
Wall Street Journal, July 1, 2015, accessed August 1, 2017, https://blogs.wsj.com/digits/2015/07/01/google-
mistakenly-tags-black-people-as-gorillas-showing-limits-of-algorithms/. See also Amelia Tait, “The rise of racist
robots,” The New Statesman, October 20, 2016, accessed October 11, 2017, https://www.newstatesman.com/
science-tech/future-proof/2016/10/rise-racist-robots; Andrew Heikkila, “Artificial intelligence and racism,” Tech
Crunch, April 15, 2016, accessed October 11, 2017, https://techcrunch.com/2016/04/15/artificial-intelligence-and-
racist/.
425. “An Indigenous Learning Community,” North American Institute for Indigenous Theological Studies,
accessed April 4, 2018, http://www.naiits.com/.

perpetuates the colonial structures that lead to their marginalization and oppression. Founding

member and current president Terry LeBlanc, Mi’kmaq-Acadian, finds liberation at the

intersection of pedagogy and mission. He advocates an approach to both teaching and

evangelism that moves continually between “community and classroom.”426 This approach

counters the anti-contextuality that results in such problematic assumptions about the human, as

described above. In dissolving the boundaries between these two spaces, theologians come to

recognize that human experience is not universal, and that there is no such thing as an average

person or an average person’s social world. This movement is clear resistance to the assumed

sameness of human experience that people with privilege often take for granted. LeBlanc sees

unfulfilled ecumenical dreams as part of the reason why Indigenous voices can continue to

withstand totalizing narratives about the human. “Had the Edinburgh 1910 goals been met, its

mission philosophy been realized, we would have far fewer Indigenous peoples bringing their

rich cultural diversity to Christian worship. There would, instead, be a deep and all-consuming

‘sameness’ about us.”427 It is the failure of this theological project, and the failure of this

‘sameness’ to take hold, that challenges the normativity and dominance of white robotics, white

theology, and the intersection of the two.

Understanding and representing race is not the only place where roboticists and AI

researchers fail to capture the diversity of lived human experience. The examples used

throughout this research project point to a strong preference for male, heterosexual, and

cisgender experiences of being human. Reference to Jibo (Breazeal’s robot) and Data (Knight’s

426. Terry LeBlanc and Jennifer LeBlanc, “NAIITS: Contextual mission, Indigenous context,” Missiology 39
no. 1 (Jan 2011): 87.
427. Terry LeBlanc, “Mission: An Indigenous Perspective,” Direction 43 no. 2 (Fall 2014): 153.

robot) as ‘he’ and the apparently cisheteronormative love stories of Alex + Ada and How to Pass

as a Human are among the countless examples reinforcing such a narrow and unsatisfying

understanding of the human. Other examples underscore the same point, but from a different

perspective. For example, the rise of AI household servants like Apple’s Siri and Amazon’s

Alexa reinforces traditional gender roles, with women being helpful, appeasing, and working in

support roles.428 These AI function like digital personal assistants and are designed to facilitate

the daily lives of their users and follow their commands. The combination of these traits, with

female names and voices, reinforces gendered roles in the division of labour. Women exist to

support, rather than lead, innovate, and create.429 In an even more pointed example, Microsoft

introduced—then quickly removed—an AI chatbot, Tay, from Twitter after ‘she’ learned to tweet

using highly sexualized and misogynistic language. Women working in the industry argue that

such embarrassments could easily be prevented if researchers understood how bias works in AI.

They hold that this requires marked improvement in participation of women and other

traditionally excluded groups in AI research activities.430 The lack of diversity in AI and related

fields might also contribute to homophobic research initiatives. For example, a 2017 Stanford

study claimed success for an AI that could determine a person’s sexual orientation through the

examination of faces.431 News of this paper and its results sparked backlash.432 Researchers

428. While Siri was launched in 2011 with a traditionally feminine voice, Apple later added a male option and
regional accents.
429. Cristina Criddle, “Amazon Echo’s Alexa is yet another virtual assistant reinforcing sexist stereotypes,” The
Telegraph, September 19, 2016, accessed October 28, 2018, http://www.telegraph.co.uk/technology/
2016/09/19/amazon-echos-alexa-is-yet-another-virtual-assistant-reinforcing/.
430. Jessica Bateman, “Sexist robots can be stopped by women who work in AI,” The Guardian, May 29, 2017,
accessed October 28, 2018, https://www.theguardian.com/careers/2017/may/29/sexist-robots-can-be-stopped-by-
women-who-work-in-ai.
431. Yilun Wang and Michal Kosinski, “Deep neural networks are more accurate than humans at detecting
sexual orientation from facial images,” Journal of Personality and Social Psychology (forthcoming).
432. Derek Hawkins, “Researchers use facial recognition tools to predict sexual orientation. LGBT groups

clearly had not considered personal safety and privacy in developing this experiment, the results

of which could easily be used to further the persecution of queer people or falsely identify people

as such. In upwards of 70 countries worldwide, homosexuality is criminalized or subject to other

penalizing laws. In eight countries, it is punishable by death. In such political climates, this AI

can do no good. Further to this, the researchers failed to account for fluidity of gender identity,

recognition of non-white faces, and sexual orientations other than gay and straight. This singular

example shows how easily researchers can reinforce existing patterns of oppression and create

new ways to carry out injustices.

Third- and fourth-wave feminists, women of colour, and those emerging from

marginalized settings are especially helpful at this point. These areas of inquiry are very attentive

to the link between local context and global phenomena, and take seriously the interconnected

quality of social forces like race, gender, and class. Feminist theologians and researchers from

related areas have developed understandings of God, society, and gender that offer a helpful

corrective to the problems of representation named above. Reading robots and AI through the

lens of these sources, theologians find invaluable insight into human experiences of embodiment

that further deepen the divide between us and robots. In feminist and closely related discourse,

the primacy of embodied experience is a near-universal theme. Aspects of this include how

women’s bodies are perceived and subjugated, including through socially mandated participation

in patriarchal structures. Cheryl Townsend Gilkes, a sociologist interested in American

Christianity, contributes to critique of these structures through a womanist lens. Tellingly, she

aren’t happy,” The Washington Post, September 12, 2017, accessed October 28, 2018,
https://www.washingtonpost.com/news/morning-mix/wp/2017/09/12/researchers-use-facial-recognition-tools-to-
predict-sexuality-lgbt-groups-arent-happy/?utm_term=.f6112f62e45d.

writes, “First of all, the womanist idea is a way of both reading and hearing.”433 This adds a

dimension to traditional theological discourse, which often relies very heavily on the passive,

printed word. She wants to restore the place of hearing in religious life as part of a return to

embodiment and return to the diversity of human experience that makes up the Body of Christ.

Through listening, one gets in touch with how words—and indeed the Word—resonate with

one’s own being and body. This makes listening, oral culture, preaching, relationship, and song

the connective tissue of theological methodology. Gilkes’s approach helps diversify the

theological understanding of the human by moving from metaphorical voices to real

voices. For something to carry meaning in religious communities it must be embodied. She

encourages her students to enter into this praxis, even calling on them to imagine “one’s hand on

one’s hip” while reading, to help access the necessarily embodied character of religious

reasoning and traditions.434 Such an approach subverts traditional power relationships in

theological reasoning. She notes that it “challenges the privilege we give to written cultural

artifacts over oral cultural artifacts.”435

The methodology of roboticists and AI researchers is already in tune with the kind of

embodied discernment and learning proposed by Gilkes. Robots themselves have bodies and act

in response to the world. Researchers struggle with issues of embodiment relating to degrees of

freedom or successfully navigating unknown urban environments. Unfortunately, this is where

their appreciation of human embodiment—and embodied discourse—comes to an end.

Researchers like Gilkes encourage theologians interested in robotics and AI to keep dissecting

433. Cheryl Townsend Gilkes, “Womanist Ideas and the Sociological Imagination,” Feminist Studies in
Religion 8, no. 2 (Fall 1992): 149.
434. Ibid.
435. Ibid.

these too-simple ideas about embodiment and draw on a broader range of human experiences, including the

ways in which race and gender differently shape experiences of embodiment.

Similarly, queer researchers, including theologians and others interested in religious

experience, help correct ideas about gender expression and sexuality in robotics and AI research.

The deconstructive efforts in this area are especially useful in critiquing how we intentionally or

accidentally label robots as ‘male’ or ‘straight.’ For example, Nikki Young of Bucknell

University writes: “One of the most important features of experience as a method is its ability to

both illustrate and dismantle the relationship between normativity and the frameworks of power

operating within and among different groups of people.”436 The normativity of robotics and AI is

quite simply that which already operates in human reasoning, academic discourse, and social

structures. This includes support for gender and sex binaries, and cisheterosexual relationships as

the benchmark against which all else is measured. Queer theory, as articulated by Young, pushes

interdisciplinary theology toward difficult questions—and possibly more troubling answers. She

urges her reader not only to ask who has been left out, but also to ask about the structures that support

this exclusion.437 In queer experience, this is often about dismantling binaries, not only between

woman and man, female and male, gay and straight, but also between normative and

nonnormative human experience. This is a deliberate dissolving of boundaries that often serve no

purpose other than to oppress and exclude. In terms of robots and AI, these categories serve very little

purpose in applications developed to date. There is no scientific or technological advantage to

calling Jibo a ‘he’ or artistic merit in developing white, male comic book protagonists. This is

436. Thelathia N. Young, “Queering ‘The Human Situation,’” Journal of Feminist Studies in Religion 28, no. 1
(Spring 2012): 128.
437. Ibid.

simply human bias replicated in new forms of human expression. The process of queering,

according to Young, exposes these assumptions and the privilege embedded in such binary

classifications. It confronts privilege and makes it no longer possible to claim that one’s own

experience is universal.438

Many of the biomedical applications of robots and AI discussed throughout this project

relate directly to different experiences of embodiment, and how these are understood in terms of

disability and ability. The BLEEX exoskeleton and related robots facilitate walking, and BCIs have

the potential to help overcome brain and spinal cord injuries. Telepresence robots have also

helped chronically ill children attend school and interact with their peers.439 Other areas of

growth include robots designed to help children with autism spectrum disorders and to help

people get back into sports and physical activity following injury.440 From the perspective of

many, these are therapeutic applications designed to relieve suffering and facilitate fuller

participation in society. Some researchers, however, are inverting these traditional relationships

and arguing that it is the environment that needs adaptation and modification rather than the

human. This adaptation involves abandoning conventional ways of organizing schools, offices,

and other social spaces, and creating new places tailored to a richer diversity of human

experience. For example, a firm in Denmark hires people with autism, drawing on parallel

strengths like attention to detail and designing offices to better suit their sensory needs.441 The

438. Ibid., 129.


439. Charlie Sorrel, “Telepresence Robots For Sick Kids Are So Effective, They Even Get Bullied,”
FastCompany, September 20, 2017, accessed November 4, 2018, https://www.fastcompany.com/3063627/
telepresence-robots-for-sick-kids-are-so-effective-they-even-get-bullied.
440. Vanderbilt University, “Interactive Robot Helps Children With Autism” (video), posted March 20, 2013,
accessed November 4, 2017, https://www.youtube.com/watch?v=7T7cIY-MIxc.
441. “Welcome to Specialisterne Denmark,” Specialisterne, accessed 5 November, 2017,
http://dk.specialisterne.com/en/.

net result is employment opportunity and social inclusion for those who are often left out of

traditional workspaces. Others, still, decouple human dignity from one’s ability to participate in

labour markets altogether. Notably, Jean Vanier argues that entering relationship and

surrendering your own comfort is the most faithful approach to these issues. The willingness to

listen, and to sacrifice in responding, is the hallmark of authentic relationship. Merely trying to

‘fix’ the human or solve their problems without this connection simply reinforces the shame and

exclusion that many have experienced since birth. His approach moves people on these margins

to the centre, and instead of forcing them to organize their lives around social norms and social

services, their lives become the heart of the community.442

Dutch ethicist and theologian Hans Reinders has also contributed to discussions relating

the human, the imago Dei, and experiences of disability. In Receiving the Gift of Friendship he

remarks on the simplicity, yet difficulty, of claiming that all humans are made in the image of

God. From one perspective, it is theologically intuitive and inclusive. It is democratizing and

connects all humans. The challenge, however, is when theologians probe the assumptions that

are bound up in this claim. Often what it means to be made in the image of God excludes people

with disabilities. This is partially evidenced in Chapter Three and Herzfeld’s analysis of the

substantive, functional, and relational interpretations of the imago Dei. In each of these

instances, the interpretation of the doctrine excludes those who cannot work, have intellectual

impairments, or have restricted embodied experiences. These mainstream interpretations

remarginalize already marginalized people. Despite the good intentions of a generous

interpretation of the imago Dei, “people still feel rejected.”443 Reinders especially challenges the

442. See for example Jean Vanier, Becoming Human (Toronto: House of Anansi Press Inc., 1998), 93ff.
443. Hans Reinders, Receiving the Gift of Friendship: Profound Disability, Theological Anthropology, and

place of “selfhood and agency” in theological understandings of what it means to be human.444

Those with profound disabilities often face significant barriers in exercising agency, if they are

able to at all. Again, these insights from theology seriously challenge the assumptions about the

human in robots and AI, but also in theological responses like those of Chapters Two and Three.

These theological and scientific perspectives have a very strong emphasis on agency, with

autonomy often the benchmark of success for robots and AI. Reinders instead pushes the limits

of these understandings of the human and, consequently, the image of humans found in robotics

and AI research.

One’s embodied experiences of race, gender, sexual orientation, and disability all

influence one’s experience of economic class. These social forces exist in interplay and magnify

either wealth or poverty. In “The Image of God as Technosapiens,” Antje Jackelén writes, “We

should not stop asking how high-tech science can be pursued without widening the gap between

the rich and the poor.”445 While theologians, including Jackelén, often interpret the poor

expansively and evocatively, in the context of robots and AI it is also worth examining this gap

in terms of conventional economic poverty. Even in 2001, Wylie-Kellerman and Batstone

flagged issues relating digital technology to class structures. In their brief article, they discuss

how many of these apparent advances are elitist, rely on new forms of literacy, and remain

inaccessible to many people.446 As described elsewhere in the three preceding chapters of this

project, robots and AI often make the comfortable classes more comfortable, while contributing

little to the wellbeing of those who are economically marginalized. While some movements

Ethics (Grand Rapids, MI: William B. Eerdmans, 2008), 5.


444. Ibid., 24.
445. Jackelén, 294.
446. Wylie-Kellerman and Batstone, 22.

democratize technology (e.g., open source technologies), robots and AI remain largely the

domain of the economically privileged. Analysis in this area is largely directed at the impact of

robots and AI on national economies, labour markets, and changing demographics in workforces.

What remains underdeveloped is study of how these technologies impact economic wellbeing

right from birth. Access to robots and AI in healthcare even in utero, interaction with various

household applications, opportunity to learn engineering and programming at school, and so on,

all multiply the privilege of already privileged individuals.

While robotics and AI, and related interdisciplinary theological discourse, suffer from a

general lack of contextual awareness, existing theological resources readily resolve many

problems relating to contextuality and lived human experience. Theological reasoning is

increasingly welcoming of understanding the human and human societies in terms of race,

gender, sexual orientation, disability, and class—and the intersection of any combination of

these. Each of these in turn contributes to how robots are developed, the applications imagined

for them, and how theologians respond to their presence among us. As described in this section,

contextual theologians provide helpful concepts and methodologies that remedy many of the

shortcomings of the approaches to robots and AI outlined in Chapters Two and Three. These

insights help interdisciplinary theology better appreciate the diversity of human experience, and

resist colonial and other dominant narratives that insist on totalizing sameness. They also

encourage ever-richer understandings of embodiment, and challenge theologians to hear—

metaphorically and literally—marginalized voices and respond to them fully. Improved

contextual awareness also helps break down dualistic thinking that usually goes without critique

in robotics and AI research, calling into question why we call robots ‘he’ or ‘she’ at all. Finally,

contextual theologians help invert dominant thinking about the relationship between humans and

their environments. Mainstream approaches to robotics and AI reinforce the idea that humanoid

robots (and by extension humans) should adapt to their environments. Those with intimate

knowledge of disability, like Jean Vanier, insist that this relationship should be the other way

around. The marginalized move from the periphery of society to the heart of it, and how we

organize our homes, workplaces, and lives grows from there. Contextual awareness keeps focus

on ecological justice, the wellbeing of others, and reinterpreting the Good News in an age of

humanoid robots and AI.447 Orientation to these goals keeps focus on not only robots and AI

themselves, but on the methods used to develop and implement them. Early theological

responses to robots and AI hinted at this work, with Wylie-Kellerman and Batstone noting that

“We make moral choices not only when we decide how to apply technology, but also in

designing the pursuits of our research.”448

Application of Robots and AI

Today there are few lives untouched by military, biomedical, and industrial applications of

robotics and AI. This impact is only strengthened by the consolidation of such research in

massive, global technology companies. Such an environment makes contextual theological

critique especially necessary. There is movement within the robotics and AI community to deal

with the ethical development and use of these technologies, and even appreciation that this work

is necessarily an interdisciplinary undertaking.449 As with other aspects of this theological

approach to robots and AI, existing resources help round out the efforts of Foerst, Herzfeld, and

447. Jackelén, 294.


448. Wylie-Kellerman and Batstone, 23.
449. “Robot Ethics,” IEEE Robotics & Automation Society, accessed November 10, 2016, http://www.ieee-
ras.org/robot-ethics.

others. In this area, ecotheology proves again essential in critiquing contemporary application of

robots and AI.

Industrial and commercial settings

Consumers generally give little thought to the role of robots and AI in industrial

processes and patterns of consumption. End users experience a new car or book, or take a

Tylenol to ease a headache. Behind these and so many other products are assembly lines free of

humans and warehouses filled with Roomba-like robots. The decreasing cost of these

technologies, their ability to do what humans cannot, and increasing range of applications

ensures that robots and AI will become more and more important in global patterns of

consumption. This, of course, has consequences for labour markets and the use of resources,

posing significant challenges for building just societies and encouraging sustainable living.

There is no better case study on this front than American online retailer Amazon.com,

Inc. As of this writing, Amazon is the eighth largest retailer in the world, and by revenue the

largest online retailer.450 Like all companies of its scope, Amazon consistently finds new ways to

expand its reach and increase revenue, including online grocery orders, self-publishing, and film

and television subsidiaries. Today, the company boasts more than 300,000 employees around the

world, and its origins as a one-man online book retailer are hardly recognizable.451

450. Lauren Gensler, “The World’s Largest Retailers 2016: Walmart Dominates But Amazon Is Catching Up,”
Forbes, May 27, 2016, accessed November 10, 2016, http://www.forbes.com/sites/laurengensler/
2016/05/27/global-2000-worlds-largest-retailers/#21b1c76329a9.
451. “Economic Impact,” Amazon, accessed November 10, 2016, https://www.amazon.com/p/
feature/nsog9ct4onemec9.

Amazon also owns its own robotics enterprise, Amazon Robotics, a rebranding

of Kiva Systems, which Amazon bought in 2012 for $775 million (US).452 Until 2012 Kiva held a

monopoly on the kind of warehouse robot needed by Amazon and its retail competitors. After

buying Kiva, Amazon allowed Kiva’s contracts with competitors to expire, leaving major

retailers searching for robotic solutions and prompting an urgent race to develop replacement

robots.453 Today Amazon uses an estimated 30,000 robots in its warehouses around the world.454

They retrieve merchandise from a sea of warehouse shelves and turn it over to human employees

to package and ship. This mechanization can dramatically reduce the time between receiving a

customer’s order and sending it out. The relatively small size of warehouse robots compared to

human employees also allows for more inventory per square meter. The impact of these robots is

significant: “According to an analysis by Deutsche Bank, adding them to one new warehouse

saves $22 million in fulfillment expenses. Bringing the Kivas to the 100 or so distribution centers

that still haven’t implemented the tech would save Amazon a further $2.5 billion.”455 These

technologies act as an accelerant to an already voracious process, best exemplified in the rise of

Cyber Monday, a day invented by online retailers to take advantage of peak consumer spending

following American Thanksgiving.456 The phenomenon is slowly spreading throughout the world

452. Scott Kirsner, “Acquisition puts Amazon rivals in Awkward Spot,” Boston Globe, December 1, 2013,
accessed November 10, 2016, http://www.bostonglobe.com/business/2013/12/01/will-amazon-owned-robot-maker-
sell-tailer-rivals/FON7bVNKvfzS2sHnBHzfLM/story.html.
453. Jon Markman, “Robots, Workers and Amazon,” Forbes, July 18, 2016, accessed November 10, 2016,
http://www.forbes.com/sites/jonmarkman/2016/07/18/robots-workers-and-amazon/#469116db70e7.
454. Kim Bhasin and Patrick Clark, “How Amazon Triggered a Robot Arms Race,” Bloomberg, June 29, 2016,
accessed November 10, 2016, https://www.bloomberg.com/news/articles/2016-06-29/how-amazon-triggered-a-
robot-arms-race.
455. Ibid.
456. “‘Cyber Monday’ Quickly Becoming one of the Biggest Shopping Days of the Year,” Cision PR
Newswire, November 21, 2005, accessed November 11, 2016, http://www.prnewswire.com/news-releases/cyber-
monday-quickly-becoming-one-of-the-biggest-online-shopping-days-of-the-year-55695037.html.

and in 2015 online sales stood at $3 billion (US), the highest one-day amount in history.457

Amazon partakes in its fair share of these revenues, with online shoppers ordering more than

425 items per second from Amazon.com on this day.458 These staggering numbers are made

possible and then amplified through the use of robots and AI. In addition to the warehouse robots

described above, cutting-edge AI at work on its retail website constantly makes product

recommendations and is highly effective in getting people to add more to their online shopping

carts. Some reports suggest that as much as 35% of Amazon’s online revenues are a direct result

of these onsite and email recommendation formulas.459 Further, in its bid to compete with

brick-and-mortar retailers, Amazon is also working to reduce the time between the purchase of

consumer goods and the gratification of receiving them. The Prime Air delivery service is set to

use drones to deliver packages in under half an hour.460
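The “recommendation formulas” at issue can be made concrete with a minimal item-similarity sketch. This is a hypothetical toy (the shoppers, products, and scoring function are all invented for illustration), not Amazon’s proprietary system, but it shows the basic logic of recommending unseen products that co-occur with past purchases:

```python
# Toy item-based recommender (illustrative only; not Amazon's actual
# system). Each shopper maps to the set of products they have bought.
purchases = {
    "ann":   {"book", "kettle"},
    "badru": {"book", "lamp"},
    "chen":  {"kettle", "lamp", "book"},
}

def jaccard(a, b):
    """Overlap between two buyer sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def recommend(shopper):
    # Invert the data: who bought each product?
    buyers = {}
    for person, items in purchases.items():
        for item in items:
            buyers.setdefault(item, set()).add(person)
    owned = purchases[shopper]
    scores = {}
    for item, its_buyers in buyers.items():
        if item in owned:
            continue
        # Score each unseen item by its best similarity to an owned item.
        scores[item] = max(jaccard(its_buyers, buyers[o]) for o in owned)
    return sorted(scores, key=scores.get, reverse=True)

rec = recommend("ann")  # → ["lamp"]
```

Production systems scale this logic up with matrix factorization or learned embeddings, but the principle—score items by their overlap with what similar customers bought—is the same.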

This roboticization of consumption perpetuates injustice at a global level and accelerates

unsustainable practices widely critiqued in ecotheology, civil society, and elsewhere. Beyond the

straightforward examples described above, these technologies also usually require conflict

minerals, mined unjustly or illegally in places of conflict. The human and ecological cost is

457. “Cyber Monday Surpasses 3 Billion in Total Digital Sales to Rank as Heaviest Online Spending Day in
History,” comScore, December 2, 2015, accessed November 11, 2016, https://www.comscore.com/Insights/Press-
Releases/2015/12/Cyber-Monday-Surpasses-3-Billion-in-Total-Digital-Sales-to-Rank-as-Heaviest-US-Online-
Spending-Day-in-History. A similar phenomenon exists in China for Singles Day, a day celebrating being single
turned into a mass-marketing opportunity. Chinese online retailer Alibaba regularly posts record sales on this day. See
for example, “Singles Day: Alibaba posts record sales,” BBC News, November 11, 2016, accessed November 11,
2016, http://www.bbc.com/news/37946470.
458. “Meet the robots making Amazon even faster (video report),” CNET News, November 20, 2014, accessed
November 11, 2016, https://www.youtube.com/watch?v=UtBa9yVZBJM.
459. Ian MacKenzie, Chris Meyer, and Steve Noble, “How retailers can keep up with consumers,” McKinsey &
Company, October 2013, accessed November 11, 2016, http://www.mckinsey.com/industries/retail/our-insights/
how-retailers-can-keep-up-with-consumers.
460. “Amazon Prime Air,” Amazon, accessed November 10, 2016,
https://www.amazon.com/b?node=8037720011.

enormous. Ecotheologians have long warned against these destructive patterns. Sallie McFague

offers pointed critique of these patterns of consumption and the systems that support them in

wealthy and powerful countries. She argues that it is not enough to simply consume mindfully or

ethically in the world of Amazon.com. Instead, she advocates transformation of how our

societies are organized and how we think about these structures. She writes, “changing from an

SUV to a Prius is not enough—we may have to reconsider the use of automobiles altogether.”461

Thomas Berry also understood the toxic link between global ecological crises and the habits of

North Americans: “Our entire society is caught in a closed cycle of production and consumption

that can go on until the natural resources are exhausted or until the poisons inserted into the

environment are fed back into the system.”462 As a corrective, he encourages the consideration of

the whole Earth community in developing and using technologies. In other words, “human

technologies should function in an integral relationship with earth technologies.”463 Earth

technologies are the processes resulting from the accumulated wisdom of billions of years of

cosmogenesis, and in these lie superb insight into the workings of ecosystems, the Earth, and

even the universe. Things humans design, therefore, should be in concert rather than competition

with this intrinsic and sacred wisdom. In this view, humans will insert robots and AI into these

systems only in ways that preserve the integrity of cosmogenesis and promote its flourishing.

This leaves no room for the rapacious consumerism so idolized by Amazon, and western culture

in general.

461. McFague, Blessed are the Consumers, ix.


462. Berry, The Dream of the Earth, 57.
463. Ibid., 65.

Thomas Berry proposes yet another change in perspective helpful at this moment. As

described above, robots and AI are an accelerant to already unsustainable consumption, always

at the expense of others. This consumption fuels dissatisfaction and disappointment, ending in

only more rapacious consumption of finite resources. Against this, Berry calls for reenchantment

as an antidote to such destructive ways of living. Reenchantment is a change in the way humans

relate to the world that brings about humility and appreciation for our connectedness to all of

cosmogenesis. It reorients scientific and technological ambitions away from consumption and

competition, to sustainability and flourishing. As Berry writes, “This reenchantment with the

earth as a living reality is the condition for our rescue of the earth from the impending

destruction that we are imposing upon it. To carry this out effectively, we must now, in a sense,

reinvent the human as a species within the community of life species.”464 Such a transformation

is essential for robotics and AI if it is going to contribute meaningfully to the resolving of

ecological and social crises, and the sustaining of life on Earth. This radical shift, for example,

could include moving away from addressing the perceived shortcomings of individual humans

(e.g., certain biomedical applications, rendering the human mind immortal) to addressing our

collective failures as a global society (e.g., using robots to clean up oceans, using translation AI

in peacebuilding and education efforts). Just and life-giving application of robots and AI require

a new imagination, one that is separated from military ambitions and the inertia of capitalism. An

ecotheological hermeneutic grounded in the vision of the human and cosmogenesis outlined

above contributes significantly to this decoupling and reenchantment.

464. Berry, The Dream of the Earth, 21.



Berry and McFague are not alone in their critique of the impact of such technologies.

Noted French multidisciplinary thinker Jacques Ellul was an early critic of societies configured

according to the values represented by Amazon and similar enterprises. Berry even finds an ally

in Ellul, noting with appreciation that he spoke out against the “Imposition of a technosphere on

the biosphere . . . with its progressive devitalization and dehumanization of life.”465 Writing in

the decades just following the birth of contemporary robotics and AI research, Ellul foreshadows

many of the concerns raised in theological responses to robots and AI. Foremost he is concerned

about the gap in a technological society between those who do the creating and the “mass of

human beings who make up this society and who are merely those who have no choice but to use

technology.”466 This wedge between designer and user is a fundamental concern about class

distinctions, with those having little or no power subject to the desires of the few with the most

capital. It only strengthens distinctions used to exclude and marginalize people, especially along

the lines of race, gender and sexual orientation, disability, and class. The consumers are

“alienated by the surfeit of technologies,” and their lives wholly and unquestionably shaped by

this saturated technoculture.467 He argues that technology too easily falls to “political

temptation” and that such temptations are tangled with the ends and ambitions of the technological society

itself.468 He further argues that technology, like science and other reasoning strategies, is

responsible for its own self-critique. Those in the former category—the creators of technology,

the drivers of the technological society—are in need of transformative experiences. This

465. Ibid., 59.


466. Jacques Ellul, “Technology and the Gospel,” International Review of Mission 66, no. 262 (1977): 110.
467. Ibid.
468. Ibid., 111.

transformation starts with the deconstructive process of self-critique, especially reflecting on

“their achievements as these touch on social, political, economic, and above all, human

contexts.”469

Central to Ellul’s efforts is the argument that meaning should not be found in science,

technology, or the technological society. He points to science and the crumbling myth of

progress as evidence of the risks of confining meaning to any

human enterprise. Counter to thinkers like Berry and McFague, Ellul emphasizes the

transcendence of God as the most theologically viable way forward. From his perspective, it is

necessary to go beyond—to literally transcend—the differences between science and theology,

and to find meaning apart from the oppressive and destructive forces of the technique. This is the

locus of hope: hope for transformation of the human, society, and the technology that binds

everything together.470 Ellul frames this transformation largely in terms of liberty, especially

from the systems that force the technological society on passive consumers. In this view, God is

“above all, Liberator.”471 The military-industrial complex is perhaps the most important of these

oppressive systems in Ellul’s work. Long before Singer and Herzfeld, he noted—and critiqued—

the symbiotic relationship of technology with industrial and military interests. Though he

questions the strength of causality in this cycle, he argues that the relationship between war and

technology is well established and that the “army is the main customer” of technology.472

469. Ibid., 113.


470. Ellul, The Technological Society, 415.
471. Ellul, “Technology and the Gospel,” 116.
472. Ibid., 115.

Some researchers are already making progress in responding to the issues raised by

thinkers like Berry, McFague, and Ellul, including a group of French researchers who recently

committed themselves to the use of robots in the study of ecology. In “Robots in Ecology:

Welcome to the Machine,” they highlight many advantages of using robots over human

researchers.473 For example, robots can function in almost any environment ranging from the

surface of Mars to inside the human body to the bottom of the Mariana Trench. Also, thanks

to improvements in biomimicry, robots can increasingly go undetected in any number of settings

and situations. These advantages help researchers carry out sensitive and difficult tasks, including

non-intrusive research of vulnerable and rare species, working more harmoniously with Earth

technologies.

The scale of this industrial and consumer problem demands a proportional response. As Berry

notes, “sustainable progress must be progress for the entire earth community.”474 Such an

orientation to cosmogenesis requires a move “away from our conquering attitude to a more

evocative attitude.”475 Against seemingly unstoppable pressure to consume, discard, and repeat,

Christian tradition offers an alternative way forward. In the breaking of bread, and the miracle of

loaves and fishes, Jesus points to the importance of community in consumption (e.g., Matthew

26:17-30, Mark 6:31-44). These are simple meals, born out of local resources and customs. They

473. Along with the enthusiasm these researchers have for novel applications of robotics and AI, they also
cultivate a critical voice about the use of robots in military application and the deficiency of current legal
instruments in dealing with the use of robots even in the field of ecology. D. Grémillet et al., “Robots in Ecology:
Welcome to the Machine,” Open Journal of Ecology 2 (2012): 49-57. Legal scholars have long anticipated these
questions. Phil McNally and Sohail Inayatullah addressed the rights of robots from a number of perspectives,
including how different worldviews shaped by culture will impact legal interpretations of robots among us. Phil
McNally and Sohail Inayatullah, “The Rights of Robots: Technology, culture and law in the 21st century,” Futures,
April 1988: 119-136.
474. Berry, Dream of the Earth, 66. Emphasis original.
475. Ibid., 60.

strengthen relational bonds and are democratizing in their equal treatment of all who gather. The

meal is a process of treasuring what we already have, and finding miracles in the mundane. The

abundance lies not in the meal itself, but in the transformative power to nourish, reconcile, and build

community. These are the ambitions of Christian consumption.

The concerns outlined above relate directly to existing problems, especially the

ecological crisis. New uses for robots and AI that address such problems are imaginatively infinite. For example, there

is likely a role for these technologies in safer, more efficient recycling that allows for better

sorting of discarded goods and perhaps the extraction of recyclable materials from hazardous

situations. This process could delay material ending up in landfills, generate profits to further fund

sustainable projects, and protect people from dangerous work. Robots and AI may also have a

role to play in the clean-up of extreme environments, like finding and removing garbage from

remote parts of the Earth’s oceans. Increasingly, people turn to so-called smart homes for

entertainment and the streamlining of their digital lives. Such technologies can also help optimize

consumption of electricity, gas, and oil in lighting and heating buildings through improved AI.

The possibilities are without end. Through truly interdisciplinary dialogue, theologians and

ethicists can nudge robotics and AI toward these life-preserving and just applications. Only

through listening to the needs of the world can roboticists truly come to fair and life-giving

practices.

Biomedical applications

From nanobots invisible to the unaided human eye to the behemoth da Vinci surgical

system, robots and AI are appearing everywhere in advanced modern medical practice.

Increasingly, health care professionals use robot and AI applications in all manner of procedures

and treatments, including therapy (e.g., the BLEEX exoskeleton described in Chapter

Three), diagnosis, surgery, and more. Robots even appear in pharmaceutical laboratories to carry

out the impossibly precise tasks of chemistry and biochemistry required to develop new drugs.

The applications discussed here are not robots and AI unto themselves, but derivative products.

Like the Amazon robots described above, they rely directly on robotics and AI research and

point to future developments in this area. Each new frontier in the biomedical use of robots and

AI raises important ethical questions related to the distribution of resources and the boundaries

between therapy and enhancement.

The da Vinci surgical system is one of the best-known and most widely used biomedical

robots. The system debuted in 2000 as the heir of a number of technologies dating to at least the

early 1980s.476 Several breakthroughs make the da Vinci system possible, including joystick

control laparoscopic surgery, successful use of robots in prostrate surgery, and “human-machine

interfaces and haptics.”477 Today it is used in a number of routine surgeries, especially in the

treatment of prostate cancers and some gynecological conditions. Proponents emphasize the

benefits of robot-assisted surgery. It allows for greater precision and flexibility than surgery

performed by unmediated human hands. The system moves seamlessly without trembling and

has more degrees of freedom than the joints in human hands. According to the website of

Intuitive Surgical (the parent company of the da Vinci system), there are currently more than

476. As technologies become safer and more widely accepted, they also become less remarkable and worthy of
ethical debate. For example, Antje Jackelén names pacemakers as one such controversial-but-now-accepted
technology. She argues that technologies that merge human and machine exist on a continuum and their social
acceptance is likewise a gradual phenomenon. Jackelén, 290.
477. “Intuitive Surgical Company History,” Intuitive Surgical, Inc., accessed November 10, 2016,
http://www.intuitivesurgical.com/company/history/.

3,800 units installed worldwide. Of these, only 182 are outside the United States, Europe, and

Asia.

Advocates of the system hold that it is a clear and empirical improvement over

conventional surgical techniques. Advantages include fewer infections, less invasive procedures,

and smaller wounds. Surgeons also cite increased “precision and accuracy” among the benefits

of robot-assisted surgery. Fifteen years of peer-reviewed literature tells a less confident story. A

May 2016 study showed that the perception of robot-assisted surgery is optimistically inflated

compared with reality.478 For example, while it is verifiable that the da Vinci system does

improve precision and accuracy, the study disputed or dismissed the other claims named above.

The da Vinci system is far from alone on the robotics frontier of modern medicine. For

many researchers, nanobots—robots and components scaled to nanometers (0.000000001 m)—

are the most intriguing path for the prevention and treatment of illnesses ranging from cancers to

depression. They hope, for example, that these microscopic robots can one day target cancerous

cells while sparing healthy ones (something not possible in therapies like chemotherapy and

radiation), or deliver pharmaceuticals exactly where needed in the brain, thereby reducing side

effects and the peaks and valleys of conventionally titrated medications. In theory, these technologies

could enhance diagnostic capabilities, dramatically reduce the instances of invasive surgery, and

make drugs far more effective and far less damaging. Should these nanobot therapies come to

fruition, medicine as widely practised in the world would fundamentally change.

478. A. Ahmad et al., “Robotic surgery: current perceptions and the clinical evidence,” Surgical Endoscopy 31
no. 1 (January 2017): 255-263.

Another horizon for biomedical application of robots and AI lies with the human brain.

For some decades now, researchers have worked on brain-computer interfaces (BCIs) or brain-

machine interfaces (BMIs).479 These devices allow for communication between brain and device

(either implanted or resting on the skull) with a view to circumventing the brain or outsourcing

some of its function. In current iterations, BCIs are far from Moravec’s dream of Mind, but the

potential remains that these technologies are the first steps toward futures where the human body

is redundant. Interdisciplinarian Michael L. Spezio echoes this possibility, noting that a BMI is a

“form of transcendence that will allow us to realize heretofore unknown aspects of our humanity

and a sense that eventually BMI will eclipse humanity altogether.”480

Current research focusses on therapeutic applications including the restoration of vision

or hearing lost through accident or congenital defect, the restoration of movement and sensation

following spinal cord injury, and allowing those with some kinds of disability to better interact

with and manipulate their environment through mind control (e.g., turning off lights through

thought).481 Though researchers have been working on these applications for decades, there

remain technological and ethical barriers to their success. On the technological side, non-

invasive BCIs (i.e., the type that read electrical signals produced by the brain through the skull)

struggle with weak or difficult to interpret signals, whereas invasive BCIs (i.e., the type

surgically implanted and in direct contact with neurons) struggle with the body rejecting a

479. BCI research began in earnest in the 1970s at the University of California with financial support from the
National Science Foundation and a grant from the Defense Advanced Research Projects Agency (DARPA), an
agency of the United States Department of Defense. These technologies emerged as a result of military funding.
Armin Krishnan, Military Neuroscience and the Coming Age of Neurowarfare, (New York: Routledge, 2017), 61.
480. Michael L. Spezio, “Brain and Machine: Minding the Transhuman Future,” Dialog 44, no. 4 (Winter
2005): 375.
481. See Spezio for some emerging applications of BMIs and information about their funding and development
through DARPA. Spezio, 376.

foreign object and the buildup of scar tissue around the implantation site. On the ethical side,

researchers still must address informed consent when working with patients with brain injuries

that have affected reasoning, memory, speech, or cognition. These technologies also present the

problem of distinguishing between therapy and enhancement. Though definitions vary, ethicists

and other researchers generally define therapeutic use in part by the relief of suffering and the

treatment of disorder or disease. The challenge is that these experiences are intensely personal

and partly socially constructed. The Deaf community is an interesting case study. Many in fact

celebrate their distinct subculture, and want to protect it from medicalization. From this

perspective, there is no suffering to alleviate and no disorder in need of a cure. Mental illness is

another interesting consideration. Many views relating to mental health and disorder, including

the distinction between therapy and enhancement, are culturally conditioned. For example, in

Canada and the United States, mental illness is highly medicalized with a multi-billion dollar

pharmaceutical industry supporting this view. One by-product of this approach is a strong

impetus to pathologize human behaviour, which more readily allows for certain kinds of medical

and therapeutic interventions. The Diagnostic and Statistical Manual of Mental Disorders (DSM-

5) is an excellent example of a cultural propensity to quantify and classify a complex range of

symptoms and behaviours.482 This classification has become a significant cultural export that

some have questioned as unhelpful and even damaging. The British Psychological Society, for

one, recently published an article arguing that ignoring or downplaying cultural differences in

482. The fifth edition of the DSM was published in 2013 and received by psychiatrists with some degree of
controversy including regarding the influence of the pharmaceutical industry over those most closely involved in the
development of both the fourth and fifth editions. See L. Cosgrove and S. Krimsky, “A Comparison of DSM-IV
and DSM-5 Panel Members’ Financial Associations with Industry: A Pernicious Problem Persists,” PLoS Medicine
9, no. 3 (2012), accessed August 14, 2017, https://doi.org/10.1371/journal.pmed.1001190.

this area “raises the possibility that globalising notions of psychiatric illness may cause more

harm than good.”483 For example, much globally applied research in this area emerges from

contexts like the United States, Canada, and the United Kingdom, leaving out much of the so-

called Global South. Further, psychological and psychiatric research in these contexts often

excludes significant swaths of the population like pregnant women, women altogether, or

Indigenous communities, and instead focuses on readily available populations of university

students. Western approaches to mental illness have led to the introduction of phenomena like

posttraumatic stress disorder and clinical depression to places that previously had no such

category for experiences matching these diagnoses. The imposition of these categories squeezes

out alternative interpretations of mental states and behaviours, which are sometimes welcomed

or treated as spiritual events rather than mental disorders.484 Such instances further blur the

boundaries between disorder and mere difference in experience, and therapy and enhancement.

This makes the ethical application of BCIs (and future related robotics and AI technologies) even

more nuanced than many experts will allow.

An early theological response to robots in hospitals raised another set of ethical

questions. Writing in 1997, John Valentino remarked on his experience with a hospital robot

used to dispense meals and medications—often rather imperfectly. His concerns were unrelated

to the distinction between therapy and enhancement, or about privilege in using a robot to

dispense meals, or the possibility of deadly robot error. Rather, Valentino was worried about

483. Ross White, “The globalisation of mental illness,” The Psychologist 26, no. 3 (March 2013): 182-185,
accessed August 14, 2017, https://thepsychologist.bps.org.uk/volume-26/edition-3.
484. See for example, Malidoma Patrice Somé, Of Water and Spirit: Ritual, Magic and Initiation in the Life of
an African Shaman (New York: Tarcher/Putnam, 2004); Ethan Watters, Crazy Like Us: The Globalization of the
American Psyche (New York: Free Press, 2010).

pastoral relationships. He saw the delivery robot as a clear move toward dehumanizing care in clinical

settings, further reducing human touch in an already unsympathetic environment.485 He also

questioned introducing robots as cost-saving measures when humans also need decent and safe

work. Others have worried that robots will replace humans in the workforce, but so far these

concerns have not borne much fruit as economies adapt and workers turn to new skills and

trades.486

Michael L. Spezio, a researcher with training in psychology, neuroscience, and theology,

suggests that turning to a relational interpretation of the imago Dei is the way forward in dealing

with the ethical use of technologies like BCIs. Critically, for Spezio, this means surrendering

some of the power of these technologies and entering meaningful relationship with those who

stand to benefit or suffer from them. This means no longer developing technologies according to

the agendas of the rich and powerful, but “acknowledging the limits of one’s imagination, giving

up sole reliance upon constructing distant others, and actively seeking out face-to-face dialogue

with others whose histories and futures are most affected by the deliberations.”487

Emphasis on relationship guides robotics and AI research toward these ethical

considerations. This focus on the relational quality of the human puts embodied experience

above other concerns, which is critical for people whose bodies and experiences are traditionally

excluded from robotics and AI research. Relationship is a dynamic experience, responding to

countless equilibria and changes within its systems. This dynamism means that the ethical application

485. John Valentino, “My Hospital Has a Robot,” Journal of Pastoral Care 51, no. 1 (1997): 117.
486. See for example, Robert D. Atkinson and John Wu, “False Alarmism: Technological Disruption and the
U.S. Labor Market, 1850-2015,” ITIF: Information Technology and Innovation Foundation, May 8, 2017, accessed
January 23, 2018, https://itif.org/publications/2017/05/08/false-alarmism-technological-disruption-and-us-labor-
market-1850-2015.
487. Spezio, 379.

of robots and AI is not a static, achievable goal. Rather, it too is relational and must be cultivated

into a practice. This practice is strengthened through ever-broader consideration of the day-to-

day lives of the people affected by robots and AI and their interpretation of wholeness and

flourishing. Putting relationship at the centre of biomedical applications of robots and AI has

further consequences. It insists on mutuality between researcher and the beneficiaries of these

technologies. This mutuality is characterized by discernment and respect. Without reciprocity of

respect, the relationship dissolves, and robotics and AI are merely imposed on a consuming

public.

Military applications

Military interests fuel robotics and AI research. Biomedical applications and the use of

robotics and AI in industrial and commercial settings are largely by-products of military

investment in these technologies. Regardless of how innocuous, or even benevolent, robots and

AI seem, they are always related to concerns about national security, defence, and global

geopolitics. The trend toward the robotization of national defence will only grow as robotics and

AI progress around the world, bringing with them open-source technologies, a new arms race,

and black markets to support illicit trade.

Theological responses to these rising phenomena have been slow, but are indeed

emerging as faith communities come to realize the potential impact of military interest in

robotics and AI. Perhaps most importantly, 2015 saw a worldwide interfaith declaration calling

for a ban on fully autonomous weapons.488 As of January 2016, the declaration has

488. Pax Christi International, “Interfaith Declaration in support of a Ban on Fully Autonomous Weapons,”
accessed November 17, 2015, http://blogs.paxvoorvrede.nl/wp-content/uploads/2014/08/Signatories-list_April-2-
2015.pdf.

more than 140 individual and corporate signatories including the World Council of Churches and

representatives of its membership. While it is promising that autonomous weapons receive this

kind of global, faith-based critique, the statement reveals the work that lies ahead. The statement,

like all others of its genre, is non-binding and has no policy implications. There is limited

representation from Canada, with none of its largest denominations being signatories. Statements

like these only highlight part of a broader, far more complex picture. This text and other efforts

from the WCC focus only on autonomous weapons, which is but one very small aspect of

military interest in robotics and AI. The related enterprises of war, national security, surveillance

and intelligence demand much more than simply replacing soldiers with robots. Drone

technology is increasingly important for surveillance; exoskeletons and robots like Big Dog help

reduce human burden and permit longer and more difficult missions; and robots and related

technologies can be put to use in reconnaissance missions and even in carrying out acts of torture

to extract information from apparent enemies. The applications are unlimited and go far beyond

the straightforward autonomous weapons addressed in the WCC statement. This whole

constellation of research activity is an object of theological concern. Further, the churches must

take care to link the rise of robots and AI to other social and historical phenomena. These

technologies do not emerge in a vacuum, but are directly related to other global patterns of

dominance, occupation, lingering colonialism, and so on.

The world is now in a new arms race, one that is not clearly dominated by the United

States, or even the so-called west. In this worldwide quest to develop robots and AI for military

and related use, different goals and assumptions clash. A panel hosted by the Commission of the

Bishops’ Conferences of the European Union (COMECE) dissected some of these differences.

Dr. Niklas Schörnig from the Peace Research Institute Frankfurt noted that some countries,

including Russia, only want to discuss international legislation once autonomous weapon

systems have been developed. In contrast, countries like the United Kingdom are reticent to

admit interest in developing such weapons at all. Schörnig also noted that there are internal

debates in Germany about whether to call the systems autonomous or highly-automated. These

factors—among countless others—delay movement toward international treaties, laws, or even

guidelines.489

The Campaign to Stop Killer Robots is a good example of how churches and theologians

can respond to these uses of robots and AI. Launched in 2013, the campaign brings together a

growing number of non-governmental organizations from all over the world. According to its

website it is an “international coalition that is working to pre-emptively ban fully autonomous

weapons.”490 The coalition focusses on national laws, the implementation of United Nations

recommendations, and international treaties as essential mechanisms for halting advancement

toward so-called killer robots. The campaign calls for the implementation of recommendations

found in a 2013 UN report, including establishing moratoriums, commitment to international

humanitarian law, and transparency regarding the development and testing of autonomous

weapon systems.491 This effort makes theological and ethical responses to robots and AI timely

489. Niklas Schörnig, “The Dangers of Lethal Autonomous Weapons Systems,” (lecture, COMECE, Brussels,
November 19, 2015).
490. According to its website the campaign is a “global coalition of 64 international, regional, and national non-
governmental organizations (NGOs) in 28 countries that calls for a preemptive ban on fully autonomous weapons.”
Most of these organizations come from the civil society sector, but in Canada membership includes Project
Ploughshares, an operating division of The Canadian Council of Churches. “About Us”, Campaign to Stop Killer
Robots, accessed November 17, 2015, http://www.stopkillerrobots.org/about-us.
491. “The Solution,” Campaign to Stop Killer Robots, accessed January 7, 2016,
http://www.stopkillerrobots.org/the-solution.

and also immensely practical. It draws on the contextual and global experience of the churches,

and carves out space for religious contributions in one of the most important issues of our time.

The goals of the Campaign to Stop Killer Robots bear parallels to the Arms

Trade Treaty (ATT), which came into force at the end of 2014, and related civil society activity.

According to the United Nations Office for Disarmament Affairs website, the ATT seeks to

“regulate the international trade in conventional weapons . . . and work to prevent the diversion

of arms and ammunition.”492 The development and ratification of this treaty shows how difficult

it is to counter the momentum of military interests. It took decades to shape, and to date not all

United Nations member states have ratified the treaty.493 As of early 2016, 79 countries have

acceded to or ratified the treaty, out of some 130 signatories in total. Canada is a notable holdout and

the only North Atlantic Treaty Organization (NATO) member not to have signed the ATT. Full

ratification and robust implementation remain in the future. Illegal trade undermines the

intention of the treaty, and there are endless examples of non-state actors, militant groups, and

guerrilla forces hostile to efforts to regulate the arms trade globally. The regulation of robots

and AI faces the same challenges, but also the new problems associated with virtual trade and

transmission, hacking, and cyberwarfare.

The increasing accessibility of robots and AI challenges the development and ratification of

international laws and treaties. However quickly legislation and policy adapt, people and new

technologies subvert these efforts. At every moment, growing digital literacy, open source

492. United Nations Office for Disarmament Affairs, “The Arms Trade Treaty,” accessed November 17, 2015,
http://www.un.org/disarmament/ATT/.
493. For more on the ATT see Advocates for International Development, “A Short Guide to the Arms Trade
Treaty,” accessed January 7, 2016, http://www.a4id.org/sites/default/files/user/
Guide%20to%20Arms%20Trade%20Treaty.pdf

technologies, and the falling cost of 3D printing compound these challenges. Robots and AI are

not the sole purview of roboticists and AI researchers. Stories like those of retiree Maynard

Hill launching an unmanned aerial vehicle from Newfoundland that reached Ireland,494 or anti-

immigrant groups in the United States using drones to patrol the border with Mexico, will

become more and more common.495 Such challenges make detailed theological and ethical

engagement with the applications of robotics and AI in biomedical, industrial, and military

settings all the more pressing.

These three areas of application—industrial and commercial, biomedical, and military—

show how widespread and diverse life with robots and AI will be. They also show how much

theologians must consider when dealing with these technologies, especially on a global scale.

Industrial and commercial applications introduce a range of theological and ethical challenges,

which include the role of robots and AI in global climate change and unsustainable patterns of

consumption. These applications also bring into the conversation the rights of human workers,

their safety, and entitlement to decent work. Biomedical applications bring to light still more

theological and ethical concerns. Robots and AI like the da Vinci system point out imbalances in

access to health care, and devices like BCIs point to increasingly blurred lines between

therapeutic and enhancement applications. These, in turn, raise questions about how we build

communities and social structures where diversity of embodied experience is honoured. Military

applications of robots and AI point to still more historical challenges. From a theological

perspective, these instantiations present a crisis for justice and peace on an unprecedented global

494. Frank Wicks, “Legend Pilots a Radio-Controlled Model Airplane Across the Atlantic Ocean,” Progressive
Engineer, 2011, accessed January 7, 2016, http://www.progressiveengineer.com/profiles/maynardHill.htm.
495. Tim Murphy, “The Meltdown of the Anti-Immigration Minuteman Militia,” Mother Jones, August 4, 2014,
accessed January 7, 2016, http://www.motherjones.com/politics/2014/08/minuteman-movement-border-crisis-
simcox.

scale. These applications also make clear that interdisciplinary theological responses to robots

and AI must move into the spheres of policy development and advocacy. All together, these

three areas show the scale of the global robotics and AI enterprise and the necessity for more and

more concentrated theological efforts in this area, across a range of subdisciplines and

methodologies.

Method

Theological inquiry into robots and AI is about both the content and claims of research and the

methods used to carry it out. It is impossible to extract theological and ethical questions about

robots and AI from the epistemological and methodological assumptions that give rise to

them. As a result of the work carried out in the preceding chapters, two more methodological

concerns emerge: the selection and use of sources in both scientific and theological reflection on

robots and AI, and attention to intradisciplinary diversity within robotics research. These

considerations strengthen interdisciplinary theology relating to robots and AI. They help identify

and deconstruct biases, find spaces where essential contributions are absent, and forge new

connections between science and theology. This work complements the remainder of

Chapter Four to form a novel theological approach to robotics and AI.

Sources

This project’s Introduction illustrated the breadth and depth of interest in robots and AI.

Contributions from writers, mathematicians, philosophers, poets, inventors, performers, and so

on all serve as testament to the compelling quality of these objects of both science and science

fiction. This range of interest leads to an impressive diversity of sources that complicate

theological responses to robots and AI. The roboticists and theologians discussed to this point

show varied relationships with this diversity of sources available to them. For example, Moravec

draws on anthropology, history, census data, science fiction, and more. In a similar collage of

inspiration, Brooks found creative impulse in science fiction, nature, and American military

interests. Like Brooks, Breazeal cites science fiction as a positive influence in her work,

especially her childhood viewings of Star Wars. Later, her interest in how humans grow and

learn and become autonomous agents in the world was enhanced by her parenting experiences.

Finally, like her academic predecessors, Knight embraces an array of sources in her own research

ranging from performance art to web technologies.

In this indiscriminate mixing of inspiration, the four roboticists have much in common.

They all show eagerness to find inspiration from outside traditional academic spheres. This

creative engagement is promising for interdisciplinary theology as it shows an already developed

ability to appreciate insights from other reasoning strategies. It also shows that roboticists and AI

researchers appreciate that their personality and interests influence their research and its

methodology. In this jigsaw puzzle of sources there are also warning signs. There is a clear lack

of open discussion about the selection and use of sources. None of the researchers discussed in

Chapter One address the biases or limitations at work in their finding of inspiration for their

research projects. This ad hoc approach leaves theologians without consistent patterns for

interdisciplinary reasoning and susceptible to replicating the biases of roboticists and AI

researchers.

Chapters Two and Three show that theologians have an equally ambiguous relationship

with sources in their work on robots and AI. Anne Foerst, for example, mixes traditional

theological sources with insights from other areas, including sociology, Jewish oral history, and

semiotics. As with Breazeal, lived experience has a profound impact on Foerst’s work, including her

time spent working with Cog in Brooks’s laboratory. She also emphasizes the affective aspect of

her research, illustrated with her powerful anecdote about the handshake between Harvey Cox

and Cog.496 Like Foerst, Noreen Herzfeld shows an interest in playing with the boundaries of

conventional theological inquiry. In her work, Buddhism holds some influence in her

consideration of human relationships with technology, as do contemporary examples from the

gaming industry and military interests. Though her analysis is generally more contextually

sensitive than Foerst’s, both miss the opportunity to address important methodological questions

relating to sources in their theological research.

Drawing boundaries

Sources may be bounded by any number of criteria, including society of origin, historical

era, type of robot, how the research is disseminated, and so on. With some justification, the

options are plentiful. Defining boundaries gives theological discourse about robots and AI

increased focus and better ability to interact directly with robotics and AI research. This mapping

of sources also gives clarity about one’s own limitations. This methodological step is equally

important for roboticists, theologians, and the project at hand.

This thesis deals primarily with North American contexts, the author’s own academic

setting and home culture. Canada and the United States are also very fertile grounds for robotics

496. Other experiments have since made this point anew. Recall, for example, the experiment where a robot
dinosaur was ‘abused,’ showing that people often have a similar sympathetic response to watching it get strangled as
they would if it were a human. See Choi, “Brain Scans Show Humans Feel for Robots.” This focus on the affective
also recalls Moravec and Brooks’s emphasis on perception as discussed in Chapter One. These illustrations point to
the importance of human emotional patterns in shaping how we will respond to and interact with robots, even from a
theological perspective.

and AI research, especially with American military funding driving so much activity in this area.

This focus has a limiting influence. Postfoundationalist interdisciplinary theology always

balances local context with universal intent. This means that theological reasoning must honour

the particularities of its origins, while focusing on global concerns. The concerns for ethics and

social justice that motivate this thesis are certainly global in view. The restriction to North

American-based thinkers, however, dampens these global concerns and limits access to other

perspectives. For example, this thesis deals primarily with English-language sources, which

dramatically limits access to parts of the world highly active in robotics and AI research (e.g.,

South Korea, Japan, Iran). Not only does one lose access to this research, but also to the social

and theological perspectives emerging from these places.

Limiting sources can also have a positive effect on theological reasoning. This project

focused as much as possible on existing robots and AI, especially those modelled after or built

for hybridity with humans. This focus set aside entire areas of robotics and AI research equally

worthy of theological investigation, including robots modelled after other animals and science

fiction representations of robots and AI. The focus on humanoid robots and AI (e.g., Cog,

chatbots) or those designed for hybridity with humans (e.g., BLEEX exoskeleton, BCIs/BMIs)

allows for more direct response to questions about the human and its status in this world.

Speculation about the future of robots and AI is also historically unreliable, and can potentially

distract from the impact of technologies already in use or development. Similarly, in their focus

on hypothetical situations or exaggerated fears and aspirations, science fiction and other

representations can distract theological energy away from issues emerging from current

application of robots and AI. These limiting factors, therefore, help narrow the discussion to a

manageable scope and leave open clearly defined avenues for further research.

The roboticists in Chapter One form an academic lineage, with one influencing the work

of the next. As a student, Brooks helped Moravec with his Ph.D. research at Stanford. In turn,

Brooks would supervise Breazeal’s doctoral research at MIT, and Breazeal would supervise Knight’s master’s

work at the same institution. These relationships bring to light many important internal debates

in robotics and AI, including diverging views on intelligence, embodiment, and human learning.

Comparing one roboticist to the next illustrates how dissent has advanced robotics and AI

research. For example, recall that Brooks diverged from Moravec’s emphasis on the brain and

upended conventional ideas about intelligence in robotics and AI research. Such intentional

pairings are one way theologians can better understand the robotics and AI landscape and

appreciate the nuances therein.

Interdisciplinary pursuits encourage theologians to interact with new sources and

methods. In turn, these sources and methods demand new kinds of theological literacy and

expertise. In terms of robotics and AI, new and social media are especially important in

understanding current trends in research. Blogs, social media, popular culture, and the World

Wide Web complement and even displace traditional academic discourse about robots and AI.

When theologians enter these spaces, they take on the additional task of critiquing these new

forms of discourse. While web-based technologies are rightly praised as democratizing, positive

forces, they are not without their inherent challenges. The anonymity of online spaces

contributes to virulent behaviour, especially towards women and other marginalized people.

More and more news organizations are moving toward “real name only” policies to help curb

this tide of abuse. These platforms are powerful tools for sharing robotics and AI research with

many people; however, censorship and privacy remain real concerns. In various parts of the world,

social media platforms are banned or monitored, seriously compromising their equalizing

potential. Well known examples include the banning of Facebook in China, Iran, and North

Korea—places where civil rights and liberties already suffer. The United States Department of

Homeland Security has also recently made moves to collect social media information from

immigrants and others. The rise of Facebook, especially in its acquisition of widely used mobile

apps and integration with other web technologies, raises important questions about privacy in an

apparently post-privacy era. Theologians must continually probe what data is being collected and

how this will be funneled into AI applications without informed consent.

Consideration of the selection and use of sources helps achieve methodological aims

important for postfoundationalist approaches to interdisciplinarity. Allowing for unconventional

sources like Twitter and YouTube helps shape an approach based on common experiences and

day-to-day life. This approach helps shift the discourse from purely academic to one that bridges

the spaces between these specialized conversations and the impact of robots on everyday life.

Mapping sources also helps develop a more comprehensive approach to interdisciplinary study

of robots and AI. It provides yet another axis for analysis, and is an important tool for identifying

gaps in related theological discourse. Further, it helps theologians understand where their

interdisciplinary skills need improvement, especially in terms of developing facility with new

and social media. Working in these spaces exposes theological critique of robots to ideas and

methodologies not found elsewhere.

Expansive consideration of sources is a responsible theological practice. It contributes to

the fullest understanding of robotics and AI and to more reliable information about

their application and ongoing development. This methodological step helps theologians make

choices about how to pursue their interdisciplinary interests and carves a well-justified way

forward. Theological reasoning only stands to benefit from the rich selection of resources

available on robots and AI, but critique of the selection and use of sources must be ongoing. Just

as the task of optimal understanding cannot be contained to one human, or even one reasoning

strategy, all sources offer only a partial contribution to theological responses to robotics and AI.

Intradisciplinary diversity

The relationship between scientific and theological approaches to robots and AI is the

primary focus of this research project. Real and perceived friction between these two reasoning

strategies also motivates Foerst, Herzfeld, and others in developing theological responses to

these new technologies. This work is almost entirely interdisciplinary in effort. The diversity

within robotics and AI, among roboticists and AI researchers, however, is also deserving of

theological attention. The four roboticists studied in Chapter One diverge in important ways,

including their methodologies, claims, and goals. Carefully considered, these differences prompt

theological debate that is not well represented in the approaches discussed in Chapter Two and

Chapter Three. The analysis carried out in Chapter One attends to some aspects of this

intradisciplinary diversity, including differences in methodology, views on posthuman futures,

and motivations for robotics research. This work continues here with consideration of the

purpose of robots and contrasting understandings of intelligence, both of which are important

features for this novel theological response to robots and AI.

The four roboticists discussed in Chapter One have very different views on the purposes

of robots and AI. For some, robots and AI should develop their own reasons for existing,

independent of any human goal or aim. For others, robots and AI are much more closely linked

with human ambitions and our day-to-day historical realities. In such views, human and robot

futures are intimately intertwined, to the point of forming hybridities between human and

machine. Still others believe that robots and AI are simply tools to facilitate human social

activities. Closer analysis of these differences brings to light divergent assumptions about the role of robots in society, human enhancement and medical therapies, and peacebuilding and warfare.

The purposes Moravec imagines for his research are well outlined in Chapter One. He

stands apart from Brooks, Breazeal, and Knight in his speculation about human futures,

especially the role of robots and AI in evolutionary history. For Moravec, robots and AI of today

ultimately lead to the successor species of tomorrow. Again, unlike the other roboticists

discussed in Chapter One, Moravec is willing to speculate about these far-off futures in thought

experiments detailing human enhancement far beyond anything one has ever seen. In direct

contrast with these transcendental ambitions, Heather Knight is concerned with very practical

applications of her research. These, too, are detailed in Chapter One. Pedagogy is of primary

importance to Knight, who sees her fun and friendly projects as a gateway to more intensive

robotics and AI research. Her efforts are sympathy-building, with education and the formation of a new generation of roboticists as core goals.

For both Brooks and Breazeal, robots and AI are rooted in the here and now. Their purpose

lies in meeting the needs of our current historical and social circumstances. Speculation and

posthuman futures are of little concern to them and other researchers who follow in this

approach. Robots and AI are always subservient to humans and researchers working in this area

want to embed their projects in existing social structures. In views like this, robots and AI do not

usurp anything essential about the human, but simply facilitate more comfortable and safe living

for the privileged few. Through companionship, household service, industrial applications, and

warfare, robots are a lubricant for existing social practices. Breazeal’s latest robot, Jibo,

embodies these goals. It helps families take photos, teaches children to read, and helps in the

kitchen. Companionship and collaboration are its primary modes of operation. These seemingly

friendly purposes warrant further theological scrutiny. At first blush, such applications of robots

and AI seem benign. They are not violent or destructive; they are instructional and helpful. A

closer look, however, shows that these kinds of robots only stand to make the comfortable

classes even more comfortable. With tasks aimed at easing day-to-day activities, those who

cannot afford such devices are left out not only of the support they offer, but also from the

accompanying technological literacy. The increasingly roboticized world imagined by

researchers like Breazeal and Brooks brings with it serious likelihood of creating new class

divides or widening existing ones.

The stated and unstated purposes of robotics and AI research loom large in theological

responses. The differences emerging in attending to intradisciplinary diversity reveal the

complexity of human futures with robots. The ambitions revealed in this kind of analysis show

just how incompatible much of robotics and AI research is with the commitments of contextual

and ecological theology. Whether roboticists are like Knight and interested in art and

entertainment, or like Moravec and interested in building a successor species, there is little

evidence of commitment to justice and ending oppression. This means theologians must continue

cultivating their own purposes for robotics and AI research, a process already begun here and in

other projects discussed in the Conclusion to this research project.

As detailed in Chapter One, roboticists and AI researchers enjoy lively debate not only

about the definition of AI, but also about the nature of human intelligence itself. These contrasting views

reveal much about values and biases at work in robotics and AI research, and are of significant

consequence for theological responses. Attending to intradisciplinary diversity in this area

reveals much about how roboticists understand the human and its embodied existence. While

theologians have started to address these differences, they often adopt scientific ideas in this area too readily, without grappling with the significance of these differences for their theological

reasoning.497 Specifically, understandings of human intelligence that rely too much on

computational power are theologically inadequate. They reduce the range of human experience

to that which can be calculated. This leaves out important forms of human expression and

dramatically narrows the range of people considered intelligent. These views link human worth

and dignity to the ability to perform certain tasks or achieve certain goals. The implications of

this approach are significant. First, if intelligence is correlated to computation then it can be

enhanced through greater speed or processing power, opening the way for BCIs and other hybrid

technologies to improve human intelligence through artificial means. Second, this correlation

makes human intelligence the benchmark for all other kinds of intelligence. It fits in well with

Moravec’s scheme where humans are at the apex of creation, implicitly advancing an

anthropocentric view of life on Earth. Third, if intelligence is measurable in this way, it allows

for too easy comparison with robot intelligence. It brings robot and AI activity into parallel with

a very narrow range of human experience, thus overemphasizing the similarities between the

two.

Scientific researchers do offer views on intelligence helpful for theological consideration

of robots and AI. As described in Chapter One, not all are committed to a purely substantive or

497. Defining human intelligence and associated theological problems are not unique to Christian theology and
its response to robots and AI. Azriel Rosenfeld, a Jewish scholar, wrestled with this issue in his 1996 article,
“Religion and the Robot.” Rosenfeld, 15.

functional view of intelligence. Notably, in setting aside the human brain as a model for AI,

Rodney Brooks made his approach more inclusive of the diversity of human experience. His

approach diminishes the importance of certain kinds of intelligence and honours that which is

found elsewhere, especially in non-human animals. This cultivates an appreciation for

differences in intelligence and experience that make up human self-understanding. His work

challenges technical imagination about the importance of certain features in developing AI, and

expands possibilities for theological engagement. In such an approach, robotics’ view of

intelligence is not entirely incompatible with the values of contextual and ecological theologies.

Such a move potentially honours the broad scope of lived human experience, including

differences in intelligence. It also diffuses overly anthropocentric approaches to AI in its interest

in non-human animals and their success in evolutionary history.

Wentzel van Huyssteen’s most notable case-study in postfoundationalism—his 2004

Gifford lectures on human uniqueness—provides a postfoundationalist understanding of

intelligence that is helpful for interdisciplinary dialogue at this point. For van Huyssteen,

human distinctiveness rests with our experience of embodied intelligence.498 This is necessarily

the result of chaotic and indeterminate events, starting with the birth of the universe and flowing

forward from the emergence of life on Earth. “Human intelligence should indeed be seen as the

product of a long and complex process of biological evolution.”499 This is the starting—but not

end—point for theological understandings of intelligence.500 These biological, and even cosmic,

origins are the substrate of human intelligence, but evolution has brought us to the point where

498. van Huyssteen, Alone in the World?, 112.


499. Ibid., 37.
500. Ibid.

we can transcend these beginnings. The social realm, of which scientific reasoning is a

magnificent part, leads us to craft something new of our intelligence. This is where reasoning

strategies like theology enter and help us make sense of where we are now and where we can go

with this uniquely and cosmically crafted intelligence. This broadens interdisciplinary

understandings and encourages much deeper reflection on what it means to use “our particular

human ability to cope intelligently with an intelligible world.”501

A theological response

Attention to intradisciplinary diversity within robotics and AI research enhances

theological responses to robots and AI. It enables theologians to see this reasoning strategy with

greater clarity and detail, allowing for better interaction with the claims and methods of

theological reasoning. Importantly, it helps dissolve the image of robotics and AI research as a

monolithic whole, helping theologians respond to the complexity and diversity of this global

enterprise. Many of the debates internal to robotics and AI warrant the same consideration as

ones that shape theological research communities. Attention to these differences, like in the

analysis found in Chapter One, also helps bring theological assumptions to the foreground. For

example, the diversity of understandings of intelligence and how they relate to understandings of

the human is of concern for both roboticists and theologians. Noreen Herzfeld understands this

concern well, and responds to it with extraordinary care in her own body of work.

Intradisciplinary study calls for improved theological literacy about robots and AI.

Understanding the methods and applications of robotics is a significant task requiring dedication

501. Ibid., 76.



to learning the language, methods, and values of other reasoning strategies. The more

theologians learn, the more they can enjoy unmediated access to robotics and AI research.

William Sims Bainbridge’s work on AI is an excellent example of how theologians can bridge

scientific and theological reasoning in this way. In God from the Machine: Artificial Intelligence Models of Religious Cognition, he demonstrates to religious studies scholars and theologians how to incorporate

robotics and AI research into work aimed at another audience by showing the complexity and

issues at stake in developing AI models of religious communities.502 Though his account is from

a religious studies perspective, its methodology and insights are also compelling for

interdisciplinary theological inquiry. He shows how theologians and others can turn to primary

sources from robotics and AI and develop insights from them, while at the same time finding a

distinctively Christian voice. He sets the example for this kind of work in taking significant care

in understanding technical concepts in AI and interpreting them for his readers. In doing so he

reveals the multivalent character of AI research and how it might be variously interpreted by

theological communities. While Bainbridge’s efforts are useful for the purposes of the current

research project, they are, of course, not without their own shortcomings. For example, Adam

Drozdek criticizes Bainbridge for falling short of mirroring the complexity of lived social and

religious life. Such critique only further underscores how difficult it is for AI to replicate

intelligence and how the social dimension of human embodied intelligence complicates things to

a near unimaginable degree.503 Approaches like Bainbridge’s are a useful methodological guide

for theologians interested in robots and AI. He deals with robotics and AI research at a level of

502. William Sims Bainbridge, God from the Machine: Artificial Intelligence Models of Religious Cognition
(Lanham, MD: AltaMira Press, 2006).
503. Adam Drozdek, “God from the Machine: Artificial Intelligence Models of Religious Cognition,”
Perspectives on Science and Christian Faith 59, no. 1 (2007): 81-82.

detail rarely seen in theological literature, including analysis of the differences between rule-

based reasoning and neural networks, and how different AI setups emphasize the role of other

agents in cognition.504 His work also covers broad ground ranging from assumptions and values

about the mind, the human, society, and more.

Lessons from Bainbridge illustrate some of the most basic challenges of modelling

human intelligence through AI and robotics.505 Human assumptions in these technologies are

inescapable and require awareness and sensitivity on the part of those developing these

technologies. The examples discussed above in the sections on contextual awareness and military

applications of robots and AI illustrate these problems very well. Perhaps no greater challenge

emerges than teaching AI how to recognize people. Here, AI would have to learn to distinguish between any number of religious and cultural signifiers: the difference, for example, between a woman wearing a burkini and one preparing for a scuba dive, or between a hijab and a high-fashion headwrap. These problems take on ultimate importance in military settings, where

distinguishing between friend and foe is essential. Should an AI, for example, be programmed to

never harm children, this clearly opens the way for the increased use of children to directly interfere

with enemy robots and AI. In the never-ending competitiveness of militarization, these

technologies will continually be foiled by enemy counter technologies and the ingenuity of

humans themselves.

The methodological concerns named here and addressed throughout this research project

highlight important shortcomings in both scientific and theological thinking about robots and AI.

504. Bainbridge, 2, 77-83.


505. See for example Bainbridge, “Segregation,” in God from the Machine (Lanham, MD: AltaMira Press,
2006): 17-36.

Theological immersion in these methodological issues enhances interdisciplinary dialogue on

this front and brings theological reasoning into more comprehensive contact with robotics and AI

research. It also gives contour to important ethical issues, and contributes to better understanding

of biomedical, industrial, and military applications of robots and AI. The dovetailing of method

and content in this way is essential for a theological response that is both sufficiently

comprehensive and oriented toward social justice.

Conclusion

The constructive efforts of this chapter form the basis for a new interdisciplinary theological

approach to robots and AI. It both deconstructs and draws on the strengths of contributions from

six central figures—Moravec, Brooks, Breazeal, Knight, Foerst, and Herzfeld—in parallel with

insight from elsewhere in the Christian tradition, especially ecotheology. Each of the six key

figures featured contributed to the overall theological response to robots and AI. Comparison of

their work, illustrations from relevant robotics and AI research, and critical engagement with

contextual concerns and postfoundationalist interdisciplinarity round out these efforts. This

constructive approach revealed four areas needing much greater theological attention—the

human, contextual awareness, application of robots and AI, and methodology.

In terms of robotics and AI, this also means continuing in the ways named above (e.g.,

the Campaign to Stop Killer Robots) and finding venues for critical engagement with all aspects

of robotics and AI. The challenge, of course, is not to halt progress that genuinely enhances the

well-being of the earth and its inhabitants, but to transform existing work and research

communities for the sake of building up a more just contribution to the world. Internal to

theological reasoning, many thinkers have advanced ideas about how communities can develop

criteria to help shape faithful relationships with technology. As discussed above, Noreen

Herzfeld appreciates the Quaker practice of developing questions for communal discernment.

Thomas Berry is similarly invested in communal action and sets out an “agenda for an ecological

age,” which details essential efforts in responding to ecological crises with technology. These

efforts relate to human technological achievement and include understanding the scope of

changes required, taking care of our technologies’ environmental impacts, and developing

technology with bioregional sensitivity in mind.506 The introduction to The Dream of the Earth

names four fundamental questions that Berry asks his reader to consider about all technologies:

How should humans live upon the earth in a mutually enhancing relationship? How can there be

progress shared by all the components of the planet? Can there be any true and lasting progress if

it is not shared on this comprehensive scale? Must legitimate human development necessarily

degrade the natural world?507

In 2001, before Foerst and Herzfeld produced their major works, Bill Wylie-Kellerman

and David Batstone advocated for critical engagement with and resistance to digital

technologies, including the refusal to adopt them altogether. They noted that the Christian

tradition has a long history in this vein: “It became a tactic of nonviolent resistance to be out of

sync.”508 One of the ambitions of such resistance is to “establish leverage against dominant

elites,” with Amish communities standing out as a notable example of this in practice.509

Though it is an enormous challenge to discern in community what technologies and methods

506. Berry, The Dream of the Earth, 65-69.


507. Ibid., iii.
508. Wylie-Kellerman and Batstone, 22.
509. Ibid., 24.

warrant theological critique, the Amish show that discernment, public participation, and

democracy help in these efforts.510 They have taken the power away from what Ellul calls the

technological society and crafted something distinctive for themselves. Admittedly, such firm

boundaries around community vis-à-vis technology are unpalatable (and even untenable) for most.

The Netherlands, however, offers an alternative way to resist “dominant elites” and

democratize technological research. Starting in the 1970s, university researchers and students

began to establish “science shops”: small research facilities, often affiliated with universities, that take their research agendas from the public through civil society expertise and the NGO sector.511 This inverts many of the traditional power relationships in technological

research and responds more directly to the day-to-day concerns of people affected by advances in

technology. The model proved enormously effective and soon spread to neighbouring countries and throughout the world, including a number of community-based research centres at Canadian universities.512 This is one clear venue for contributing theological and other perspectives to the

development of robots and AI. Here, those forgotten by the mainstream can insert their voices

and advocate for more democratic and diverse research.

As discussed above, understanding the human brings together technological and

theological reasoning in shared concerns. In this dovetailing of interests, roboticists and

theologians engage in important debates about what it means to be human, what humans should

value, and how we should relate to the rest of the world. Theologians, however, stand apart in

510. Ibid., 27.


511. “History,” Living Knowledge: The International Science Shop Network, accessed April 2, 2018,
http://www.livingknowledge.org/science-shops/about-science-shops/history-of-science-shops/.
512. “Global Partners,” Living Knowledge: The International Science Shop Network, accessed January 24,
2018, http://www.livingknowledge.org/contact/global-partners/.

their interest in what it means to be made in the image of God in an age of robots and AI. While

Foerst and especially Herzfeld made early efforts in responding to these uniquely theological

concerns, they left work for future researchers. The efforts above, drawing on Berry, Barbour,

McFague, and others, show that existing theological resources stand ready to begin to fill in

these gaps. These contributions help respond to concerns raised in other theological approaches

to robots and AI, and point the way toward a more comprehensive and contextually satisfying

interdisciplinary theology. Emphasis on relationality underscores the profound remaining

differences between humans and robots. While eager roboticists are inclined to celebrate

proximity to human intelligence and social life, insight from ecotheology on this front widens the

gap. Understanding the human in terms of intrinsic relationality clarifies the distinction between

human and robot, and challenges conventional notions of success in AI research. Expanding this

appreciation of relationality to the interconnectedness of all life, and even all of cosmogenesis is

another important move for this theological approach to robots and AI. Doing so expands the

circle of concerns and makes clear the values of robotics and AI research. The contingency of the

human makes concern for other humans, especially the excluded and oppressed, central to

theological responses to robots and AI. Further to this, it means that development and

applications of robots and AI can only be called just if they take the experiences of the

marginalized seriously. So far, robotics and AI have a mixed record on this front. Biomedical

applications show significant promise for developing therapeutic technologies that start with

these experiences. For example, Paro, a therapeutic robot seal, is designed for use with elderly people, especially those being treated for dementia. The seal has a calming effect and can help

relieve some symptoms of mental illness. Similarly, telepresence robots increasingly help

chronically ill children participate more fully in the social and academic life of conventional

schooling. Such technologies point to hopeful futures, where robotics and AI work with other

social, scientific, and theological advances for improved wellbeing of the world’s marginalized.

Military and industrial applications do much to undermine these potentially positive uses of

robots and AI through their disregard for the finitude of natural resources and value of human

life. As described above, industrial operations like Amazon’s disregard any insight from those

with little power to change cycles of consumption, or their participation in them. Retailers like

Amazon, especially when facilitated by robotics and AI, increase and normalize demand for

cheap consumer goods that perpetuate unfair labour practices and the violation of human rights.

In military applications of robots and AI, for example, these technologies put the lives of children increasingly at risk, as children may more easily evade recognition software. Concern for

embodiment strengthens the importance of accounting for the experiences of the marginalized,

including the elderly, children, and economically impoverished. These examples indicate that

robotics and AI research (so far) is largely uninterested in the diversity of human embodied

experience. Contributions from feminist and ecological theologians are indispensable at this

point. They show clearly that roboticists and AI researchers (so far) have a very thin

understanding of embodiment, one which can hardly be compared to the human experiences they

seek to replicate.

The approach to robotics and AI outlined here shows significant concern for ethics,

justice, and application of these technologies. While this concern is not well developed in

theological literature to date, the examples noted here combined with these new creative efforts

point to growing ecclesial and theological concern for robotics and AI research. This small but important contribution points to how to respond justly to the rise of robots and AI in our world.

This final call, to discern what resistance means in an increasingly roboticized world, draws on

the courage to live differently already so well rehearsed in some elements within Christian

tradition.
Conclusion

Introduction

The seeds of imagination for humanoid robots and AI date back centuries in human history. For

as long as humans thought creatively about their status in the world—including their relationship

with gods and God—they have sought to create a counterpart for themselves. The achievements

of the twentieth and twenty-first centuries have permitted this desire to come to its fullest expression

yet, where robots and AI are ever-present in many lives. The landscape changes daily with new

technologies that only some months or years ago seemed unattainable. Yet, robots and AI remain

a pale imitation of humans and their social lives, and close analysis reveals that the lofty goals of

robotics and AI research tend to recede at the same rate as science and technology advance

towards them. The results of scientific efforts in this area are ambiguous. While certain by-

products enhance the quality of life of mostly privileged people, and AI has facilitated

unprecedented international and intercultural communication, robots and AI are also a

destructive force.

The account of robots, proto-robots, and the early history of AI research in the

Introduction underscores the longstanding and widespread character of many questions facing

roboticists and AI researchers. Analysis of four key roboticists helps develop the

contemporary articulation of these questions and better describes the challenges facing

theologians interested in this area of science and technology. The discussion of Hans Moravec,

Rodney Brooks, Cynthia Breazeal, and Heather Knight shows how untidy, and yet fruitful,

robotics and AI research is as an area for interdisciplinary theological inquiry. Examination of

theological responses to-date, especially from Anne Foerst and Noreen Herzfeld, shows that


much work lies ahead in developing robust and cohesive dialogue vis-à-vis humanoid robots and

AI. The chapters devoted to these approaches outline the ways in which this area of

interdisciplinary theology is still underdeveloped; organizing and critiquing these approaches

prepared the way for a new theological treatment rooted in contextual and eco-theology. This

approach (developed in Chapter Four) responds to the work already done by theologians like

Foerst and Herzfeld and engages with contemporary robotics and AI research, but is only partially

expressed in existing theological literature. These efforts are indispensable for furthering

academic, ecclesial, and civil society responses to robots and AI.

At its core, this research project is concerned with how increasingly humanlike robots

and AI will change people, their social structures, and societies. These are global-yet-contextual

concerns about relationships, self-understanding, social justice, and the wellbeing of all of

cosmogenesis. This requires debate and clarity about critical methodological issues and

directions for future research. These guiding concerns push the implications of this project

beyond the exclusively academic. Theological engagement with robots and AI is an issue for the

whole church—from theological training facilities, to parishes and congregations, to church

agencies and ecumenical organisations. In this spirit, theological responses must be widespread

and diverse, taking cues from other areas of contextual theology and movements for social

justice. The key findings of this research, the implications of this work, the limitations of this study, and

directions for future research are outlined in the sections that follow.

Key Findings

The stated goals of this research project are: to argue for the necessity of sustained theological

research into issues in and approaches to contemporary robotics and AI research, to collate and

organize a to-date splintered body of theological literature on robotics and AI, and to develop a

novel theological response to robotics and AI by addressing lacunae in existing theological

literature. This novel approach expands the understanding of the human already developed in

newer expressions of western theology, including ecological and contextual theologies. Each

chapter builds toward these stated goals.

The Introduction describes the long history of human interest in creating a mechanical

counterpart, including its roots in science, literature, and ancient mythology. Though humanoid

robots and AI are only now receiving sustained theological attention, these deep roots show that this is an

area of interest that is longstanding, widespread, and influential. The historical survey ends with

brief remarks about the contemporary setting of robotics and AI research, including its impact on

military, industrial, and biomedical life in these early decades of the twenty-first century. Such

an overview makes clear that robots and AI will have significant impact on individual lives and

the societies in which they are involved. It also stresses how important it is for theologians to begin a

robust engagement with these objects of science and technology, including critical

interdisciplinary dialogue and responses developed from Christian ethics and social justice.

Understanding the history of robots and AI, their role in human culture, and the most recent

developments and applications are all critical for comprehensive theological responses.

The introductory chapter also makes small steps in organizing sources related to theology

and robots and identifies some early moments of contact between the two, including, for

example, the religious context of the fable about Descartes and a brief discussion of Alan

Turing’s response to a theological objection. Such information indicates how skewed perceptions

of robotics and AI are a problem for both theologians and scientific researchers. Bridging any

perceived or actual gulf between science and theology on this front requires such historical

inquiry dating back much further than the real onset of AI research in the 1950s.

From a methodological perspective, this chapter implements the kind of contextual

awareness advocated by postfoundationalism and later developed in Chapter Four. Part of this

effort is attention to the historical and social conditions that prompted research into robots and

AI and the technologies leading to them. It also includes paying attention to non-theological

sources and insights that influence theological discussions of robots and AI. In addition, the

approach includes steps toward identifying the political, commercial, and social interests that

shape this area of research today. Such analysis is underdeveloped in the approaches discussed in

Chapters Two and Three.

Other elements of postfoundationalism are at work in this research project even from this

introductory chapter. The historical survey is an extended argument that robots and AI are a

transversal moment, which according to postfoundationalism is where the concerns, questions,

and methodologies of science and theology intersect. In this chapter and throughout, science and

theology speak for themselves, without the supposition that one is ultimately triumphant over the

other. It takes seriously the insight and wisdom embedded in each reasoning strategy and draws

on the strengths of both theology and science to develop an interdisciplinary response. Finally,

perhaps the clearest expression of postfoundationalism in the Introduction is the central

assumption that theology can make a meaningful contribution to scientific reasoning about

robots and AI.

Chapter One continues in contributing to the main goals of this research project. Through

the discussion of Hans Moravec, Rodney Brooks, Cynthia Breazeal, and Heather Knight, the

necessity for sustained theological research into robots and AI becomes even clearer. Their work

illustrates just some of the ways robotics and AI are influential and extraordinarily well-resourced

in western society, and how these technologies stand to change humans and their social

structures. These influential researchers also make it quite clear that robots and AI are about

much more than pure technical achievement. Analysis of their work shows intriguing internal

debates about methods, purpose, and values that are essential for developing interdisciplinary

dialogue on this front. They also debate the nature of human intelligence, dive into the fields of

entertainment and social media, and embed culturally-conditioned biases in their robots and AI.

Perhaps most importantly, this account of Moravec, Brooks, Breazeal, and Knight (in

combination with the content of Chapter Four) shows how universal and profound the impact of

robots and AI soon will be.

Mapping the robotics and AI landscape, with a close-up on four researchers, offers

important insights that will advance theological work in this area. It brings into sharper relief

some important internal disputes, and gives rise to an understanding of robotics and AI that is

more nuanced than anything found in theological scholarship to date. Such efforts contribute to

the overall argument for sustained theological engagement with robots and AI and helps set the

stage for better organizing theological discourse in the future.

Chapter One also contributes to the stated methodological aims of this research project

and shows many hallmarks of postfoundationalist interdisciplinarity. In particular, this chapter

follows the postfoundationalist directive to take the methods and claims of science seriously and

in doing so gives closer and more critical scrutiny to the work of major figures in robotics than is

available elsewhere in theological writing. Chapter One also makes an effort to translate robotics

and AI research for theological audiences, highlighting the claims and themes that will be most

captivating for theologians. The close-up provided in Chapter One shows robotics and AI to be a

far more diverse area of inquiry than is described by most theologians working in this area. This

helps create bridges between the reasoning strategies and shows possibilities for ongoing

theological engagement.

Chapter Two focusses on one of two major approaches to robotics and AI developed in

existing theological literature. Anne Foerst’s work is instrumental in giving shape to this

approach and represents the earliest concentrated effort to respond theologically to robots and

AI. Analysis of her work helps identify core features of this perspective, including

understanding robots in terms of human self-discovery and the appropriation of some scientific

interpretations of embodiment. She also emphasizes the narrative quality of robots, in particular

the role of robots as symbols joining two previously discrete spheres. These features form one

major theological treatment of robots and AI, one previously undefined in theological

literature. The efforts of Chapter Two show that there is already some cohesion and organization

to theological discourse and that this can be used to further the dialogue.

Chapter Three follows a strategy similar to that of Chapter Two. This chapter brings together

and analyzes another major theological treatment of robots and AI, this time rooted in the work

of Noreen Herzfeld. The central features of this approach include emphasis on context, history

and ethics, as well as the importance of the link between understanding human intelligence and

theological interpretations of the imago Dei. Of secondary importance in this treatment are the

role of science fiction and other representations, and virtual and actual hybridity as a tool for

critical analysis. The critique and analysis of Herzfeld’s work shows it to hold much promise for

developing dialogue between science and theology in this area. She advances the discourse

beyond what is found in Chapter Two through her attention to ethical questions and her serious

treatment of context. She also shows a unique appreciation for the interrelatedness of robotics research and

biomedical and military applications. These are layers of nuance essential for robust

interdisciplinary interaction. These findings also show that there is already much progress in

contemporary theology in responding to the difficult challenges posed by increasingly humanlike

robots and AI.

Chapter Four stands apart from the others in both its aims and methods. It

contributes mostly to the third stated goal of this thesis—to sketch the elements of a more

complete theological response to robotics and AI by identifying gaps in the existing theological

literature and indicating how they might be addressed. It also differs in its more constructive

approach. Here, new insights are added to the conversation with a view to building up a

stronger dialogue with the claims and aims of the science of robots and AI. One section of this

chapter also contributes to the case for theological inquiry into robots and AI. That is, further

detail about biomedical, industrial, and military application of robots and AI strengthens the

argument for this kind of research, which is developed partially throughout the rest of the thesis.

The constructive efforts of Chapter Four show that there are ample existing theological

resources available to theologians interested in robotics and AI. Though theological writing

explicitly about robots and AI is still quite sparse, Chapter Four demonstrates that established

areas of theology stand to make an important contribution to this new interdisciplinary dialogue.

Of central importance to these efforts are ecotheologians, who have argued extensively and

powerfully for a new way of understanding the human in an age of ecological crises. This

includes dismantling an anthropocentrism that allowed for too-easy comparison between robots

and humans, and replacing it with a view of the human based on holism, relationality, and

embodiment. Such a move breaks down the false equivalencies between humans and robots seen

in Foerst and elsewhere. It develops a more complete picture of the human and reveals many

ways in which robots and AI (as the technology stands today) are not much like us at all.

Incorporating insights from ecotheology also serves as an important corrective to Noreen

Herzfeld’s ideas about relationality and embodiment. Though she shows genuine interest in the

historical and social settings that give rise to her research, and attention to ethical concerns, her

ideas about relationality and embodiment are heavily influenced by scientific concepts and need

further development from other reasoning strategies, especially Christian theology.

The organising efforts of the preceding chapters allow for scrutiny, in Chapter Four, of

the contextual awareness of robotics and AI researchers and of theologians in dialogue with them.

These efforts reveal much work ahead in the theological critique of assumptions about race, class,

gender and other aspects of human social life embedded in robotics and AI research. Importantly,

this section makes explicit what is usually not acknowledged by roboticists and AI researchers. It

exposes biases and underscores how they help perpetuate patterns of power and oppression.

Again, theological tradition rescues interdisciplinary efforts at this point. Existing theological

resources, especially from contextual and ecological theologians, can fill in the gaps left by

roboticists and theological commentary to date. Applying such theological resources advances

interdisciplinary dialogue about robots by updating it with contemporary theological concerns

largely unforeseen by Foerst, Herzfeld, and others. This also serves to orient theological

discussion toward the practical consequences of robots and AI, especially as they relate to social justice

concerns well developed in many Christian traditions in Canada, the United States, and

elsewhere.

Chapter Four also identifies theological concerns associated with biomedical, industrial,

and military applications of robots and AI. These contemporary applications were in large part a

motivating factor for the entire research project and help underscore the theological urgency of

this kind of interdisciplinary research. Examples discussed here—ranging from Amazon, to

surgical robots, to the astonishing breadth of military applications—make clear that contextual

theology ignores robots and AI at its own peril. Focus on these three areas also helps develop a

novel approach to robots and AI that is not seen elsewhere in theological literature. While some

thinkers, notably Herzfeld, see the growing impact of military interests in this area, general

theological attention to these kinds of applications is limited. Importantly, the emphasis on

biomedical, industrial, and military use of robots and AI found in Chapter Four underscores that,

from a Christian vantage point, consideration of ethical questions is inextricable from other

theological questions about robots and AI. On the whole, this discussion suggests how the

relationship between robotics and AI research and longstanding ethical concerns from the

Christian tradition can be clarified.

Finally, Chapter Four attends to more methodological issues for interdisciplinary

theology interested in robots and AI in order to clearly acknowledge that dialogue on this front

requires equally careful work with both content and method. Roboticists and AI researchers

bring to theology new methods that are not quite like others within the broad scope of

science-religion dialogue. These compel theologians not only to attend to their own interdisciplinary

approaches, but also to understand and respond to how robotics and AI research is carried out and

promoted. Through a more detailed discussion of the selection and use of sources, and

intradisciplinary diversity, this section shows how underdeveloped existing theological literature

is at the level of methodological engagement with robotics and AI research. Constructively, this

same section provides building blocks based on a postfoundationalist and contextual approach to

remedy this shortcoming. Existing theological literature does not give such close attention to

methodological issues and as a result fails to fully grasp the complexity of interdisciplinary

dialogue with contemporary robotics and AI research. This methodological work, paired with the

ethical concerns and other constructive work developed in Chapter Four, forms the nucleus of a

novel theological treatment of robots and AI. Consideration of all these aspects is essential in

developing meaningful theological responses fit for an increasingly roboticized world.

Implications

The primary findings discussed above each contributed to the goals of this research project. This

thesis, then, represents a rare extended theological discussion of robotics and AI that can be used

to strengthen other related efforts and propel further interdisciplinary discourse. The work

carried out to this point seeks to improve the overall quality of theological conversations about

robots and AI, and—importantly—lead to more just development and application of related

technologies. The implications of this work, therefore, touch not only on academic research in

this area, but also on work for social justice central to the life of many Christian traditions in Canada, the

United States, and elsewhere. In this light, this work should not only help change specialized

conversations about robots and AI but also lead to more public debate about the impact of robots

and AI around the world. It also helps break down the esoteric quality of some theological work

in this area, and bring it more directly into contact with robotics and AI research and public

debate relating to it. The considerations and claims of the preceding chapters will help

theologians and churches further develop responses to policies, treaties, and laws concerning

robots and AI.

An important implication of this project is that it contributes to a growing body of

theological reflection on robots and AI. Just as this project is only possible because Anne Foerst

and Noreen Herzfeld were vanguards in the field, this present work will make future research

projects increasingly viable. Others have already taken up cues from their work, including Amy

Michelle DeBaets’s 2012 Ph.D. dissertation, “The Robot as Person: Robotic Futurism and A

Theology of Human Ethical Responsibility Among Humanoid Machines.”513 Her work delves

into the very difficult questions of robotic personhood and the future morality of robots. While

she also makes efforts to organize the budding dialogue between theology and science in this

area (including references to many of the authors named here), her work is more focussed on robot

futures, including apocalyptic contexts, and the primary question of robot personhood.

Theological momentum around robots and AI is growing. The current project, and ones

like DeBaets’s, are a clear signal that new generations of scholars are eager to carve out space for

robots and AI within science-theology dialogue.514 This activity is a crucial contribution to

overall momentum within theological training facilities to take a closer look at these technologies and

their impact on various aspects of theological reasoning. At the Toronto School of Theology

there are some early signals that theological scholars might take up these questions more

seriously. For example, the focus of the 2013 Emmanuel College alumni days was “The Rise of

the Post-Human: Where is the Gospel?” This event provided opportunity for community

discernment on posthuman futures, which included discussion of some speculative technologies

513. Amy Michelle DeBaets, “The Robot as Person: Robotic Futurism and A Theology of Human Ethical
Responsibility Among Humanoid Machines” (Ph.D diss., Emory University, 2012), accessed March 19, 2018,
https://legacy-etd.library.emory.edu/view/record/pid/emory:bp4jb.
514. Other projects touch on related issues, including the need for a cyborg theology based on Donna
Haraway’s work. See for example Scott Midson’s doctoral research at The University of Manchester. “President’s
Doctoral Scholar Award,” The University of Manchester, accessed July 30, 2017,
http://www.presidentsaward.manchester.ac.uk/current/facultyofhumanities/scottmidson/.

like those proposed by Hans Moravec.515 In early 2015, Regis College hosted an evening lecture

with Bill Ryan, S.J., “Will Robots or People Matter More in 2050?”516 He discussed the

“explosion of robots replacing workers” and asked “whether such new socio-economic problems

can really be solved in our narrow growth-for-profit model.”517 These two events are exciting for

several reasons. First, it shows willingness for established theological schools to accommodate

robots and AI on their research agendas. It also shows very good understanding that such

reasoning must take place not only as part of a regular theological curriculum, but also in

dialogue with people practising congregational ministry. Finally, the themes of these events

show a clear understanding of the links among theology, robots and AI, and daily lived

experience. Concern for workers, for example, stresses that theological consideration of robots and

AI should also consider the impact of these technologies beyond the scope of intellectual

curiosity.

Research projects like this one are the building blocks for developing an entire new area

of interdisciplinary theology. As the scholarship matures, it will be increasingly viable for

theological colleges and seminaries to take on robots and AI in a systematic and meaningful way.

Such efforts will invariably lead to increased capacity to deal with many of the ethical and social

justice concerns raised throughout this thesis. One school is taking these new methodological

challenges seriously and enhancing literacy about robots and AI by ‘adopting’ one of its own.

515. “The Rise of the Post-Human: Where is the Gospel?,” Emmanuel College, accessed February 8, 2016,
http://www.haltonpres.org/wp-content/uploads/2013/03/2013-Emm-Days-Flyer-for-Conferences.pdf.
516. “Will People or Robots Matter More in 2050?,” Regis College, accessed November 18, 2015,
http://www.regiscollege.ca/files/bill_ryan_flyer.jpg.
517. Ibid. See also Justin McCurry, “Japanese company replaces office workers with artificial intelligence,” The
Guardian, January 5, 2017, accessed October 11, 2017, https://www.theguardian.com/technology/2017/jan/05/
japanese-company-replaces-office-workers-artificial-intelligence-ai-fukoku-mutual-life-insurance.

Southern Evangelical Seminary and Bible College (SES) in North Carolina acquired a

humanoid robot in early 2014, relying on financial support from a private donor.518 According to

the SES website, the NAO robot (the same kind used by Heather Knight) is named DAVID for

“digitally advanced virtual intelligence device.” The name is also a nod to Michelangelo’s

sculpture—a representation of strength, youth, and the perfected male form. Representatives

from the school say the robot was purchased to explore the relationship between humans and

robots.519 Such an investment signals a strong commitment to the theological importance of robots and

AI and to robust, hands-on interdisciplinarity. Within the scope of human-robot relations, SES

sees DAVID as useful for exploring questions like: Should robots do our jobs? Should they care

for humans in a hospital or nursing home setting? Will care like this take away the human touch

and ultimately become a violation of ethics on a human level? What are the ethical limits of

using robots?520 Through this novel project, the seminary mirrors many of the methodological

moves made by contemporary roboticists, especially those in the newer generations like Heather

Knight. SES takes robots out of the laboratory and turns to alternative platforms for

disseminating its findings, including its YouTube channel. Such work provides an important

complement to this research project, which in most ways follows a more conventional academic

route.521

518. Michael Schulson, “What robot theology can tell us about ourselves,” Religion Dispatches, September 1,
2014, accessed October 11, 2017, http://religiondispatches.org/automata/.
519. “Why did this seminary purchase a robot?,” Christianity Today, February 2014, accessed November 17,
2015, http://www.christianitytoday.com/le/2014/february-online-only/why-did-this-seminary-purchase-robot.html.
520. “Ethics of Emerging Technologies,” Southern Evangelical Seminary, accessed November 17, 2015,
http://ses.edu/ses.edu/about-us/eet.
521. On the European front, the OPTIC network is doing interesting work in relating Christian theology, ethics,
and tradition with robotics, AI, and related technologies. Their approach is to engage directly with those in
technology industries, making their work interdisciplinary and practical. “Homepage,” OPTIC, accessed January 22,
2018, http://optictechnology.org/index.php/fr.

One of the most significant implications of this dissertation is that it shines light on the

power imbalances at work in robotics and AI research and its application, a consideration that is

otherwise largely absent from the theological treatment of robots and AI. This point is partially

expressed throughout the thesis, but is most fully articulated in Chapter Four in the sections on

contextual awareness and the human. In describing and highlighting the role of sexism, racism,

and other structural injustices in robotics and AI, this project makes it impossible to go forward

in the same uncritical way. Now that there is theological evidence of these concerns, articulated

in theological writing, interdisciplinary theology must take them into the fold. Clearly no

theologian can take up all the concerns raised here, deal with an increasing body of literature and

rapidly changing technologies, and attend to ethical and methodological questions.

Contextuality, however, must be taken up by the interdisciplinary dialogue as a whole and

theologians cannot ignore questions of justice and ethics when they are clearly connected with

their research. This is a deconstructive task. It requires taking apart existing texts on robotics and

AI—both from scientific and theological researchers—and examining them for biases. It

involves challenging myths about the benefits and successes of robots and AI. Finally, it

demands critique from the perspective of contextual and eco-theological traditions. This is

simultaneously a constructive task. It means forming new connections within existing bodies of

research and drawing on existing resources. It challenges theologians to set aside their own

privileges and take the contributions of those marginalized from and by robotics and AI research

much more seriously. Finally, it is advocacy for robotics and AI research that accounts for the

richness, imperfection, and diversity of human experience.

An extension of this is that parachurch organizations, denominations, and church-related

organizations will play a great role in advancing theological reflection on robots and AI, as many

such bodies are already well-equipped to deal with many of the challenges named above.

Drawing on existing advocacy efforts within the church will strengthen the overall theological

response to these technologies and help bring it into the realm of praxis. A few case studies

illustrate how this work is already emerging in some parts of the church. For example, the World

Council of Churches (WCC) is undertaking work at the global level related to the military use of

robotics and AI. The general assembly, at its 2013 meeting in Busan, called for a pre-emptive

ban on autonomous weapons in its statement, “The Way of Just Peace.”522 WCC attention to

autonomous weapons is very important. First, it highlights the truly global impact of robotics and

AI research, especially emphasizing how the resources of a few can affect the lives of many. It

also shows that this is a concern for the whole church, not just its academic institutions and

theological training centres. This engagement shows the important role of ecumenical

organisations and other church bodies in contributing to the overall theological dialogue about

robots and AI. Its work is directed toward existing and developing international standards and

laws, an engagement largely missing from academic theological texts, and toward broader

civil society, non-governmental organizations, and other international bodies. Finally, the

WCC’s work in this area takes up both the theological and ethical dimensions of robotics and

AI, which is an important feature of the approach developed in Chapter Four. For example,

included in its self-proclaimed goals is the desire to “strengthen the moral threshold against

delegating machines to kill people.”523

522. “Statement on the Way of Just Peace,” World Council of Churches, November 8, 2013, accessed January 7,
2016, http://www.oikoumene.org/en/resources/documents/assembly/2013-busan/adopted-documents-statements/the-
way-of-just-peace.
523. “Killer Robots? Moral questions pervade UN conference,” World Council of Churches, April 23, 2015,
accessed November 17, 2015, https://www.oikoumene.org/en/press-centre/news/killer-robots-moral-questions-
pervade-un-conference.

This research project will help develop a Canadian contextual response to robots and AI.

Though the issues are global in scope, the theological, political, and scientific climates of different places and

spaces require nuanced responses. Here in Canada, the United Church, among other mainline

denominations, has not yet considered robots and AI in any great depth, though there are some

hints that robots and AI are starting to infiltrate popular consciousness of church-goers. For

example, a 2015 United Church Observer article discussed the ethics of robotics with a futuristic

spin. It touched on household and military use of robots, and raised questions about whether

these robots could ever be programmed to act ethically.524 Hopefully such small efforts, along

with the more concentrated and in-depth research here, will help launch denominational and

contextual dialogue on this important object of theological inquiry. The silence on robots is part

of a broader reluctance to address distinctly digital aspects of life in the twenty-first century,

including social media, virtual reality, mobile technologies, among others. Even when the ethical

and theological links are clear, Canadian denominational offices and ecumenical organizations

seem unwilling or unable to address these pressing issues directly. For example, mining justice

is a core concern for KAIROS: Canadian Ecumenical Justice Initiatives, a coalition of ten

churches and church organisations, but the ecumenical group has yet to develop a strong critique

of the link between western digital culture and exploitative and destructive mining practices.525

Several of their campaigns even urge supporters to make short videos on their smart phones to

speak up for mining justice, without any acknowledgement of the irony of making such videos

524. Florida pastor Christopher Benek has also written about the problem of robots replacing humans in the
work force. He makes some foundational suggestions for preparing for this inevitability from the church’s
perspective, including working toward fiscal health for individuals, and returning to and strengthening traditional
spiritual practices of the church. See for example, Christopher Benek, “Churches Need to Prepare for When Robots
Take People’s Jobs,” christopherbenek.com, October 19, 2015, accessed November 18, 2015,
http://www.christopherbenek.com/?p=4452.
525. See for example “Open for Justice,” KAIROS Canada, accessed January 7, 2016,
http://www.kairoscanada.org/what-we-do/ecological-justice/mining-justice-open-for-justice/.

on devices built as a result of rapacious mining practices. Such missed opportunities are

especially lamentable given that organizations like KAIROS are already well equipped for

conversations about racism, privilege, heterosexism, ecology, and ecological justice among other

things. Building up a generation of Christian theologians, policy experts, and activists who take

robots and AI seriously, along with other aspects of digital culture, will help churches address

these issues in a more timely and direct manner. Work like that carried out in this research

project is instrumental in this effort.

Limitations

This research project gives much attention to the gaps in existing theological literature on robots

and AI. Naturally, the present work also has gaps of its own that can propel future research or

invite contributions from other theological perspectives. Hopefully any such limitations only

inspire more theological debate about robots and AI, rather than hinder it. This study is limited

by its near-complete reliance on English-language sources emerging from just a few national

contexts. This limits the perspective of the work and access to other social and cultural views on

robots and AI. While this project is mostly concerned with the North American context and its

implications there, other views offer helpful insights and contribute to a more global

understanding of the impact of robotics and AI. The privilege of the author—white, able-bodied,

English speaking, western, educated, middle-class, and so on—limits the scope of the discussion,

especially when it comes to questions of power and privilege. Though rapid improvements in AI

tools like Google Translate help take down some of these barriers, such remedies remain superficial, and

the range of approaches to robotics and AI research represented here is still quite limited.

Adding layers of contextual awareness to theological discourse about robots and AI is

one of the major methodological priorities of this project. This priority was applied well to researchers

emerging from the author’s own context. The next step, however, is to cultivate an ever-expanding

interest in lived, embodied experience that considers cultures and research traditions

other than the author’s own. For example, Iran, Japan, and South Korea are all worldwide leaders

in robotics and AI research, and are truly distinct in their histories and contemporary social

structures, including religious landscapes.526 These differences become even more intriguing for

theologians when they start to mix with the distinctive religious traditions of each of these

places. For example, when Shi’a Islam meets a strong emphasis on engineering and computer

science, inventors build robots that answer the adhan.527 Attention to these different milieus will

help reveal culturally-conditioned biases and, therefore, enhance theological critique of robotics

and AI research from Canada and the United States.

Some limitations of this research project are well beyond the author’s control. Most

importantly, for obvious reasons of competition and national security, much about robots and AI remains unknown. For example, in the United States some robotics and AI research is buried in so-called black budgets (i.e., ones that are not publicly disclosed) or carried out through the covert activities of

the Central Intelligence Agency, and “not officially discussed by [the president] or Congress.”528

526. Some such discourse is emerging in North America. See for example Takeshi Kimura, “Hybridity of
Robotics and Mahayana Buddhism: The Mahayana Buddhist Philosophy of Robot in the Case of Masahiro Mori,”
American Academy of Religion, accessed January 7, 2016, https://www.aarweb.org/sites/default/files/pdfs/
Annual_Meeting/2015/2015AMProgramBookSessions.pdf; Tim Hornyak, “Korean machine-gun robots start DMZ
duty,” July 14, 2010, CNET, accessed January 7, 2016, http://www.cnet.com/news/korean-machine-gun-robots-
start-dmz-duty/.
527. See for example, “Iranian teacher uses hi-tech robot to encourage prayers,” Euronews, February 25, 2014,
accessed July 30, 2017. https://www.youtube.com/watch?v=8rmgRAarN5s.
528. Gordon Clark, “Killer Robots: Attack Drones Take a Heavy Civilian Toll,” Sojourners Magazine 38, no. 8
(2009): 12.

Though researchers can make guesses about the scope of these activities, it is difficult to say

what is going on, when, and for what purposes. A spirit of competitiveness and desire for

increasing profits also build walls of secrecy around robotics and AI research. This limits access

for theologians and opens the way for potentially unhelpful speculation. For example, Rodney Brooks founded Rethink Robotics (formerly Heartland Robotics) and secured US$57 million in funding before announcing a single project. The mystery consumed the robotics community for several years before Rethink Robotics introduced Baxter, a two-armed industrial robot, in 2012.529 Google has also entered the robotics and AI game, along with the secretive culture

that often goes with it. In 2014, the internet technology giant purchased eight established and

lucrative robotics companies, including Boston Dynamics—a leader in securing military

contracts. Given Google’s dominance in AI through its search engine and self-driving cars, the

acquisitions are likely significant.530 In the years since purchasing these robotics companies, Google has not been forthcoming about its goals for its robotics program or how it might dovetail with the company’s existing AI systems and platforms. Though theologians might make

every good effort to follow trends in robotics and AI and understand their implications, it is

impossible to access the full breadth and depth of military and commercial investment in robots

and AI.

A final important limitation of this study is the time-sensitive character of robotics and AI

research. Between the time of submitting the proposal and defending this project, there have

529. Evan Ackerman, “Heartland Robotics Now Rethink Robotics, Still Developing Mystery Robot,” IEEE
Spectrum: Technology, Engineering and Science News, June 19, 2012, accessed January 7, 2016,
http://spectrum.ieee.org/automaton/robotics/industrial-robots/heartland-robotics-now-rethink-robotics-still-
developing-mystery-robot.
530. Conner Forrest, “Google and robots: The real reasons behind the shopping spree,” Tech Republic, March
5, 2014, accessed January 7, 2016, http://www.techrepublic.com/article/google-and-robots-the-real-reasons-behind-
the-shopping-spree/.

been significant changes both in robotics and AI research and in how theologians engage with it. Just five years ago there was scant theological, and even popular, literature about robots and AI. Today, in contrast, such coverage is inescapable, with outlets ranging from Time and Fast Company to The Guardian and The New York Times devoting regular attention to humanoid robots, AI, and related ethical and social issues. This proliferation makes it increasingly

difficult to keep current with advances in this area while writing a sustained research project with

a longer-term view. The growing and broadening interest in robots and AI makes it increasingly

difficult for any theologian to carry out broad surveys like the one found here and in the work of scholars like DeBaets or Foerst.

Directions for Future Research

Future theological research about robots and AI should build upon both the key findings of this

project and its limitations. This includes dovetailing insights and questions from Christian ethics

with the broader theological and methodological questions that have been discussed throughout

the thesis. The impact of robots and AI in the world requires such cross-disciplinary pollination,

especially according to the principles of contemporary contextual theology and

postfoundationalist interdisciplinarity. Ultimately this means theologians must continually

acknowledge and entertain debates about robot ‘behaviour’ and the application of related

technologies in the world. This turn to the theological-ethical dimension of robotics and AI

research means that future research should increasingly take into account the public life of these

technologies. This means that it is not the purview of the academy alone to shape Christian

responses to robots and AI, but also the churches collectively. Theologians should then road-test their research and develop it through unconventional means. The example of SES discussed above

may very well be a harbinger of a new way of doing theology that finally meets roboticists and

AI researchers on their own terms. Researchers like Heather Knight will very much serve as

inspiration as theologians work to develop research methods that deal with the complexity of

these fascinating and fast-changing objects of science and technology. Her work in social media, internet technologies, and popular culture, and her playful approach, challenge theologians to move in new directions. These boundary-stretching means must be pursued if theologians want to develop a

truly interdisciplinary approach to robots and AI.

Ultimately, future research will take up the approach developed in Chapter Four, critique it, and expand upon it. Such research can take any number of paths, but a few emerge as especially

pressing. First is the challenge to deepen contextual awareness throughout all theological

reflection on robots and AI. This means addressing the privilege and biases one brings to a

research project, and being mindful of how this affects its trajectory and eventual outcomes.

Factors such as race, class, gender, and so on must increasingly come to the fore of thinking

about robots and how they are located in different historical places and spaces. Even though this

area of interdisciplinary theology is relatively new, spanning barely a generation, theologians

have missed opportunities to bring hard-won insights from contextual and ecological theologies

to their thinking about robotics and AI. Essential to these efforts is ongoing work in the area of

embodiment and what this means in light of increasingly humanlike robots and AI. Much more

work needs to be done in drawing on the experiences of those who face marginalization or

oppression because of race, gender, sexual orientation, disability, and class. A much richer sense

of embodiment will both improve this interdisciplinary work and more faithfully speak to the

body of Christ. Second is to continue the course developed in the section on the human and bring

to it new insights from ecotheology and other sources. Classic texts in this area, notably from

thinkers like Thomas Berry and Sallie McFague, oriented the discourse toward a holistic,

relational, and cosmocentric view of the human. This helped counter many of the problematic

assumptions about the human laced throughout the preceding chapters, but could be developed

further using emerging scholars and their contributions to ecotheology. Third, future research in

this area will involve the simple yet growing task of monitoring the application of robots and AI

in biomedical, industrial, and military settings. Theologians must continue to push deeper into

these fields, turning to publications and platforms endemic to roboticists and AI researchers, and

remain committed to increasing their own literacy in this area. Only through such a commitment to understanding their interlocutors can theologians develop an interdisciplinary response suitable for their own communities. This can take any number of forms, including direct contact with local roboticists, keeping up with popular writing about

robots and AI as much as possible, and developing familiarity with some of the best internet-

based sources, like the Institute of Electrical and Electronics Engineers website and social media

presence.531

Conclusion

At the end of this project a new theological landscape comes into sight. It is one that draws on

the good work of researchers like Anne Foerst, Noreen Herzfeld, and other prescient theologians

who understood the significance of robots and AI long before these technologies entered the theological mainstream. This dissertation reimagined their collective work and organized it in a way that propels interdisciplinary dialogue forward and strengthens the foundation for future research projects.

This process shed new light on existing research, illuminating its gaps and shortcomings, which

will only fuel more creative work in this area. From these early efforts emerges a new way of

531. “Homepage,” IEEE: Advancing Technology for Humanity, accessed January 22, 2018,
https://www.ieee.org/index.html.

thinking about robots and AI. This novel approach constructively draws on existing theological

tradition to understand what it means to be human in an age of increasingly humanlike robots

and AI. It breaks down unhelpful assumptions about the human carried throughout theological

literature in the area, and replaces them with insights from ecotheology and its strong tradition of

holistic and relational anthropology grounded in cosmogenesis. Such an approach also draws on

insights from contextual and ecological theology to develop a more nuanced interdisciplinary

approach to robots and AI, one that is committed to social justice. Hopefully, these efforts will

pave a theological way forward filled with creative and liberating responses to the increasing

presence of robots and AI among us.

For now, we are in the middle of a great theological and pastoral task. Theologians must

catch up with advances in robotics and AI research, and hold in tension both resisting and

embracing these new technologies. They must simultaneously critique these technologies

according to principles emerging from Christian theology and ethics, and seek ways that they can

contribute to the building up of the peaceful and just Reign of God. Increasingly humanlike

robots and AI call the churches into a new process of spiritual discernment. As Anne Foerst

noted, robots are stories that we tell about ourselves. What remains, then, is the difficult task of

judging well what kinds of stories we should seek out. It is clear that robots and AI risk simply repeating the human narratives of conquest, empire, oppression, and persecution, and that the stories they tell will then be nothing new and nothing worth hoping for. Theologians must

answer the call to write new stories about robots and AI, to push researchers and public dialogue

toward justice and liberation, and to never lose sight of Gospel imperatives. With this at the core

of interdisciplinary theology in this area, robots and AI can instead become stories about serving

the marginalized, surrendering power, strengthening communities, and advancing the aims of

justice so clearly articulated throughout the Christian tradition.


Bibliography
Ackerman, Evan. “Heartland Robotics Now Rethink Robotics, Still Developing Mystery
Robot.” IEEE Spectrum: Technology, Engineering and Science News, June 19, 2012.
Accessed January 7, 2016. http://spectrum.ieee.org/automaton/robotics/industrial-
robots/heartland-robotics-now-rethink-robotics-still-developing-mystery-robot.

Adams, B., Cynthia Breazeal, Rodney A. Brooks, and Brian Scassellati. “Humanoid Robots: A New Kind of Tool.” IEEE Intelligent Systems and Their Applications: Special Issue on Humanoid Robotics 15, no. 4 (July/August 2000): 25-31.

Advocates for International Development. “A Short Guide to the Arms Trade Treaty.”
Accessed January 7, 2016. http://www.a4id.org/sites/default/files/user/
Guide%20to%20Arms%20Trade%20Treaty.pdf.
Agawu, Emefa Addo and P. W. Singer. “Robocops are here: It’s time to create rules for how
police can use them.” Vox, September 6, 2016. Accessed November 18, 2016.
http://www.vox.com/2016/8/9/12412080/robocops-police-rules-engagement-bomb-
disposal-dallas.

Ahmad, A., Z. F. Ahmad, J. D. Carleton, and A. Agarwala. “Robotic surgery: current perceptions and the clinical evidence.” Surgical Endoscopy 31, no. 1 (January 2017): 255-263.

Amazon. “Amazon Prime Air.” Accessed November 10, 2016. https://www.amazon.com/b?node=8037720011.
________. “Economic Impact.” Accessed November 10, 2016.
https://www.amazon.com/p/feature/nsog9ct4onemec9.
Anderson, Michael L. “Why Is AI So Scary?” Artificial Intelligence 169, no. 2 (2005): 201-208.

Asimov, Isaac. I, Robot. London: Dobson, 1950.

Asimov, Isaac, and Karen A. Frenkel. Robots: Machines in Man’s Image. New York: Harmony
Books, 1985.

Atkinson, Robert D. and John Wu, “False Alarmism: Technological Disruption and the U.S.
Labor Market, 1850-2015.” ITIF: Information Technology and Innovation Foundation,
May 8, 2017. Accessed January 23, 2018. https://itif.org/publications/2017/05/ 08/false-
alarmism-technological-disruption-and-us-labor-market-1850-2015.

Baillie, Jean-Christophe. “Why AlphaGo is not AI.” IEEE Spectrum: Technology, Engineering, and Science News, March 17, 2016. Accessed July 31, 2017. http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/why-alphago-is-not-ai.

Bainbridge, William Sims. God from the Machine: Artificial Intelligence Models of Religious
Cognition. Lanham, MD: AltaMira Press, 2006.


Barbour, Ian G. “Neuroscience, Artificial Intelligence, and Human Nature: Theological and
Philosophical Reflections.” Zygon 34, no. 3 (1999): 361-398.

Bar-Cohen, Yoseph, and Cynthia Breazeal. Biologically Inspired Intelligent Robots. Bellingham,
WA: SPIE Press, 2003.

Barker, Sean. “Edmund Furse’s ‘The Theology of Robots’.” New Blackfriars 68, no. 801 (1987):
41-43.

Barnard, David T. “God in the Machine: What Robots Teach Us About Humanity and God.”
Perspectives on Science and Christian Faith 58, no. 4 (2006): 325-22.

Barnes, Elizabeth B. “Gattaca and A.I.: Artificial Intelligence: Views of Salvation in an Age of
Genetic Engineering.” Review & Expositor 99, no. 1 (2002): 59-70.

Barnes, Michael, and Florian Jetsch, eds. Human-Robot Interactions in Future Military
Operations. Burlington, VT: Ashgate, 2012.

Barr, Alistair. “Google Mistakenly Tags Black People as ‘Gorillas,’ Showing Limits of Algorithms.” The Wall Street Journal, July 1, 2015. Accessed August 1, 2017. https://blogs.wsj.com/digits/2015/07/01/google-mistakenly-tags-black-people-as-gorillas-showing-limits-of-algorithms/.

Barth, Karl. Church Dogmatics III/2, trans. J. W. Edwards, O. Bussey, and Harold
Knight. Edinburgh: T. and T. Clark, 1958.

Bateman, Jessica. “Sexist robots can be stopped by women who work in AI.” The Guardian,
May 29, 2017. Accessed October 28, 2018. https://www.theguardian.com/careers/2017/
may/29/sexist-robots-can-be-stopped-by-women-who-work-in-ai.

BBC News. “Singles Day: Alibaba posts record sales,” BBC News: Asia, November 11, 2016.
Accessed November 11, 2016. http://www.bbc.com/news/37946470.
Benek, Christopher. “Churches Need to Prepare for When Robots Take People’s Jobs.”
christopherbenek.com, October 19, 2015. Accessed November 18, 2015.
http://www.christopherbenek.com/?p=4452.

Berry, Thomas. The Christian Future and the Fate of Earth, edited by Mary Evelyn Tucker and
John Grim. Maryknoll, NY: Orbis Books, 2009.

________. “Christianity in an Emerging Universe.” In Light Burdens, Heavy Blessings: Challenges of Church and Culture in the Post Vatican II Era, edited by Mary Heather MacKinnon SSND, Moni McIntyre, and Mary Ellen Sheehan IHM, 361-369. Quincy, IL: Franciscan Press, 2000.

________. “Christianity’s Role in the Earth Project.” In Christianity and Ecology: Seeking the Well-Being of Earth and Humans, edited by Dieter T. Hessel and Rosemary Radford Ruether, 127-134. Cambridge, MA: Harvard University Press, 2000.

________. “Classical Western Spirituality and the American Experience.” Cross Currents 31,
no. 4 (Winter 1981-1982): 388-399.

________. “Contemporary Spirituality: The Journey of the Human Community.” Cross Currents 24, no. 2-3 (Summer-Fall 1974): 172-183.

________. The Dream of the Earth. San Francisco: Sierra Club Books, 1988.

________. “Ecological Geography.” In Worldviews and Ecology: Religion, Philosophy, and the
Environment, edited by Mary Evelyn Tucker and John Grim, 228-237. Maryknoll, NY:
Orbis Books, 1994.

________. The Great Work: Our Way into the Future. New York: Bell Tower, 1999.

________. “Our Future on Earth: Where Do We Go From Here?” In Thomas Berry and the New Cosmology, edited by Anne Lonergan and Caroline Richards, 103-106. Mystic, CT: Twenty-Third Publications, 1987.

________. The Sacred Universe: Earth, Spirituality, and Religion in the Twenty-First Century,
edited by Mary Evelyn Tucker. New York: Columbia University Press, 2009.

________. “The Spirit of the Earth.” In Liberating Life: Contemporary Approaches to Ecological Theology, edited by William Birch, William Eakin, and Jay B. McDaniel, 151-158. Maryknoll, NY: Orbis Books, 1990.

________. “The Story and the Dream: The Next Stage in the Evolutionary Epic.” In The Epic of Evolution: Science and Religion in Dialogue, edited by James B. Miller, 209-217. Upper Saddle River, NJ: Prentice-Hall, 2004.

________. Thomas Berry: Selected Writings on the Earth Community, edited by Mary Evelyn
Tucker and John Grim. Modern Spiritual Masters Series. Maryknoll, NY: Orbis Books,
2014.

________. “The Universe Story: Its Religious Significance.” In The Greening of America: God, Environment and the Good Life, edited by John E. Carroll and Paul Brockelman, 208-218. Hanover: University Press of New England, 1997.

________. “The Universe, the University, and the Ecozoic Age.” In Doors of Understanding:
Conversations in Global Spirituality in Honor of Ewert Cousins, edited by Steven Chase,
79-96. Quincy, IL: Franciscan Press, 1997.

Berry, Thomas, and Thomas Clark, Befriending the Earth: A Theology of Reconciliation
Between Humans and the Earth. Mystic, CT: Twenty-Third Publications, 1991.

Berry, Thomas, and Brian Swimme. The Universe Story: From the Primordial Flaring Forth to
the Ecozoic Era, a Celebration of the Unfolding of the Cosmos. San Francisco:
HarperSanFrancisco, 1992.

Berry, Wendell. Life is a Miracle. Washington, DC: Counterpoint, 2000.

Bhasin, Kim and Patrick Clark. “How Amazon Triggered a Robot Arms Race.” Bloomberg, June
29, 2016. Accessed November 10, 2016. https://www.bloomberg.com/news/articles/
2016-06-29/how-amazon-triggered-a-robot-arms-race.

Bieler, Des. “Chess Champion Refuses to Defend Titles in Saudi Arabia to Protest
Treatment of Women.” The Washington Post, December 28, 2017. Accessed March 15,
2018. https://www.washingtonpost.com/news/early-lead/wp/2017/12/28/chess-champion-
refuses-to-defend-titles-in-saudi-arabia-to-protest-treatment-of-women/.

Birrell, Ian. “3D-printed prosthetic limbs: the next revolution in medicine.” The Guardian,
February 19, 2017. Accessed November 24, 2017. https://www.theguardian.com/
technology/2017/feb/19/3d-printed-prosthetic-limbs-revolution-in-medicine.

Bjork, Russell C. “Artificial Intelligence and the Soul.” Perspectives on Science and Christian
Faith 60, no. 2 (2008): 95-102.

Boden, Margaret A. “Artificial Intelligence and Human Dignity.” In Nature’s Imagination: The
Frontiers of Scientific Vision, edited by J. Cornwell, 148-160. Oxford: Oxford University
Press, 1995.

________. “Ethical Issues of AI and Biotechnology.” In Creative Creatures: Values and Ethical
Issues in Theology, Science and Technology, edited by Ulf Görman, Willem Drees, and
Hubert Meisinger. 123-133. London: T&T Clark, 2005.

________. “Wonder and Understanding.” Zygon 20, no. 4 (1985): 391-400.

Boyle, Kristy, “Homepage,” Karakuri Info. Last modified January 14, 2008. Accessed April
5, 2018. http://www.karakuri.info/.

Breazeal, Cynthia L. Designing Social Robots. Cambridge, MA: MIT Press, 2002.

________. “Emotion and Sociable Humanoid Robots.” International Journal of Human-Computer Studies 59 (2003): 119-155.

________. “Social Interactions in HRI: The Robot View.” IEEE Transactions on Systems, Man, and Cybernetics, Part C 32, no. 2 (2003): 181-186.

Breazeal, Cynthia, Paul L. Harris, David DeSteno, Jacqueline M. Kory Westlund, Leah
Dickens, and Sooyeon Jeong. “Young Children Treat Robots as Informants.” Topics in
Cognitive Science 8, no. 2 (March 4, 2016): 481-491.

Brock, Brian. Christian Ethics in a Technological Age. Grand Rapids, MI: Wm. B. Eerdmans,
2010.

Brooking, Emerson T. and P. W. Singer. “War Goes Viral: How social media is being
weaponized across the world.” The Atlantic, November 2016. Accessed December 16,
2016. http://www.theatlantic.com/magazine/archive/2016/11/war- goes-viral/501125/.

Brooks, Rodney Allen. Cambrian Intelligence. Cambridge, MA: MIT Press, 1999.

________. “The Cog Project.” Journal of the Robotics Society of Japan 15, no. 7 (October
1997): 968-970.

________. “From Earwigs to Humans.” Robotics and Autonomous Systems 20, nos. 2-4 (June 1997): 291-304.

________. “Elephants Don’t Play Chess.” Robotics and Autonomous Systems 6 (1990): 3-15.

________. Flesh and Machines. New York, NY: Pantheon Books, 2002.

________. “Integrated Systems Based on Behaviors.” SIGART Bulletin 2, no. 4 (August 1991):
46-50.

________. “Intelligence Without Representation.” Artificial Intelligence Journal 47 (1991): 139-159.

________. “Model-Based Three Dimensional Interpretations of Two Dimensional Images.” IEEE Pattern Analysis and Machine Intelligence (March 1983): 140-150.

________. “New Approaches to Robotics.” Science 253 (September 1991): 1227-1232.

________. “Planning Collision Free Motions for Pick and Place Operations.” International
Journal of Robotics Research 2, no. 4 (December 1983): 19-44.

________. “A Robot that Walks; Emergent Behavior from a Carefully Evolved Network.”
Neural Computation 1, no. 2 (Summer 1989): 253-262.

Brooks, R. A. and A. M. Flynn. “Fast, Cheap and Out of Control: A Robot Invasion of the Solar
System.” Journal of the British Interplanetary Society (October 1989): 478-485.

Brooks, Rodney Allen, and P. Maes, eds. Artificial Life IV: Proceedings of the Fourth
International Workshop on the Synthesis and Simulation of Living Systems.
Cambridge, MA: MIT Press, 1994.

Brooks, Rodney Allen, and Rosalind W. Picard. “Living Machines: Can Robots Become
Human?” In Place for Truth, 195-215. Downers Grove, IL: IVP Books, 2010.

Brooks, Rodney Allen and L.A. Stein. “Building Brains for Bodies.” Autonomous Robots 1, no.
1 (November 1994): 7-25.

Brooks, Rodney Allen, L. Aryananda, A. Edsinger, P. Fitzpatrick, C. Kemp, U.-M. O’Reilly, E. Torres-Jara, P. Varshavskaya, and J. Weber. “Sensing and Manipulating Built-For-Human Environments.” International Journal of Humanoid Robotics 1, no. 1 (March 2004): 1-28.

Brooks, Rodney Allen, Cynthia Breazeal, Matthew Marjanovic, Brian Scassellati, and
Matthew Williamson. “The Cog Project: Building a Humanoid Robot.” In Computation
for Metaphors, Analogy, and Agents, edited by C. Nehaniv, 52-87. New York: Springer,
1999.

Brown, Warren S., Nancey Murphy, and H. Newton Malony, eds. Whatever Happened to the
Soul?: Scientific and Theological Portraits of Human Nature. Minneapolis, MN: Fortress
Press, 1998.

Buber, Martin. I and Thou. Translated by Walter Kaufmann. New York: Charles Scribner’s
Sons, 1970.

Butler, Samuel. Erewhon: or, Over the Range. London: Trübner & Co., 1872.

Calo, Christopher James, Nicholas Hunt-Bull, Lundy Lewis, and Ted Metzler. “Ethical
Implications of Using the Paro Robot, with a Focus on Dementia Patient Care.”
Workshop, Association for the Advancement of Artificial Intelligence, San Francisco,
2011.
Campaign to Stop Killer Robots. “About Us.” Accessed November 17, 2015.
http://www.stopkillerrobots.org/about-us.
________. “The Solution.” Accessed January 7, 2016. http://www.stopkillerrobots.org/the-
solution.
Čapek, Karel. R.U.R. (Rossum’s Universal Robots) and War with the Newts. London:
Gollancz, 2001.

Carnegie Mellon University. “Robotic Reality.” Accessed March 9, 2013. http://www.cmu.edu/homepage/computing/2013/winter/robotic-reality.shtml.
Chattaway, Peter T. “I, Robot: Despite Steven Spielberg’s Reputation for Producing Warm
Fuzzies, A.I. Is Bleak.” Christianity Today 45, no. 10 (2001): 67-68.

Choi, Charles, Q. “Brain Scans Show Humans Feel for Robots: A Show of Affection or
Violence toward Either Robots or Humans Causes Similar Changes in the Brain.” IEEE
Spectrum: Technology, Engineering, and Science News, April 24, 2013. Accessed April
24, 2013. http://spectrum.ieee.org/robotics/artificial-intelligence/brain-scans-show-
humans-feel-for-robots.

Christian, Brian. The Most Human-Human: What Talking with Computers Teaches Us About
What It Means to Be Alive. New York: Doubleday, 2011.

Christianity Today. “Why did this seminary purchase a robot?” Christianity Today: News
& Reporting, February 2014. Accessed November 17, 2015.
http://www.christianitytoday.com/ le/2014/february-online-only/why-did-this-
seminary-purchase-robot.html.
Chung, Emily. “Wolves on B.C.’s islands, mainland genetically different.” CBC News, June 10,
2014. Accessed July 31, 2017. http://www.cbc.ca/news/technology/wolves-on-b-c-s-
islands-mainland-genetically-different-1.2669964.

Cision PR Newswire. “‘Cyber Monday’ Quickly Becoming one of the Biggest Shopping Days
of the Year.” Cision PR Newswire News Releases, November 21, 2005. Accessed
November 11, 2016. http://www.prnewswire.com/news-releases/cyber-monday-quickly-
becoming-one-of-the-biggest-online-shopping-days-of-the-year-55695037.html.
Clark, Gordon S. “Killer Robots: Attack Drones Take a Heavy Civilian Toll.” Sojourners
Magazine 38, no. 8 (2009): 12-22.

Clayton, Philip. “Emergence from Physics to Theology: Toward a Panoramic View.” Zygon 41,
no. 3 (2006): 675-687.

Clough, William R. “Natural Intelligence and Intelligible Design: Toward Harmonious Integration.” Journal of Interdisciplinary Studies 22, no. 1-2 (2010): 134-154.

Clynes, Manfred, and Nathan Kline. “Cyborgs and Space.” Astronautics (September 1960): 26-7.

Cohen, John. Human Robots in Myth and Science. London: Allen & Unwin, Ltd., 1966.

CNET News. “Meet the robots making Amazon even faster (video report).” CNET News,
posted November 20, 2014. Accessed November 11, 2016. https://www.youtube.com/watch?v=UtBa9yVZBJM.
Crenshaw, Kimberlé, “Demarginalizing the Intersection of Race and Sex: A Black
feminist critique of antidiscrimination doctrine, feminist theory and antiracist
politics.” University of Chicago Legal Forum 1989, no. 1 (1989): 139-167.
Criddle, Cristina. “Amazon Echo’s Alexa is yet another virtual assistant reinforcing sexist
stereotypes.” The Telegraph, September 19, 2016. http://www.telegraph.co.uk/
technology/2016/09/19/amazon-echos-alexa-is-yet- another-virtual-assistant-reinforcing/.

comScore. “Cyber Monday Surpasses $3 Billion in Total Digital Sales to Rank as Heaviest Online Spending Day in History.” comScore Press Releases, December 2, 2015. Accessed November 11, 2016. https://www.comscore.com/Insights/Press-Releases/2015/12/Cyber-Monday-Surpasses-3-Billion-in-Total-Digital-Sales-to-Rank-as-Heaviest-US-Online-Spending-Day-in-History.
Cosgrove, L. and S. Krimsky. “A Comparison of DSM-IV and DSM-5 Panel Members’
Financial Associations with Industry: A Pernicious Problem Persists.” PLoS
Medicine 9, no. 3. Accessed August 14, 2017. https://doi.org/10.1371/
journal.pmed.1001190.

Cutruzzula, Kara. “Engineering Community With Social Roboticist Heather Knight.” Magenta, June 14, 2017. Accessed March 19, 2018. https://magenta.as/engineering-community-with-social-roboticist-heather-knight-725e21a1c107.
Cyborg Cabaret. “Cyborg Cabaret: Passion, Terror & Interdependence.” Accessed March
9, 2013. http://cyborgcabaret.org/program.html.
Danielson, Peter. Artificial Morality: Virtuous Robots for Virtual Games. New York: Routledge,
1992.

Davies, Caroline. “Enigma Codebreaker Alan Turing Receives Royal Pardon.” The
Guardian, December 24, 2013. Accessed December 13, 2016.
https://www.theguardian.com/science/2013/dec/24/ enigma-codebreaker-alan-turing-
royal-pardon.

da Vinci Surgery. “da Vinci Surgery: Minimally Invasive Surgery.” Accessed October 11, 2013.
http://www.davincisurgery.com/.
DeBaets, Amy Michelle. “The Robot as Person: Robotic Futurism and A Theology of Human
Ethical Responsibility Among Humanoid Machines.” Ph.D. diss., Emory University,
2010. Accessed March 19, 2018. https://legacy-etd.library.emory.edu/view/record/
pid/emory:bp4jb.
Debusschere, Barbara and Sara Vanderkerkhove. “Op komst: seksrobots en heel veel vragen.” DeMorgen, December 23, 2016. Accessed October 11, 2017. https://www.demorgen.be/wetenschap/op-komst-seksrobots-en-heel-veel-vragen-b08d93b0/.

DeHaan, Robert F. “Robotics: Darwinism, Intelligent Design, and Genesis.” Perspectives on Science and Christian Faith 52, no. 4 (2000): 231-232.

DeLashmutt, Michael W. “A Better Life through Information Technology? The Techno-Theological Eschatology of Posthuman Speculative Science.” Zygon 41, no. 2 (2006): 267-287.

Drozdek, Adam. “God from the Machine: Artificial Intelligence Models of Religious Cognition.”
Perspectives on Science and Christian Faith 59, no. 1 (2007): 81-82.

Durbin, William A., Jr. “Ramifications of Artificial Intelligence.” Christianity Today 30, no. 6
(1986): 48-49.

Eaton, Heather, ed. The Intellectual Journey of Thomas Berry: Imagining the
Earth Community. Lanham, MD: Lexington Books, 2015.

Einstein, Albert. “Religion and Science,” in The World as I See It. Translated by Alan Harris.
San Diego, CA: The Book Tree, 2007.

Ellul, Jacques. “The Ethics of Nonpower.” In Ethics in an Age of Pervasive Technology, 204-
212. Boulder, CO: Westview Press, 1980.

________. The New Demons. Translated by C. Edward Hopkin. New York: The Seabury Press,
1973.

________. The Technological Bluff. Grand Rapids, MI: Eerdmans, 1990.

________. “Technology and the Gospel.” International Review of Mission 66, no. 262 (1977):
109-117.

________. The Technological Society. Translated by John Wilkinson. New York: Vintage
Books, 1964.

________. “A Theological Reflection on Nuclear Developments: The Limits of Science,
Technology, and Power.” In Waging Peace, 114-120. San Francisco: Harper & Row,
1982.

________. The Technological System. Translated by Joachim Neugroschel. New York:
Continuum, 1980.

Emerson, Allen, and Cheryl Forbes. “Living in a World with Thinking Machines: Intelligence
Will Not Separate People from Machines.” Christianity Today 28, no. 2 (1984): 14-18.

Ekso Bionics. “Homepage.” Accessed August 8, 2015. http://intl.eksobionics.com/.


Emmanuel College. “The Rise of the Post-Human: Where is the Gospel?” Accessed
February 8, 2016. http://www.haltonpres.org/wp-content/uploads/2013/03/2013-Emm-
Days-Flyer-for-Conferences.pdf.
Euronews. “Iranian teacher uses hi-tech robot to encourage prayers.” Euronews YouTube,
February 25, 2014. Accessed July 30, 2017. https://www.youtube.com/
watch?v=8rmgRAarN5s.
Ford, Martin. The Rise of Robots: Technology and the Threat of a Jobless Future. New York:
Basic Books, 2015.
Flynn, A. M., R. A. Brooks, W. M. Wells III, and D. S. Barrett. “Intelligence for Miniature
Robots.” International Journal of Sensors and Actuators 20 (December 1989): 187-196.

Foerst, Anne. “Artificial Intelligence: Walking the Boundary.” Zygon 31, no. 4 (1996): 681-693.

________. “Artificial Sociability: From Embodied AI toward New Understandings of
Personhood.” Technology in Society 21, no. 4 (1999): 373-386.

________. “Cog, a Humanoid Robot, and the Question of the Image of God.” Zygon 33, no. 1
(1998): 91-111.

________. “Commander Data: A Candidate for Harvard Divinity School?” In Religion in a
Secular City, 263-281. Harrisburg, PA: Trinity Press Intl, 2001.

________. “Embodied AI, Creation, and Cog.” Zygon 33, no. 3 (1998): 455-461.

________. God in the Machine: What Robots Teach Us About Humanity and God. New York:
Dutton, 2004.

Foerst, Anne, and Harvey Cox. “Religion and Technology: A New Phase.” Bulletin of Science,
Technology & Society 17, no. 2-3 (1997): 53-60.

Foerst, Anne, and Rodney Lawrence Petersen. “Identity, Formation, Dignity: The Impacts of
Artificial Intelligence Upon Jewish and Christian Understandings of Personhood.” In
Theological Literacy for the Twenty-First Century, 68-92. Grand Rapids, MI: Eerdmans,
2002.

Forrest, Conner. “Google and robots: The real reasons behind the shopping spree.” Tech
Republic, March 5, 2014. Accessed January 7, 2016. http://www.techrepublic.com/
article/google-and-robots-the-real-reasons-behind-the-shopping-spree/.

Fraser, Giles. “A Computer has Passed the Turing Test for Humanity – Should we be
worried?” The Guardian, June 13, 2014. Accessed August 21, 2014.
http://www.theguardian.com/commentisfree/belief/2014/jun/13/computer-turing-test-
humanity.

Furse, Edmund. “The Theology of Robots.” New Blackfriars 67, no. 795 (1986): 377-386.

Garcia, Robert K. “Artificial Intelligence and Personhood.” In Cutting-Edge Bioethics: A
Christian Exploration of Technology and Trends, edited by John Frederic Kilner, C.
Christopher Hook, and Diane B. Uustal, 39-51. Grand Rapids, MI: Eerdmans, 2002.

Gensler, Lauren. “The World’s Largest Retailers 2016: Walmart Dominates But Amazon Is
Catching Up.” Forbes, May 27, 2016. Accessed November 10, 2016.
http://www.forbes.com/sites/laurengensler/2016/05/27/global-2000-worlds-largest-
retailers/#21b1c76329a9.

Geraci, Robert M. “Apocalyptic AI: Religion and the Promise of Artificial Intelligence.” Journal
of the American Academy of Religion 76, no. 1 (2008): 138-166.

________. Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence and Virtual
Reality. Oxford: Oxford University Press, 2010.
________. “Cultural Prestige: Popular Science Robotics as Religion-Science Hybrid.” In
Reconfigurations, 43-58. Vienna: Lit Verlag, 2008.

________. “The Popular Appeal of Apocalyptic AI.” Zygon 45, no. 4 (2010): 1003-1020.

________. “Robots and the Sacred in Science and Science Fiction: Theological Implications of
Artificial Intelligence.” Zygon 42, no. 4 (2007): 961-980.

________. “Spiritual Robots: Religion and Our Scientific View of the Natural World.” Theology
and Science 4, no. 3 (2006): 229-246.

Gerhart, Mary. “Cog Is to Us as We Are to God: A Response to Anne Foerst.” Zygon 33, no. 2
(1998): 262-269.

Gilkes, Cheryl Townsend. “Womanist Ideas and the Sociological Imagination.” Feminist Studies
in Religion 8, no. 2 (Fall 1992): 147-151.

Gordon, G., C. Breazeal, and S. Engel. “Can Children Catch Curiosity from a Social Robot?” In
Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot
Interaction, 91-98. New York: Association for Computing Machinery, 2015.

Görman, Ulf, Willem B. Drees, and Hubert Meisinger, eds. Creative Creatures: Values and
Ethical Issues in Theology, Science and Technology. New York: T & T Clark
International, 2005.

Green, Erin. “A Primer in Interdisciplinarity: J. Wentzel van Huyssteen and the Post-
Foundational Approach.” Toronto Journal of Theology 27, no. 1 (2011): 27-36.

________. “J. Wentzel van Huyssteen and Interdisciplinarity: Is the Postfoundationalist
Approach to Rationality an Intellectually Viable Way of Relating Theology and the
Natural Sciences?” Master’s thesis, University of St. Michael’s College, 2007.

Greenfield, Rebecca. “How ‘Star Wars’ Influenced Jibo, The First Robot for Families.” Fast
Company, July 7, 2014. Accessed October 22, 2017. https://www.fastcompany.com/
3033167/how-star-wars-influenced-jibo-the-first-robot-for-families.

Greenman, Jeffrey P., Noah Toly, and Read Mercer Schuchardt. Understanding Jacques Ellul.
Eugene, OR: Cascade Books, 2012.

Grémillet, D., W. Puech, V. Garçon, T. Boulinier, and Y. Maho. “Robots in Ecology:
Welcome to the machine.” Open Journal of Ecology 2 (2012): 49-57.

Grinbaum, Alexei. “The Nanotechnological Golem.” NanoEthics 4, no. 3 (2010): 191-198.

Hanson, Gayle. “The Violent World of Video Games.” Insight on the News (June 28, 1999): 15.
Hall, W. David. “Does Creation Equal Nature? Confronting the Christian Confusion About
Ecology and Cosmology.” Journal of the American Academy of Religion 73, no. 3
(September 2005): 781-812.

Harding, Sandra, ed. The Postcolonial Science and Technology Reader. Durham, NC: Duke
University Press, 2011.

Harvard University. “Programmable Robot Swarms.” Wyss Institute. Accessed August 3, 2017.
https://wyss.harvard.edu/technology/programmable-robot-swarms/.
Harvey, Adam and Heather Knight. “Anti-Paparazzi Fashion.” Conference paper, Proceedings of
the International Symposium on Wearable Computing (ISWC ’09), Linz, Austria,
September 2009. http://www.marilynmonrobot.com/wp-content/uploads/2009/05/
iswc_2009_antipap.pdf.

Haraway, Donna J. “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the
Late Twentieth Century.” In Simians, Cyborgs and Women: The Reinvention of Nature,
149-181. New York: Routledge, 1991.

Hassler, Susan. “Marvin Minsky’s Legacy of Students and Ideas.” IEEE Spectrum:
Technology, Engineering, and Science News, February 18, 2016. Accessed
February 18, 2016. http://spectrum.ieee.org/computing/software/marvin-minskys-legacy-
of-students-and-ideas.

Haught, John F. “In Search of a God for Evolution: Paul Tillich and Pierre Teilhard de Chardin.”
Zygon 37, no. 3 (2002): 539-554.

Hawkins, Derek. “Researchers use facial recognition tools to predict sexual orientation. LGBT
groups aren’t happy.” The Washington Post, September 12, 2017. Accessed October 28,
2018. https://www.washingtonpost.com/news/morning-mix/wp/2017/09/12/researchers-
use-facial-recognition-tools-to-predict-sexuality-lgbt-groups-arent-
happy/?utm_term=.f6112f62e45d.

Hefner, Philip J. “The Evolution of the Created Co-Creator.” Currents in Theology and Mission
15, no. 6 (1988): 512-525.

________. The Human Factor: Evolution, Culture, and Religion. Minneapolis, MN: Fortress
Press, 1993.

________. “Technology and Human Becoming.” Zygon 37, no. 3 (2002): 655-665.

________. Technology and Human Becoming. Minneapolis, MN: Fortress Press, 2003.

Hehn, Johannes. “Zum Terminus ‘Bild Gottes’” [“On the Term ‘Image of God’”]. In Festschrift
Eduard Sachau zum siebzigsten Geburtstag, edited by Gotthold Weil and Eduard Sachau,
36-52. Berlin: G. Reimer, 1915.
Heikkila, Andrew. “Artificial intelligence and racism.” Tech Crunch, April 15, 2016.
Accessed October 11, 2017. https://techcrunch.com/2016/04/15/artificial-intelligence-
and-racist/.

Helmreich, Stefan. Silicon Second Nature: Culturing Artificial Life in a Digital World.
Berkeley: University of California Press, 1998.

Heltzel, Peter. “Cog and the Creativity of God.” Journal of Faith and Science Exchange 2
(1998): 21-29.

Herzfeld, Noreen. “Bodies Matter: A New Fad and a Fallacy in the Name of Science.” In God’s
Action in Nature’s World: Essays in Honour of Robert John Russell, edited by Ted Peters
and Nathan Hallanger, 225-233. Burlington, VT: Ashgate, 2006.

________. “Co-creator or co-creator?: The Problem with Artificial Intelligence.” In
Creative Creatures: Values and Ethical Issues in Theology, Science, and Technology,
edited by Ulf Görman, Willem B. Drees, and Hubert Meisinger, 45-52. London: T&T
Clark, 2005.

________. “Creating in Our Own Image: Artificial Intelligence and the Image of God.” Zygon
37, no. 2 (2002): 303-316.

________. “Cybernetic Immortality Versus Christian Resurrection.” In Resurrection, 192-201.
Grand Rapids, MI: Eerdmans, 2002.

________. “Empathetic Computers: The Problem of Confusing Persons and Things.” Dialog 54,
no.1 (Spring 2015): 34-39.

________. “The End of Faith? Science and Theology as Process.” Dialog 46, no. 3 (2007): 288-
293.

________. “Habits of the High-tech Heart: Living Virtuously in the Information Age.” The
Christian Century 119, no. 21 (October 2002): 9-22.

________. In Our Image: Artificial Intelligence and the Human Spirit. Minneapolis, MN:
Fortress Press, 2002.

________. “Living in Cyberspace: Video Games, Facebook, and the Image of God.” Journal of
Dharma 36, no. 2 (2011): 149-156.

________. “A New Member of the Family? The Continuum of Being, Artificial Intelligence, and
the Image of God.” Theology and Science 5, no. 3 (2007): 235-247.

________. “Outsourced Memory: Computers and Conversation.” Perspectives on Science
and Christian Faith 65, no. 3 (Sep 2013): 179-186.

________. Technology and Religion: Remaining Human in a Co-created World. West
Conshohocken, PA: Templeton Press, 2009.

________. “Terminator or Super Mario: Human/Computer Hybrids, Actual and Virtual.” Dialog
44, no. 4 (2005): 347-353.

________. “Video Shootout: The Games Kids Play.” Christian Century 121, no. 9 (2004): 22-23.

________. “Wall or Window? The Prospects for Spiritual Experience in Cyberspace.” CTNS
Bulletin 21, no. 3 (2001): 3-9.

________. “‘Your Cell Will Teach You Everything’: Old Wisdom, Modern Science, and the Art
of Attention.” Buddhist - Christian Studies 29 (2009): 83-88.

Hexham, Irving. “Learning to Live with Robots.” Christian Century 97, no. 19 (1980): 574-578.

Hirai, K., M. Hirose, Y. Haikawa, and T. Takenaka. “The Development of Honda Humanoid
Robot.” In Proceedings of the 1998 IEEE International Conference on Robotics and
Automation. Leuven, Belgium, 1998.

Howell, Nancy R. “Uniqueness in Context.” American Journal of Theology & Philosophy 28, no.
3 (2007): 364-377.

Hornyak, Tim. “Korean machine-gun robots start DMZ duty.” CNET, July 14, 2010.
Accessed January 7, 2016. http://www.cnet.com/news/korean-machine-gun-robots-start-
dmz-duty/.
Illing, Sean. “The rise of AI is sparking an international arms race.” Vox, September 13, 2017.
Accessed March 27, 2018. https://www.vox.com/world/2017/9/13/16287892/elon-musk-
putin-artificial-intelligence-war.
Institute of Electrical and Electronics Engineers. “Robot Ethics.” IEEE Robotics &
Automation Society. Accessed November 10, 2016. http://www.ieee-ras.org/robot-ethics.
Institute of Electrical and Electronics Engineers. “Advancing Technology for Humanity.”
IEEE. Accessed January 22, 2018. https://www.ieee.org/index.html.
Intuitive Surgical, Inc. “Intuitive Surgical Company History.” Accessed November 10, 2016.
http://www.intuitivesurgical.com/company/history/.
Intuitive Surgical, Inc. “Intuitive Surgical Investor FAQ.” Accessed November 19, 2016.
http://phx.corporate-ir.net/phoenix.zhtml?c=122359&p=irol-faq#22324.
iRobot. “iRobot Seaglider Collecting Valuable Data in the Gulf of Mexico.” iRobot Press
Room, May 25, 2010. Accessed October 25, 2017. http://media.irobot.com/press-
releases?item=122479.
Jackelén, Antje. “The Image of God as Techno Sapiens.” Zygon 37, no. 2 (2002): 289-302.

Jackson, Maggie. Distracted: The Erosion of Attention and the Coming Dark Age. New York:
Prometheus, 2008.

Jordan, John M. Robots. Cambridge, MA: The MIT Press, 2016.

Joy, Bill. “Why the Future Doesn’t Need Us.” Ethics & Medicine 17, no. 1 (Spring 2001): 13-36.

KAIROS: Canadian Ecumenical Justice Initiatives. “Open for Justice.” What We Do.
Accessed January 7, 2016. http://www.kairoscanada.org/what-we-do/ecological-
justice/mining-justice-open-for-justice/.
Kaiser, Christopher B. “How Can a Theological Understanding of Humanity Enrich Artificial
Intelligence Work?” Asbury Theological Journal 44, no. 2 (1989): 61-75.

Kaufman, Gordon. “Re-Conceiving God and Humanity in Light of Today’s Evolutionary-
Ecological Consciousness.” Zygon 36, no. 2 (June 2001): 335-348.

Kelly, Kevin. “Nerd Theology.” Technology in Society 21, no. 4 (1999): 387-392.

Kelman, Nic. How to Pass as Human: A guide to assimilation for future androids. Milwaukie,
OR: Dark Horse Books, 2015.

Kelsey, David H. “Personal Bodies: A Theological Anthropological Proposal.” In Personal
Identity in Theological Perspective, edited by Michael Horton and Richard Lints, 139-
158. Grand Rapids, MI: Eerdmans, 2006.

________. “Spiritual Machines, Personal Bodies, and God: Theological Education and
Theological Anthropology.” Teaching Theology & Religion 5, no. 1 (2002): 2-9.

King, Barbara J. “Primates and Religion: A Biological Anthropologist’s Response to J. Wentzel
van Huyssteen’s Alone in the World?” Zygon 43, no. 2 (2008): 451-466.

Kirsner, Scott. “Acquisition puts Amazon rivals in Awkward Spot.” Boston Globe,
December 1, 2013. Accessed November 10, 2016. http://www.bostonglobe.com/
business/2013/12/01/will-amazon-owned-robot-maker-sell-tailer-
rivals/FON7bVNKvfzS2sHnBHzfLM/story.html.

Knight, Heather. “Co-Parenting with Tele-Presence Robots.” Medium, June 10, 2017.
Accessed March 19, 2018. https://medium.com/@heatherknight/co-parenting-with-tele-
presence-robots-5ad828e709b6.
________. “Design and Policy Considerations of Human-Robot Partnerships.” Conference
paper, The Robots are Coming: Series on Civilian Robotics, Brookings Institution,
Washington, July 29, 2014. https://www.brookings.edu/research/how-humans-respond-
to-robots-building-public-policy-through-good-design/.
________. “Eight Lessons learned about Non-verbal Interactions through Robot
Theater.” Conference paper, Social Robotics. Berlin-Heidelberg, 2011.
________. “Marilyn Monrobot.” Accessed March 19, 2018.
http://www.marilynmonrobot.com.
________. “Pittsburgh’s Robot Film Festival: Preview.” Robohub, November 6, 2014.
Accessed July 31, 2017. http://robohub.org/pittsburghs-robot-film-festival-
preview/.

________. “A Short History of Artificial Intelligence, A Student Perspective: CSAIL/NSF.”
Student essay, Cambridge, MIT, 2006. http://projects.csail.mit.edu/films/aifilms/
AIFilms.html.
________. “Silicon Based Comedy” (video lecture). TED Talks, December 2010. Accessed
March 9, 2013. http://www.ted.com/talks/heather_knight_silicon_based_comedy.html.
Knight, Heather and Reid Simmons. “Estimating Human Interest and Attention via Gaze
Analysis.” Conference paper, Proceedings International Conference on Robotics and
Automation (ICRA ‘13), Karlsruhe, Germany, May 2013.
http://www.marilynmonrobot.com/wp-content/uploads/2009/05/icra_2013_gaze.pdf.
________. “An Intelligent Design Interface for Dancers to Teach Robots.” Conference paper,
International Conference on Robot and Human Communication (Ro-Man ’17), Lisbon,
2017. http://www.marilynmonrobot.com/wp-content/uploads/2017/07/
roman17_hknight_kiwis.pdf.
Krishnan, Armin. Killer Robots: Legality and Ethicality of Autonomous Weapons. Burlington,
VT: Ashgate, 2009.

________. Military Neuroscience and the Coming Age of Neurowarfare. New York:
Routledge, 2017.

Kull, Anne. “Cyborg Embodiment and the Incarnation.” Currents in Theology and Mission 28,
no. 3-4 (2001): 279-284.

________. “The Cyborg as an Interpretation of Culture-Nature.” Zygon 36, no. 1 (2001): 49-56.

________, ed. The Human Being at the Intersection of Science, Religion and Medicine. Tartu:
Tartu University Press, 2001.

________. “Mutations of Nature, Technology, and the Western Sacred.” Zygon 41, no. 4 (2006):
785-791.

________. “Speaking Cyborg: Technoculture and Technonature.” Zygon 37, no. 2 (2002):
279-287.

________. “Speaking of God in the New World Order, Inc.” In Naming and Thinking God in
Europe Today: Theology in Global Dialogue, edited by Norbert Hintersteiner, 51-68.
Amsterdam: Rodopi, 2007.


Kurzweil, Ray. The Age of Spiritual Machines: When computers exceed human intelligence.
New York: Penguin Books, 1999.

Lach, Rémi. “Les humains pourraient épouser des robots en 2050” [“Humans could marry
robots in 2050”]. Le Soir: Geeko, December 22, 2016. Accessed October 11, 2017.
http://geeko.lesoir.be/2016/12/22/les-humains-pourraient-epouser-des-robots-en-2050/.

LeBlanc, Terry. “Mission: An Indigenous Perspective.” Direction 43, no. 2 (Fall 2014): 152-165.

LeBlanc, Terry and Jennifer LeBlanc. “NAIITS: Contextual mission, Indigenous context.”
Missiology 39, no. 1 (January 2011): 87-100.

Legato, Marianne J. “Manipulating the Genome, Enhancing Humans, and Creating Robots to
Keep Us Company: Thoughts on What’s “Human”.” Gender Medicine 4, no. 3 (2007):
185-186.

Lemke, Steve W. “Artificial Intelligence: A Modern Approach.” Southwestern Journal of
Theology 40, no. 1 (1997): 104-22.

Lenski, R. E. “Twice as Natural.” Nature 414, no. 6861 (2001): 255.

Lewis, Lundy, Ted Metzler, and Linda Cook. “Results of a Pilot Study with a Robot
Instructor for Group Exercise at a Senior Living Community.” Workshop, Practical
Application of Agents and Multi-Agent Systems, Seville, 2016.
Levy, David. Love and Sex with Robots: The Evolution of Human-Robot Relationships. New
York: Harper, 2008.

Levin, Sam. “A beauty contest was judged by AI and the robots didn’t like dark skin.” The
Guardian, September 8, 2016. Accessed June 6, 2017. https://www.theguardian.com/
technology/2016/sep/08/artificial-intelligence-beauty-contest-doesnt-like-black-people.

Lillegard, Norman. “No Good News for Data.” Cross Currents 44, no. 1 (1994): 28-42.

Lin, Patrick, Keith Abney, and George A. Bekey, eds. Robot Ethics: The Ethical and Social
Implications of Robotics. Cambridge, MA: MIT Press, 2012.
Lin, Patrick, Keith Abney, and Ryan Jenkins, eds. Robot Ethics 2.0: From Autonomous Cars to
Artificial Intelligence. Oxford: Oxford University Press, 2017.

Living Knowledge. “Homepage.” Living Knowledge: The International Science Shop
Network. Accessed January 24, 2018. http://www.livingknowledge.org/.
Loebner Prize. “The First Turing Test.” Home Page of the Loebner Prize in Artificial
Intelligence. Accessed October 11, 2013. http://www.loebner.net/Prizef/loebner-
prize.html.
Lombardo, Stanley, trans. Metamorphoses. Indianapolis/Cambridge: Hackett Publishing
Company, Inc., 2010.

Luger, George. Artificial Intelligence: Structures and Strategies for Complex Problem
Solving, 5th ed. Reading, MA: Addison-Wesley, 2005.

Luna, Jonathan and Sarah Vaughn. Alex + Ada, 3 vols. Berkeley, CA: Image Comics, 2014-
2015.

Luntz, Stephen. “Is Technology Replacing God?” Australasian Science 23, no. 3 (2002): 24-25.

MacKenzie, Ian, Chris Meyer, and Steve Noble. “How retailers can keep up with consumers.”
McKinsey & Company, October 2013. Accessed November 11, 2016.
http://www.mckinsey.com/industries/retail/our-insights/how-retailers-can-keep-up-with-
consumers.

Markman, Jon. “Robots, Workers and Amazon.” Forbes, July 18, 2016. Accessed November 10,
2016. http://www.forbes.com/sites/jonmarkman/2016/07/18/robots-workers-and-amazon/
#469116db70e7.

Matthews, Clifford N., Mary Evelyn Tucker, and Philip Hefner, eds. When Worlds Converge:
What Science and Religion Tell Us About the Story of the Universe and Our Place in It.
Chicago, IL: Open Court, 2002.

Mazlish, Bruce. “The Fourth Discontinuity.” Technology and Culture 8, no.1 (January 1967):
1-15.

McCarthy, John. “Ascribing Mental Qualities to Machines.” In Philosophical Perspectives in
Artificial Intelligence, 1-20. Brighton, Sussex: Harvester Press, 1979.

McCurry, Justin. “Japanese company replaces office workers with artificial intelligence.”
The Guardian, January 5, 2017. Accessed October 11, 2017.
https://www.theguardian.com/technology/2017/jan/05/japanese-company-replaces-
office-workers-artificial-intelligence-ai-fukoku-mutual-life-insurance.

McFague, Sallie. Blessed are the Consumers: Climate Change and the Practice of
Restraint. Minneapolis, MN: Fortress Press, 2013.

________. The Body of God: An Ecological Theology. Minneapolis, MN: Fortress Press, 1993.

________. “The Body of the World: Our Body, Ourselves.” In Without Nature?: A new
condition for theology, edited by David Albertson and Cabell King, 221-238. New
York: Fordham University Press, 2009.

________. “Cosmology and Christianity: Implications of the Common Creation Story for
Theology.” In Theology at the End of Modernity, 19-40. Philadelphia: Trinity, 1991.

________. “Epilogue: Human Dignity and the Integrity of Creation.” In Theology That Matters,
199-212. Minneapolis, MN: Fortress Press, 2006.

________. “Human Beings, Embodiment, and Our Home the Earth.” In Reconstructing
Christian Theology, 141-169. Minneapolis, MN: Fortress Press, 1994.

________. Models of God: Theology for an Ecological, Nuclear Age. Minneapolis, MN: Fortress
Press, 1987.

________. “The World as God’s Body.” Christian Century 105, no. 22 (1988): 671-673.

McNally, Phil, and Sohail Inayatullah. “The Rights of Robots.” Futures 20, no. 2 (1988):
119-136.

Metzler, Theodore. “Can Agent-Based Simulation Improve Dialogue between Science and
Theology?” Journal of Artificial Societies and Social Simulation 5, no. 1 (2002).
Accessed March 19, 2018. http://jasss.soc.surrey.ac.uk/5/1/5.html.
________. “And the Robot Asked ‘What Do You Say I Am?’ Can Artificial Intelligence Help
Theologians and Scientists Understand Free Moral Agency?” Journal of Faith and
Science Exchange 4, (2000): 37-48.

Minsky, Marvin. The Emotion Machine: Commonsense Thinking, AI, and the Future of the
Human Mind. New York: Simon & Schuster, 2006.

________. “Steps Toward Artificial Intelligence.” Proceedings of the IRE 49, no. 1 (1961):
8-30.

________. The Society of Mind. London: Simon & Schuster, 1986.

Moravec, Hans. Mind Children: The Future of Robot and Human Intelligence. Cambridge, MA:
Harvard University Press, 1988.

________. “Mobile Robots and General Intelligence.” In Undersea Teleoperators and
Intelligent Autonomous Vehicles, edited by Norman Doelling and Elizabeth
Harding, 181-199. Cambridge, MA: MIT Press, 1987.

________. “Rise of the Robots.” Scientific American (December 1999): 124-135.

________. Robot: Mere Machine to Transcendent Mind. Oxford: Oxford University Press,
1999.

________. “Robots, After All.” Communications of the ACM (October 2003): 90-97.

________. “Robots: Re-evolving Minds at 10⁷ Times Nature’s Speed.” Cerebrum 3, no. 2
(Spring 2001): 34-49.

________. “Robots that Rove.” Newsletter of the Special Interest Group on
Bioinformatics, Computational Biology, and Biomedical Informatics (SIGBIO) 7, no. 2
(June 1985): 13-15.

________. “Robots: Shaping the Future.” Asiaweek (August 10, 1994): 30-35.

________. “The Rovers.” In Robotics, edited by Marvin Minsky, 123-145. New York:
Doubleday, 1985.

________. “The Universal Robot.” In The World of 2044: Technological Development and the
Future of Society, edited by Charles Sheffield, Marcello Alonso, and Morton Kaplan, 27-
40. St. Paul: Paragon House, 1994.

________. “The Universal Robot.” In Ars Electronica: Facing the Future, edited by Timothy
Druckrey, 116-123. Cambridge, MA: MIT Press, 1999.

________. “When will computer hardware match the human brain?” Journal of Transhumanism
1 (March 1998).

Muray, Leslie A. “Human Uniqueness Vs. Human Distinctiveness: The Imago Dei in the
Kinship of All Creatures.” American Journal of Theology & Philosophy 28, no. 3 (2007):
299-310.

Murphy, Nancey, and Christopher Knight. Human Identity at the Intersection of Science,
Technology, and Religion. Burlington, VT: Ashgate, 2010.

Murphy, Tim. “The Meltdown of the Anti-Immigration Minuteman Militia.” Mother Jones,
August 4, 2014. Accessed January 7, 2016. http://www.motherjones.com/politics/2014/
08/minuteman-movement-border-crisis-simcox.

Noble, David F. The Religion of Technology: The Divinity of Man and the Spirit of Invention.
New York: Penguin Books, 1997.

Nolch, Guy. “Face-to-Face with Machine Intelligence.” Australasian Science 22, no. 10
(Nov/Dec 2001): 20-22.

Nolfi, Stefano, and Dario Floreano. Evolutionary Robotics: The Biology, Intelligence, and
Technology of Self-Organizing Machines. Cambridge, MA: MIT Press, 2000.

Olsen, Jan K. B. New Waves in Philosophy of Technology. New York: Palgrave MacMillan,
2009.

OK Go. “This Too Shall Pass” (music video). Directed by James Frost, OK Go, and Syyn
Labs, posted March 1, 2010. Accessed October 22, 2017.
https://www.youtube.com/watch?v=qybUFnY7Y8w.
O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and
Threatens Democracy. New York: Crown Publishers, 2016.
Orlebeke, Clifton J. “The Behavior of Robots.” In God and the Good, 204-220. Grand Rapids,
MI: Eerdmans Pub Co., 1975.

Palmer, Norris W. “Should I Baptize My Robot? What Interviews with Some Prominent
Scientists Reveal About the Spiritual Quest.” CTNS Bulletin 17, no. 4 (1997): 13-23.

Pax Christi International. “Interfaith Declaration in support of a Ban on Fully Autonomous
Weapons.” Accessed November 17, 2015. http://blogs.paxvoorvrede.nl/wp-content/
uploads/2014/08/Signatories-list_April-2-2015.pdf.
Penny, Laurie. “Robots are racist and sexist. Just like the people who created them.” The
Guardian, April 20, 2017. Accessed July 31, 2017. https://www.theguardian.com/
commentisfree/2017/apr/20/robots-racist-sexist-people-machines-ai-language.

Peters, Ted, ed. Science and Theology: The New Consonance. Boulder, CO: Westview Press,
1998.

________. “The Soul of Trans-Humanism.” Dialog 44, no. 4 (2005): 381-395.

Peterson, Gregory R. “Imaging God: Cyborgs, Brain-Machine Interfaces, and a More Human
Future.” Dialog 44, no. 4 (2005): 337-346.

________. “Uniqueness, the Image of God, and the Problem of Method: Engaging van
Huyssteen.” Zygon 43, no. 2 (2008): 467-474.

Pickstock, Catherine. “The One Story: A Critique of David Kelsey’s Theological Robotics.”
Modern Theology 27, no. 1 (2011): 26-40.

Pruss, Alexander R. “Artificial Intelligence and Personal Identity.” Faith and Philosophy 26, no.
5 (2009): 487-500.

Puddefoot, John. God and the Mind Machine: Computers, AI, and the Human Soul. London:
SPCK, 1996.

Rabinowitz, F. Michael. “The Robot’s Rebellion: Finding Meaning in the Age of Darwin.”
Canadian Psychology 46, no. 1 (2005): 55.

Ramsey, Paul. Basic Christian Ethics. New York: Charles Scribner’s Sons, 1950.

Regis College. “Will People or Robots Matter More in 2050?” Accessed November 18, 2015.
http://www.regiscollege.ca/files/bill_ryan_flyer.jpg.
Rehg, William R. “Religious Values and Science: Artificial Intelligence Technology.” In
Religious Values at the Threshold of the Third Millennium, 175-226. Villanova, PA:
Villanova University Press, 1999.

Reich, K. Helmut. “Cog and God: A Response to Anne Foerst.” Zygon 33, no. 2 (1998): 255-
262.

Reinders, Hans. Receiving the Gift of Friendship: Profound Disability, Theological
Anthropology, and Ethics. Grand Rapids, MI: William B. Eerdmans, 2008.

Le réseau optic. “Homepage.” Accessed January 22, 2018. http://optictechnology.org/.


Rethink Robotics. “Baxter.” Accessed April 4, 2018. http://www.rethinkrobotics.com/baxter/.
Reynolds, Thomas E. Vulnerable Communion: A Theology of Disability and Hospitality. Grand
Rapids, MI: Brazos Press, 2008.
Robertson, John H. “From Artificial Intelligence to Human Consciousness.” Zygon 20, no. 4
(1985): 375-444.

Roff, Heather M. and P. W. Singer. “The Next President Will Decide the Fate of Killer Robots
and the Future of War.” Wired, September 2016. Accessed November 18, 2016.
https://www.wired.com/2016/09/next-president-will-decide-fate-killer-robots-future-
war/.

Rohde, Marieke. Enaction, Embodiment, Evolutionary Robotics: Simulation Models for a Post-
Cognitivist Science of Mind. Amsterdam: Atlantis Press, 2010.

Rosenfeld, Azriel. “Religion and the Robot.” Tradition 8, no. 3 (1966): 15-26.

Rosheim, Mark E. Robot Evolution: The Development of Anthrobotics. New York: John Wiley &
Sons, Inc., 1994.

Rossano, Matt J. “Artificial Intelligence, Religion, and Community Concern.” Zygon 36, no. 1
(2001): 57-75.

Russell, Robert John. “Theological Implications of Artificial Intelligence.” In Science and
Theology of Information, 245-260. Geneva: Labor et Fides, 1992.

Sample, Ian, and Alex Hern. “Scientists dispute whether ‘Eugene Goostman’ passed Turing
Test.” The Guardian, June 9, 2014. Accessed October 23, 2017.
https://www.theguardian.com/technology/2014/jun/09/scientists-disagree-over-
whether-turing-test-has-been-passed.

Schmidt, Michael and Hod Lipson. “Distilling Free-Form Natural Laws from Experimental
Data.” Science 323, no. 5924 (2009): 81-85.

Schörnig, Niklas. “The Dangers of Lethal Autonomous Weapons Systems.” Lecture,
Commission of the Bishops’ Conferences of the European Union, Brussels,
November 19, 2015.
Schulson, Michael. “What robot theology can tell us about ourselves.” Religion Dispatches,
September 1, 2014. Accessed October 11, 2017. http://religiondispatches.org/automata/.
Schultz, Colin. “A Military Contractor Just Went Ahead and Used an Xbox Controller for
Their New Giant Laser Cannon.” Smithsonian.com, September 9, 2014. Accessed
January 21, 2018. https://www.smithsonianmag.com/smart-news/military-contractor-just-
went-ahead-and-used-xbox-controller-their-new-giant-laser-cannon-180952647/.

Shaban, Hamza. “Playing War: How the Military Uses Video Games.” The Atlantic,
October 10, 2013. Accessed March 28, 2018. https://www.theatlantic.com/
technology/archive/2013/10/playing-war-how-the-military-uses-video-
games/280486/.

Searle, John R. “Minds, Brains and Programs.” Behavioral and Brain Sciences 3 (1980):
417-457.

________. “Artificial Intelligence and the Study of Consciousness.” In Human Search for Truth:
Philosophy, Science, Theology, 173-183. Philadelphia: Saint Joseph’s University Press,
2002.

Seegrid Corporation. “Seegrid: Flexible VGVs, Robotic Industrial Trucks.” Accessed
December 21, 2012. http://seegrid.com.
Sennett, James F. “Requiem for an Android?: Lillegard Traps Us in a False Dilemma.” Cross
Currents 46, no. 2 (1996): 195.

Shelley, Mary. Frankenstein, or The Modern Prometheus. Boston and Cambridge: Sever,
Francis, & Co., 1869.

SickKids. “SickKids receives $10 million in funding to support medical research and the
development of KidsArm.” SickKids Newsroom, March 10, 2010. Accessed August 30,
2011. http://www.sickkids.ca/AboutSickKids/Newsroom/Past-News/2010/KidsArm.html.
Silver, Andrew. “Brain Implant Allows Man to Feel Touch on Robotic Hand.” IEEE
Spectrum: Technology, Engineering, and Science News, October 13, 2016.
Accessed October 26, 2016. http://spectrum.ieee.org/the-human-
os/biomedical/devices/brainimplant-allows-man-to-feel-touch-on-robotic-hand.

Singer, P. W. “Prepared Testimony and Statement for the Record at the Hearing on
“Digital Acts of War”.” House Committee on Oversight and Government Reform,
July 13, 2016. Accessed November 18, 2016. https://oversight.house.gov/wp-
content/uploads/2016/07/Singer-Statement-Digital-Acts-of-War-7-13.pdf.

________. “The Predator Comes Home: A primer on domestic drones, their huge business
opportunities, and their deep political, moral, and legal challenges.” Brookings, March 8,
2013. Accessed November 18, 2016. https://www.brookings.edu/research/the-predator-
comes-home-a-primer-on-domestic-drones-their-huge-business-opportunities-and-their-
deep-political-moral-and-legal-challenges/.

________. “Do Drones Undermine Democracy?” The New York Times (online edition),
January 22, 2012. Accessed November 18, 2016. http://www.nytimes.com/
2012/01/22/opinion/sunday/do-drones-undermine-democracy.html.

________. “Military Robots and the Laws of War.” Brookings, February 11, 2009.
Accessed November 18, 2016. https://www.brookings.edu/articles/military-robots-and-
the-laws-of-war/.

________. Wired for War: The Robotics Revolution and Conflict in the 21st Century. New York:
Penguin, 2009.

Singer, P. W. and August Cole. “Humans Can’t Escape Killer Robots, but Humans Can Be Held
Accountable for Them.” Vice News, April 15, 2016. Accessed November 18, 2016.
https://news.vice.com/article/killer-robots-autonomous-weapons-systems-and-
accountability.

Smith, Andrew. “Science 2001: Net Prophets.” Observer (December 31, 2000): 18.

Solon, Olivia. “Deus ex machina: former Google engineer is developing an AI god.” The
Guardian, September 28, 2017. Accessed October 10, 2017.
https://www.theguardian.com/technology/2017/sep/28/artificial-intelligence-god-
anthony-levandowski.

Somé, Malidoma Patrice. Of Water and the Spirit: Ritual, Magic and Initiation in the Life of an
African Shaman. New York: Tarcher/Putnam, 2004.

Sorrel, Charlie. “Telepresence Robots For Sick Kids Are So Effective, They Even Get
Bullied.” FastCompany, September 20, 2017. Accessed November 4, 2018.
https://www.fastcompany.com/3063627/telepresence-robots-for-sick-kids-are-so-
effective-they-even-get-bullied.

Southern Evangelical Seminary. “Ethics of Emerging Technologies.” Accessed November 17,
2015. http://ses.edu/ses.edu/about-us/eet.

Specialisterne. “Welcome to Specialisterne Denmark.” Accessed November 5, 2017.
http://dk.specialisterne.com/en/.

Spencer, William David. “Should the Imago Dei Be Extended to Robots? Love and Sex with
Robots, the Future of Marriage, and the Christian Concept of Personhood.” Africanus
Journal 1, no. 2 (2009).

Spezio, Michael L. “Brain and Machine: Minding the Transhuman Future.” Dialog 44, no. 4
(2005): 375-380.

Steels, L. and R.A. Brooks, eds. The Artificial Life Route to Artificial Intelligence:
Building Embodied Situated Agents. Hillsdale, NJ: Lawrence Erlbaum Associates,
1995.

Stenger, Nicole. “Mind is a Leaking Rainbow.” In Cyberspace: First Steps, edited by
Michael Benedikt, 48-57. Cambridge, MA: MIT Press, 1991.

Tait, Amelia. “The rise of racist robots.” The New Statesman, October 20, 2016. Accessed
October 11, 2017. https://www.newstatesman.com/science-tech/future-
proof/2016/10/rise-racist-robots.

Takeshi Kimura. “Hybridity of Robotics and Mahayana Buddhism: The Mahayana
Buddhist Philosophy of Robot in the Case of Masahiro Mori.” American Academy
of Religion. Accessed January 7, 2016. https://www.aarweb.org/sites/default/
files/pdfs/Annual_Meeting/2015/2015AMProgramBookSessions.pdf.

Tamatea, Laurence. “If Robots R-Us, Who Am I: Online ‘Christian’ Responses to Artificial
Intelligence.” Culture and Religion 9, no. 2 (2008): 141-160.

________. “Online Buddhist and Christian Responses to Artificial Intelligence.” Zygon 45, no. 4
(2010): 979-1002.

Teilhard de Chardin, Pierre. The Future of Man, translated by Norman Denny. London: Collins,
1964.

Thweatt-Bates, Jeanine. Cyborg Selves: A Theological Anthropology of the Posthuman.
Burlington, VT: Ashgate, 2012.

Tillich, Paul. Systematic Theology, 3 vols. Chicago: University of Chicago Press, 1951-1963.

________. Dynamics of Faith. New York: Harper, 1957.

Toumey, Chris. “Molecular Golems.” Nature Nanotechnology 7, no. 1 (2012): 1-2.



Turing, Alan M. “Computing Machinery and Intelligence.” Mind 59, no. 236 (1950): 433-460.

Turkle, Sherry. Alone Together: Why we expect more from technology and less from each other.
New York: Basic Books, 2011.

________. Life on the Screen: Identity in the Age of the Internet. New York and Toronto:
Touchstone/Simon & Schuster, 1995.

The United Nations. “The Arms Trade Treaty.” United Nations Office for Disarmament
Affairs. Accessed November 17, 2015. http://www.un.org/disarmament/ATT/.

The University of Manchester. “President’s Doctoral Scholar Award.” Current Award
Holders. Accessed July 30, 2017.
http://www.presidentsaward.manchester.ac.uk/current/facultyofhumanities/scottmidson/.

Valentino, John. “My Hospital Has a Robot.” Journal of Pastoral Care 51, no. 1 (1997):
117-118.

Vanderbilt University. “Interactive Robot Helps Children With Autism.” Posted March 20, 2013.
Accessed November 4, 2017. https://www.youtube.com/watch?v=7T7cIY-MIxc.

Vanderburg, Willem H. The Labyrinth of Technology. Toronto: University of Toronto Press,
2000.

van Huyssteen, J. Wentzel. Alone in the World? Human Uniqueness in Science and Theology.
Grand Rapids, MI: Eerdmans, 2006.

________. “Coding the Nonvisible: Epistemic Limitations and Understanding Symbolic
Behavior at Çatalhöyük.” In Religion in the Emergence of Civilization, edited by Ian
Hodder, 99-121. New York: Cambridge University Press, 2010.

________. “‘Creative Mutual Interaction’ as an Epistemic Tool for Interdisciplinary Dialogue.”
In God’s Action in Nature’s World, edited by Ted Peters and Nathan Hallanger, 65-76.
Burlington, VT: Ashgate, 2006.

________. Duet or Duel: Theology and Science in a Postmodern World. London; Harrisburg,
PA: SCM Press; TPI, 1998.

________. “Emergence and Human Uniqueness: Limiting or Delimiting Evolutionary
Explanation?” Zygon 41, no. 3 (2006): 649-664.

________. “What Epistemic Values Should We Reclaim for Religion and Science? A Response
to J. Wesley Robbins.” Zygon 28, no. 3 (1993): 371-376.

________. Essays in Postfoundationalist Theology. Grand Rapids, MI: Eerdmans, 1997.

________. “Human Origins and Religious Awareness: In Search of Human Uniqueness.” Studia
theologica 59, no. 2 (2005): 104-128.

________. “Primates, Hominids, and Humans -- from Species Specificity to Human Uniqueness?
A Response to Barbara J. King, Gregory R. Peterson, Wesley J. Wildman, and Nancy R.
Howell.” Zygon 43, no. 2 (2008): 505-525.

________. “Response to Critics.” American Journal of Theology & Philosophy 28, no. 3 (2007):
409-432.

________. The Shaping of Rationality: Toward Interdisciplinarity in Theology and
Science. Grand Rapids, MI: Eerdmans, 1999.

________. “Should We Do What Jesus Did? Evolutionary Perspectives on Christology and
Ethics.” In Christology and Ethics, edited by F. LeRon Shults and Brent Waters, 149-
178. Grand Rapids, MI: William B. Eerdmans, 2010.

________. “Theology, Science, and Human Nature.” Princeton Seminary Bulletin 27, no. 3
(2006): 201-221.

________. “When Our Bodies Do the Thinking, Theology and Science Converge.” American
Journal of Theology & Philosophy 27, no. 2-3 (2006): 127-153.

________. “When Were We Persons? Why Hominid Evolution Holds the Key to Embodied
Personhood.” Neue Zeitschrift für systematische Theologie und Religionsphilosophie 52,
no. 4 (2010): 329-349.

Vanier, Jean. Becoming Human. Toronto: House of Anansi Press Inc., 1998.

Vidal, Denis. “Anthropomorphism or Sub-Anthropomorphism? An Anthropological Approach to
Gods and Robots.” The Journal of the Royal Anthropological Institute 13, no. 4 (2007):
917-933.

Vidal, Jacques J. “Toward Direct Brain-Computer Communication.” Annual Review of
Biophysics and Bioengineering 2 (1973): 157-180.

Vitz, Paul C. “Artificial Intelligence and Spiritual Life.” Asbury Theological Journal 44, no. 1
(1989): 5-16.

Wallach, Wendell and Colin Allen. Moral Machines: Teaching Robots Right from Wrong.
Oxford: Oxford University Press, 2009.

Wang, Yilun and Michal Kosinski. “Deep neural networks are more accurate than
humans at detecting sexual orientation from facial images.” Journal of Personality and
Social Psychology (forthcoming).

Ware, Bruce A. “Robots, Royalty, and Relationships? Toward a Clarified Understanding of Real
Human Relations with the God Who Knows and Decrees All That Is.” Criswell
Theological Review 1, no. 2 (2004): 191-203.

Watters, Ethan. Crazy Like Us: The Globalization of the American Psyche. New York: Free
Press, 2010.

White, Ross. “The globalisation of mental illness.” The Psychologist 26, no. 3 (March 2013):
182-185. Accessed August 14, 2017. https://thepsychologist.bps.org.uk/volume-
26/edition-3.

Wicks, Frank. “Legend Pilots a Radio-Controlled Model Airplane Across the Atlantic
Ocean.” Progressive Engineer, 2011. Accessed January 7, 2016.
http://www.progressiveengineer.com/profiles/maynardHill.htm.

Wildman, Wesley J. “But Consciousness Isn’t Everything.” Cross Currents 46, no. 2 (1996):
215-220.

________. “Hand in Glove: Evaluating the Fit between Method and Theology in van
Huyssteen’s Interpretation of Human Uniqueness.” Zygon 43, no. 2 (2008): 475-491.

Willard, Dallas. A Place for Truth: Leading Thinkers Explore Life’s Hardest Questions.
Downers Grove, IL: IVP Books, 2010.

Wilson, Eric G. The Melancholy Android: On the Psychology of Sacred Machines. Albany, NY:
SUNY Press, 2006.

Wolff, Rachel. “A Researcher and a Robot Walk Into a Bar . . .” The Wall Street Journal,
April 20, 2012. Accessed March 19, 2018. https://www.wsj.com/articles/
SB10001424052702304432704577349860916159278.

Wood, Gaby. Living Dolls: A Magical History of the Quest for Mechanical Life. London: Faber
and Faber, 2002.

World Council of Churches. “Statement on the Way of Just Peace.” Resources, November 8,
2013. Accessed January 7, 2016. http://www.oikoumene.org/en/resources/
documents/assembly/2013-busan/adopted-documents-statements/the-way-of-just-peace.

________. “Killer Robots? Moral questions pervade UN conference.” Press
Centre, April 23, 2015. Accessed November 17, 2015. https://www.oikoumene.org/en/
press-centre/news/killer-robots-moral-questions-pervade-un-conference.

Wylie-Kellerman, Bill and David Batstone. “God is My Palm Pilot.” Sojourners Magazine 30,
no. 4 (2001): 20-27, 62-63.

Young, Thelathia N. “Queering ‘The Human Situation’.” Journal of Feminist Studies in
Religion 28, no. 1 (Spring 2012): 126-131.

Zimmerman, Michael E. “The Singularity: A Crucial Phase in Divine Self-Actualization?”
Cosmos and History: The Journal of Natural and Social Philosophy 4, no. 1-2 (2008):
347-380.
