Article Review
Abstract: This review article briefly considers the history of AI and the most
relevant milestones, and assesses the current state of the art in various AI
applications. It then reviews some of the deeper questions posed by present
and future AI technologies, and their representation in current literature in
the science and theology dialogue.
Introduction
Since the seminal work of AI pioneer Alan Turing, the field of AI has seen its ups and downs, with times of high hope alternating with times of disappointment (the so-called “AI winters”). The period we are in, which started roughly after the turn of the millennium, is once again one of real progress, with some impressive achievements.
With AI permeating our lives at unprecedented levels, there is serious talk about it representing a true fourth industrial revolution, with huge consequences for how we work, live our lives, and relate to one another. At the same time, the possibility of machines acquiring real intelligence in the near to medium term is again on the table. This possibility raises a host of exciting questions – ethical, philosophical, and theological – about the nature of intelligence, personhood, and even humanity’s right to pursue such a game-changing project.
On the other hand, the increasing ubiquity of AI in our legal, medical, and financial systems, as well as in our social networks, media and entertainment, also brings to the fore a darker perspective. The concerns range from issues of privacy, manipulation, election meddling, and inequality, to accountability and even existential risk for humanity.
History of AI
One of the difficulties in assessing the degree to which AI has advanced and
is already impacting our society is that we seem to have a blind spot for it. It
may be that the general public has a preconceived idea about AI forever being
something of the future, which may come from initially being exposed to the
ESSSAT News & Reviews, 29-2 June 2019 5
1. In his proposal for the Dartmouth workshop, which is seen as the founding event of Artificial Intelligence as a field, John McCarthy explicitly states that his purpose is “to study the relation of language to intelligence” (McCarthy et al., 1955).
2. Named after AI researcher, roboticist and futurist Hans Moravec.
3
“It is comparatively easy to make computers exhibit adult level performance on
intelligence tests or playing checkers, and difficult or impossible to give them the
skills of a one-year-old when it comes to perception and mobility", (Moravec 1988,
15).
4. Moore’s law, named after Intel co-founder Gordon Moore, predicts that computing power doubles roughly every two years, due to the increasing number of transistors that can be fitted in an integrated circuit (Moore 1965).
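To give a feel for what a two-year doubling period implies, a minimal arithmetic sketch follows; the 54-year span (from Moore’s 1965 paper to 2019) is chosen here purely for illustration and is not a figure from the article.

```python
# Illustrative sketch of Moore's law: a doubling of computing power
# every two years, compounded over a span of years.

def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Growth factor after `years`, with one doubling every `doubling_period` years."""
    return 2.0 ** (years / doubling_period)

# Over the 54 years from 1965 to 2019: 27 doublings,
# i.e. a growth factor of 2^27.
factor = moores_law_factor(54)
print(f"{factor:,.0f}")  # 134,217,728
```

Exponential compounding is the whole point: a modest-sounding biennial doubling yields a hundred-million-fold increase within a lifetime.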
Deep Blue has been associated more with ‘brute force’ than with true intelligence. It was able to outplay Kasparov not because it could actually reason about what the best strategy is and why, but because its huge computational power allowed it simply to search and evaluate millions of positions a minute (Harmon 2019).
Another remarkable achievement came in 2011, when IBM’s Watson won the TV quiz show Jeopardy!, defeating two of the most successful human players in the history of the game. This victory was of a totally different calibre, since the task involved complex language manipulation. The AI had to ‘decipher’ the meaning of general-knowledge questions through natural language understanding, and then search for the answers in a database of hundreds of thousands of online pages. Nevertheless, this accomplishment too drew the criticism that it did not represent real intelligence. Philosopher John Searle argued at the time, using his famous Chinese room thought experiment, that Watson was not able to think, in spite of winning Jeopardy! (Searle 2011).
Arguably the most astonishing AI milestone came in a series of events involving DeepMind’s AlphaGo and AlphaZero between 2015 and 2017. The team at DeepMind took on the game of Go, which had been considered the holy grail of AI for decades due to its sheer complexity. With the number of possible developments in a Go game being larger than the number of atoms in the observable universe, the task could not be approached with brute force: the time necessary to compute even a fraction of the possibilities would simply have been on the order of tens of billions of years. Instead, AlphaGo used supervised learning to learn Go from thousands of expert human games, and then reinforcement learning to improve through self-play (Silver et al. 2016).
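A back-of-the-envelope calculation makes the scale vivid. All numbers below are illustrative assumptions, not figures from Silver et al.: an optimistic search rate of one billion positions per second, applied to a slice of 10^27 positions, which is itself a vanishingly small fraction of the roughly 10^170 legal Go positions.

```python
# Back-of-the-envelope estimate of brute-force search time for Go.
# All numbers are illustrative assumptions, not measured figures.

positions_to_evaluate = 10**27         # a tiny slice of the ~10^170 legal positions
evaluations_per_second = 10**9         # an optimistic one billion positions/second
seconds_per_year = 60 * 60 * 24 * 365  # ~3.15e7 seconds in a year

years = positions_to_evaluate / (evaluations_per_second * seconds_per_year)
print(f"{years:.2e} years")  # ~3.17e+10, i.e. tens of billions of years
```

Even this minute slice of the search space already takes tens of billions of years, which is why AlphaGo had to rely on learned evaluation rather than exhaustive search.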
In October 2015, AlphaGo became the first computer program to defeat a professional Go player: the match against the European champion Fan Hui ended in a clear 5-0 victory. Only months later, in March 2016, an already improved version of AlphaGo shocked the world by defeating one of the best human players ever, the Korean champion Lee Sedol, by 4-1. DeepMind did not stop there, and in 2017 they developed AlphaGo Zero, which did not use human games at all to train itself. Instead, it learned the game from scratch and, after only three days of self-play, defeated AlphaGo by 100-0 (Hassabis & Silver 2017).
Finally, in December 2017, DeepMind created AlphaZero, a more general program that could play chess, shogi and Go. In less than 24 hours of self-play, it achieved superhuman levels at all three games, crushing the computer world champion in each: Stockfish (chess, 28 wins, 72 draws, 0 losses), elmo (shogi, 90 wins, 8 draws, 2 losses), and AlphaGo Zero (Go, 60 wins, 40 losses) (Silver et al. 2018).
5. Stemming from ‘deep learning’ and ‘fake’.
This technology brings a lot of hope for more realistic character depictions in the videogame and virtual reality industries, but it also raises serious ethical questions. It is easy to imagine how blurring the line between what is verifiably real and what is not could negatively affect us both individually and collectively.
At the individual level, it opens up the possibility of being personally targeted as the subject of fake videos, with catastrophic consequences for one’s reputation. Actress Scarlett Johansson, who was herself a target of computer-generated deepfake pornography, spoke out about how celebrities might still be protected by their fame, while lesser-known persons would be helpless against such damaging campaigns (Harwell 2018). Deepfake technology poses definite threats to the way we relate to one another.
At the collective level, this type of realistic fake video could have disastrous effects on our society and our democracy. It is not difficult to imagine how it could lead to serious diplomatic conflict, by falsely depicting politicians making scandalous declarations or comments. But at a more fundamental level, this technology has the potential to erode our society in an unprecedented way, by sowing the seeds of distrust into any event or story. It is not so much that false claims would be presented as real, but that anything could potentially be fake.6 The infosphere as it is today is already hugely loaded with information, and a lot of effort and training are required to discern the real from the dubious. The proliferation of deepfakes could make it even more difficult to make sense of.
A second example of the dark side of image recognition technology is face recognition. While its current superb level of performance simplifies many processes, from smartphone unlocking to airport security checks, it can also be used in more chilling ways.
China’s new social credit system is designed to socially engineer behaviour.
Widespread surveillance allows the Chinese government to collect large
amounts of data on each citizen, and then compile it using AI to create a
social score. The data includes the individual’s tax and credit status, but also
her online and offline behaviour, including footage from street surveillance
cameras, powered by face recognition. A low score might prevent one from
getting credit or purchasing flights, or it could even lead to being blacklisted
as an ‘untrustworthy person’. Moreover, when such a person crosses some
6. Egelhofer & Lecheler speak of the two-dimensionality of fake news: the fake news genre, i.e. the deliberate creation of pseudojournalistic disinformation, and the fake news label, i.e. the instrumentalization of the term to delegitimize news media (Egelhofer & Lecheler 2019).
Theology and AI
One grand topic in the dialogue between theology and AI is that of eschatology, apocalypticism, singularity and salvation. In her 2003 article, “Artificial Intelligence and Christian Salvation: Compatibility or Competition?”, Ilia Delio (Delio 2003) draws the distinction between Christian salvation and what she calls techno-salvation. Due to its “contingent contingency” (Delio 2003, 49), AI will never be able to provide the perfection of life and immortality. Christian salvation, on the other hand, is qualitatively different: rooted in the Incarnation and sacrifice of Christ, it concerns the whole human being, not just the mind as in gnostic techno-salvation, and it is the only one congruent with the dignity of the human person as imago Dei.
Robert M. Geraci (Geraci 2008) makes an interesting connection between apocalyptic religious thought, Jewish and Christian, and current “Apocalyptic AI”, treating the latter as a “legitimate heir” to the former (Geraci 2008: 158). Both share a dualistic view of the world: in religious apocalypticism, God intervenes to grant the victory of good over evil; in Apocalyptic AI, it is evolution that ensures the victory of intelligence over ignorance; both God and evolution provide transcendent promises for the future. Apocalyptic AI is predicted to grow in exposure, and theologians and scholars of religion should pay more attention to it.
Ronald Cole-Turner also makes the case, in a 2012 paper (Cole-Turner 2012), that deeper analysis reveals a strong connection between religious views of the future and transhumanist hopes for a singularity. He looks in parallel at American evangelical views of the future, as a case study, and at secular ideas of the singularity and the intelligence explosion, as exemplified in the predictions of Ray Kurzweil. Cole-Turner concludes that the two, while fundamentally different in many respects, also share many features, like the postulation of a clear periodization in the future unfolding of the history of the cosmos.
Another big topic in the science & theology dialogue on AI is that of imago Dei, creation in the image of God. Its connection with AI might sound surprising at first, but the two are related in at least two ways. Firstly, the creation of intelligent beings by humans draws inevitable parallels with the divine creation of humans. Secondly, the emergence on the historical scene of non-human intelligence raises questions regarding what it means for humans to be in the image of God, and of the possible inclusion of AI into the scope of imago Dei.
In her influential article (Foerst 1998), Anne Foerst refers in a creative way to the project of Cog, a humanoid robot created by the MIT AI Lab, connecting it to the account of humans being created in the image of God. In her approach, the two represent complementary stories: the very pursuit of building humanoid robots is a reflection of the creative powers that are part of what it means to be in the image of God. Moreover, the common reactions of humans to Cog, fear and awe, confirm the hypothesis that Cog stands as a symbol for the divine creativity that is part of what it means to be a human mirroring God.
Geraci (Geraci 2007) agrees with Foerst’s intuition about the human reactions towards Cog, but argues that they may be better interpreted as proofs of an unconscious elevation of machines to a divine status. Using Rudolf Otto’s framework for the human encounter with the divine, Geraci makes a strong case for how both Foerst’s empirical data and the science fiction of the 20th century provide evidence of this machine apotheosis, which threatens traditional Christian theologies.
Noreen Herzfeld’s book In Our Image: Artificial Intelligence and the Human Spirit (Herzfeld 2002) remains an early classic on the topic of intelligent machines and imago Dei. Using the examples of famous AIs from science fiction, like HAL 9000, R2-D2, or David, she questions humanity’s project of trying to create something in its own image. She draws a parallel with the biblical story of humans created in the image of God, for which she analyzes three historical interpretations: substantive, functional, and relational.
In a later article (Herzfeld 2005), she draws a beautiful analogy between the history of interpretation of imago Dei and the history of the field of AI. Just as theologians have shifted from a substantive interpretation to a functional one, and finally to a relational one, so AI research has moved from a substantive model (symbolic AI) to a functional one in the 1980s, with less ambitious goals and more focus on the realization of narrow, specific tasks. She correctly predicts a second shift, from functional to relational, which she already notices in late-1990s projects like Cog and Kismet, pointing towards the long-awaited passing of the Turing Test, which is essentially relational.
Burdett (2015) follows up on the AI & imago Dei topic, but asks an even more fundamental question: to what degree do current developments in the information sciences (and biology) challenge the theological understanding of what it means to be in the image of God? He follows Grenz and van Huyssteen in identifying an additional historical interpretation of imago Dei (besides Herzfeld’s substantive, functional and relational ones), namely the dynamic model. This roots imago Dei deeper in Christology, defining it not as something fixed, but rather as a dynamic following-out of our “true anthropological source” (Burdett 2015: 5) in Christ. Burdett concludes that the flexibility of this model also equips it with the best answers to the challenges from the information sciences.
Other theologians have dared to contest the very notion of intelligence in machines. In his paper “Where There’s Life There’s Intelligence” (Peters 2017), Ted Peters describes seven characteristics of intelligence: interiority, intentionality, communication, adaptation, problem-solving, self-reflection, and judgment. Later on (Peters 2019) he returns to these characteristics to show that AI, at least in its current disembodied instantiation, does not qualify as intelligence, and that even the most primitive biological life satisfies the criteria better than AI. For Peters, the problem with AI is that it lacks any sense of self or agency: “there’s nobody home” (Peters 2019: 3). In his opinion, a more promising route towards human-level AI is to reverse-engineer the human brain, without any prior theory of intelligence, or even to amplify human intelligence through cyborg-ification, although both of these lines of attack are still highly doubtful.
In a paper looking at the practical consequences of AI’s ubiquity in society, Mohammad Yaqub Chaudhary (Chaudhary 2019) argues that the progress in AI is subtly fueling a secular re-enchantment of the world. The boundary between reality and augmented reality (AR) is becoming increasingly diffuse, and this combines with the fact that human perception is more and more mediated by technology. The consequence that begins to unfold is a world where human existence is shaped by the personal and social AI agents that inhabit AR in the form of digital avatars and daemons.
Finally, William Young (2019) asks an interesting question, which has half-seriously been on the lips of many for a while: with AI being projected to take over many tasks in the fields of transportation, finance or law in the near future, could the jobs of clergy be partially or fully automated? The article presents a host of existing AI technologies that, with some small tweaks, could perform some of the tasks of ministers. IBM’s Project Debater, for example, already matched an Israeli debate champion in 2018. With access to an extensive corpus of sermons and other theological resources, it is likely that a machine-learning, sermon-writing AI would perform at a human level. Similarly, a palliative-care chatbot could perform some parts of spiritual care.
However, Young argues that developing such technologies would require a consistent financial incentive, which seems rather unlikely. Instead, what we could witness increasingly in the future is “the artifacts of online spirituality” (Young 2019, 498) eroding the already-declining physical church attendance.
Conclusion
The engagement of theologians with the challenges posed by AI research has so far been rather limited. Some topics, like the similarities and differences between singularitarian views of the future and religious eschatology, have been well covered, with robust argumentation and quite clear conclusions. Other topics, like the radical transformations that AI deployment is already causing in our society and our relationships to one another, are still in need of profound theological evaluation.
In our opinion, there are three other very promising research paths in the dialogue between theology and science on the topic of Artificial Intelligence. Firstly, we seem to lack a more positive approach to the inter-relation between the two. AI research seems to have abandoned its initial vocation – that of exploring the nature of human intelligence – and focuses instead on less ambitious, but more feasible, projects. Christian theological anthropology has grappled for two thousand years with questions like what it means to be human, or what the role of the intellect is in a virtuous life. It seems highly unlikely that theology has no meaningful insight to contribute to the interdisciplinary dialogue regarding what we should aim for in creating Artificial Intelligence.
A second path, in the same spirit of positive appreciation and mutual collaboration, concerns the possibility of using AI as a fresh lens through which to look at religious experience. Could intelligent machines also experience faith? Is there any way of using machine learning algorithms to predict religious behaviour, or does it totally escape mathematical analysis?
Last but not least, advances in AI force theologians to clarify some fundamental concepts that for too long have been allowed to dwell in ambiguity. What does it mean to be in the image of God? Are there any limits to human creativity? Is there anything more to humans than the information-processing patterns in our bodies and brains? Hopefully, the possible imminence of intelligent machines can help the development of a ‘theology with a deadline’ that can better answer these questions.
References
Bishop C M (2006) Pattern Recognition and Machine Learning, Springer.
Burdett M S (2015) “The Image of God and Human Uniqueness: Challenges from the Biological and Information Sciences”, The Expository Times 127(1): 3-10.
Campbell C (2019) “How China Is Using ‘Social Credit Scores’ to Reward and Punish Its Citizens”, Time, http://time.com/collection/davos-2019/5502592/china-social-credit-score/ (retrieved 31.05.2019).
Chaudhary M Y (2019) “Augmented Reality, Artificial Intelligence, and the
Re-enchantment of the World”, Zygon 54(2): 454–478.
doi:10.1111/zygo.12521.
Cole-Turner R (2012) “The Singularity and the Rapture: Transhumanist
and Popular Christian Views of the Future”, Zygon 47(4): 777-796.
Crevier D (1993) AI: The Tumultuous Search for Artificial Intelligence,
Basic Books, New York, NY.
Delio I (2003) “Artificial Intelligence and Christian Salvation: Compatibility or Competition?”, New Theology Review, November 2003.
Egelhofer J L, Lecheler S (2019) “Fake news as a two-dimensional phenomenon: a framework and research agenda”, Annals of the International Communication Association, 43(2): 97-116, doi:10.1080/23808985.2019.1602782.
Foerst A (1998) “Cog, a Humanoid Robot, and the Question of the Image of God”, Zygon, 33(1): 91–111, doi:10.1111/0591-2385.1291998129.
Geraci R M (2007) “Robots and the Sacred in Science and Science Fiction:
Theological Implications of Artificial Intelligence”, Zygon, 42(4):
961–980, doi:10.1111/j.1467-9744.2007.00883.x.
- (2008) “Apocalyptic AI: Religion and the Promise of Artificial Intelligence”, Journal of the American Academy of Religion, 76(1): 138–166, doi:10.1093/jaarel/lfm101.
Harari Y N (2018) 21 Lessons for the 21st Century, Jonathan Cape, London.
Harmon P (2019) “AI Plays Games”, Forbes, https://www.forbes.com/sites/cognitiveworld/2019/02/24/ai-plays-games/#7b8e7da4a49f (retrieved 31.05.2019).
Harwell D (2018) “Scarlett Johansson on fake AI-generated sex videos: ‘Nothing can stop someone from cutting and pasting my image’”, The Washington Post.
Palminteri S (2018) “10 ways conservation tech shifted into auto in 2018”, Mongabay, https://news.mongabay.com/2018/12/10-ways-conservation-tech-shifted-into-auto-in-2018/ (retrieved 31.05.2019).
Peters T (2017) “Where There’s Life There’s Intelligence”, What is Life?
On Earth and Beyond, A Losch (ed.), Cambridge University Press,
236-259.
- (2019) “Intelligence? Not Artificial, but the Real Thing!”, Theology and
Science, 17(1): 1-5, doi:10.1080/14746700.2018.1557376.
Radford A et al. (2019) “Better Language Models and Their Implications”, OpenAI blog, https://openai.com/blog/better-language-models (retrieved 31.05.2019).
Searle J (2011) “Watson Doesn’t Know It Won on ‘Jeopardy!’”, Wall Street Journal, https://www.wsj.com/articles/SB10001424052748703407304576154313126987674 (retrieved 31.05.2019).
Silver D et al. (2016) “Mastering the game of Go with deep neural networks
and tree search”, Nature 529: 484-489.
Silver D, Hubert T, Schrittwieser J, Hassabis D (2018) “AlphaZero: Shedding new light on the grand games of chess, shogi and Go”, DeepMind blog, https://deepmind.com/blog/alphazero-shedding-new-light-grand-games-chess-shogi-and-go (retrieved 31.05.2019).
Suwajanakorn S, Seitz S M, Kemelmacher-Shlizerman I (2017) “Synthesizing Obama: Learning Lip Sync from Audio”, ACM Trans. Graph. 36.4: 95, doi:10.1145/3072959.3073640.
U.S. Department of Energy (2018) “Deep learning for electron microscopy”, Phys.org, https://phys.org/news/2018-12-deep-electron-microscopy.html (retrieved 31.05.2019).
Vardi M Y (2012) “Artificial Intelligence: Past and Future”, Communications of the ACM, 55(1): 5, doi:10.1145/2063176.2063177.
Vincent J (2018) “DeepMind’s AI can detect over 50 eye diseases as accurately as a doctor”, The Verge, https://www.theverge.com/2018/8/13/17670156/deepmind-ai-eye-disease-doctor-moorfields (retrieved 31.05.2019).
Young W (2019) “Reverend Robot: Automation and Clergy”, Zygon 54(2):
479-500. doi:10.1111/zygo.12515.