
Article Review

Recent advances in Artificial Intelligence (AI) and some of the issues in
the theology & AI dialogue
Marius Dorobantu

Abstract: This review article briefly considers the history of AI and the most
relevant milestones, and assesses the current state of the art in various AI
applications. It then reviews some of the deeper questions posed by present
and future AI technologies, and their representation in current literature in
the science and theology dialogue.

Introduction
Since the seminal work of AI pioneer Alan Turing, the field of AI has seen
its ups and downs, with times of high hope alternating with times of disap-
pointment (the so-called “AI winters”). The period we’re in, which started
roughly after the turn of the millennium, is once again one of real progress,
with some impressive achievements.
With AI permeating our lives at unprecedented levels, there is serious talk
about it representing a true 4th industrial revolution, with huge consequences
for how we work, live our lives, and relate to one another. At the same time,
the possibility of machines acquiring real intelligence in the near to medium
future is again on the table. This possibility raises a host of exciting questions
– ethical, philosophical, and theological – about the nature of intelligence,
personhood, and even humanity’s right to pursue such a game-changing pro-
ject.
On the other hand, the increasing ubiquity of AI in our legal, medical, and
financial systems, as well as in our social networks, media, and entertainment,
also brings to the fore a darker perspective. The concerns range from issues
of privacy, manipulation, election meddling, and inequality, to accountabil-
ity and even existential risk for humanity.

History of AI
One of the difficulties in assessing the degree to which AI has advanced and
is already impacting our society is that we seem to have a blind spot for it. It
may be that the general public has a preconceived idea about AI forever being
something of the future, which may come from initially being exposed to the
AI concept mostly in the Sci-Fi genre. If certain AI applications become com-
mon enough, they are not seen as AI anymore. As John McCarthy, who
coined the term Artificial Intelligence, put it, “As soon as it works, no one
calls it AI anymore” (Vardi 2012).
Another hurdle, which usually comes up when philosophers or theologians
inquire about the possibility of human-level AI or machine consciousness,
comes from the AI field itself. Computer scientists seem to be at times so
enmeshed in the technical challenges of a particular ‘simple’ problem that
many of them dismiss, right from the outset, the possibility of AI developing
human-type intelligence anytime soon.
However, at its very roots, namely the Dartmouth workshop of 1956, the pro-
ject of developing AI started as an exploration of the nature of human intel-
ligence.1 Whether or not machines would prove capable of emulating human
intelligence would bring highly valuable clues about the nature of the latter.
At Dartmouth, building on the works of Norbert Wiener (cybernetics),
Claude Shannon (information theory), and Alan Turing (theory of computa-
tion), the founding fathers of AI set the agenda for the next four decades of
AI research.
This initial approach towards machine intelligence is called symbolic AI, or
GOFAI (good old-fashioned AI) (Haugeland 1985, 112). Its main assump-
tion is that most aspects of intelligence can be modeled through the manipu-
lation of symbols. Since human intelligence seems to function in this way,
machines too could achieve intelligence by manipulating a finite set of
symbols.
Symbolic AI delivered some impressive accomplishments in mathematics
and game playing. Programs like Simon & Newell’s ‘Logic Theorist’ and
‘General Problem Solver’, or Gelernter’s ‘Geometry Theorem Prover’ were
capable of solving a wide range of mathematical and logical problems, while
Samuel’s program could play checkers at a human level. These early achieve-
ments unleashed a wild optimism regarding near-future progress, with some
even predicting the imminence of a fully intelligent machine within one gen-
eration (Norvig & Russell 2003, 21).
But this optimism was soon tempered by the realization that symbolic AI was
very limited in the scope of problems that it could solve. A common intuitive
assumption among the AI pioneers was that once AI became capable of high-
level reasoning tasks, like chess or mathematics, other more mundane prob-
lems in AI and robotics would be much easier to solve. Nonetheless, this
assumption proved to be wrong, because of what is known as Moravec’s par-
adox:2 high-level, or ‘intellectual’, reasoning requires far fewer computa-
tional resources than low-level sensorimotor skills.3 In other words, skills
that for humans are very hard can be, counterintuitively, much easier to com-
pute than things that humans perform effortlessly.

1 In his proposal for the Dartmouth workshop, which is seen as the founding event
of Artificial Intelligence as a field, John McCarthy explicitly states that his purpose
is “to study the relation of language to intelligence” (McCarthy et al. 1955).
2 Named after AI researcher, roboticist and futurist Hans Moravec.
3 “It is comparatively easy to make computers exhibit adult level performance on
intelligence tests or playing checkers, and difficult or impossible to give them the
skills of a one-year-old when it comes to perception and mobility” (Moravec 1988, 15).

Research in AI has since had its ups and downs, including two so-called ‘AI
winters’ (1974-1980 and 1987-1993) (Crevier 1993, 203). However, we
now seem to be in a period of unprecedented progress. It is difficult to pin-
point the exact moment when it took off, but that moment is surely located
in the first decade of the 21st century.
The new successes do not stem from any revolutionary new algorithms, but
rather from the resurgence of an approach called machine learning. Unlike
traditional symbolic AI, which is grounded in logic, machine learning is
probabilistic. After being trained on sets of sample data, machine learning
algorithms build their own mathematical models, which they then use to
make predictions or decisions on new data (Bishop 2006, 2).
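To make this train-then-predict pattern concrete, here is a minimal Python
sketch, assuming the scikit-learn library and its bundled iris dataset; the
particular classifier is purely illustrative.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Labelled sample data: flower measurements and their species.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The algorithm builds its own model from the training samples...
    model = DecisionTreeClassifier().fit(X_train, y_train)

    # ...and then uses that model to make decisions on new, unseen data.
    print(model.predict(X_test[:5]))
    print("accuracy:", model.score(X_test, y_test))
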
Although the machine learning approach had been around since the 1980s,
its real success is rather recent, and it was enabled by two main factors: the
exponential increase in computation power, described by Moore’s law,4 and
the wide availability of big data.

4 Moore’s law, named after Intel co-founder Gordon Moore, predicts that comput-
ing power doubles roughly every two years, due to the increase in the number of
transistors that can be fitted on an integrated circuit (Moore 1965).

By shifting its goals from trying to achieve ‘real Artificial Intelligence’, as
symbolic AI had sought to do, to solving concrete problems, like computer
vision, speech recognition, or language translation, machine learning has
attracted attention and funding from private companies, which have seized
the opportunity to fuel AI research with commercial, rather than academic,
goals in mind.
Some of the most impressive milestones in AI have occurred in the sub-field
of game playing. In 1997, IBM’s Deep Blue took the world by surprise by
defeating the world chess champion Garry Kasparov in a 6-game match. Even
though chess may not be the epitome of intelligent behaviour, at the time it
stood as a symbol of human intelligence. However, the method used by Deep
Blue has been associated more with ‘brute force’ than with true intel-
ligence. It was able to outplay Kasparov because it could simply search and
evaluate millions of positions a minute, and not because it could actually rea-
son about what the best strategy is and why (Harmon 2019). This was enabled
by the computer’s huge computation power.
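To illustrate what ‘brute force’ means here, below is a minimal Python
sketch of game-tree search (plain minimax with a fixed depth cutoff), the
family of techniques Deep Blue’s play is associated with; its actual engine
was far more sophisticated, and the helpers legal_moves, apply_move, and
evaluate are hypothetical stand-ins for a real move generator and evaluation
function.

    # Brute-force game-tree search: exhaustively explore moves to a fixed
    # depth and score leaves with a static evaluation function.
    def minimax(pos, depth, maximizing, legal_moves, apply_move, evaluate):
        moves = legal_moves(pos)
        if depth == 0 or not moves:
            return evaluate(pos)  # static score; no strategic "reasoning"
        values = (minimax(apply_move(pos, m), depth - 1, not maximizing,
                          legal_moves, apply_move, evaluate) for m in moves)
        return max(values) if maximizing else min(values)

    def best_move(pos, depth, legal_moves, apply_move, evaluate):
        # Score every legal move and pick the highest-valued one.
        return max(legal_moves(pos),
                   key=lambda m: minimax(apply_move(pos, m), depth - 1, False,
                                         legal_moves, apply_move, evaluate))
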
Another remarkable achievement came in 2011, when IBM’s Watson won
the TV contest Jeopardy, defeating two of the most successful human players
in the history of the game. This victory was of a totally different calibre, since
the task involved complex language manipulation. The AI had to ‘decipher’
the meaning of general knowledge questions, through natural language un-
derstanding, and then search for the answers in a database of hundreds of
thousands of pages. Nevertheless, this accomplishment too triggered
the criticism of not representing real intelligence. Philosopher John Searle
argued at the time, using his famous Chinese room thought experiment, that
Watson is not able to think, in spite of winning Jeopardy (Searle 2011).
Arguably the most astonishing AI milestone came in a series of events in-
volving DeepMind’s AlphaGo and AlphaZero between 2015 and 2017. The
team at DeepMind took on the game of Go, which had been considered the
holy grail of AI for decades, due to its sheer complexity. With the number
of possible ways a Go game can unfold being larger than the number of
atoms in the observable universe, the task could not be approached with
brute force:
the time necessary to compute all the possibilities would have simply been
on the order of tens of billions of years. Instead, AlphaGo used supervised
learning to learn Go from thousands of expert human games, and then rein-
forcement learning to improve through self-play (Silver et al. 2016).
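AlphaGo’s actual pipeline combines deep neural networks with Monte
Carlo tree search; as a deliberately miniature illustration of the self-play
idea alone, the following Python sketch uses tabular Q-learning to learn the
simple game of Nim (take 1-3 stones; whoever takes the last stone wins). It
is an assumption-laden toy, not DeepMind’s method.

    import random
    from collections import defaultdict

    Q = defaultdict(float)            # Q[(stones_left, move)] -> value
    ALPHA, EPSILON, EPISODES = 0.5, 0.1, 50000

    def legal_moves(stones):
        return [m for m in (1, 2, 3) if m <= stones]

    def choose(stones, explore=True):
        moves = legal_moves(stones)
        if explore and random.random() < EPSILON:
            return random.choice(moves)       # occasional exploration
        return max(moves, key=lambda m: Q[(stones, m)])

    for _ in range(EPISODES):
        stones, history = 21, []              # (state, move) per ply
        while stones > 0:
            move = choose(stones)
            history.append((stones, move))
            stones -= move
        # The player who took the last stone wins; walk the game backwards,
        # flipping the reward's sign because moves alternate between players.
        reward = 1.0
        for state, move in reversed(history):
            Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
            reward = -reward

    # After self-play training, the greedy policy should take 1 stone from
    # 21, leaving the opponent a losing multiple of 4.
    print(choose(21, explore=False))
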
In October 2015, AlphaGo became the first computer program to defeat a
professional Go player. The match against the European champion Fan Hui
ended with a clear 5-0 victory. Only months later, in March 2016, an already
improved version of AlphaGo shockingly defeated one of the best human
players ever, Korean champion Lee Sedol, by 4-1. DeepMind did not stop
there, and in 2017 they developed AlphaGo Zero, which did not use human
games at all to train itself. Instead, it learned the game from scratch and, after
only three days of self-play, defeated AlphaGo by 100-0 (Hassabis & Silver
2017).
Finally, in December 2017, DeepMind created AlphaZero, a more general
program that could play chess, shogi and Go. In less than 24 hours of self-
play, it achieved super-human levels at all three games, crushing the com-
puter world champion in each game: Stockfish (chess, 28 wins, 72 draws, 0
losses), elmo (shogi, 90 wins, 8 draws, 2 losses), and AlphaGo Zero (Go, 60
wins, 40 losses) (Silver et al. 2018).

Current state of the art
Besides game playing, AI is currently making significant progress in image
recognition, computer vision, language manipulation, and prediction, with
huge possible impacts on healthcare, transportation, media, and the military.
It is beyond the scope of this article to offer a comprehensive review of all
the developments currently happening in all AI sub-fields. To better illustrate
both the positives and the challenges of this technology, we will focus on two
areas of research: image recognition and natural language processing.
Image recognition and classification provide the necessary framework for a
wide variety of applications. The ImageNet classification challenge, the
standard test for visual object recognition software, offers a telling picture
of the recent advances. The state-of-the-art algorithms have improved their
performance from a 26.2% error rate in 2011, to 15.3% in 2012, to 2.25%
in 2017 (Ouaknine 2018).
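For readers unfamiliar with the metric, ImageNet results are usually re-
ported as top-5 error: a prediction counts as correct if the true label appears
among the model’s five highest-scoring classes. A minimal sketch, assum-
ing PyTorch and random stand-in scores:

    import torch

    def top5_error(logits: torch.Tensor, labels: torch.Tensor) -> float:
        # Indices of each sample's five highest-scoring classes.
        top5 = logits.topk(5, dim=1).indices
        # A hit if the true label appears anywhere among those five.
        hits = (top5 == labels.unsqueeze(1)).any(dim=1)
        return 1.0 - hits.float().mean().item()

    logits = torch.randn(8, 1000)             # fake scores, 1000 classes
    labels = torch.randint(0, 1000, (8,))
    print(top5_error(logits, labels))
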
In healthcare, deep learning image-recognition algorithms are able to per-
form at human or super-human levels in a variety of tasks. Here are only two
of the most recent applications. A new model called MENDDL (Multinode
Evolutionary Neural Networks for Deep Learning) is as good as human ex-
perts at finding defects in electron microscopy images, only much faster (US Depart-
ment of Energy 2018). Another example comes again from DeepMind,
whose software can identify 50 eye diseases as accurately as human doctors,
by looking at 3D scans of retinas (Vincent 2018).
Other examples of image recognition applications come from ecology. Cli-
mate researchers are using AI to model with unprecedented accuracy the pos-
sible impact of climate change on cloud density, a type of prediction that used
to be thought of as simply too complex to ever make (Jones 2018). Auto-
mated drone-based and satellite wildlife surveys are also pivotal in the at-
tempts to monitor and protect endangered species (Palminteri 2018).
While progress in image recognition and computer vision already delivers
breakthrough applications with a huge potential for positive impact, there is
also a darker side to this technology. Firstly, there is the technique called
deepfake,5 which superimposes existing images or videos onto source images
or videos, using a machine learning technique called a generative adversarial
network (GAN). What this means, in practice, is that it is now possible to
create fake videos of anyone saying anything. One famous example is a video
of former president Barack Obama warning about the dangers of deepfake,
except the video itself is a deepfake (Suwajanakorn et al. 2017).
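The adversarial principle behind GANs can be shown in miniature: a gener-
ator learns to produce samples that a discriminator cannot tell apart from
real data, each network training against the other. The sketch below as-
sumes the PyTorch library and uses a toy 1-D distribution as a stand-in for
real images; deepfake systems apply the same tug-of-war to faces and video
frames at vastly larger scale.

    import torch
    import torch.nn as nn

    # Toy GAN: the generator learns to mimic a 1-D Gaussian centred at 3.0.
    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1),
                      nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0    # samples of "real" data
        fake = G(torch.randn(64, 8))             # generator's forgeries

        # Discriminator: learn to label real as 1 and fake as 0.
        loss_d = (bce(D(real), torch.ones(64, 1)) +
                  bce(D(fake.detach()), torch.zeros(64, 1)))
        opt_d.zero_grad()
        loss_d.backward()
        opt_d.step()

        # Generator: learn to make the discriminator output 1 on forgeries.
        loss_g = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()

    # The mean of generated samples should drift towards 3.0.
    print(G(torch.randn(1000, 8)).mean().item())
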

5 Stemming from ‘deep learning’ and ‘fake’.

This technology brings a lot of hope for more realistic character depictions
in the videogame and virtual reality industries, but it also raises serious ethi-
cal questions. It is easy to imagine how blurring the line between what is
verifiably real and what is not could negatively affect us both individually
and collectively.
At the individual level, it opens up the possibility of anyone being personally
targeted as the subject of fake videos, with catastrophic consequences for
one’s reputation. Actress Scarlett Johansson, who was herself a target of
computer-generated deepfake pornography, spoke out about how celebrities
might still be protected by their fame, while lesser-known persons would be
helpless in the face of such damaging campaigns (Harwell 2018). Deepfake
technology poses definite threats to the way we relate to one another.
At the collective level, this type of realistic fake video could have disastrous
effects for our society and our democracy. It is not difficult to imagine how
it could lead to serious diplomatic conflict, by falsely depicting politicians
making scandalous declarations or comments. But at a more fundamental
level, this technology has the potential to erode our society in an unprece-
dented way, by sowing the seed of distrust into any event or story. It is not
so much that false claims would be presented as real, but that anything could
be potentially fake.6 The infosphere is already overloaded with information,
and considerable effort and training are required to discern the real from the
dubious. The proliferation of deepfakes could render it even more difficult
to make sense of.
The second example of the dark side of using image recognition technology
is that of face recognition. While its current superb level of performance sim-
plifies many processes, from smartphone unlocking to airport security checks,
it can also be used in more chilling ways.
China’s new social credit system is designed to socially engineer behaviour.
Widespread surveillance allows the Chinese government to collect large
amounts of data on each citizen, and then compile it using AI to create a
social score. The data includes the individual’s tax and credit status, but also
her online and offline behaviour, including footage from street surveillance
cameras, powered by face recognition. A low score might prevent one from
getting credit or purchasing flights, or it could even lead to being blacklisted
as an ‘untrustworthy person’. Moreover, when such a person crosses some
intersections in Beijing, facial recognition allows the system to project their
face and ID on giant billboards, for public shaming (Campbell 2019).

6 Egelhofer & Lecheler speak of the two-dimensionality of fake news: the fake news
genre, i.e. the deliberate creation of pseudojournalistic disinformation, and the fake
news label, i.e. the instrumentalization of the term to delegitimize news media
(Egelhofer & Lecheler 2019).

This Orwellian example speaks for itself about the dangers of AI-powered
technologies for individual privacy and what that can do to a society. Harari
(2018: 61-68) explores the possibility of digital dictatorships, where 20th cen-
tury style authoritarian regimes combine with the technological surveillance
made possible by 21st century AI and big data. According to Harari, last-
century dictatorships ultimately failed because the technological and eco-
nomic landscape of the 20th century simply favoured distributed infor-
mation-processing (capitalism) over its centralized counterpart (com-
munism). However, the danger is that with the new technological landscape,
centralized processing might have the upper hand, making a dictatorship
more economically and politically efficient than a democracy. Furthermore,
while citizens subjected to 20th century dictatorships still preserved the fun-
damental freedom to keep their own thoughts to themselves, at times even
against torture, new technologies could do away even with that last resort of
inner freedom. Data from body sensors, combined with face-recognition al-
gorithms that can accurately infer one’s emotions, could finally enable the
secret police to tell whether one really loves Big Brother or is just pretend-
ing…
Another area of significant recent progress in AI and machine learning is nat-
ural language processing (NLP), which entails both language understanding
and language generation. Applications of NLP include examples with which
the general public is very familiar, such as machine translation (Google
Translate), intelligent personal assistants (Siri, Alexa etc.), search engines, or
chatbots. The benefits of these technologies and the ways they make life eas-
ier for so many of us are so obvious and widespread that it is unnecessary to
analyze them in more detail.
But this seemingly benign technology too can be used in a morally question-
able way, which is best illustrated by the example of GPT2, developed by
OpenAI (Hern 2019). GPT2 is a text generator, which means it is capable of
generating a coherent text starting from as little as a few words. It is impres-
sive both in the quality of its outputs – which match the input in subject as
well as style – and in the scope of its expertise: not only can it generate plau-
sible text, but it can also translate, summarize, and pass simple comprehen-
sion tests.
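For a sense of how such a text generator is driven in practice, here is a min-
imal sampling sketch, assuming the Hugging Face transformers library and
the small GPT2 checkpoint that OpenAI did release; the prompt is arbitrary.

    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    # Load the small, publicly released GPT2 checkpoint.
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prompt = "Recent advances in artificial intelligence"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    # Continue the prompt by sampling from the model's predicted
    # next-token distribution.
    output = model.generate(input_ids, max_length=60, do_sample=True,
                            top_k=50, pad_token_id=tokenizer.eos_token_id)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
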
However, precisely because it is such a breakthrough technology, its creators
decided that it is best not to release it yet, until they can better evaluate all
the ways in which it could be used. One such way is, of course, the generation
of fake news. Fed with merely the first paragraphs of a Guardian article on
Brexit, GPT2 was able to generate an entire story, including fake quotes from
Jeremy Corbyn. Another way to misuse GPT2 is to train it to generate infinite
positive or negative reviews of products, as demonstrated by the OpenAI en-
gineers themselves (Radford et al. 2019), or hateful content. These possible
uses raise ethical questions similar to those posed by deepfake above.

Theology and AI
One grand topic in the dialogue between theology and AI is that of eschatol-
ogy, apocalypticism, singularity, and salvation. In her 2003 article, Artificial
Intelligence and Christian Salvation: Compatibility or Competition?, Ilia
Delio (Delio 2003) draws the distinction between Christian salvation and
what she calls techno-salvation. Due to its “contingent contingency” (Delio
2003, 49), AI will never be able to provide the perfection of life and immor-
tality. Christian salvation, on the other hand, is qualitatively different: rooted
in the Incarnation and sacrifice of Christ, it concerns the whole human being,
not just the mind (as gnostic techno-salvation does), and it alone is congruent
with the dignity of the human person as imago Dei.
Robert M. Geraci (Geraci 2008) makes an interesting connection between
apocalyptic religious thought, Jewish and Christian, and current “Apocalyp-
tic AI”, treating the latter as a “legitimate heir” to the former (Geraci 2008:
158). Both share a dualistic view of the world: in religious apocalypticism,
God intervenes to grant the victory of good over evil; in Apocalyptic AI, it
is evolution that ensures the victory of intelligence over ignorance; both God
and evolution provide transcendent promises for the future. Geraci predicts
that Apocalyptic AI will grow in exposure, and urges theologians and schol-
ars of religion to pay more attention to it.
Ronald Cole-Turner also makes the case, in a 2012 paper (Cole-Turner
2012), that deeper analysis reveals a strong connection between religious
views of the future and transhumanist hopes for a singularity. He looks in
parallel at American evangelical views of the future, as a case study, and at
secular ideas of the singularity and the intelligence explosion, as exemplified
in the predictions of Ray Kurzweil. Cole-Turner concludes that the two,
while fundamentally different in many respects, also share many features,
like the postulation of a clear periodization in the future unfolding of the his-
tory of the cosmos.
Another big topic in the science & theology dialogue on AI is that of imago
Dei, creation in the image of God. Its connection with AI might sound sur-
prising at first, but the two are related in at least two ways. Firstly, the crea-
tion of intelligent beings by humans draws inevitable parallels with the divine
creation of humans. Secondly, the emergence on the historical scene of non-
human intelligence raises questions regarding what it means for humans to
be in the image of God, and of the possible inclusion of AI into the scope of
imago Dei.
In her influential article (Foerst 1998), Anne Foerst refers in a creative way
to the project of Cog, a humanoid robot created by the MIT AI Lab, connect-
ing it to the account of humans being created in the image of God. In her
approach, the two represent complementary stories: the very pursuit of build-
ing humanoid robots is a reflection of the creative powers that are part of
what it means to be in the image of God. Moreover, the common reactions of
humans to Cog, fear and awe, confirm the hypothesis that Cog stands as a
symbol for the divine creativity that is part of what it means to be a human
mirroring God.
Geraci (Geraci 2007) agrees with Foerst’s intuition about the human reac-
tions towards Cog, but argues that they may be better interpreted as proofs of
an unconscious elevation of machines to a divine status. Using Rudolf
Otto’s framework for the human encounter with the divine, Geraci makes a
strong case for how both Foerst’s empirical data and the science fiction of
the 20th century provide evidence of this machine apotheosis, which threatens
traditional Christian theologies.
Noreen Herzfeld’s book In Our Image: Artificial Intelligence and the Hu-
man Spirit (Herzfeld 2002) remains an early classic on the topic of intelli-
gent machines and imago Dei. By using the examples of famous AIs from
science fiction, like HAL 9000, R2-D2, or David, she questions humanity’s
project of trying to create something in its own image. She draws a parallel
with the biblical story of humans created in the image of God, for which she
analyzes three historical interpretations: substantive, functional, and rela-
tional.
In a later article (Herzfeld 2005), she paints a beautiful analogy between the
history of interpretation of imago Dei and the history of the field of AI. Just
as theologians have shifted from a substantive interpretation to a functional,
and finally to a relational one, so AI research has moved from a substantive
model (symbolic AI) to a functional one in the 1980s, with less ambitious
goals and more focus on the realization of narrow, specific tasks. She cor-
rectly predicts a second shift, from functional to relational, which she already
notices in late-1990s projects like Cog and Kismet, and which points towards
the long-awaited passing of the Turing Test, itself essentially relational.
Burdett (2015) follows up on the AI & imago Dei topic, but asks an even
more fundamental question: to what degree do current developments in the
information sciences (and biology) challenge the theological understanding
of what it means to be in the image of God? He follows Grenz and van
Huyssteen in identifying an additional historical interpretation of imago Dei
(besides Herzfeld’s substantive, functional, and relational ones), namely the dynamic
model. This roots imago Dei deeper in Christology, defining it not as some-
thing fixed, but rather as a dynamic following-out of our “true anthropologi-
cal source” (Burdett 2015: 5) in Christ. Burdett concludes that the flexibility
of this model also equips it with the best answers to the challenges from in-
formation sciences.
Other theologians have dared to contest the very notion of intelligence in
machines. In his paper Where There’s Life There’s Intelligence (Peters
2017), Ted Peters describes seven characteristics of intelligence: interiority,
intentionality, communication, adaptation, problem-solving, self-reflection,
and judgment.
Later on (Peters 2019) he returns to these characteristics to show that AI, at
least in its current disembodied instantiation, does not qualify as intelligence,
and that even the most primitive biological life satisfies the criteria better
than AI. For Peters, the problem with AI is that it lacks any sense of self or
agency, “there’s nobody home” (Peters 2019: 3). In his opinion, a more
promising route towards human-level AI is to reverse-engineer the human
brain, without any prior theory of intelligence, or even to amplify human in-
telligence through cyborg-ification, although both of these lines of attack
remain highly speculative.
In a paper looking at the practical consequences of AI’s ubiquity in society,
Mohammad Yaqub Chaudhary (Chaudhary 2019) argues that the progress in
AI is subtly fueling a secular re-enchantment of the world. The boundary
between reality and augmented reality (AR) is becoming increasingly dif-
fuse, and this combines with the fact that human perception is more and more
mediated by technology. The consequence beginning to unfold is a world
where human existence is shaped by the personal and social AI agents that
inhabit the AR in the form of digital avatars and daemons.
Finally, William Young (2019) asks an interesting question, which has half-
seriously been on the lips of many for a while: with AI being projected to
take over many tasks in the fields of transportation, finance or law in the near
future, could the jobs of clergy be partially or fully automated? The article
presents a host of existing AI technologies that, with a few tweaks, could
perform some of the tasks of ministers. IBM’s Project Debater, for example,
already matched an Israeli debate champion in 2018. With access to an ex-
tensive corpus of sermons and other theological resources, it is likely that a
machine-learning, sermon-writing AI would perform at a human level.
Similarly, a palliative-care chatbot could take over some parts of spiritual
care.
However, Young argues that developing such technologies would require a
substantial financial incentive, which seems rather unlikely to materialize.
Instead, what we may increasingly witness in the future is “the artifacts of
online spirituality” (Young 2019, 498) eroding the already-declining physi-
cal church attendance.

Conclusion
The engagement of theologians with the challenges posed by AI research
has so far been rather limited. Some topics, like the similarities and differ-
ences between singularitarian views of the future and religious eschatology,
have been well covered, with robust argumentation and quite clear conclu-
sions.
Other topics, like the radical transformations that AI deployment is already
causing to our society and our relationships to one another, are still in need
of profound theological evaluation.
In our opinion, there are three other very promising research paths in the di-
alogue between theology and science, on the topic of Artificial Intelligence.
Firstly, we seem to lack a more positive approach to the interrelation be-
tween the two. AI research seems to have abandoned its initial vocation –
that of exploring the nature of human intelligence – and focuses instead on
less ambitious, but more feasible, projects. Christian theological anthropol-
ogy has grappled for two thousand years with questions like what it means to
be human, or what is the role of the intellect in a virtuous life. It seems highly
unlikely that theology does not have any meaningful insight to contribute in
the interdisciplinary dialogue regarding what we should aim for in creating
Artificial Intelligence.
A second path, in the same spirit of positive appreciation and mutual collab-
oration, concerns the possibility of using AI as a fresh lens through which we
could look at religious experience. Could intelligent machines also experi-
ence faith? Is there any way of using machine learning algorithms to predict
religious behaviour, or does it totally escape any mathematical analysis?
Last but not least, advances in AI force theologians to clarify some funda-
mental concepts that for too long have been allowed to dwell in ambiguity.
What does it mean to be in the image of God? Are there any limits to human
creativity? Is there anything more to humans than the information processing
patterns in our bodies and brains? Hopefully, the possible imminence of in-
telligent machines can help the development of a ‘theology with a deadline’
that can better answer these questions.

References
Bishop C M (2006) Pattern Recognition and Machine Learning, Springer.
Burdett M S (2015) “The Image of God and Human Uniqueness: Chal-
lenges from the Biological and Information Sciences”, The Expository
Times 127(1): 3-10.
Campbell C (2019) “How China Is Using ‘Social Credit Scores’ to Reward
and Punish Its Citizens”, Time, http://time.com/collection/davos-
2019/5502592/china-social-credit-score/ (retrieved 31.05.2019).
Chaudhary M Y (2019) “Augmented Reality, Artificial Intelligence, and the
Re-enchantment of the World”, Zygon 54(2): 454–478.
doi:10.1111/zygo.12521.
Cole-Turner R (2012) “The Singularity and the Rapture: Transhumanist
and Popular Christian Views of the Future”, Zygon 47(4): 777-796.
Crevier D (1993) AI: The Tumultuous Search for Artificial Intelligence,
Basic Books, New York, NY.
Delio I (2003) “Artificial Intelligence and Christian Salvation: Compatibil-
ity or Competition?”, New Theology Review, November 2003.
Egelhofer J L, Lecheler S (2019) “Fake news as a two-dimensional phe-
nomenon: a framework and research agenda”, Annals of the Interna-
tional Communication Association, 43(2): 97-116,
doi:10.1080/23808985.2019.1602782.
Foerst A (1998) “Cog, a Humanoid Robot, and the Question of the Image
of God”, Zygon, 33(1): 91–111. doi:10.1111/0591-2385.1291998129.
Geraci R M (2007) “Robots and the Sacred in Science and Science Fiction:
Theological Implications of Artificial Intelligence”, Zygon, 42(4):
961–980, doi:10.1111/j.1467-9744.2007.00883.x.
- (2008) “Apocalyptic AI: Religion and the Promise of Artificial Intelli-
gence”, Journal of the American Academy of Religion, 76(1): 138–
166, doi:10.1093/jaarel/lfm101.
Harari Y N (2018) 21 Lessons for the 21st Century, Jonathan Cape, Lon-
don.
Harmon P (2019) “AI Plays Games”, Forbes,
https://www.forbes.com/sites/cognitiveworld/2019/02/24/ai-plays-
games/#7b8e7da4a49f (retrieved 31.05.2019).
Harwell D (2018) “Scarlett Johansson on fake AI-generated sex videos:
‘Nothing can stop someone from cutting and pasting my image’”, The
Washington Post, https://www.washingtonpost.com/technol-
ogy/2018/12/31/scarlett-johansson-fake-ai-generated-sex-videos-noth-
ing-can-stop-someone-cutting-pasting-my-image/?noredi-
rect=on&utm_term=.6c084d183413 (retrieved 31.05.2019).
Hassabis D, Silver D (2017) “AlphaGo Zero: Learning from scratch”,
DeepMind blog, https://deepmind.com/blog/alphago-zero-learning-
scratch (retrieved 31.05.2019).
Haugeland J (1985) Artificial Intelligence: The Very Idea, MIT Press, Cam-
bridge, MA.
Hern A (2019) “New AI fake text generator may be too dangerous to re-
lease, say creators”, The Guardian, https://www.theguardian.com/tech-
nology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fic-
tion (retrieved 31.05.2019).
Herzfeld N (2002) In Our Image: Artificial Intelligence and the Human
Spirit, Fortress Press, Minneapolis, MN.
- (2005) “Co-creator or co-creator? The problem with artificial intelligence”, in U
Görman, W B. Drees and H Meisinger (eds.), Creative Creatures: Val-
ues and Ethical Issues in Theology, Science and Technology, T&T
Clark, London: 45-52.
Jones N (2018) “Can Artificial Intelligence Help Build Better, Smarter Cli-
mate Models?”, Yale E360, https://e360.yale.edu/features/can-artifi-
cial-intelligence-help-build-better-smarter-climate-models (retrieved
31.05.2019).
McCarthy J, Minsky M L, Rochester N, Shannon C E (1955) A Proposal for
the Dartmouth Summer Research Project on Artificial Intelligence,
http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf (retrieved
31.05.2019).
Moore G E (1965) "Cramming more components onto integrated circuits",
Electronics 38 (8).
Moravec H (1988) Mind Children, Harvard University Press.
Norvig P, Russell S J (2003) Artificial Intelligence: A Modern Approach,
Prentice Hall.
Ouaknine A (2018) “Review of Deep Learning Algorithms for Image Clas-
sification”, Medium, https://medium.com/zylapp/review-of-deep-learn-
ing-algorithms-for-image-classification-5fdbca4a05e2 (retrieved
31.05.2019).
Palminteri S (2018) “10 ways conservation tech shifted into auto in 2018”,
Mongabay, https://news.mongabay.com/2018/12/10-ways-conserva-
tion-tech-shifted-into-auto-in-2018/ (retrieved 31.05.2019).
Peters T (2017) “Where There’s Life There’s Intelligence”, in What is
Life? On Earth and Beyond, A Losch (ed.), Cambridge University
Press, 236-259.
- (2019) “Intelligence? Not Artificial, but the Real Thing!”, Theology and
Science, 17(1): 1-5, doi:10.1080/14746700.2018.1557376.
Radford A et al. (2019) “Better Language Models and Their Implications”,
OpenAI blog, https://openai.com/blog/better-language-models (re-
trieved 31.05.2019).
Searle J (2011) “Watson Doesn’t Know It Won on ‘Jeopardy!’” Wall Street
Journal, https://www.wsj.com/arti-
cles/SB10001424052748703407304576154313126987674 (retrieved
31.05.2019).
Silver D et al. (2016) “Mastering the game of Go with deep neural networks
and tree search”, Nature 529: 484-489.
Silver D, Hubert T, Schrittwieser J, Hassabis D (2018) “AlphaZero: Shed-
ding new light on the grand games of chess, shogi and Go”, DeepMind
blog, https://deepmind.com/blog/alphazero-shedding-new-light-grand-
games-chess-shogi-and-go (retrieved 31.05.2019).
Suwajanakorn S, Seitz S M, Kemelmacher-Shlizerman I (2017) "Synthesiz-
ing Obama: Learning Lip Sync from Audio". ACM Trans. Graph.
36.4: 95, doi:10.1145/3072959.3073640.
U.S. Department of Energy (2018) “Deep learning for electron micros-
copy”, Phys.org, https://phys.org/news/2018-12-deep-electron-micros-
copy.html (retrieved 31.05.2019).
Vardi M Y (2012) “Artificial Intelligence: Past and Future”, Communica-
tions of the ACM, 55(1): 5, doi:10.1145/2063176.2063177.
Vincent J (2018) “DeepMind’s AI can detect over 50 eye diseases as accu-
rately as a doctor”, The Verge, https://www.thev-
erge.com/2018/8/13/17670156/deepmind-ai-eye-disease-doctor-moor-
fields (retrieved 31.05.2019).
Young W (2019) “Reverend Robot: Automation and Clergy”, Zygon 54(2):
479-500. doi:10.1111/zygo.12515.
