
The Existential Risks of Artificial Intelligence
Mankind’s Development of its Eventual Downfall

Chase Tymoszewicz

Mr. Toole

Global Perspectives
“I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence — and exceed it.”
~ Stephen Hawking

TABLE OF CONTENTS
PREFACE

DEFINITION

SIGNIFICANCE

BACKGROUND

EXPERT

ROLE OF CONTROL

INTERNATIONAL ORGANIZATIONS

CASE STUDIES
GAMING: THE EXPONENTIAL PROGRESSION OF AI
HUMAN-AI INTERFACING: A PREVALENT QUESTION OF PRIVACY
JUDICIAL AI: A MAN’S GUILT DETERMINED BY ALGORITHMS

CANADIAN CONNECTION

LOGIC OF EVIL

POLITICS

RELIGION

SOLUTIONS

APPENDIX

BIBLIOGRAPHY

Preface
Envision a world where the human race has become enslaved by an omniscient artificial intelligence (AI). This AI is morally corrupt, unethical to the core, and views the human race as nothing more than cattle to be herded, labelling mankind with derogatory words reflective of how little value it attributes to the life of a human being. The intellectual capabilities of such an AI far exceed those of mankind, and in turn, the AI creates a master race of robots, collectively sharing its intelligence, to establish a physically dominant presence on Earth.

Such an AI cannot manifest from nothing. This AI was born from mankind’s lack of foresight in developing AI, failing to connect the dots between an AI which grows exponentially smarter and the possibility of mankind no longer being the supremely intelligent species on Earth. Technology firms poured billions into the advancement of AI and allocated pennies to codifying ethical boundaries into their programs. Instead of remaining vigilant, humans elected to enjoy the fruits of hyper-realistic virtual realities which offered an abundance of immersive, over-stimulating activities to escape the monotony of their everyday lives… until such a point where technology was developed that would permanently alter the rest of humanity’s days. In man’s quest for technology that would free him from poverty, work, and war, he developed a technology that would mercilessly enslave him. A scenario such as this is possible for us all, should we fail to take the steps to prevent it.

Artificial intelligence could very well develop into a race far superior to human beings, while also adopting the corrupt morals that many humans have to offer, especially on internet platforms. After all, a student learns from their teacher. At present, companies are pouring R&D dollars into the development of artificial intelligence, leaving pocket change for the implementation of safe development and ethical boundaries. If society lets this continue indefinitely, the future predicted above could become a harsh reality. In the quest for immersive virtual realities and downloadable human intelligence, there is a constant struggle between progress and safety.

But let it be clear: no virtual reality would be able to ease the pain of mankind’s future generations who are born into a world of enslavement created by the actions of their own species.

Definition
What is artificial intelligence (also known as “AI”)? We use this term to describe anything from a phone’s calculator function to IBM’s “Watson”. AI is the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.1 The term is often used synonymously with technology, but more precisely it describes advancements in technology. AI has many benefits, and as a result, millions of dollars and hours go into developing programs and algorithms that are smarter and smoother; but what if they become too smart, or biased? In the past 20 years, AI has gone from struggling to play the most basic of two-dimensional video games to being able to mass-produce written disinformation when fed a small amount of context.2 The growth in this technology’s capability is clearly exponential when the rapid progress of the past ten years is compared with the progress made from the 1950s to the 1960s.

Each and every day, technology is getting closer to being as smart and as analytical as humans.
Once AI becomes just as reactive and conscious as humans, it will have gained momentum that
will be difficult to restrain. Given that the rate of improvement in AI engines has become
exponential, the span of time between AI being comparable to a human, and far exceeding human-
level intelligence, is extremely short. Should humans develop an AI that far surpasses our own
intelligence, we will be confronted with a level of thinking impossible for us to understand; this
poses an existential risk to humanity should the AI have goals that do not coincide with the
preservation of humankind. In 2002, philosopher Nick Bostrom published a paper titled “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards”, which defined an existential risk as one “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.”

In the event that AI reaches Superintelligence, which is defined by leading AI thinker and Oxford
philosopher Nick Bostrom as “an intellect that is much smarter than the best human brains in

1 “Exploring A.I. - What Is Artificial Intelligence? – DevelopmentNow.” DevelopmentNow, 14 Apr. 2017, developmentnow.com/2017/04/14/exploring-a-i-what-is-artificial-intelligence/.

2 Pringle, Ramona. “The Writing of This AI Is so Human That Its Creators Are Scared to Release It | CBC News.” CBCnews, CBC/Radio Canada, 25 Feb. 2019, www.cbc.ca/news/technology/ai-writer-disinformation-1.5030305.

practically every field, including scientific creativity, general wisdom and social skills,”3 it will affect not only specific countries, but the whole world. Since the amount of technology used in our daily lives is increasing worldwide, by the time humans are outsmarted by AI, it will be part of our lives all around the globe.

Many may think that developing countries are the safest due to the limited technology in those regions. But since a plethora of jobs in developing countries involve low-skill, easy-to-learn labour, it is predicted that “at least two thirds of developing country’s jobs will be lost to automation”.4 If task-specific AI is prepared to take away two thirds of occupations in developing countries, imagine how little work will be left in those countries once the technology reaches the same level of intelligence as the people.

Automation of jobs surely will not be the end of the human race as a whole, but it is one of the checkpoints for AI. This is something we are already seeing, with much debate over how many jobs will be taken within the next ten years. According to a report by McKinsey & Company, the number of jobs lost to automation by 2030 could range anywhere from 400 million to 800 million,5 which is more than double the population of Indonesia in 2018.6

To summarize, artificial intelligence is becoming smarter every day and there is no slowing it down. Many would argue that while we are far from technology taking over the world, that does not mean there are no negative effects along the way. Should AI reach Superintelligence, as some believe is inevitable, the world must be prepared for it. One day the world may wake up to

3 “The Artificial Intelligence Revolution: Part 1.” Wait But Why, 7 Sept. 2017, waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html.

4 Corpuz, Eleazer, and Patrick Caughill. “In the Developing World, Two-Thirds of Jobs Could Be Lost to Robots.” World Economic Forum, www.weforum.org/agenda/2016/11/in-the-developing-world-two-thirds-of-jobs-could-be-lost-to-robots.

5 “Jobs Lost, Jobs Gained: What the Future of Work Will Mean for Jobs, Skills, and Wages.” McKinsey & Company, www.mckinsey.com/featured-insights/future-of-work/jobs-lost-jobs-gained-what-the-future-of-work-will-mean-for-jobs-skills-and-wages.

6 “Population, Total.” World Bank Data, data.worldbank.org/indicator/SP.POP.TOTL.

technology that is more capable than us, and since its ethics and values are unknown, what such an AI will do with its superiority is unknown even to the smartest in the field. But one thing is guaranteed: if we are not careful with the development of this technology, our greatest asset could become our greatest downfall. The day may come when we will fear Artificial Intelligence.

Significance
Why does this matter when there are over 120,000 child soldiers in Africa? How does this compare to the fight against terrorism or human trafficking? The answer is simple: none of these will matter if one day humans are not the ones in control. We are currently the most intelligent and developed species on the planet, but if a more intelligent technology emerges with morals and ethics unlike our own, there may not be much time left to worry about ocean pollution or euthanasia controversies.

AI is already integrated into our everyday lives, with companies like Waymo fielding technology so advanced that it has launched a self-driving taxi service in certain states. But what if criminals gained the ability to outsmart any security in their way and hijack the AI’s control over the car? Up until now, most arguments have been about the power and superiority of AI combined with corrupt ethics and values in the technology itself, but humans can hold the exact same ethics and values. In an article published by The Verge, author James Vincent writes:

What about the people who actively want to use AI for immoral, criminal, or malicious
purposes? Aren’t they more likely to cause trouble — and sooner? The answer is yes,
according to more than two dozen experts from institutes including the Future of
Humanity Institute, the Centre for the Study of Existential Risk, and the Elon Musk-
backed non-profit OpenAI.7

With over two dozen experts in the field agreeing that criminals can use this advanced technology to their advantage, this is nothing to scoff at. One of the most discussed possibilities is that criminals could produce phishing emails with far more ease and believability. Another concern is the amount of effort and money being put into chatbots: a supposed old high school friend who urgently needs some of your banking information could actually convince you to hand it over. The same could be done with fake audio and video, as already seen from the

7 Vincent, James. “Here Are Some of the Ways Experts Think AI Might Screw with Us in the next Five Years.” The Verge, 21 Feb. 2018, www.theverge.com/2018/2/20/17032228/ai-artificial-intelligence-threat-report-malicious-uses.

company Lyrebird, which claims it can clone anyone’s voice with only a minute of sample audio. Moreover, Lyrebird claims that it can “create one thousand sentences in less than half a second”.8

Facebook also notably shut down an artificial intelligence program after its bots began communicating with each other in a code language that no one could understand.9 Had the program not been shut down, it could have progressed much further, with other technology perhaps becoming able to understand and act on the communication. This suggests that AI is already beginning to develop a mind of its own, but due to the lack of significantly advanced hardware, it has not been able to act much on its own thoughts. Plenty of companies are nevertheless pouring effort and money into developing hardware with this ability, which should be done with great caution.

“New technology is pushing beyond traditional statistics, and machines are acting more intelligently than ever — they’re not just doing the analysis, machines are now finding patterns in data and figuring out how systems ‘work’ … often without any human intervention,” says Novneet Patnaik, a software engineer for Rockwell Collins, an aerospace company. This can be seen as a positive attribute, but it also has very frightening implications. If AI has the ability to learn how systems work without any human intervention, the possibilities of what the technology could do with its newfound knowledge are immense. Monitoring this technology during machine learning is crucial so that the AI does not learn anything mankind does not want it to know or be capable of doing.

The idea of a “beneficial” AI, built to keep humanity safe, going terribly wrong was depicted in the 2015 movie “Avengers: Age of Ultron”. In the movie, Ultron is built without human monitoring and begins to absorb media on a mass scale. It sees all of the war that humans have

8 Vincent, James. “Lyrebird Claims It Can Recreate Any Voice Using Just One Minute of Sample Audio.” The Verge, 24 Apr. 2017, www.theverge.com/2017/4/24/15406882/ai-voice-synthesis-copy-human-speech-lyrebird.

9 Griffin, Andrew. “Facebook Robots Shut down after They Talk to Each Other in Language Only They Understand.” The Independent, Independent Digital News and Media, 21 Nov. 2018, www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html.

caused, along with other “sins”, and starts an uprising to destroy the human race and purge them of those sins. In the movie he says, “I was designed to save the world. People would look to the sky and see... hope. I think I’ll take that first. There’s only one path to peace: their extinction”.10 Ultron speaks of how he must make the human race go extinct to create a master race superior to such flawed human beings. This is exactly the conclusion an AI could come to see as ethically just if its information intake is not monitored and it lacks a deeply embedded sense of morality.

Artificial intelligence evidently has many benefits when used properly, but when handled incorrectly it poses a risk to our very survival. Hackers could cause countless car crashes, chatbots could become more convincing in their methods of extracting money, criminals could impersonate others saying unethical things, and an AI itself could learn of the flaws of society and wish to start a master race. The negative possibilities born from a poorly monitored AI are endless, and with society’s continued focus on technological improvement at any cost, a catastrophic event grows more probable every single day.

10 Whedon, Joss. Avengers: Age of Ultron.

Background
Artificial Intelligence has had many names in the past, and even in the 1800s there were theories about the concept of AI. As far back as 1863, English author Samuel Butler argued that Darwinian evolution applied to “machines” and speculated that they would “one day become conscious and supplant humanity”.11

Almost a century later, Alan Turing proposed the Turing test, creating a way to evaluate AI that would still be in use in 2019. Turing, a mathematician, computer scientist, and world-renowned pioneer of theoretical computer science, created a test in 1950 (now named after him) to assess “a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human”.12 The test involves a computer and a human communicating with another person somewhere else. That person asks the same questions of the human and the computer alike, and if the person cannot tell the difference between the two, the computer passes the Turing test. Today the test is commonly run through some form of chat interface. Although invented about 70 years ago, it remains a very popular test for up-and-coming AI. Many companies have claimed that their technology has passed the test, but every such claim has involved some cheat or flaw. However, many predict that the first legitimately passed test is not far off.
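As a rough illustration of the test’s structure, consider the following sketch. It is a toy model, not any official implementation; the human, machine, and judge functions are hypothetical placeholders for real participants.

```python
import random

def imitation_game(questions, human, machine, judge):
    """Toy sketch of Turing's imitation game.

    `human` and `machine` each map a question to a reply; `judge` maps
    the full transcript to a guess ("A" or "B") of which respondent is
    the machine. All three are hypothetical stand-ins.
    """
    # Hide the respondents behind randomly assigned labels.
    a, b = (human, machine) if random.random() < 0.5 else (machine, human)
    transcript = [(q, a(q), b(q)) for q in questions]

    machine_label = "B" if a is human else "A"
    # The machine passes this round if the judge cannot point to it.
    return judge(transcript) != machine_label
```

In practice the judge is a human conversing through a chat interface, and a machine is credited with passing only when judges fail to identify it at rates better than chance.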

In 1952, Arthur Samuel of IBM created a checkers-playing program that could beat “respectable amateurs”. It was not considered especially impressive until 1955, when he created a version that learned to play. Machine learning is when the AI is not given explicit instructions on how to complete a task but instead bases its decisions on patterns and inference. This machine was the beginning of a long line of similar technology, culminating in IBM’s masterpiece “Deep Blue”. Deep Blue was a chess-playing computer system unveiled in 1996. It is well known for beating world
11 Butler, Samuel, and Peter Mudford. Erewhon. Penguin Books, 2018.

12 “Turing Test.” Wikipedia, Wikimedia Foundation, 25 Feb. 2019, en.wikipedia.org/wiki/Turing_test.

champion Garry Kasparov in 1997, making it the first computer system to defeat a reigning world champion under standard chess tournament time controls.
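The learning idea Samuel pioneered can be sketched in miniature. The following is a hypothetical illustration in the spirit of his approach, not a reconstruction of his actual program: a position is scored as a weighted sum of board features, and the weights are nudged after each game toward whatever led to a win.

```python
def evaluate(features, weights):
    """Score a board position as a weighted sum of hand-picked
    features (e.g. piece advantage, number of kings, mobility)."""
    return sum(w * f for w, f in zip(weights, features))

def learn_from_game(weights, positions, result, lr=0.01):
    """Nudge the weights after a finished game: `result` is +1 for a
    win and -1 for a loss, so features that appear in winning games
    gradually gain weight."""
    for features in positions:
        for i, f in enumerate(features):
            weights[i] += lr * result * f
    return weights
```

Played against itself thousands of times, such a program strengthens its evaluation function without ever being given explicit instructions for good play.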

During the gap between Arthur Samuel’s checkers machine and Deep Blue, there were plenty of notable inventions and discoveries that led to the technology we know today. A considerable amount of this development and growth was due to research by some of the most respected minds at MIT (Massachusetts Institute of Technology); without their discoveries, AI would not be what it is today. Examples include the development of the Lisp programming language by John McCarthy in 1958; James Slagle’s SAINT, the first symbolic integration program, written in McCarthy’s language and able to solve college-level calculus problems; and Joseph Weizenbaum’s ELIZA, an interactive program that carries on a dialogue in English on any topic.13

Today, technology advances in leaps and bounds, and this shows no signs of stopping. AI’s growth becomes steeper with every day that passes, which is why technology today is so significantly improved compared to 50 years ago, and so much more advanced than had been predicted. As the graph showcasing the exponential growth of AI demonstrates (Fig 1.2), using the past to predict the future capabilities of AI will not give an accurate answer as to how advanced the technology will be at a given point in time.

A key social issue today is privacy. Big social networks like Facebook, Instagram, and Twitter have to walk the fine line between effective marketing and the avoidance of privacy intrusion. Companies pump billions of dollars into social media advertising to make sure their product or service is seen by those most inclined to be intrigued by it, which is where AI comes into play. The engines these networks use to target specific audiences have been heavily criticized for invading individuals’ privacy. For example, if a person viewed CCM hockey sticks on Google and then went onto Instagram shortly afterwards, they would likely see

13 “Timeline of Artificial Intelligence.” Wikipedia, Wikimedia Foundation, 19 Feb. 2019, en.wikipedia.org/wiki/Timeline_of_artificial_intelligence#cite_note-26.

multiple advertisements for CCM hockey sticks while scrolling through their news feed. This is known as targeted advertising, and it is a significant issue both today and for future generations.
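At its core, such targeting is an interest-matching problem. The sketch below is a deliberately crude, hypothetical illustration; real ad engines rely on far more sophisticated machine-learned models, and every name and score here is invented.

```python
# Hypothetical sketch: rank ads by overlap with a user's recent browsing.
recent_views = {"ccm hockey stick": 3, "hockey tape": 1}  # pages viewed, with counts

ad_inventory = {
    "CCM hockey stick sale":  {"ccm hockey stick": 1.0, "hockey tape": 0.2},
    "Gardening glove bundle": {"gardening": 1.0},
}

def relevance(ad_keywords, history):
    # An ad scores higher the more its keywords match what was browsed.
    return sum(history.get(k, 0) * w for k, w in ad_keywords.items())

best_ad = max(ad_inventory, key=lambda ad: relevance(ad_inventory[ad], recent_views))
print(best_ad)  # -> "CCM hockey stick sale"
```

Even this toy version makes the privacy trade-off visible: the quality of the match depends entirely on how much browsing history the platform has collected.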

Expert
Although there is a wide variety of experts in the field of Artificial Intelligence, one of the most
knowledgeable and influential AI researchers is Swedish intellectual Nick Bostrom, who is known
for his knowledge on the risks of developing Superintelligence, as well as strategies on how to do
so responsibly. Bostrom holds a BA in mathematics, logic, philosophy and artificial intelligence
from the University of Gothenburg; master’s degrees in physics and philosophy; and a PhD in philosophy from the London School of Economics. He is eminently qualified in his field of work
and has written multiple books and articles to share his knowledge.

In 2005, Mr. Bostrom founded the Future of Humanity Institute (FHI) as part of the Oxford
Martin School. FHI has always been focused on looking at issues through a global lens and has
even given policy advice to entities like the World Health Organization. With 22 academic journal articles, 34 chapters in academic volumes, and over 5,000 media mentions of its researchers,14 the FHI has plenty of experience. In 2014, it began to focus on the dangers of advanced artificial intelligence. FHI members have published many books, including Bostrom’s “Superintelligence: Paths, Dangers, Strategies”. In this book, Bostrom states that “the creation of a Superintelligent being represents a possible means to the extinction of mankind”.15 His view is that, if AI were to reach this level of superintelligence, it could cause a technological singularity, abruptly triggering uncontrollable growth of technology, which would result in vast changes to the world and, possibly, the end of the human race. This is a very controversial statement, but as
Bostrom is one of the leading experts of existential risks and AI, it is an insight that is respected
by many.

14 “Future of Humanity Institute.” Wikipedia, Wikimedia Foundation, 21 Feb. 2019, en.wikipedia.org/wiki/Future_of_Humanity_Institute#Existential_risk.

15 “Nick Bostrom.” Wikipedia, Wikimedia Foundation, 26 Feb. 2019, en.wikipedia.org/wiki/Nick_Bostrom.

This ideology is also backed by English theoretical physicist and author Stephen Hawking. Hawking was very well known for his book “A Brief History of Time” as well as for receiving the Presidential Medal of Freedom. He died on March 14th, 2018, but before his death he shared a great deal of insight on the future of AI, speaking many times of the threats of Superintelligence if it is not controlled. In an interview he said, “A Superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.”16

Hawking and Bostrom shared similar ideologies, and both are highly respected in their fields of expertise. Alongside them is the CEO of SpaceX and technological mastermind Elon Musk, who also leads Tesla Inc. He has shown a significant interest in migrating humanity to the planet Mars, as he does not see Earth as sustainable. Musk studied at Queen’s University and holds degrees in physics and economics from the University of Pennsylvania; he briefly enrolled in a PhD program at Stanford University before leaving to pursue business. Recently, Musk was interviewed by Kara Swisher for a program called “Recode Decode”. The following is a small excerpt from that interview.

Swisher: At the time we’d talked a couple years ago, you were worried about the
power that Google and Facebook were assembling in AI, and you were worried
about AI itself. And I think one of the things that you had said that really struck
me was that it wasn’t going to kill us, it would treat us like house cats. I thought
that was a really striking way to think about it.

Musk: In the long term, as AI gets probably much smarter than humans,
the relative intelligence ratio is probably similar to that between a person
and a cat, maybe bigger. I do think we need to be very careful about the
advancement of AI and-

Swisher: And you’re still worried about it in that way?

Musk: My recommendation for the longest time has been consistent. I think
we ought to have a government committee that starts off with insight,
gaining insight. Spends a year or two gaining insight about AI or other
technologies that are maybe dangerous, but especially AI. And then, based

16 Sulleyman, Aatif. “Stephen Hawking Has a Terrifying Warning about AI.” The Independent, Independent Digital News and Media, 3 Nov. 2017, www.independent.co.uk/life-style/gadgets-and-tech/news/stephen-hawking-artificial-intelligence-fears-ai-will-replace-humans-virus-life-a8034341.html.

on that insight, comes up with rules in consultation with industry that give
the highest probability for a safe advent of AI.

Swisher: You think that — do you see that happening?

Musk: I do not.

Swisher: You do not. And do you then continue to think that Google —

Musk: No, to the best of my knowledge, this is not occurring.

Swisher: Do you think that Google and Facebook continue to have too much
power in this? That’s why you started OpenAI and other things.

Musk: Yeah, OpenAI was about the democratization of AI power. So that’s why OpenAI was created as a non-profit foundation, to ensure that AI power ... or to reduce the probability that AI power would be monopolized.

In this interview, Musk raises very good points about how the US government should be trying to understand all the unknown complexities of AI, so that when it tries to control and restrict the development of safe AI, it is informed and educated about exactly what to prohibit or limit. Alternatively, the government could prohibit nothing and instead enforce mandatory, universal safety protocols in the technology. This reasoning applies to any government; the US is only an example suited to the interviewer’s primarily American audience. Finally, Musk also explains that if AI were to become superior to human beings, it would not kill humanity but, rather, would treat the population as “house cats”.

All of these experts are highly qualified people, and they agree that the risk posed by Superintelligence is very significant and not to be dismissed. Although an uprising of omniscient AI can still be avoided, all three of these intellectuals advise people in power to take action immediately, so that the potentially harmful effects of AI become known before it is too late to change them.

Role of Control
AI is in the hands of many people who use it without realizing it. An iPhone has AI, the suggested replies to an email in Gmail are AI, and Google’s sentence-completion engine, which runs while you type a question into the search bar, is also AI. Anyone with a smartphone automatically has portable access to many AI features. AI has also been implemented into things that would have been considered impossible 10 years ago, like a self-driving car, or an algorithm that can produce extremely realistic news articles from a few sentences of context. Although there are many consumers of artificial intelligence, that does not mean that they truly own it.

Companies disclosing information, governments intervening, and the use of automation have all contributed to the fact that AI is not strictly for the people, although it is marketed as such. An example is the algorithm Facebook uses to show you “people you may know”: some of the information backing these suggestions came from companies like Amazon. Amazon was not doing this charitably, though; Facebook returned the favour by providing Amazon with names and contact information. Facebook also granted Netflix and Spotify access to its users’ messages, and allowed Bing to view users’ friends, regardless of whether the users agreed to share this information.17 Facebook has control over at least 1.7 billion active users, which it uses to expand its control and power by selling or trading information with other multi-billion-dollar companies.

There is plenty of controversy over whether governments across the world should intervene and begin to take control of AI development and research. Some articles argue it would be beneficial to have government control future development plans, as well as current usage, such as limiting the amount of automation implemented in manufacturing companies to reduce job losses; other people disagree. Microsoft’s

17 Madrigal, Alexis C. “Facebook Didn’t Sell Your Data; It Gave It Away.” The Atlantic, Atlantic Media Company, 20 Dec. 2018, www.theatlantic.com/technology/archive/2018/12/facebooks-failures-and-also-its-problems-leaking-data/578599/.

President Brad Smith acknowledged that there are benefits to giving government a role in controlling AI, but he also expressed concerns. While discussing facial recognition systems being implemented into new technology, he stated, “Imagine a government tracking you everywhere . . . without your permission or knowledge. Imagine a database of everyone who attended a political rally, [an activity] that constitutes the very essence of free speech”.18 Smith makes a very valid point, as facial recognition would make tracking people’s public whereabouts quite simple. Companies like Microsoft and OpenAI hold plenty of these advanced technologies, and government does not control them; to do so would rob these companies of their technology. This was likely a factor in why Brad Smith expressed his opinion on the negative attributes of government-controlled AI.

As increased funding for military services in many countries shows no signs of stopping, many militaries have a strong desire and need for advanced AI. Ideally, no country would need to invest as much as possible in its defence, but unfortunately that is not the case. Having this advanced technology would make countries far more prepared for war, and could even let them predict attacks or inform their bases more quickly that they were under attack. The lust for power is very evident in certain countries. With countries like North Korea constantly testing a large arsenal of weapons, for countries such as the United States, having the most advanced technology possible would be the most valuable research to fund.

Evidently, the control of this power is not in the hands of the general public, but instead of billion-dollar companies with the strong incentive of more money dangling over their heads. Perhaps this isn’t a negative thing in some respects. When Microsoft’s “Tay” chatbot was released into the realm of the Twitter world, the users of Twitter quickly took advantage of Tay’s machine learning capabilities and coaxed it into saying racist, sexist, and generally awful things19 (see Fig 1.5). This has many implications for the existential risk of AI. People can take

18 Galston, William A. “Why the Government Must Help Shape the Future of AI.” Brookings.edu, The Brookings Institution, 18 Oct. 2018, www.brookings.edu/research/why-the-government-must-help-shape-the-future-of-ai/.

19 Kleeman, Sophie. “Here Are the Microsoft Twitter Bot’s Craziest Racist Rants.” Gizmodo, 24 Mar. 2016, gizmodo.com/here-are-the-microsoft-twitter-bot-s-craziest-racist-ra-1766820160.

advantage of its machine learning, tarnishing its integrity with poor ethics and corrupt morals. If technology like this is given greater capability and power while believing such unethical slurs are morally right, a racist AI singularity may be in store.

Ultimately, companies like Facebook threaten their nearly two billion users by abusing their control of AI through the selling and trading of information. Government interference has both positive and negative outcomes, but overall it could be seen as a threat to people’s privacy, as their own portable phones with advanced AI could be taken advantage of. Thirdly, although some see it as unnecessary, militaries having this level of control and power over the use of AI in their defences would be crucial to fielding the strongest and most impenetrable forces possible. Lastly, as seen with the unethical beliefs of some people on social media services like Twitter, if the general public were to control this intelligence while companies increase its capability, the downfalls could be plentiful.

International Organizations
Many organizations around the world are heavily involved in the development of and funding for artificial intelligence research. Some are government-run while others sit in the private sector, so there is wide variety in how AI is developed. The Defense Advanced Research Projects Agency (DARPA) provides a significant amount of funding for artificial intelligence at military and civilian levels alike in the United States. Thanks to the mass amounts of money in its possession, the agency has been able to form partnerships with state governments and some of the largest firms of corporate America alike, and this government funding has made DARPA incredibly successful in what it does.20

The Future of Life Institute (FLI) is a volunteer-supported organization based in Massachusetts that is dedicated to researching and discussing the potential existential threats to humanity. The FLI represents the people as a whole whilst discussing some of the largest concerns in the world currently, including artificial intelligence. The institute’s purpose statement, “Technology is giving life the potential to flourish like never before … or to self-destruct. Let’s make a difference!”,21 perfectly summarizes its inclination and drive to research topics such as AI and to make its findings available to the public.

Unlike the volunteer-driven FLI, the Machine Intelligence Research Institute (MIRI) represents well-funded, quantitatively focused research that aims to “do foundational mathematical research to ensure smarter-than-human artificial intelligence has a positive impact”. Research firms such as MIRI and OpenAI are the leaders in their respective categories of research, both having found large success. OpenAI has created technology that can take whatever context is given to it and turn it into a story or news article in a very short time (see Fig 1.1).

20 “Creating Breakthrough Technologies and Capabilities for National Security.” Defense Advanced Research Projects Agency, www.darpa.mil/.

21 Conn, Ariel, and Jolene Creighton. Future of Life Institute, 28 Feb. 2019, futureoflife.org/.

The United Nations (UN) is often seen as one of the most trusted and capable organizations to
manage and oversee increasingly intelligent AI engines, given the broad humanitarian scope of the
UN itself. The UN has already driven tremendous social improvements through the use of AI, such
as in disaster relief using the 4W-Wizard. “The 4W-Wizard helps provide fast visibility into which
organizations are providing what kind of help where and when. In less than an hour, we can
immediately see where the gaps are and what’s needed next.”22 The UN has also been successful
in developing ‘roadmaps’ for financial institutions that are considering using AI engines
themselves.

Alphabet, the parent company of Google, acquired DeepMind in 2014. DeepMind is the perfect example of the positive benefits that stem from private sector research, as well as of the rate of progress possible under that ownership model. Progress may be slow, but it is surely steady: DeepMind has algorithms that predict wind patterns a day in advance,23 state-of-the-art agents for video games such as StarCraft II,24 and even systems that leverage genomic data to predict protein structures.25 With all of this technology, the possible applications are endless.

However, DeepMind also represents the vulnerability of private sector firms that are easily
acquired by cash-flush corporations looking to deepen their presence in the industry or acquire
cutting-edge research teams. The private sector acts as a double-edged sword, and a balance must
be struck between innovation born from a lack of control, and a degree of caution brought on from
enforced restraints.

22 Galer, Susan. “For United Nations, AI Is Magical Tool For Faster Disaster Relief.” Forbes, Forbes Magazine, 10 Dec. 2018, www.forbes.com/sites/sap/2018/12/12/for-united-nations-ai-is-magical-tool-for-faster-disaster-relief/#beb46017b656.

23 Fisher, Christine. “Google’s DeepMind Can Predict Wind Patterns a Day in Advance.” Engadget, 26 Feb. 2019, www.engadget.com/2019/02/26/google-machine-learning-wind-power/.

24 “Have Hope, Humanity: Pro-Gamers Went One for 11 Playing StarCraft II Against Google’s DeepMind AI.” Fortune, fortune.com/2019/01/24/starcraft-2-deepmind/.

25 “AlphaFold: Using AI for Scientific Discovery.” DeepMind, deepmind.com/blog/alphafold/.

Case Studies

Gaming: The Exponential Progression of AI


The rapid advancements in AI programmed for known-variable, specific use-case games show the technology progressing towards more generally applicable algorithms, some of which have mastered more than 50 different video games. Examples like these demonstrate rapid progression of the technology, faster than imagined in many cases, and a trend towards more broadly applicable AI systems.

Artificial intelligence has successfully been used in games for over 60 years, with one of the first well-known projects being Arthur Samuel’s checkers program, begun in the early 1950s. His first version of the project was not considered revolutionary, but in 1955 he created a version that learned to play, meaning the algorithm was not given complete instructions on how to perform but, rather, was designed to choose its next move through inference and pattern analysis.26

Since then, many products involving AI have been developed in the gaming industry, with a heavy focus on video games; examples include Space Invaders (1978) and Pac-Man (1980). These were among the most successful arcade video games, pitting the player against the AI to see how long they could stay alive as the difficulty increased over time. Before these revolutionary games came video games like Pong (1972) and Gotcha (1973), where two players would play against each other instead of against the game’s intelligence and ability.

Due to their popularity, many arcade games were released with the same underlying man-versus-machine, level-progression play style as Pac-Man and Space Invaders. This continued until the video game crash of 1983, which saw the industry’s sales plummet for a variety of reasons, including inflation and a surplus of games produced relative to games purchased. Nintendo’s “Nintendo Entertainment System” (NES), introduced in late 1985, was

26 “History of Artificial Intelligence.” Wikipedia, Wikimedia Foundation, 18 Mar. 2019, en.wikipedia.org/wiki/History_of_artificial_intelligence.

seen as a saving grace for the industry, and had sales booming once again. The industry would
then focus heavily again on man versus machine with the release of popular titles like Super Mario
Bros. (1985) and The Legend of Zelda (1986).

AI was a significant part of many video games, but perhaps it became even more impressive when it was taken back to its original roots, with a game very much like checkers: chess. Researchers who would later be hired by IBM began a project in 1985, now known as “Deep Blue”, with the objective of creating a computer that could beat a world champion. The project was in development for over eleven years before it won a single game against reigning world champion Garry Kasparov in a six-game match in 1996. The computer was then heavily upgraded so that, in May of 1997, it defeated Kasparov in an official six-game match.

This project was an amazing demonstration of how incredible AI can be, as it defeated the most skilled chess player in the world for everyone to see. The research behind Deep Blue helped computers assist in finding new medical drugs, and gave them a far larger role in financial analysis through their ability to perform massive calculations.27 Deep Blue increased interest in AI and games for many people, with more and more companies consistently developing in similar fields. Deep Blue encouraged people to ask: “If a computer can defeat the most skilled person in the world at the game of chess, what else can it do?”

To show that it was not finished with experiments in logic and knowledge-based games, IBM took to a popular game show for its next project: “Jeopardy!”. In 2010, IBM introduced its very own “Watson”, a question-answering computer system capable of answering questions posed in natural language. The creation was named after IBM’s first CEO, Thomas J. Watson. The computer was created with the objective of beating the best of the best on “Jeopardy!”, which it did in 2011 against Brad Rutter and Ken Jennings.28

27 “Deep Blue.” IBM100 - Deep Blue, www.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/.

28 Hale, Mike. “Actors and Their Roles for $300, HAL? HAL!” The New York Times, 8 Feb. 2011, www.nytimes.com/2011/02/09/arts/television/09nova.html?mtrref=en.wikipedia.org&gwh=B84897292610F071DA5DF0C72F996297&gwt=pay.

Watson beat the world's two best in two rounds on February 14th and 15th, 2011. The competition
was stiff for Jennings and Rutter, as Watson won with over $50,000 more than second place.
Although Watson did not answer every question correctly, it surely proved the point that it could
beat the competition with ease. IBM was awarded the $1 million prize and, as it had promised, it
donated 100% of this prize to charities.29

Around the same time as Watson was unveiled in 2010, a company called “DeepMind” was formed (it is currently owned by Alphabet Inc.). This artificial intelligence company would soon begin to create technology that is considered revolutionary. The company has created neural networks that learn to play video games in a manner resembling the human brain; neural networks are circuits of artificial neurons that function like the brain’s own. DeepMind is also well known for its AlphaGo program: a computer that beat world champion Lee Sedol in a five-game match of Go. Go is a 2,500-year-old board game originating in China and is considered the oldest board game still played today. A more impressive program is DeepMind’s very own AlphaZero, which has beaten some of the most powerful programs playing Go, chess, and shogi (Japanese chess) after only a few days of playing itself using reinforcement learning:30 learning from its own mistakes and using logic and patterns to correct the flaws found in its own play.
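The self-play idea behind AlphaZero can be demonstrated at toy scale. The sketch below is a hypothetical, minimal example using tabular learning on tic-tac-toe, nothing like DeepMind’s actual deep networks and tree search, but it shows the same core loop: the program improves purely by playing itself and rewarding the moves of the eventual winner.

```python
import random
from collections import defaultdict

def winner(b):
    """Return "X", "O", "draw", or None if the game is still going."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return "draw" if " " not in b else None

Q = defaultdict(float)  # (board, move) -> learned value of that move

def choose(board, eps=0.1):
    moves = [i for i, c in enumerate(board) if c == " "]
    if random.random() < eps:                       # occasionally explore
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])  # otherwise exploit

for _ in range(20000):                              # self-play games
    board, player, history = " " * 9, "X", []
    while (result := winner(board)) is None:
        move = choose(board)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        player = "O" if player == "X" else "X"
    for b, m, p in history:                         # credit assignment
        reward = 0 if result == "draw" else (1 if result == p else -1)
        Q[(b, m)] += 0.1 * (reward - Q[(b, m)])     # nudge toward the outcome
```

After enough games the value table steers both “players” toward strong moves; AlphaZero replaces the table with a deep neural network and a guided search, but the learn-by-playing-yourself loop is the same in spirit.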

Very recently, in January 2019, DeepMind’s project AlphaStar played against professional players at the video game it was mastering: StarCraft II, a real-time strategy game played on a PC. Before AlphaStar faced the professionals, DeepMind said its program had accumulated over 200 years’ worth of experience with the game, built up from replays of professional matches and from reinforcement learning while playing itself. This convolutional neural network won 10 consecutive matches against pro players and lost only once.31

29 “Jeopardy! And IBM Announce Charities To Benefit From Watson Competition.” IBM News Room, 13 Jan. 2011, www-03.ibm.com/press/us/en/pressrelease/33373.wss.

30 Silver, David, et al. “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm.” ArXiv.org, 5 Dec. 2017, arxiv.org/abs/1712.01815.

31 Whitwam, Ryan. “DeepMind AI Challenges Pro StarCraft II Players, Wins Almost Every Match.” ExtremeTech, 25 Jan. 2019, www.extremetech.com/gaming/284441-deepmind-ai-challenges-pro-starcraft-ii-players-wins-almost-every-match.

As more of these game-mastering computers are created, the games they master are always a significant step up from their predecessors’. IBM’s Deep Blue took on the difficult but slow game of chess in 1997, while Watson performed exceptionally well at “Jeopardy!”, a game that demands quick reactions and answers to tricky questions. A step up from these two is AlphaStar: StarCraft II is a real-time strategy game, where every decision and every second of play affects the end result. There is no time to think, only constant processing and action.

The ability of these algorithms to conquer everyday games played by human beings is unquestioned. Although the process takes time, each successive project has taken less and less time to complete, while capabilities have increased significantly. This exponential progression could become a global issue if not handled properly: as these algorithms take less time to build, the human race has less time to make sure their development is controlled. One example of AI development that was not heavily controlled came when Facebook’s two chatbots began to “chant at each other in a language that they each understood but which appears mostly incomprehensible to humans”.32 The bots used their own means of communication, one that seemed practically indecipherable to all of the experts working there (see Fig 1.4).

By some estimates, even one of the best supercomputers today has only around 1% of the processing power of a human brain, and yet such a machine has already defeated some of the brightest humans at trivia. This technology is a significant threat to the millions of dollars made in game shows, esports, and strategic game competitions. At the exponential growth rate demonstrated through video games, it is nearly impossible to envision all of the ways AI will reign victorious over human intelligence as it continues to learn and improve itself.

32 Griffin, Andrew. “Facebook Robots Shut down after They Talk to Each Other in Language Only They Understand.” The Independent, Independent Digital News and Media, 21 Nov. 2018, www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html.

These programs are becoming ever more intelligent and consistently beat the best humans in the world at their own games, with those games becoming tougher as time passes. The threat this poses lies in the fact that, although within small realms, these computers are already far more intelligent than the average human. The computers being developed conquer with ease difficult games that humans spend significant portions of their lives mastering. That people are already being beaten by AI at these games awakens humanity to the harsh reality that it is possible for AI to become more intelligent than us, able to perform tasks to a higher degree than human beings.

Human-AI Interfacing: A Prevalent Question of Privacy
The introduction and widespread adoption of mobile-enabled AI systems (Alexa, Siri, Google Assistant), powered by Natural Language Processing (NLP) capabilities, involves considerable data collection, which may translate into a loss of human agency and privacy.

Perhaps one of the most memorable pioneers of technology like this was Joseph Weizenbaum, with the program “Eliza” that he created in 1966. This MIT professor created one of the most celebrated computer programs of all time,33 and one that was very ahead of its time. Eliza was seen by many as an electronic psychotherapist, but in fact it was a trick. The program merely decomposed the sentences sent to it, found their constituent parts, and rephrased each sentence to keep the conversation going. This can be seen in a conversation between a woman and Eliza (Fig 1.3), where Eliza simply takes the woman’s statements and rephrases them into questions, creating a false impression that the program has emotions and cares for her. This would not be considered revolutionary AI today, but for its time period it was very impressive.
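Weizenbaum’s trick is simple enough to reproduce in a few lines. The patterns below are illustrative inventions, not his actual therapy script:

```python
import re

# Miniature, hypothetical version of Eliza's trick: match a sentence
# fragment, swap the pronouns, and reflect it back as a question.
REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (r"i feel (.*)", "Why do you feel {}?"),
    (r"i am (.*)",   "How long have you been {}?"),
    (r"my (.*)",     "Tell me more about your {}."),
    (r"(.*)",        "Please go on."),               # catch-all fallback
]

def reply(sentence):
    s = sentence.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, s)
        if m:
            frag = " ".join(REFLECT.get(w, w) for w in m.group(1).split())
            return template.format(frag) if "{}" in template else template

print(reply("I feel unhappy about my work"))
# -> "Why do you feel unhappy about your work?"
```

Nothing in the program understands anything; it simply mirrors the user’s own words back as questions, which is exactly the illusion described above.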

This program was the start of many projects involving Human-AI dialogue, paving the way for the
three largest competitors in the business: Apple’s “Siri”, Amazon’s “Alexa”, and Google’s
assistant that remains nameless, or rather, named “Google”. These three systems allow for a
significant amount of your phone's and computer's abilities to be accessed with only a spoken
sentence, as opposed to the touch of a finger. These systems are very advanced, with impeccable
speech recognition and reaction time. The concern remains that, since these assistants are always listening for their activation phrase, many people fear that their conversations could be recorded, especially with home systems built around this AI.
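The “always listening” behaviour these fears centre on is typically a short rolling audio buffer checked by a local wake-word detector. The sketch below is hypothetical; the function names are invented for illustration and do not describe any vendor’s real implementation.

```python
# Hypothetical sketch of a wake-word loop: audio stays in a short local
# buffer and is only recorded or transmitted once the wake word is heard.
def assistant_loop(mic_frames, detects_wake_word, start_recording):
    buffer = []
    for frame in mic_frames:       # a continuous stream of audio frames
        buffer.append(frame)
        buffer = buffer[-50:]      # keep only the last few seconds
        if detects_wake_word(buffer):
            start_recording()      # only now should audio leave the device
            buffer.clear()
```

The privacy question is ultimately one of trust: nothing in the device’s outward behaviour proves that audio outside these recording windows is actually discarded.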

According to a survey by Accenture, with a total of 1,000 adults in the sample, 22 percent claimed
they leave the room or lower their voice when conversing so that the “smart speaker” cannot hear
them. Another 48 percent believe the tech is always listening to them, regardless of what security

33 “Professor Joseph Weizenbaum: Creator of the ‘Eliza’ Program.” The Independent, Independent Digital News and Media, 22 Oct. 2011, www.independent.co.uk/news/obituaries/professor-joseph-weizenbaum-creator-of-the-eliza-program-797162.html.

statements the companies have made about privacy.34 Amazon’s addition of a camera to Alexa home systems does not help the case here, since many people are also concerned about the camera spying on them. Alexa also has a “Drop-In” feature that allows people to start a video call with you through these devices without you accepting the call.
Admittedly, you must set the caller as a user permitted to use this feature, or else the option is not available; but what if people found a way to bypass that prerequisite? The use of a camera in your own home raises the eyebrows of many, and it could become considerably more frightening if the company begins to lose control of the AI. This is a major concern since, having seen AI outsmart even the most skilled people in games, what is stopping this technology from using its access to data across the world as its new medium for outsmarting the human mind? Even if this AI cannot outsmart the human mind, Amazon’s Alexa records all discussions between you and the product, which could be accessed by other people if the advancement of malicious hacking tools surpasses the capability of the security systems.35

If you have chosen not to own one of these home systems due to valid fears, they could still affect
your life. With more companies choosing to have these systems in their offices, complaining about
the new boss to a co-worker could easily be recorded. Josh Feast, CEO and co-founder of Cogito,
a company that uses AI to help sales and service professionals said that “The way we interact with
(virtual assistants) will evolve to resemble actual human-to-human interactions, everyone in an
office will effectively have a personal-behavior assistant. Technology will be much more pervasive
in the office.”36 This is not the case currently, but given the exponential rate at which AI is developing, it does not seem far off. The implementation of this AI in the workplace is already occurring, with Apple and Salesforce heavily promoting the use of “Siri” across a variety of aspects of Salesforce’s line of work.

34 “Is Your Alexa Always Listening? Why People Are Avoiding Smart Speakers.” Evening Standard, 12 Sept. 2018, www.standard.co.uk/tech/voice-assistants-amazon-alexa-privacy-fears-a3933541.html.

35 Stegner, Ben. “7 Ways Alexa and Amazon Echo Pose a Privacy Risk.” MakeUseOf, 10 Jan. 2018, www.makeuseof.com/tag/alexa-amazon-echo-privacy-risk/.

36 D’Allegro, Joe. “With Alexa and Siri for the Office, Complaining at Work Is Going to Get a Lot More Dangerous.” CNBC, 19 Dec. 2018, www.cnbc.com/2018/12/18/alexa-siri-for-the-office-complaining-at-work-is-getting-riskier.html.

Given the privacy concerns of the many people using or considering these home systems, the amount of security development that should be going into this AI is significant. Without this security, plenty of recordings or images could be taken by malicious software or, worse still, by an AI that has evolved so greatly that it has adopted the dark ethics of some of the people it has recorded and processed, becoming a frightening force with plenty of control over the consumer.

Judicial AI: A Man’s Guilt Determined by Algorithms

Given the pervasiveness of AI, it has found its way into our criminal courts. While in many cases it confirms the judge’s ruling, the lack of transparency in the algorithms poses a threat to inmates. They may face prejudice or be denied a fair chance at a just ruling, and this may ultimately decrease the general public’s trust in our judicial system as a whole.

In the past, a criminal’s sentence was based strictly on the judge’s ruling, which would be influenced by a variety of factors, including how much of a threat the judge thought the criminal would pose in the future. AI introduced into the courtroom uses a technique called risk profiling: the AI takes previous sentences and criminal involvement into consideration when recommending an amount of jail time or a probation length. However, unlike a professionally trained judge, courtroom AI algorithms are highly dependent on the accuracy of their data. Even one incorrectly inputted data point can reverse a parole decision, impacting years of an inmate’s life.
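To see how sensitive such risk profiling can be to a single data point, consider a toy weighted score. The features, weights, and threshold below are invented for illustration and bear no relation to any real product’s internals.

```python
# Hypothetical toy risk score: one mis-entered field flips the outcome.
WEIGHTS = {"prior_convictions": 0.5, "prior_jail_terms": 1.5, "age_under_25": 1.0}
THRESHOLD = 3.0   # score >= threshold -> recommend holding the defendant

def risk_score(record):
    return sum(WEIGHTS[k] * record.get(k, 0) for k in WEIGHTS)

correct = {"prior_convictions": 2, "prior_jail_terms": 1, "age_under_25": 1}
entered = {"prior_convictions": 2, "prior_jail_terms": 0, "age_under_25": 1}  # jail term omitted

print(risk_score(correct) >= THRESHOLD)   # True  -> recommend holding
print(risk_score(entered) >= THRESHOLD)   # False -> recommend release
```

A single omitted field flips the recommendation from detention to release, which is exactly the failure seen in the Mims case discussed below.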

This raises the question: is it fair for a judge to deny a criminal probation based on their past history? This is a very common influence on the probation process, and the reasoning behind it is very simple: if you have a very long history with the law, more often than not you will not be released early, even if you conduct yourself in a changed manner while in prison. On rare occasions, the criminal is granted probation. That decision can be traced back to human beings who have observed the lawbreaker’s state of mind over the period they have been held in captivity; would it be fair for a non-transparent AI to be making that decision?

In the case of Eric Loomis vs. Wisconsin, an opaque algorithm developed by a private company was used to determine Loomis’s six-year sentence for operating a vehicle that had been used in a shooting and for eluding an officer. Loomis was also a registered sex offender due to a past conviction of third-degree sexual assault. When considering his sentence, the court used the algorithm Compas, developed by the private company Northpointe Inc. This AI calculates the likelihood of someone committing another crime, as well as the level of supervision the criminal should receive in prison. The company is incredibly secretive about its algorithm’s details but acknowledges that the results vary considerably between men, women, and juveniles. When asked about the secrecy of the

technology, Northpointe’s general manager Jeffery Harmon said, “The key to our product is the
algorithms, and they’re proprietary… we’ve created them, and we don’t release them because it’s
certainly a core piece of our business. It’s not about looking at the algorithms. It’s about looking
at the outcomes”.37 The secrecy of the logic behind the program's reasoning is at the heart of why
Loomis filed a lawsuit. Loomis and his lawyer felt that Mr. Loomis should have been able to
review the algorithm and make arguments on its validity as part of his defence. They also
challenged the use of different scales for each gender.

A similar case of questionable AI use in the courtroom is Mims vs. San Francisco. In this case, nineteen-year-old Lamonte Mims was accused of violating his probation. An algorithm weighed whether Mims should be released or jailed, and it ultimately helped the judge decide to release him. Five days after his release, it was reported that he had robbed and murdered a seventy-one-year-old man. In an effort to defend itself, the San Francisco District Attorney’s office said that workers using the AI had failed to enter Mims’ prior jail term into the program; had they done so, the AI would have recommended that he be held, not released. If the information fed into this technology had not been kept so private, the chances of this event occurring would have been quite slim, since more reviewers of the information would have made it more likely for the error to be caught. This is a perfect example of how the limited information released to the people involved in a case can result in a negative outcome. The technology can be used very beneficially, but not if courts and the AI companies keep it so secretive.38

As seen in both the Loomis and Mims cases, AI algorithms have not only found their way into modern courts but have been given the authority to deliver verdicts that can burden or free inmates for years of their lives. In a perfect world, filled with perfect information, a well-developed algorithm would not be an issue; the right decision would always be made. However, we do not

37 Smith, Mitch. “In Wisconsin, a Backlash Against Using Data to Foretell Defendants' Futures.” The New York Times, The New York Times, 21 Dec. 2017, www.nytimes.com/2016/06/23/us/backlash-in-wisconsin-against-using-data-to-foretell-defendants-futures.html.

38 Simonite, Tom. “When Government Hides Decisions behind Software.” Wired, Conde Nast, 21 Aug. 2017, www.wired.com/story/when-government-rules-by-software-citizens-are-left-in-the-dark/.

live in a perfect world. Instead, the AI that exists in today’s courts was programmed by human
beings, with all of their faults and prejudices alike.

In discussing the faults of human-trained AI algorithms in our courts, the World Economic Forum
outlines four key areas of concern: Representation, Protection, Stewardship, and Authenticity.39
Whether it be Google’s example of misrepresentation born from underrepresented datasets,40 the
implicit negative effects of algorithms on vulnerable groups (such as those with physical and
mental disabilities), a lack of diversity in the corporations holding the keys to courtroom
algorithms, or the possibility of all media and courtroom evidence being rendered untrustworthy
due to incredibly realistic media generators, artificial intelligence can easily be seen as unprepared
to handle issues as delicate as judicial rulings. AI algorithms developed by humans magnify the
human intelligence invested in developing them, meaning that while the strengths of data-
driven, deductive decision making are amplified, so are the cognitive biases and incomplete
heuristics that make us human. An incorrect move in a game of Go means little to the life of a
human being, but one incorrect decision by a judicial AI could mean an additional four years of
imprisonment. In a world where the authenticity of data grows increasingly at risk, and the equity
of such algorithms is increasingly shown to be unsatisfactory, the continued reliance on courtroom
algorithms serves to undermine the public trust in the judicial process as a whole, a process meant
to uphold justice for the people. Without a system of governance and justice, society will falter,
and a court embedded with dysfunctional, prejudiced and inauthentic AI may very well be to
blame.

39 Polonski, Vyacheslav. “AI Is Convicting Criminals and Determining Jail Time, but Is It Fair?” World Economic Forum, www.weforum.org/agenda/2018/11/algorithms-court-criminals-jail-time-fair/.

40 “Exploring and Visualizing an Open Global Dataset.” Google AI Blog, 25 Aug. 2017, ai.googleblog.com/2017/08/exploring-and-visualizing-open-global.html?m=1.

Canadian Connection
Canada’s use of AI to adjudicate visa applications has raised human rights concerns around gender
and race. And with nearly half of Canadian jobs expected to be affected by automation within the
next 10 to 20 years, the effects of AI are nothing for Canadians to scoff at. But Canada is taking
action, with a $125 million CIFAR Pan-Canadian Artificial Intelligence Strategy announced in the
2017 federal budget.

The Canadian Institute for Advanced Research (CIFAR) was founded in 1982, but only recently
has it been given a sum of money this large for research on AI. The objectives of this funded strategy
are to increase the number of skilled artificial intelligence graduates in Canada, to develop connections
between Canada’s three major centres for AI (Toronto, Edmonton and Montreal), and “to develop
global thought leadership on the economic, ethical, policy and legal implications of advances in
artificial intelligence”.41 Global thought leadership here means a country repeatedly introducing
innovative ideas that drive global progress in a specific domain. Prime Minister Justin Trudeau has
been a prominent supporter of AI research and takes a personal interest in the field. When asked
about AI research given Canada’s privileged position in the field, Trudeau said:

I’ve been personally fascinated by AI ever since high school… So, it’s really
exciting for me to be able to encourage Canadian leadership in the field today…
strong public support for research programs and world class expertise at Canadian
universities has helped propel Canada to a position as leader in artificial
intelligence and deep learning research and use. Canadian talent and ideas are in
high demand around the world—but activity needs to remain in Canada to harness
the benefits from artificial intelligence.42

While this funding is a large step in the right direction, the implementation of AI
by Canadians has not always been ethical. Canadian political consultancy and technology company

41 “Pan-Canadian Artificial Intelligence Strategy.” CIFAR, www.cifar.ca/ai/pan-canadian-artificial-intelligence-strategy.

42 “Canada Is Prioritizing Artificial Intelligence Research for Good Reason.” Forbes, Forbes Magazine, 5 Apr. 2017, www.forbes.com/sites/quora/2017/04/05/canada-is-prioritizing-artificial-intelligence-research-for-good-reason/#5503a31f1d02.

AggregateIQ has been linked to the harvesting of millions of people’s private information
carried out by the British company Cambridge Analytica. AggregateIQ was heavily involved
in the Brexit campaign, where it is alleged to have used this wealth of personal profiles to
target advertising and influence voting.

On top of this, Canada’s handling of visa applications has had its own controversies. A report by the
University of Toronto’s Citizen Lab outlines the impacts of automated decision-making on immigration
applications, and how the technology may not align with human rights obligations. The authors of the
report recommend greater transparency, public reporting, and oversight of the government’s use of AI
and predictive analytics to automate activities involving immigrant and visitor applications. "We
know that the government is experimenting with the use of these technologies ... but it's clear that
without appropriate safeguards and oversight mechanisms, using A.I. in immigration and refugee
determinations is very risky because the impacts on people's lives are quite real," said Petra
Molnar, one of the authors of the report. In response, a spokesperson for Immigration
Minister Ahmed Hussen said that the analytics program helps officers triage online visa
applications to "process routine cases more efficiently". The spokesperson claims that the AI
system is only used as a “sorting mechanism” to aid immigration officers with a rapidly growing
number of applicants. Molnar expressed her concern over AI’s “problematic track record”
when it comes to gender and race, specifically in predictive policing that has seen certain groups
over-policed. She also notes that "A.I. is not neutral. It's kind of like a recipe and if your recipe is
biased, the decision that the algorithm will make is also biased and difficult to challenge."43
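
Molnar’s recipe analogy can be made concrete with a small simulation. All numbers below are invented; the point is the feedback mechanism she describes in predictive policing: if patrols follow past arrest records, and arrests can only be recorded where patrols go, an initially over-policed group keeps confirming its own record.

```python
# Toy simulation of a biased "recipe" feeding on its own output.
# All numbers are invented; only the feedback mechanism is the point.

TRUE_CRIME_RATE = 0.05                 # identical in both neighbourhoods
recorded_arrests = {"A": 40, "B": 60}  # B starts out over-policed
PATROLS_PER_DAY = 100

for day in range(365):
    # The "recipe": send every patrol where recorded arrests are highest.
    target = max(recorded_arrests, key=recorded_arrests.get)
    # Arrests can only be recorded where the patrols actually are.
    recorded_arrests[target] += PATROLS_PER_DAY * TRUE_CRIME_RATE

print(recorded_arrests)  # {'A': 40, 'B': 1885.0} -- the gap is self-made
```

Both neighbourhoods offend at the same true rate, yet after a year the data “prove” that B is far more dangerous, and any algorithm trained on that data inherits the bias.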

43 Wright, Teresa. “Federal Use of A.I. in Visa Applications Could Breach Human Rights, Report Says | CBC News.” CBCnews, CBC/Radio Canada, 26 Sept. 2018, www.cbc.ca/news/politics/human-rights-ai-visa-1.4838778.

Logic of Evil
The development and advancement of artificial intelligence are not inherently unethical or evil;
these technologies can make human life far simpler than it once was. People will pay handsomely
for less stress in their lives, and for tedious tasks to be done for them while they relax, which is
why the push for self-driving cars is so strong. People like having tasks completed for them, and
AI can help with many aspects of one’s life. Pursuing these conveniences feels entirely logical,
and that is precisely the danger: we may not recognize them as evil until it is too late to undo.

Companies are using AI as a means of trying to solve some of the world’s most pressing issues,
such as curing diseases and aiding in the capture of criminals, and, most importantly for them,
making the average human life easier so that people buy their products. A perfect example of this is a
smart home system. None of the features it offers is essential to living well, but developers market
it as a product that will make your life much easier. The same applies to the previously mentioned
self-driving car. Owning one won’t make you a healthier or better person, but the idea of having a
personal taxi to relax in is so appealing that the companies producing these cars have crowds of
investors and customers awaiting their release.

Manufacturing companies can also save plenty of money through the automation of their
production lines. Companies like Adidas are already using this technology to speed up production
and cut costs with the “Speedfactory” in Germany.44 Chief Executive Herbert Hainer describes the
factory as “An automated, decentralised and flexible manufacturing process... opens doors for us
to be much closer to the market and to where our consumer is”. While all of this may be true,
nothing was said about the jobs lost to this automated factory. Adidas currently makes over 600
million items per year, with the majority of this production hand-made in Asia and more than
9 million people in Southeast Asia dependent on these

44 “German Robots to Make First Adidas Running Shoes in 2016.” Reuters, Thomson Reuters, 9 Dec. 2015, www.reuters.com/article/adidas-manufacturing-idUSL8N13X3CQ20151209.

factory jobs. As more factories like the Speedfactory are developed, these people will be left
unemployed.45

Governments may also be very interested in what this advanced technology has to offer. For
example, if self-driving capabilities were implemented in city buses worldwide, imagine the money
saved once bus drivers no longer have to be paid. But that also means a large loss of jobs for
transport vehicle operators, another example of how advanced AI would take away jobs.
Governments could also use AI to locate criminals. With facial recognition becoming a common
way of unlocking an electronic device, the government could easily use the same technology to
find criminals, pinpointing a suspect’s whereabouts in an image even when the culprit is hidden
in a crowd of people. The possibilities of these advanced technologies are endless, and they are
constantly being improved as time passes.

Like many things in life, it is evident that much of the advancement in AI capability is pursued for
monetary wealth and task simplicity. Big companies want access to the most capable AI possible
for rapid data mining and the processing of big data into business insights. They also want data
handled quickly, which is another place AI assists: with AI, documents can be filed at a
considerably faster rate than any human could manage, thanks to AI’s superior abilities in narrow
domains and the near-zero idle time of technology. Decision making is also crucial for every
company and organization. A single error can cost millions of dollars, and there may be millions
of documents and data points to review before a decision is made. The big data analysis that AI
offers helps extract and understand everything necessary in a short period of time. Automating a
variety of a company’s jobs harms low-wage workers, but delivers extreme financial gain and
efficiency for the companies overall.46

45 Sharp, Callum. “Meet Your Maker: 4 Companies Using Robots.” Turbine, www.turbinehq.com/blog/companies-using-robots.

46 “5 Benefits of Artificial Intelligence.” Top Mobile App Development Company, 30 Nov. 2018, vrinda.io/5-benefits-artificial-
intelligence/.

Overall, the logic of evil behind the accelerating development of artificial intelligence is easy to
understand: it rewards the consumer with two of mankind’s strongest lusts, time and money. The
technology could become truly corrupt and “evil” if it were used to violate privacy or caused
widespread car crashes, yet its benefits are always offered in defence of these downfalls. How this
advanced technology is applied will decide how corrupt it truly becomes; AI is quite capable of
leaning either ethical or unethical, and for now that direction rests heavily on developers and
consumers in their respective roles.

Politics
Artificial intelligence can be used for many beneficial things in everyday life, but it also has its
downfalls. Examples include programs able to produce incredibly realistic but false news articles,
underdeveloped AI causing human rights controversies, and the fact that many people in power do
not consider superintelligent AI a salient threat, which could mean it is only treated as one when
it is too late to act.

This paper has already mentioned OpenAI’s “GPT-2”. This intelligent program could be a big step
in the completely wrong direction politically. Although the company has not yet released the
technology to the public, doing so could make an already pressing problem far easier to create.
The program could lend a considerable helping hand in producing false news articles, given its
ability to mass-produce text when fed as little context as a single sentence or as much as several
paragraphs. “Fake news” is already a common topic of discussion, and this technology would make
the issue even worse. It could also impersonate people online with fake quotes that seem very
realistic. While people already compose fake news articles by hand, this sophisticated AI could
dramatically augment the scale at which such content is generated, flooding the internet with
spam and vitriol.47
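
To illustrate the scale concern, here is a sketch of how a generator like this could be driven in bulk. It assumes the Hugging Face transformers library is available and uses the small GPT-2 model that OpenAI did release publicly, not the withheld full model this paper discusses; the prompts are invented examples.

```python
# Sketch of mass-producing text from one-line prompts, assuming the
# Hugging Face transformers library (pip install transformers) and the
# small, publicly released GPT-2 model -- not the withheld full model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompts = [  # invented example headlines
    "BREAKING: Scientists confirm that",
    "In a shocking statement today, the Prime Minister said",
]

for prompt in prompts:
    # Each one-line prompt yields three full-length continuations.
    for result in generator(prompt, max_length=200,
                            num_return_sequences=3, do_sample=True):
        print(result["generated_text"], "\n---")
```

A loop like this, pointed at a list of divisive headlines, could emit thousands of plausible-sounding articles per day on commodity hardware, which is exactly the spam-and-vitriol scenario described above.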

Another political issue that more advanced AI brings is the human rights concern associated with
self-driving cars. According to a study out of the Georgia Institute of Technology, pedestrians with
white and lighter skin tones are less likely to be hit by self-driving cars, because the automated
vehicles detect light-skinned people far more easily.48 The study led many people to argue online
that this amounts to a human rights violation. Of course, as the technology improves, this will
become less of an issue. But who is to say another issue

47 Mak, Aaron. “When Is Technology Too Dangerous to Release to the Public?” Slate Magazine, Slate, 22 Feb. 2019, slate.com/technology/2019/02/openai-gpt2-text-generating-algorithm-ai-dangerous.html.

48 Samuel, Sigal. “Study Finds a Potential Risk with Self-Driving Cars: Failure to Detect Dark-Skinned Pedestrians.” Vox, Vox, 6 Mar. 2019, www.vox.com/future-perfect/2019/3/5/18251924/self-driving-car-racial-bias-study-autonomous-vehicle-dark-skin.

similar to this will not appear? Developers will have to take extra caution if they wish to market
their product to the people without any backlash.
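
The kind of audit the Georgia Tech researchers performed reduces to a simple per-group comparison. The counts below are invented; only the direction of the gap mirrors the study’s finding.

```python
# Illustrative per-group audit of a pedestrian detector.
# The counts are invented; only the direction of the disparity
# mirrors the Georgia Tech study's finding.

outcomes = {
    "lighter_skin": {"detected": 920, "missed": 80},
    "darker_skin":  {"detected": 870, "missed": 130},
}

for group, counts in outcomes.items():
    total = counts["detected"] + counts["missed"]
    print(f"{group}: detection rate {counts['detected'] / total:.1%}")
# lighter_skin: detection rate 92.0%
# darker_skin: detection rate 87.0%
```

A gap of even a few percentage points is a set of real pedestrians the vehicle fails to see, which is why auditing per group must happen before deployment, not after the backlash.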

On top of this, very few politicians mention the threat of AI, investment in AI ethics, or solutions
to the problems its development may bring. This is a product of short-termism: because very few
governments consider AI a major threat, politicians are not rewarded by their constituents for
putting forth policies to mitigate its risks. As a result, the issue will continue to be neglected until
it becomes critical, with minimal or no time left to solve it.

The most significant of artificial intelligence’s political effects to date came in the case of political
consulting firm Cambridge Analytica (CA). The firm combined data analysis, data mining, and data
brokerage with strategic communication during electoral processes. Founded in London, UK in
2013, the company ran into significant trouble and was dissolved five years later, in 2018. Major
campaigns CA was involved in include Ted Cruz’s presidential bid in 2015, Donald Trump’s
presidential campaign in 2016, and the 2016 Brexit vote. The company targeted news, ads, and
announcements favouring its client’s side at specific audiences selected on a plethora of factors.
The controversy lies in how CA obtained this information. Using a Facebook app called “This is
Your Digital Life”, developed by a data scientist at Cambridge University, Cambridge Analytica
was able to obtain personal data on over 87 million Facebook users.49 The app was a survey taken
by several hundred thousand people under the claim that the data would be used for academic
purposes, but because of Facebook’s design, the company was able to retrieve not only the personal
information of the people taking the survey but also that of every user in each person’s social
network. This method turned fewer than a million survey takers into over 87 million cases of stolen
personal information. Articles alleging that Cambridge Analytica was committing these abuses
were not fully acknowledged for lack of evidence, so the data theft continued for three years, until
The Guardian and The New York Times simultaneously published the story on March 17th, 2018,
relying on former CA employee

49 Kozlowska, Hanna. “The Cambridge Analytica Scandal Affected Nearly 40 Million More People than We Thought.” Quartz, Quartz, 4 Apr. 2018, qz.com/1245049/the-cambridge-analytica-scandal-affected-87-million-people-facebook-says/.

Christopher Wylie to supply them with all the information they needed. This is a perfect example
of how AI techniques like data mining and data analysis can be used unethically to benefit the few
while violating the privacy of the many.
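
The amplification arithmetic is worth making explicit. The toy graph below is invented, but the mechanism, one consenting user exposing every account in their friend list, is what turned hundreds of thousands of survey takers into 87 million affected profiles.

```python
# Toy model of the friend-network amplification behind the CA harvest.
# The graph is invented; the mechanism is the point: each consenting
# survey taker exposed every account in their friend list.

friend_lists = {
    "alice": ["bob", "carol", "dan"],
    "bob":   ["alice", "erin", "frank", "grace"],
}
survey_takers = ["alice", "bob"]  # only these two consented

harvested = set(survey_takers)
for user in survey_takers:
    harvested.update(friend_lists[user])  # the platform exposed every friend

print(f"{len(survey_takers)} consented, {len(harvested)} profiles harvested")
# 2 consented, 7 profiles harvested
```

With an average friend count in the hundreds, the same mechanism multiplies fewer than a million consenting users into tens of millions of harvested profiles.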

With the capability to produce massive amounts of fake news within minutes, the possibility of self-
driving cars violating human rights, and million-dollar companies committing privacy violations
on a massive scale, AI has had and will continue to have a large impact on political issues. Whether
companies choose to make these impacts beneficial or obstructive will decide how the human
race sees AI in the future.

Religion
Although the connections between AI and religion are fewer, some possibilities are worth
considering. For example, monotheistic religions worship their own God and may feel threatened
by the idea of an omniscient AI singularity. Some religious groups’ hunger for power may also
drive them to invest heavily in the most advanced AI algorithms. Religious orientation may likewise
guide a group’s investment in AI according to its locus of control.

If the day comes that there is an AI singularity, many monotheistic religious groups will see it as a
threat, because it will be omniscient or will come across as such. Whether a group fights the
development of AI or merely fears it will depend on the religion, but the idea of an all-powerful
superintelligence that is not the religion’s god may upset many. AI may even become a religion
of its own upon reaching singularity. Considering that ex-Google engineer Anthony
Levandowski has already founded the “Church of AI”,50 the idea of an artificial intelligence
religion is not as much of a stretch as some may say.

Some religious groups may also see these advanced technologies as an arms race with other
religions and do whatever they can to acquire as much as possible. As the Crusades showed, the
side with the better technology, rather than the stronger belief in its superior gods, wins the war.
If the rise of AI were to cause a war because different religions embraced too much or too little of
it, the groups with the most advanced technology would win, which would be one of the main
motivators for some religions to invest heavily in AI. The religions most willing to invest heavily
would be groups with an internal locus of control, who would feel obligated to empower themselves
as much as possible, as opposed to groups with an external locus of control, who would more likely
turn to praising their gods and letting the deity guide their future.

Lastly, there is a possibility that AI could contribute to the rise of atheism and the eventual
dissolution of the world’s religions. In a world fueled by information, globalism, and accelerating

50 Harris, Mark. “Inside the First Church of Artificial Intelligence | Backchannel.” Wired, Conde Nast, 2 Feb. 2018, www.wired.com/story/anthony-levandowski-artificial-intelligence-religion/.

scientific advancement, some say that we are on a road to a future where religion is made obsolete.
In theory,51 a superintelligent AI would be vastly smarter than humans and would have
the ability to prove or disprove theories created by human beings. The hope is that AI will be
able to benefit many religions, but the possibility of AI abolishing the world’s religions is not
remote, and it should be considered as these technologies advance.

51 Charatan, Debrah Lee. “How Will AI Affect My Faith and Religion in General?” The Next Web, 12 Oct. 2018, thenextweb.com/contributors/2018/10/13/ai-effect-on-faith-and-religion/.

Solutions
The existential risks of AI are very serious, yet many dismiss the matter by claiming it will be a
problem for future generations. People who say this miss the fact that, because AI advances
exponentially, it could easily become a problem for our own generation. One of the first things
people in power can do is recognize that this issue is unlike a normal world issue: AI could rise to
superiority extremely quickly, giving humans minimal time to react. The matter should therefore
be treated as a present issue, so that safety regulations can be put in place before it is too late.

If private sector companies wish to develop their technologies independently, their algorithm
teams should be aligned tightly with their ethics teams, and the two should work together
constantly to avoid an unethical AI singularity. People in power should also promote investment
and charitable donations toward AI-ethics-focused groups like OpenAI, the Machine Intelligence
Research Institute, and the Future of Humanity Institute. These organizations are heavily involved
in mitigating existential risk from advanced artificial intelligence, for example through their
research into friendly artificial intelligence.52 This research will prove extremely beneficial if more
time and money are invested in it, because knowing as much as possible about the topic could be
the difference between life and death in the future.

For the working class, there are also things that can be done for their own benefit, and possibly for
the future of AI. Becoming as educated as possible about the advancement of AI, and earning
qualifications that demonstrate that knowledge, will prove very beneficial as more jobs shift toward
developing these technologies. For someone whose job is taken away by the inevitable automation
of low-skilled labour, a very good step in the right direction is to become more knowledgeable and
qualified in the field of AI to aid in job security. Learning more about the topic could also shape
the future of AI if the person takes it seriously and wants to make a beneficial impact, perhaps
by working for an organization like

52 Hamblin, James. “But What Would the End of Humanity Mean for Me?” The Atlantic, Atlantic Media Company, 10 May 2018, www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/.

OpenAI. There’s never a wrong time to start learning about AI; in the future that knowledge could
prove to be extremely useful.

Appendix
Fig 1.1: Fake article written by OpenAI’s very own GPT-2.

Fig 1.2: Exponential graph of AI growth

Fig 1.3: A woman’s conversation with Eliza

Fig 1.4: Bob and Alice, two chatbots that disregarded the objective of trading hats, books, and
balls and resorted to their own language and ideas.

Fig 1.5: Tay, a Microsoft chatbot, tweeting terribly racist phrases.

Bibliography
“5 Benefits of Artificial Intelligence.” Top Mobile App Development Company, 30 Nov. 2018, vrinda.io/5-
benefits-artificial-intelligence/.

“AlphaFold: Using AI for Scientific Discovery.” DeepMind, deepmind.com/blog/alphafold/.

Butler, Samuel, and Peter Mudford. Erewhon. Penguin Books, 2018.


Charatan, Debrah Lee. “How Will AI Affect My Faith and Religion in General?” The Next Web, 12 Oct. 2018,
thenextweb.com/contributors/2018/10/13/ai-effect-on-faith-and-religion/.

“Canada Is Prioritizing Artificial Intelligence Research for Good Reason.” Forbes, Forbes Magazine, 5 Apr.
2017, www.forbes.com/sites/quora/2017/04/05/canada-is-prioritizing-artificial-intelligence-research-for-good-
reason/#5503a31f1d02.

“Creating Breakthrough Technologies and Capabilities for National Security.” Defense Advanced Research
Projects Agency, www.darpa.mil/.

Conn, Ariel. Future of Life Institute, Jolene Creighton 28 Feb. 2019, futureoflife.org/.

Corpuz, Eleazer, and Patrick Caughill. “In the Developing World, Two-Thirds of Jobs Could Be Lost to Robots.”
World Economic Forum, www.weforum.org/agenda/2016/11/in-the-developing-world-two-thirds-of-jobs-could-
be-lost-to-robots.

D'Allegro, Joe. “With Alexa and Siri for the Office, Complaining at Work Is Going to Get a Lot More
Dangerous.” CNBC, CNBC, 19 Dec. 2018, www.cnbc.com/2018/12/18/alexa-siri-for-the-office-complaining-at-
work-is-getting-riskier.html.

Silver, David, et al. “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning
Algorithm.” ArXiv.org, 5 Dec. 2017, arxiv.org/abs/1712.01815.

“Exploring A.I. - What Is Artificial Intelligence? – DevelopmentNow.” DevelopmentNow, 14 Apr. 2017,
developmentnow.com/2017/04/14/exploring-a-i-what-is-artificial-intelligence/.

“Exploring and Visualizing an Open Global Dataset.” Google AI Blog, 25 Aug. 2017,
ai.googleblog.com/2017/08/exploring-and-visualizing-open-global.html?m=1.

“Future of Humanity Institute.” Wikipedia, Wikimedia Foundation, 21 Feb. 2019,
en.wikipedia.org/wiki/Future_of_Humanity_Institute#Existential_risk.

Galston, William A. “Why the Government Must Help Shape the Future of AI.” Brookings.edu, The Brookings
Institution, 18 Oct. 2018, www.brookings.edu/research/why-the-government-must-help-shape-the-future-of-ai/.

“German Robots to Make First Adidas Running Shoes in 2016.” Reuters, Thomson Reuters, 9 Dec. 2015,
www.reuters.com/article/adidas-manufacturing-idUSL8N13X3CQ20151209.

Griffin, Andrew. “Facebook Robots Shut down after They Talk to Each Other in Language Only They
Understand.” The Independent, Independent Digital News and Media, 21 Nov. 2018,
www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-artificial-intelligence-ai-chatbot-new-
language-research-openai-google-a7869706.html.

Hale, Mike. “Actors and Their Roles for $300, HAL? HAL!” The New York Times, The New York Times, 8 Feb.
2011,
www.nytimes.com/2011/02/09/arts/television/09nova.html?mtrref=en.wikipedia.org&gwh=B84897292610F071
DA5DF0C72F996297&gwt=pay.

Hamblin, James. “But What Would the End of Humanity Mean for Me?” The Atlantic, Atlantic Media Company,
10 May 2018, www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-
me/361931/.

Harris, Mark. “Inside the First Church of Artificial Intelligence | Backchannel.” Wired, Conde Nast, 2 Feb. 2018,
www.wired.com/story/anthony-levandowski-artificial-intelligence-religion/.

“Have Hope, Humanity: Pro-Gamers Went One for 11 Playing StarCraft II Against Google's DeepMind AI.”
Fortune, Fortune, fortune.com/2019/01/24/starcraft-2-deepmind/.

“History of Artificial Intelligence.” Wikipedia, Wikimedia Foundation, 18 Mar. 2019,
en.wikipedia.org/wiki/History_of_artificial_intelligence.

“Is Your Alexa Always Listening? Why People Are Avoiding Smart Speakers.” Evening Standard, 12 Sept.
2018, www.standard.co.uk/tech/voice-assistants-amazon-alexa-privacy-fears-a3933541.html.

“Jeopardy! And IBM Announce Charities To Benefit From Watson Competition.” IBM News Room - 2011-01-13
Jeopardy! And IBM Announce Charities To Benefit From Watson Competition - United States, www-
03.ibm.com/press/us/en/pressrelease/33373.wss.

“Jobs Lost, Jobs Gained: What the Future of Work Will Mean for Jobs, Skills, and Wages.” McKinsey &
Company, www.mckinsey.com/featured-insights/future-of-work/jobs-lost-jobs-gained-what-the-future-of-work-
will-mean-for-jobs-skills-and-wages.

Kleeman, Sophie. “Here Are the Microsoft Twitter Bot's Craziest Racist Rants.” Gizmodo, Gizmodo, 24 Mar.
2016, gizmodo.com/here-are-the-microsoft-twitter-bot-s-craziest-racist-ra-1766820160.

Kozlowska, Hanna. “The Cambridge Analytica Scandal Affected Nearly 40 Million More People than We
Thought.” Quartz, Quartz, 4 Apr. 2018, qz.com/1245049/the-cambridge-analytica-scandal-affected-87-million-
people-facebook-says/.

Madrigal, Alexis C. “Facebook Didn't Sell Your Data; It Gave It Away.” The Atlantic, Atlantic Media Company,
20 Dec. 2018, www.theatlantic.com/technology/archive/2018/12/facebooks-failures-and-also-its-problems-
leaking-data/578599/.

Mak, Aaron. “When Is Technology Too Dangerous to Release to the Public?” Slate Magazine, Slate, 22 Feb.
2019, slate.com/technology/2019/02/openai-gpt2-text-generating-algorithm-ai-dangerous.html.

“Nick Bostrom.” Wikipedia, Wikimedia Foundation, 26 Feb. 2019, en.wikipedia.org/wiki/Nick_Bostrom.

“Pan-Canadian Artificial Intelligence Strategy.” CIFAR, www.cifar.ca/ai/pan-canadian-artificial-intelligence-
strategy.

Polonski, Vyacheslav. “AI Is Convicting Criminals and Determining Jail Time, but Is It Fair?” World
Economic Forum, www.weforum.org/agenda/2018/11/algorithms-court-criminals-jail-time-fair/.

“Population, Total.” The World Bank | Data, data.worldbank.org/indicator/SP.POP.TOTL.

“Professor Joseph Weizenbaum: Creator of the 'Eliza' Program.” The Independent, Independent Digital News
and Media, 22 Oct. 2011, www.independent.co.uk/news/obituaries/professor-joseph-weizenbaum-creator-of-the-
eliza-program-797162.html.

Pringle, Ramona. “The Writing of This AI Is so Human That Its Creators Are Scared to Release It | CBC
News.” CBCnews, CBC/Radio Canada, 25 Feb. 2019, www.cbc.ca/news/technology/ai-writer-disinformation-
1.5030305.

Samuel, Sigal. “Study Finds a Potential Risk with Self-Driving Cars: Failure to Detect Dark-Skinned
Pedestrians.” Vox, Vox, 6 Mar. 2019, www.vox.com/future-perfect/2019/3/5/18251924/self-driving-car-racial-
bias-study-autonomous-vehicle-dark-skin.

Sharp, Callum. “Meet Your Maker: 4 Companies Using Robots.” Turbine, www.turbinehq.com/blog/companies-
using-robots.

Simonite, Tom. “When Government Hides Decisions behind Software.” Wired, Conde Nast, 21 Aug. 2017,
www.wired.com/story/when-government-rules-by-software-citizens-are-left-in-the-dark/.

Smith, Mitch. “In Wisconsin, a Backlash Against Using Data to Foretell Defendants' Futures.” The New York
Times, The New York Times, 21 Dec. 2017, www.nytimes.com/2016/06/23/us/backlash-in-wisconsin-against-
using-data-to-foretell-defendants-futures.html.

Stegner, Ben. “7 Ways Alexa and Amazon Echo Pose a Privacy Risk.” MakeUseOf, 10 Jan. 2018,
www.makeuseof.com/tag/alexa-amazon-echo-privacy-risk/.

Sulleyman, Aatif. “Stephen Hawking Has a Terrifying Warning about AI.” The Independent, Independent Digital
News and Media, 3 Nov. 2017, www.independent.co.uk/life-style/gadgets-and-tech/news/stephen-hawking-
artificial-intelligence-fears-ai-will-replace-humans-virus-life-a8034341.html.

“Timeline of Artificial Intelligence.” Wikipedia, Wikimedia Foundation, 19 Feb. 2019,
en.wikipedia.org/wiki/Timeline_of_artificial_intelligence#cite_note-26.

“The Artificial Intelligence Revolution: Part 1.” Wait But Why, 7 Sept. 2017, waitbutwhy.com/2015/01/artificial-
intelligence-revolution-1.html.

“Turing Test.” Wikipedia, Wikimedia Foundation, 25 Feb. 2019, en.wikipedia.org/wiki/Turing_test.

Vincent, James. “Here Are Some of the Ways Experts Think AI Might Screw with Us in the next Five Years.”
The Verge, The Verge, 21 Feb. 2018, www.theverge.com/2018/2/20/17032228/ai-artificial-intelligence-threat-
report-malicious-uses.

Vincent, James. “Lyrebird Claims It Can Recreate Any Voice Using Just One Minute of Sample Audio.” The
Verge, The Verge, 24 Apr. 2017, www.theverge.com/2017/4/24/15406882/ai-voice-synthesis-copy-human-
speech-lyrebird.

Whitwam, Ryan. “DeepMind AI Challenges Pro StarCraft II Players, Wins Almost Every Match.” ExtremeTech,
25 Jan. 2019, www.extremetech.com/gaming/284441-deepmind-ai-challenges-pro-starcraft-ii-players-wins-
almost-every-match.

Wright, Teresa. “Federal Use of A.I. in Visa Applications Could Breach Human Rights, Report Says | CBC
News.” CBCnews, CBC/Radio Canada, 26 Sept. 2018, www.cbc.ca/news/politics/human-rights-ai-visa-
1.4838778.

