
Proposal to Survey the Opinions of American Voters on the Potential Benefits and Dangers of Advanced AI

Dale Eugene Golden
1. Introduction
There is an ongoing conversation regarding the advancement of ‘human-like’
AI (Artificial Intelligence) and the accelerating rate at which humans are becoming
reliant on technology. In modern-day America, nearly half of the population uses a
simple AI as a digital assistant (Olmstead 2017). These rudimentary AIs are improving
daily and, if developed responsibly, could one day lead to a post-scarcity civilization
in which humans lead more fulfilling lives without needing to spend time on work they
find unfulfilling. However, many prominent figures, such as renowned theoretical
physicist Stephen Hawking, worry that such AIs could advance to the point of becoming
sentient and taking over the world, destroying or enslaving humanity in the process
(Cellan-Jones 2014). This idea of sentient AI attacking humanity is a popular topic for
sci-fi thrillers, but is it likely? Skeptics, such as American software engineer Grady
Booch, believe such fears are unfounded (Booch 2016). Booch claims that for an AI to be
dangerous it would require access to resources and assets that it is incredibly unlikely
to be given, and that to obtain those resources it would need to compete with humans.
In this way, an AI could only ever be as dangerous as a rogue nation such as North
Korea, with the added weakness that, unlike a nation, an AI can be stopped by
unplugging it from the wall.
If artificial intelligence poses a reasonable danger, then it is necessary to enact laws
and policies now, before it becomes a problem. However, if it is not reasonable to fear
AI, then the restrictions that would come with such laws would only slow research and
delay the progress of humanity. It is therefore valuable to know the opinions of voters
on this topic and, more importantly, the reasoning behind those opinions.

2. Background
AI is used every day for many purposes, and it is being continuously improved (Kelly
2016). Banks process mobile check deposits and monitor accounts for fraud; Facebook
automatically suggests tags for photos and personalizes each user’s newsfeed; Uber and
Google predict traffic and display the most efficient routes; email services sort spam
and organize mailboxes into categories. These and a million other things are all powered
by various AIs, and without them the world would be a fundamentally different place
(Narula 2018). In the future the applications will only increase, for example through
self-driving cars, which would not only free users to perform other tasks during their
commutes but would also relieve traffic by removing the inefficiencies of human drivers
(Condliffe 2017). However, as we become more reliant on AI and as AI becomes smarter,
many believe that one will become so smart that it will be able to improve itself
exponentially, quickly becoming far superior to humanity (Cellan-Jones 2014). Should
such a superior being come into existence, we must then wonder whether it will judge
humans and destroy us, or even see us as we see cattle
and treat us accordingly (Tegmark 2016). These themes are comprehensively explored in
sci-fi books and Hollywood blockbusters, such as I, Robot, Terminator, and even Black
Mirror, but those who work in fields related to AI and those who imagine AI taking over
tend to be two separate groups of people.
Critics of the concept of dangerous AI point out that even though an AI can outperform a
human in some area, it is only able to perform in that one area and no others. The AI
predicting traffic has no ability to recognize faces, and the facial recognition AI
cannot beat a human at a board game (Littman 2017). Therefore, even if an AI did become
dangerous, it would be trivial to contain the problem. One could imagine a central AI
that controls the other AIs so that they all work together to do harm; however, for the
central AI to use information from the facial recognition AI, it would need to
understand the concept of a face, and therefore be able to do that job itself, which it
cannot do while also remaining useful as a controller. This inability stems from the
fact that the way AI is created and taught is essentially brute force, similar to
evolution (Miller 2017; Grey 2017). An alternative worry is that rather than an AI
controlling many other forms of AI to do harm, a human could control the AI to do harm.
Yet another is that an AI would not be smart enough to be malicious, but rather would be
improperly coded and do more harm than good. In these respects it may be worthwhile to
enact laws and regulations upon AI (Olah 2016).

3. Methods
A survey consisting of no more than 10 questions will be posted on popular social media
platforms such as Facebook, Reddit, and Twitter to determine whether the public fears
advanced AI, whether they think it should be policed, and other factors related to
government policy or restrictions in the field. Using popular public platforms in this
way is valuable because it will not only provide data for quantitative analysis, but
will also generate discussion in the comments of these posts, allowing for a qualitative
analysis of why fears do or do not exist and whether the public believes the danger is
realistic. The comments will be analyzed in addition to survey responses: researchers
will record a qualitative sentiment assessment, and will quantitatively record the
frequency of signal words such as “great”, “evil”, “useful”, and so on.

Timeline

Work on this project will begin immediately following the approval of this study. The
survey will be created on March 10th and sent out to all major social media sites, as
well as to family and friends. Data will be collected until April 8th. Writing of the
research report will begin on March 18th and conclude on April 10th.

4. Closing Remarks
Artificial Intelligence (AI) is used in nearly every conceivable industry and device.
This technology vastly increases quality of life and will continue to do so; without AI,
the modern world as we know it would be crippled. AI is used to fly planes, manage
traffic, monitor banking, filter e-mail, and more. Despite this, popular culture carries
a deep fear of these tools surpassing their creators and becoming dangerous. Movies such
as Terminator envision the creation of an AI that becomes superintelligent and decides
to overthrow humanity. The Matrix envisions AI that surpasses humanity and enslaves it,
using humans as a fuel source. 2001: A Space Odyssey envisions a scenario wherein an AI
misinterprets its instructions and becomes deadly as a result. There are many ways that
an AI can seemingly be a world-ending disaster, many reasons these fears resonate with
people, and plenty of reasons they may be well founded. However, those in industry are
often of the opinion that it is not possible for a computer to reach the level of
complexity needed to pose any real threat. AI is very good at automating one very
specific task, and an AI can theoretically be created for any conceivable task. But for
an AI to be useful at multiple tasks, one would need a controller AI, and for that
controller AI to control effectively, it would have to be proficient at the tasks its
minions are doing. The complexity needed to control any respectable number of AIs
therefore grows until the controller AI might as well be a minion AI, and a new
controller AI is needed. In this way, it is believed that AI can never rival human
intelligence. Indeed, even if an AI were to bypass this limitation, it would still
require resources in order to perform any harmful acts, so the worst-case scenario is
that humanity must face a rogue nation such as North Korea, but with a robotic leader.
Even this assumes that the AI would reach the power level of a nation, as opposed to
that of a small resistance group. If AI is or can be dangerous, it is important that it
is regulated and that laws are put in place well before they are needed. On the other
hand, if it is not possible for this danger to occur, then said laws and regulations
would only hinder progress and slow advancement. The opinions of voters are what will
decide the fate of AI. For this reason, I propose it is important to understand not only
their opinion, but the reason for their opinion. If it is found that the public fears AI
yet does not understand it, then it is important to know so that they may be educated.

Bibliography
Booch, Grady. 2016. Don't Fear Superintelligent AI. November. https://www.ted.com/talks/grady_booch_don_t_fear_superintelligence.

Cellan-Jones, Rory. 2014. Stephen Hawking Warns Artificial Intelligence Could End Mankind. December. http://www.bbc.com/news/technology-30290540.

Condliffe, Jamie. 2017. A Single Autonomous Car Has a Huge Impact on Alleviating Traffic. May 10. https://www.technologyreview.com/s/607841/a-single-autonomous-car-has-a-huge-impact-on-alleviating-traffic/.

Grey, CGP. 2017. How Machines Learn. December 18. https://www.youtube.com/watch?v=R9OHn5ZF4Uo.

Kelly, Kevin. 2016. How AI Can Bring On a Second Industrial Revolution. June. https://www.ted.com/talks/kevin_kelly_how_ai_can_bring_on_a_second_industrial_revolution.

Littman, Michael L. 2017. Elon Musk Is Wrong Again. AI Isn't More Dangerous than North Korea. August 15. http://fortune.com/2017/08/15/elon-musk-ai-artificial-intelligence-threat-twitter-north-korea/.

Miller, Ron. 2017. Artificial Intelligence Is Not as Smart as You (or Elon Musk) Think. July 25. https://beta.techcrunch.com/2017/07/25/artificial-intelligence-is-not-as-smart-as-you-or-elon-musk-think/.

Narula, Gautam. 2018. Everyday Examples of Artificial Intelligence and Machine Learning. March 1. https://www.techemergence.com/everyday-examples-of-ai/.

Olah, Chris. 2016. Bringing Precision to the AI Safety Discussion. June 21. https://research.googleblog.com/2016/06/bringing-precision-to-ai-safety.html.

Olmstead, Kenneth. 2017. "Voice Assistants Used by 46% of Americans, Mostly on Smartphones." Pew Research Center. December. http://www.pewresearch.org/fact-tank/2017/12/12/nearly-half-of-americans-use-digital-voice-assistants-mostly-on-their-smartphones/.

Tegmark, Max. 2016. Benefits & Risks of Artificial Intelligence. https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/.
