
Acharya 1

Sulochan Acharya

29 February 2018

Artificial Intelligence is Not a Threat to Humanity

In the current era of technology, people ride in self-driving cars ignoring the wheel, sipping their drinks, and texting their loved ones. Some talk with a virtual assistant on their phone just to learn the weather. Only a year ago, a human-like robot named Sophia was granted citizenship by the Kingdom of Saudi Arabia. It all started when Charles Babbage, considered the father of computers, designed the machine known as the Analytical Engine. This machine performed mathematical calculations using an input-process-output approach, an approach that led to the invention of modern computers, and the rest is history. Today's computers are far more complex and powerful than anything built in the nineteenth century. According to Moore's law, their processing power has been increasing exponentially, roughly doubling every two years. This exponential growth has carried computers from long numerical calculations, to virtual simulations of nature, to a whole new kind of intelligence known as artificial intelligence.

People believe that computers will soon have so much processing power that they could emulate a human brain. They believe artificial intelligence will reach a point of singularity, where it will outperform human intelligence. Some believe that such a superior artificial intelligence could go rogue and try to wipe out its own creators. But one author comments, "I think the concept of the singularity is ill-conceived. It is based on an oversimplified and false understanding of intelligence" (Bundy 40). Machines that use artificial intelligence are designed to work in a confined domain, are specific in their objectives, pose danger only through the possibility of displacing human jobs, and are misunderstood by the public.

Every form of machine intelligence seen at present performs within a restricted space, and the same machine cannot process tasks that fall outside its scope. For example, the voice assistant on iPhones, Siri, can text, call, or hold a conversation with a person. If someone asks Siri to play chess, it might respond with a web search for chess games, but it still cannot play chess. Alan Bundy, a professor at the University of Edinburgh, asserts, "Deep Blue was a chess-playing computer, developed by IBM, that defeated the then-world champion, Garry Kasparov, in 1996. Deep Blue could play chess better than any human, but could not do anything other than play chess—it could not even move the pieces on a physical board" (Bundy 40). Artificial intelligence always tends to solve a particular task rather than replicate human intelligence (Dubhashi and Lappin 44). However well machines with artificial intelligence outperform humans at certain tasks, they can never step out of their realm. The fact that Deep Blue could not move the chess pieces supports the assertion that the objectives of machines with artificial intelligence are constrained to a certain extent, and the possibility of going beyond that extent is near zero.

While most people fear that artificial intelligence might someday lead humanity to doomsday, the real danger is not of that kind. According to Dubhashi and Lappin, "By contrast to super intelligent agents, we are currently facing a very real and substantive threat from AI of an entirely different kind" (Dubhashi and Lappin 45). Automation, driven by current AI technologies, has increased swiftly in recent years and has threatened the jobs of some professionals, including medical consultants (Dubhashi and Lappin 45). Companies prefer automation as a substitute for human workers because of its efficiency, stability, budget-friendly operation, and endless diligence. Unemployment is thus a genuine threat posed to humans by artificial intelligence. The authors state that "no form of employment is immune to automation by intelligent AI systems" (Dubhashi and Lappin 45). It can be inferred that someday almost every company will implement automation, leading to high unemployment rates. However, unemployment cannot steer humanity into a cataclysmic event like extinction. Any unemployment that artificial intelligence might induce in the future can be addressed by government, which has the power to regulate companies and their policies. Unemployment is a real concern for humans, but it is not a risk powerful enough to threaten humanity's existence.

Furthermore, people believe anything that interacts or thinks like a human is some form of artificial intelligence, and they mistake some conventional computers for machines that use artificial intelligence. For example, weather forecasting is done by conventional computers given some variables and sufficient processing power, yet some people believe that weather forecasting is a form of artificial intelligence. Some purveyors describe such systems as artificial intelligence because of their phenomenal capabilities (Parnas 27). In other words, people who are astounded by current computing technologies misconstrue them as artificial intelligence. The author of "The Real Risks of Artificial Intelligence: Incidents from the Early Days of AI Research Are Instructive in the Current AI Environment" explains, "those who use the term 'artificial intelligence' have not defined that term. I first heard the term more than 50 years ago and have yet to hear a scientific definition. Even now, some AI experts say that defining AI is a difficult [and important] question" (Parnas 27). It can be inferred that artificial intelligence has no agreed-upon definition; every individual has their own.

Human beings are intelligent in almost every aspect. Humans can solve problems, predict outcomes, play games, and act according to the situation using a single processor: the brain. Only when a single artificially intelligent machine can perform such general tasks can it be compared with human intelligence. When programmers refer to artificial intelligence, they always mean something specific to them (Parnas 31). As explained earlier, programmers can only create artificial intelligence that performs a very specific task; there is currently no such thing as general artificial intelligence.

These are the primary reasons why artificial intelligence is not a threat to humanity. Intelligent machines are very specific to their tasks. They may pose some lesser threats to humans, which can be eliminated; however, artificial intelligence cannot induce disastrous effects capable of bringing about the downfall of humanity. If humans can create such a thing, then humans can destroy it too. Sometimes conventional machines designed for a specific task are misjudged by individuals as being intelligent. Because there is no form of general artificial intelligence, current task-specific artificial intelligence technologies cannot be compared to humans. If general artificial intelligence is ever developed in the future, one might consider comparing it with humans. However, humans will always be superior and will have the power to shut it down.



Works Cited

Bundy, Alan. "Smart Machines Are Not a Threat to Humanity." Communications of the ACM, vol. 60, no. 2, 2017, pp. 40-42.

Dubhashi, Devdatt, and Shalom Lappin. "AI Dangers: Imagined and Real." Communications of the ACM, vol. 60, no. 2, 2017, pp. 43-45.

Parnas, David Lorge. "The Real Risks of Artificial Intelligence: Incidents from the Early Days of AI Research Are Instructive in the Current AI Environment." Communications of the ACM, vol. 60, no. 10, 2017, pp. 27-31.