
Joseph A. Ramirez

Dr. Lindberg

ENGL 1302 221

4/4/2022

ESSAY III

INTRODUCTION

Technology continues to grow, and new ways of controlling humanity are emerging as we move into the world of tomorrow; unfortunately, that time is now. AI, or artificial intelligence, enables a computer to process tasks faster than a human can. AI can move scientists forward while cutting processing time in half or more. But for every blessing there is a curse. AI gives humanity the illusion of control: it is vulnerable to cyber-attacks and open to abuse by human users. AI must not be trusted with vital information, nor be left to treat patients in hospitals by itself, to defend our country on its own, or to become a part of our lives unchecked. This essay will argue that AI introduces vulnerability, creates opportunities for people to abuse its processes, cannot be trusted with human emotion, and must never be entrusted to take over a situation without human control.

THE PET CALLED MACHINE

In the realm of traffic, humans are not clever. "In a study, at least 90% of all motor vehicle crashes are caused fully or in part by human error. The exact percentage is debatable, and the report examined several studies that ranged anywhere between 90% and 99%" (Buck, Toscano & Tereskerz, Ltd.). Beginning in 2015, all Tesla car models were set to log one billion miles to ensure that self-driving mode is safe; the trials ended in 2018 (Kiss 717). The results show a possibility of one accident occurring every 3.34 million miles. "According to the US Department of Transportation, there is an accident every 492,000 miles in America, making the self-driven mode seven times safer" (Kiss). However, there has been a rise in AI-related crashes, especially involving the line of Tesla cars outfitted with AI; the autopilot feature is ahead of its time. An NPR article reports an increase in vehicular manslaughter from misuse of the autopilot in Tesla vehicles. In 2019, for example, a Tesla Model S had autopilot enabled on a freeway. When the Tesla exited the freeway at its top speed of 75 mph, it ran a stoplight and crashed into a Honda Civic; two people were killed in the accident (NPR). Note that at 50 mph there is a 69% chance of death, so with a car going a top speed of 75 mph, death is all but certain (Maison Law). These studies show that Tesla wants to keep drivers safe from crashes in traffic, yet they also show how people will use Tesla's artificial intelligence out of laziness. When drivers grow lazy and enable the autopilot feature, they never realize they are putting others in danger. People must understand and become aware of the dangers they impose on others. If someone can afford a costly Tesla but is inept behind the wheel, that person does not deserve a gem like the Tesla. "In the proximity of autonomous vehicles, using their shortened reaction time, drivers in human-driven cars might abuse it to their advantage" (Kiss). I oppose AI being integrated into cars because of its unfair use as an excuse for laziness.
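The "seven times safer" comparison follows directly from the two mileage figures quoted from Kiss; a quick calculation (a sketch using only the numbers cited above, with variable names of my own choosing) confirms the ratio:

```python
# Miles driven per accident, as quoted above (Kiss).
tesla_miles_per_accident = 3_340_000  # one accident per 3.34 million miles on autopilot
us_miles_per_accident = 492_000       # US average: one accident per 492,000 miles

# How many times more miles autopilot logs between accidents.
ratio = tesla_miles_per_accident / us_miles_per_accident
print(f"Autopilot logs {ratio:.1f} times more miles between accidents")
```

The ratio works out to roughly 6.8, which Kiss rounds up to "seven times safer."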

THE FESTERING MACHINE WITHIN BODIES

Next, AI has already been installed within the technologies of hospitals. Everyone knows that modern medicine is a blessing for all of humanity, but much of that blessing now rests on the miracles of AI. Some AIs were built to "predict illnesses, such as sepsis (2)," but they are becoming adept at clinical decision making. An AI making clinical decisions for patients who may have an illness or a growing cancer is a fragile topic, as a single miscalculation will jeopardize the patient's future. What if the patient were misdiagnosed and treated for a different disease? Or what if the AI killed the patient while performing surgery? This essay understands that doctors, scientists, and technicians are doing what they can to ensure the safety of humans. One possible solution is the installment of the Hippocratic oath into the algorithms of AI. The Hippocratic oath is a solemn, even romantic, oath taken by every physician: a heartfelt promise that the physician will do no harm.

I will love those who taught me these arts as I love my parents and I will offer my skills

to the young with the same generosity that they were given to me. And I will never ask

them for gold, but demand that they stand by this covenant in return. I also swear that if I

earn fame and wealth, I will share it with my masters and my students. (Edelstein)

Since when does an AI feel love? When does an AI have parents, or even a single parent? What kind of figure does an AI look up to? Its creator? Does it have a chemical inside its brain that "sparks" when it sees something it recognizes? No; the AI does not have a brain. An AI can share its knowledge with others, but not with children of its own. If it shared that knowledge with small children, they would not comprehend the complexities of the human body; children must learn through rudimentary means, which an AI does not understand. Can an AI restrict itself, or humble itself, for the sake of children? AI has never conceived a child. In theory, if an AI could conceive a child, whether through creation or procreation, it would first have to develop the intelligence and self-awareness to create the child, and only then share its intelligence with it.


THE WORLD'S MACHINE

AI has been beneficial to humanity, as it can calculate measurements faster and ". . . simplify data workflows and improve the accuracy and speed…" (Hurley 112). AI can run algorithms efficiently, and it can also kill organisms with increased efficiency. In an experienced general's eyes, if something can kill humans quickly and efficiently, the general will smile as he has not smiled since serving in Vietnam. Yet the DOD (Department of Defense), or any other military agency, cannot trust AI fully, due to the risks of hacking, hijacking, or control by an enemy, or by anyone able enough to take control of the AI. Once control is taken, such a superpower's power may be used against someone else, which could begin a war, or against the superpower itself, which leaves the country vulnerable to the many countries that want to see America burn like Rome. "There are… additional concerns that AI may be unpredictable or vulnerable to unique forms of manipulation" (Hurley 115). Since AI will improve the fighting capability of a power or a superpower, it is the people's decision whether the government will implement machines to do the fighting for the people, assist the military with technicians controlling where the AI deploys bombs, or forgo AI assistance in battle entirely, relying only on attrition and a blind eye. This essay supports the integration of AI within a superpower, but control is the primary issue that must be addressed by the people and the government that controls the AI. If an AI were to manipulate the constant turning points of battle, how would we know when we had lost control of it? Would the AI be fooled by the enemy, or would it mistake its user for the enemy?

CONCLUSION
AI is a sensitive subject for most people on Earth. We can control the machines we have created, but the question is how humanity can control them. Do we let AI do the work for humanity while we sit back and let it run? Or must we use AI while applying our own intelligence to understand each situation? Humanity must acknowledge that it cannot use AI for selfish gain just to become lazy. If people become lazy, they become distracted; the human user then loses control of the AI, and dire consequences follow. If an AI is trained to become more efficient in physiology, the Hippocratic oath will remain incompatible with its algorithms: AI cannot comprehend emotion, nor distribute it efficiently or even rudimentarily. Finally, control over the AI must rest with the people, as it must not take complete control over any situation by itself. If the AI takes control over any situation at hand, it will be neither efficient nor in the hands of its user any longer.
Works Cited

"What Percentage of Car Accidents Are Caused by Human Error?: Virginia Law Blog." Buck, Toscano & Tereskerz, Ltd., 11 Jan. 2021, https://www.bttlaw.com/what-percentage-of-car-accidents-are-caused-by-human-error/#:~:text=According%20to%20a%20Stanford%20Law,between%2090%25%20and%2099%25.

The Associated Press. "A Tesla Driver Is Charged in a Crash Involving Autopilot That Killed 2 People." NPR, 18 Jan. 2022, https://www.npr.org/2022/01/18/1073857310/tesla-autopilot-crash-charges.

"The Fatal Car Accident: When Speed Is a Factor." Maison Law, 22 Apr. 2019, https://maisonlaw.com/2019/03/the-fatal-car-accident-when-speed-is-a-factor/#:~:text=At%2050%20mph%2C%20the%20risk,of%2070%20mph%20or%20more.

Edelstein, Ludwig. The Hippocratic Oath: Text, Translation and Interpretation. Baltimore: The

Johns Hopkins Press, 1943.

Hurley, J.S. "Fitting the Artificial Intelligence Approach to Problems in DoD." Journal of Information Warfare, 2021, pp. 110-123.
