Ramirez
Dr. Lindberg
4/4/2022
the report examined several studies that ranged anywhere between 90% and 99%" (Buck, Toscano & Tereskerz). Issued in 2015, all Tesla car models were set to travel one billion miles to ensure that self-driving mode is safe; the trials ended in 2018 (Kiss 717). The results show a possibility of one accident occurring every 3.34 million miles. "According to the US Department of Transportation, there is an accident every 492,000 miles in America, making the self-driven mode seven times safer" (Kiss).

However, there has been a rise in AI-related crashes, especially involving the line of Tesla vehicles outfitted with AI. The Autopilot feature is ahead of its time. According to an article by NPR, vehicular manslaughter charges have increased because of the misuse of Autopilot in Tesla vehicles. In 2019, for example, a Tesla Model S with Autopilot enabled exited a freeway at its top speed of 75 mph, ran a stoplight, and crashed into a Honda Civic; two people were killed in the accident (NPR). Note that at 50 mph there is a 69% chance of death in a crash, so with a car going 75 mph, death is all but certain (Maison Law).

The many studies show that Tesla wants to ensure the driver is safe from crashes while driving in traffic, yet these articles show how people will use Tesla's artificial intelligence out of laziness. When humans become lazy and enable the Autopilot feature, they never realize they are putting others in danger. This article suggests that people must understand and become aware of the dangers they impose on others. If people can afford a costly Tesla but are inept behind the wheel, they do not deserve a gem like the Tesla. "In the proximity of autonomous vehicles, using their shortened reaction time, drivers in human-driven cars might abuse it to their advantage" (Kiss). I disagree with AI becoming integrated into everyday driving.
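The "seven times safer" figure above follows directly from the two mileage statistics already cited; a minimal arithmetic sketch, using only the numbers quoted from Kiss and the US Department of Transportation, can confirm the comparison:

```python
# Arithmetic check of the safety comparison quoted above.
# Both figures are the ones cited in the essay (Kiss / US Department of Transportation).

autopilot_miles_per_accident = 3_340_000  # one accident per 3.34 million miles (Tesla trials)
human_miles_per_accident = 492_000        # one accident per 492,000 miles (US DOT)

ratio = autopilot_miles_per_accident / human_miles_per_accident
print(f"Self-driving mode is roughly {ratio:.1f} times safer by this measure")
```

The exact ratio comes out to about 6.8, which the source rounds up to "seven times safer."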
Next, AI has already been installed within the technologies of hospitals. All people know that modern medicine is a blessing for all of humanity, but that is only because of the miracles of AI. Some AIs were built to "predict illnesses, such as sepsis (2)," but they are also becoming adept at clinical decision making. AI making clinical decisions for patients who may have an illness or a developing cancer is a fragile topic, as a single miscalculation will jeopardize the future life of the patient. What if the patient is misdiagnosed and is treated for a different disease? Or what if the AI kills the human while practicing surgery? This article understands that doctors, scientists, and technicians are doing what they can to ensure the safety of humans. One possible solution is the installment of the Hippocratic oath into the algorithms of AI. The Hippocratic oath is a solemn, almost romantic oath taken by every physician. This oath, from the reading, is a heartfelt promise that the physician will do no harm:
    I will love those who taught me these arts as I love my parents and I will offer my skills to the young with the same generosity that they were given to me. And I will never ask them for gold, but demand that they stand by this covenant in return. I also swear that if I earn fame and wealth, I will share it with my masters and my students. (Edelstein)
Since when does an AI feel love? When does an AI have parents, or even a single parent? What kind of figure does an AI look up to? Its creator? Does it have a chemical inside of its brain that will "spark" when it sees something it recognizes? No, the AI doesn't have a brain. An AI will be able to share its knowledge with others, but not with its children. If it shares that knowledge with simple children, the children will not understand or comprehend the complexities of the human body; the children must learn through rudimentary means, which an AI doesn't understand. Can an AI restrict itself, or control itself, just to humble itself for children? AI has never conceived a child before. In theory, if an AI is to conceive a child, either through creation or through procreation, the AI must first develop the appropriate intelligence and self-awareness to do so.
Next, AI has been beneficial to the military, as it can calculate measurements faster and ". . . simplify data workflows and improve the accuracy and speed . . ." (Hurley 112). AI can control algorithms efficiently and kill organisms with increased efficiency. In an experienced general's eyes, if something can kill humans quickly and efficiently, the general will smile as if he has not smiled since serving in Vietnam. However, the DOD (Department of Defense) or any other military agency cannot trust AI to the fullest due to the risks of hacking, hijacking, or control by an enemy. Once such an entity has taken control, there is a possibility that a superpower's weapons will be used against someone else, which could begin a war. Or they could be used against the superpower itself, which leaves the country vulnerable to the many countries who want to see America burn like Rome. "There are . . . additional concerns that AI may be unpredictable or vulnerable to unique forms of manipulation" (Hurley 115). Since AI will improve the fighting capability of a power or a superpower, it is the people's decision whether the government will implement machines to do the fighting for the people, will assist the military with technicians controlling where the AI deploys bombs, or will have no AI assistance in battle at all, only attrition and the blind eye. This article supports the integration of AI by a superpower, but control is the primary issue that must be addressed by the people and by the government that controls the AI. If an AI were to manipulate the constant turning points of battle, when shall we know we have lost control of the AI? Will the AI be fooled by the enemy, or will it mistake the user for the enemy?
CONCLUSION
AI is a sensitive topic for most people on Earth. We can control the machines we've created, but the question is how humanity can control such machines. Do we let AI do the work for humanity while we sit back and let it work? Or must we use AI while also using our own intelligence to understand any situation? Humans must acknowledge that they cannot use AI to their advantage and for their selfish gains just so they can become lazy. If they become lazy, they become distracted; thus, the human user will no longer be in control of the AI, and dire consequences follow. If an AI is trained to become more efficient in physiology, the Hippocratic oath will be incompatible with the algorithms of the AI. AI cannot comprehend emotion, nor can it distribute it, efficiently or even rudimentarily. Finally, control over the AI must be given to the people, as the AI must not take complete control of any situation by itself. If the AI takes control of any situation at hand, it will be neither efficient nor in the hands of the user any longer.
Works Cited

Buck, Toscano & Tereskerz, Ltd. "What Percentage of Car Accidents Are Caused by Human Error?" Virginia Law, of-car-accidents-are-caused-by-human-error/#:~:text=According%20to%20a%20Stanford%20Law,between%2090%25%20and%2099%25.

The Associated Press. "A Tesla Driver Is Charged in a Crash Involving Autopilot That Killed 2." crash-charges.

"The Fatal Car Accident: When Speed Is a Factor." Maison Law, 22 Apr. 2019, https://maisonlaw.com/2019/03/the-fatal-car-accident-when-speed-is-a-factor/#:~:text=At%2050%20mph%2C%20the%20risk,of%2070%20mph%20or%20more.

Edelstein, Ludwig. The Hippocratic Oath: Text, Translation and Interpretation. Baltimore: The

Hurley, J.S. "Fitting the Artificial Intelligence Approach to Problems in DoD." Journal of Infor-