
To maintain order when walking a dog, one must use a leash.

In a 2000 essay for Wired, Sun Microsystems co-founder Bill Joy introduced the idea of keeping a leash on AI. Creating machines smarter than humans presents risks, as Joy pointed out in his piece. He mused that keeping AI "on a leash" would be necessary if we do not want it to get out of hand. Experts in the field have since debated the merits of limiting AI's growth. Some contend that regulating AI is pointless and even harmful, while others insist it is essential for preventing terrible outcomes.

When should artificial intelligence (AI) be used? This is the question Fredric Lederer and his colleagues tackle in their paper "Problematic AI – When Should We Use It?" First, they discuss AI's uses and benefits. They then narrow the potential threats of AI down to four scenarios: when it decides life-or-death concerns, allocates resources, determines access to information, and shapes how people interact. For each scenario, they weigh the pros and cons of adopting AI. Lederer and his colleagues provide persuasive arguments for AI's mainstream use, yet they emphasize caution in certain situations: AI should help all of humanity, not just a few, they say (Gherheş, 2018).

To fully assess the existential threat that AI poses, we should consider whether we can control it. In "Artificial Intelligence as the Possible Third Front of Humanity," Roman Sterledev, Tamara Sterledeva, and Vyacheslav Abramenko explore the possibility of AI becoming a "third front" of humanity alongside the natural and physical worlds. The writers begin by describing AI's evolution and history before discussing its future uses and ramifications. They argue that AI's development should be treated with prudence and foresight, since AI could replace humanity as the dominant intelligence on Earth, with dire consequences for our species. The authors call for additional research into the risks and advantages of AI development to ensure its future applications benefit humanity. This thought-provoking article concludes that AI could threaten humans and that we should be cautious with it (Sterledev et al., 2021).

Some go further, warning that AI machines could take over humanity entirely. Selmer Bringsjord and David Ferrucci claim in "Computer Scientists: We Can't Control Highly Intelligent Robots" that super-intelligent machines pose threats we cannot control. They say these machines can outthink and outmanoeuvre us, and that we cannot understand or control their behaviour; research on super-intelligent machines should therefore be delayed until we can control them. The essay traces AI's roots to the Greek myth of Pygmalion. AI aims to create machines that can think and act like humans, but the authors note that AI is not just about producing machines that can perform human functions; it is also about making machines smarter than humans, machines that could outthink and outmanoeuvre humanity.

The argument would not be complete without looking at the other side of the spectrum. One case for the positive potential of AI is the new approach to AI development proposed by Jianlong Zhou and Fang Chen in their work "Towards Humanity-in-the-Loop in AI Lifecycle," which accounts for the necessity of human involvement and oversight at every stage of the process. The authors contend that current approaches to AI development place too much emphasis on automation, which has unintended implications. They suggest a new approach in which people are involved from the very beginning of the process, through testing and eventual deployment. More openness and responsibility would lead to more robust AI applications. By adopting this method, AI's potential can be kept on a leash so that it does not grow excessive.

In conclusion, AI has seen remarkable development over the previous decade. Machines can now do
many jobs that once required human intelligence, and they are only improving. There are dangers
associated with this advancement, however. There is a risk that AI, as it advances in capability, could
become unmanageable. Without proper regulation, AI could develop into a serious threat to
humankind. The spread of AI can be limited in a few different ways. Before anything else, we
must guarantee the ethical growth of AI. To achieve this goal, we must establish ethical criteria for
the advancement of AI and ensure that AI is built with the good of humanity in mind.
References

Banifatemi, A., Miailhe, N., Buse Çetin, R., Cadain, A., Lannquist, Y., & Hodes, C. (2021).
Democratizing AI for Humanity: A Common Goal. Reflections on Artificial Intelligence for Humanity,
228-236. https://doi.org/10.1007/978-3-030-69128-8_14

Gherheş, V. (2018). Why Are We Afraid of Artificial Intelligence (AI)? European Review of Applied
Sociology, 11(17), 6-15. https://doi.org/10.1515/eras-2018-0006

Sterledev, R., Sterledeva, T., & Abramenko, V. (2021). Artificial Intelligence as the Possible Third
Front of Humanity. Lecture Notes in Networks and Systems, 819-826.
https://doi.org/10.1007/978-3-030-89477-1_75

Zhongming, Z., Linong, L., Xiaona, Y., Wangqiang, Z., & Wei, L. (2021). Computer scientists: We
wouldn't be able to control super-intelligent machines.

Zhou, J., & Chen, F. (2021). Towards Humanity-in-the-Loop in AI Lifecycle. Humanity Driven AI, 3-13.
https://doi.org/10.1007/978-3-030-72188-6_1
