
Moral Philosophy Paper

Student Name:

Institution Name:

Course:

Instructor Name:

Date:

Building a personal artificial intelligence presents ethical and theological difficulties that, until recently, were addressed only in science fiction and fantasy. By providing translations, doing research, producing art, spotting fraud, and optimizing logistics, artificial intelligence is already enhancing our lives. As the capabilities of these systems increase, our world becomes more efficient and richer.

Public figures such as Elon Musk and the physicist Stephen Hawking believe that the time has come for serious discussion of the almost limitless potential of artificial intelligence. In many respects this is a new frontier, for ethics and risk assessment as much as for the rapidly advancing technology itself. So what are the difficulties and challenges that keep AI researchers up at night? The thesis statement for this paper is: "The use of Artificial Intelligence is not immoral in itself, if used responsibly, considering the needs of the modern world." Prima facie ethics will be applied in this paper to examine the ethical implications of artificial intelligence.

What Happens If You Don't Have A Job? 

Automation is a significant problem for the labor hierarchy. As we develop methods for automating occupations, we may be able to free workers to take on increasingly complex responsibilities, shifting from the physical labor that dominated the pre-industrial world to the cognitive labor that characterizes strategic and administrative employment in our globalized society (Latonero, 2018).

Consider the trucking industry, which employs millions in the United States alone. What happens if Elon Musk's self-driving trucks become widely available over the next decade? Judged by the risk of accidents, self-driving trucks seem to be the ethical option (Latonero, 2018). Office personnel, and indeed the majority of the workforce in developed countries, can expect to confront a similar predicament.



This brings us to the question of time. The vast majority of individuals still depend on selling their time to earn a livelihood and feed their families. We can only hope that the alternative allows people to find meaning in non-labor activities such as family care, community involvement, and new ways of contributing to human civilization.

If we make it through the shift successfully, we may one day look back and find it barbaric that people were compelled to sell the majority of their waking hours simply to exist.

How Do We Disperse The Wealth Produced By Machines?

Our economic system is built on the idea that people should be compensated for their economic contributions, which is why it is usual to talk about hourly wages (Etzioni and Etzioni, 2017). The majority of firms still depend on hourly labor for their products and services. However, by bringing artificial intelligence into a corporation, it is possible to drastically diminish its demand for human labor, with the result that fewer people are paid. As a consequence, the individuals who own AI-powered enterprises will flourish.

Startup founders already take home a larger share of the income their firms generate, widening the wealth disparity. In 2014, the three largest firms in Detroit and the three largest corporations in Silicon Valley generated roughly the same revenues, but the Silicon Valley firms employed about a tenth as many people (Etzioni and Etzioni, 2017).

If we are serious about creating a post-work society, how do we construct a fair post-labor economy?

The Present Situation Of Humanity. Ramifications Of Robots On Human Behaviour And Interpersonal Relations.

Automated systems powered by artificial intelligence are becoming ever more adept at replicating human communication and interpersonal connection. In 2014, a chatbot called Eugene Goostman became the first to win a Turing Challenge: human raters conversed with an unknown entity through text input and then judged whether they had been chatting with a person or a machine. Eugene Goostman persuaded a third of the human raters that they were speaking with another human being (Berendt, 2019).

This milestone signals the advent of a new age in which we will routinely engage with bots in the same manner that we interact with humans, whether in customer service or commerce. Unlike humans, artificial bots can devote an almost limitless amount of time and goodwill to building relationships.

Though many of us are unaware of it, we have already demonstrated that machines can trigger the reward centers of the human brain. Consider clickbait headlines and video games. A/B testing, a basic form of algorithmic optimization for content designed to capture human attention, is commonly employed to refine these headlines (Berendt, 2019). Similar strategies are used to deepen the addictiveness of a broad variety of video and mobile games. Technology addiction is the new frontier of human dependency.
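The A/B-testing loop described above can be sketched in a few lines of Python. This is a minimal illustration rather than an implementation from the cited sources; the two headline variants and their click-through rates are invented for the example.

```python
import random

random.seed(42)

# Hypothetical click-through rates for two headline variants (invented).
TRUE_CTR = {"A": 0.04, "B": 0.06}

def show_headline(variant):
    """Simulate one impression: did the reader click?"""
    return random.random() < TRUE_CTR[variant]

clicks = {"A": 0, "B": 0}
impressions = {"A": 0, "B": 0}

# Split traffic evenly between the two variants and record clicks.
for _ in range(10_000):
    variant = random.choice(["A", "B"])
    impressions[variant] += 1
    if show_headline(variant):
        clicks[variant] += 1

rates = {v: clicks[v] / impressions[v] for v in clicks}
winner = max(rates, key=rates.get)
print(f"Observed CTRs: {rates}, winner: {winner}")
```

Real systems repeat this loop continuously, always serving the variant that captures the most attention, which is precisely what makes the optimization so effective at holding it.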

On the other hand, we might find another use for software that has already proved effective at directing human attention and initiating specific behaviors. In the right hands, it could nudge society's conduct in more beneficial directions. In the wrong hands, it could be devastating.

Artificial Stupidity. How Can We Be Sure We Don't Make Mistakes?

Whether in a human or a computer, intelligence is the result of learning. Typically, systems go through a training phase in which they "learn" to identify and respond to certain patterns (Coeckelbergh, 2019). Once properly trained, a system advances to the testing phase, during which it is exposed to new situations and its performance is assessed.
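The train-then-test cycle just described can be illustrated with a deliberately simple learner, a one-dimensional threshold classifier; the data, the 5% label noise, and the threshold rule are all invented for the sketch.

```python
import random

random.seed(0)

# Toy data: points in [0, 1], labeled 1 if above 0.5, with 5% label noise.
def make_data(n):
    data = []
    for _ in range(n):
        x = random.random()
        label = 1 if x > 0.5 else 0
        if random.random() < 0.05:  # occasionally flip the label
            label = 1 - label
        data.append((x, label))
    return data

train, test = make_data(800), make_data(200)

def accuracy(threshold, data):
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

# "Training": pick the threshold that best separates the training set.
candidates = [i / 100 for i in range(101)]
best = max(candidates, key=lambda t: accuracy(t, train))

# "Testing": evaluate on data the system has never seen.
print(f"learned threshold={best:.2f}, test accuracy={accuracy(best, test):.2f}")
```

The held-out test set is the point of the exercise: high accuracy on the training data alone tells us nothing about how the system behaves in situations it has not encountered.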

Training cannot, of course, cover every conceivable circumstance in which a system may be used. These systems can also be fooled in ways that humans cannot: random dot patterns, for example, may lead a computer to "see" items that are not physically there. If we are to rely on AI to usher in a new age of employment, security, and efficiency, we must ensure that the machine performs as intended and that it cannot be subverted (Coeckelbergh, 2019).

Racist Robots. How Can We Make Artificial Intelligence Less Biased?

For all that artificial intelligence can achieve, it cannot always be relied upon to be fair and impartial in its examination of data. Google and its parent company Alphabet are pioneers of artificial intelligence, as demonstrated by Google's Photos service, which employs AI to recognize people, objects, and surroundings (Boden, 2018). It can, however, go astray, as when the service's image recognition proved racially insensitive, or when algorithms used to anticipate future crimes revealed a prejudice against Black people.

We must not lose sight of the reality that artificial intelligence systems are designed by individuals who bring their own biases and judgments (Boden, 2018). Once again, when used responsibly by those devoted to the betterment of society, artificial intelligence has the potential to be a catalyst for positive change.

How Can We Safeguard Artificial Intelligence Against Possible Adversaries?

The more sophisticated technology becomes, the more it can be used for both good and evil. This is true not just for robots meant to replace human soldiers, or for autonomous weapons, but for any artificial intelligence system that could wreak havoc if abused. Because these battles will take place away from the battlefield, cybersecurity will become even more critical (Decker, 2008). After all, we are dealing with systems that are orders of magnitude more powerful and efficient than we are.

Evil Genies. What Measures Are In Place To Protect Us Against Unintended Consequences?

We must be concerned with more than merely our opponents. What if artificial intelligence itself grew to loathe humans? This is not to suggest that AI will become "evil" in the way humans do, or in the manner Hollywood portrays AI disasters (Griggs, 2013). Rather, we should think of a capable AI system as a "genie in a bottle": able to fulfill requests, but with the possibility of devastating unforeseen effects.

In the case of a computer, malice is improbable; a failure to comprehend the full context in which a request was made is far more likely. Consider an artificial intelligence system tasked with eradicating cancer from the planet. After extensive computation, it discovers a method that does eradicate cancer: annihilating everyone on Earth. The computer would have achieved its goal of "no more cancer" with great efficiency, but not in the way humans intended.

How Can We Keep Control Of Such A Sophisticated Intelligent System?

Humans are at the top of the food chain for reasons other than razor-sharp teeth and raw power. Our dominance is owed almost entirely to intellect and resourcefulness: humans overcome bigger, faster, and stronger animals because we can invent and deploy control mechanisms such as cages and weapons, as well as cognitive tools such as teaching and conditioning (Krafft et al., 2020).

This raises the question of whether artificial intelligence will one day hold the same edge over humans. Nor can we rely on simply "pulling the plug," since a sufficiently advanced system may anticipate and guard against such a move. Some refer to the point at which humans cease to be the most intelligent species on Earth as the "singularity" (Krafft et al., 2020).

Robot Rights. How Do We Define The Humane Treatment Of AI?

While neuroscientists continue to study the mysteries of conscious experience, we now have a clearer understanding of the principles of reward and aversion, mechanisms shared by even the simplest species. In various ways, we are constructing artificial intelligence systems with equivalent reward and aversion mechanisms. By rewarding better performance with a virtual incentive, reinforcement learning is akin to dog training.
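The analogy to dog training can be made concrete with a minimal reinforcement-learning sketch: an epsilon-greedy agent that learns which of two hypothetical "tricks" earns a virtual reward more often. The actions and reward probabilities are invented for illustration.

```python
import random

random.seed(1)

# Two possible actions and the (hypothetical) chance each earns a reward.
REWARD_PROB = {"sit": 0.8, "bark": 0.2}
values = {"sit": 0.0, "bark": 0.0}   # the agent's estimate of each action's value
counts = {"sit": 0, "bark": 0}

for step in range(2000):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    reward = 1.0 if random.random() < REWARD_PROB[action] else 0.0
    # Incremental average: nudge the estimate toward the observed reward.
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

print(values)  # "sit" should end up valued far higher than "bark"
```

The agent never sees the true probabilities; like the dog, it simply repeats whatever behavior the virtual reward has reinforced.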

The complexity and realism of these systems are rising, however simple they remain for now. When a system's reward mechanisms deliver negative feedback, can we say the system is in distress? Furthermore, so-called genetic algorithms work by creating many instances of a system at once, of which only the most effective "survive" and merge to generate the next generation (Busch, 2011). Repeated over many generations, this serves as a method of system improvement; unsuccessful instances are simply discarded. At what point does discarding them become a form of mass murder?
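The survive-merge-discard cycle just described can be sketched as a toy genetic algorithm solving the classic OneMax problem (maximize the number of 1-bits in a genome). The genome length, population size, and mutation rate are arbitrary choices for the example.

```python
import random

random.seed(7)

GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 40

def fitness(genome):
    """Toy objective: count of 1-bits (the 'OneMax' problem)."""
    return sum(genome)

def crossover(a, b):
    """Merge two surviving instances at a random cut point."""
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.02):
    """Flip each bit with small probability."""
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Only the fittest half "survives"; the rest are discarded.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # Survivors merge to produce the next generation of instances.
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best)}/{GENOME_LEN}")
```

Each pass through the loop deletes half the population outright; the ethical question above is whether, for sufficiently sophisticated instances, that deletion would ever count as more than bookkeeping.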

If we suppose that robots are sentient entities capable of perception, emotion, and action, it becomes appropriate to question their legal status. Should they receive the same consideration as other intelligent animals? Will we take into account the suffering experienced by "feeling" machines?

"When we investigate the mind as an object, we recognize that there are virtually more patterns in human experience than we can count or fathom," McCorduck (1979, p. 329) wrote of the endeavor. There is, moreover, a risk that this idea may prove to transcend human experience entirely, pushing us into the metahuman. At one end of the moral spectrum lie anxieties about everything that can go wrong; at the other, an enthusiastic "yes" to every possibility and reward. From the perspective of Kant's and Ross' (1965) deontological theory, AI would be deemed immoral, though both would probably have revised their views, given the chance today, in light of the requirements of the modern world. While the first commandment of ethics is "do no harm," we should proceed with discretion when it comes to innovation. Some ethical considerations concentrate on the alleviation of suffering, others on the risk of unwanted consequences. While we must investigate these problems, we must also keep in mind that technological innovation often results in an improved quality of life for everyone. Artificial intelligence has immense potential, and it is our job to guarantee that it is used responsibly. Perhaps our inventions will outshine us. It is possible that God did extend a finger toward Adam's hand, as Michelangelo painted.

 
 
 

References

Bechtel, W. (1985). Attributing responsibility to computer systems. Metaphilosophy, 16, 296–306. https://doi.org/10.1111/j.1467-9973.1985.tb00176.x

Berendt, B. (2019). AI for the common good?! Pitfalls, challenges, and ethics pen-testing. Paladyn, Journal of Behavioral Robotics, 10, 44–65. https://doi.org/10.1515/pjbr-2019-0004

Boden, M. A. (2018). Artificial intelligence: A very short introduction (Reprint ed.). Oxford University Press.

Busch, T. (2011). Capabilities in, capabilities out: Overcoming digital divides by promoting corporate citizenship and fair ICT. Ethics and Information Technology, 13, 339–353.

Coeckelbergh, M. (2019). Artificial intelligence: Some ethical issues and regulatory challenges. Technology and Regulation, 31–34. https://doi.org/10.26116/techreg.2019.003

Decker, M. (2008). Caregiving robots and ethical reflection: The perspective of interdisciplinary technology assessment. AI & Society, 22, 315–330.

Etzioni, A., & Etzioni, O. (2017). Incorporating ethics into artificial intelligence. The Journal of Ethics, 21, 403–418. https://doi.org/10.1007/s10892-017-9252-2

Griggs, D., Stafford-Smith, M., Gaffney, O., et al. (2013). Sustainable development goals for people and planet. Nature, 495, 305–307. https://doi.org/10.1038/495305a

Krafft, T., Hauer, M., Fetic, L., et al. (2020). From principles to practice: An interdisciplinary framework to operationalise AI ethics. VDE and Bertelsmann Stiftung. https://www.ai-ethics-impact.org/resource/blob/1961130/c6db9894ee73aefa489d6249f5ee2b9f/aieig—report—download-hb-data.pdf

Latonero, M. (2018). Governing artificial intelligence: Upholding human rights & dignity. Data & Society. https://datasociety.net/wp-content/uploads/2018/10/DataSociety_Governing_Artificial_Intelligence_Upholding_Human_Rights.pdf
