Michelle Brown
CST 300 Writing Lab
14 October 2022
As the world continues to progress, so does technology. Technology can affect our lives both positively and negatively, and one example is Artificial Intelligence (AI)-generated synthetic media, also called deepfake technology. Deepfake technology can be both beneficial and detrimental, and the issue at hand is whether or not it should be made illegal because of its potential for harm. The terms “deepfake” and “AI-Generated Synthetic Media” are used interchangeably throughout this paper.
Background
AI-Generated Synthetic Media refers to media created with deep learning techniques, such as generative adversarial networks (GANs), to generate videos and/or audio of events that never actually happened; this includes fake videos and/or audio of real people (Marr, 2022). An example of this
technology being used is recording a video of one person making a speech, then swapping out the original person’s face for a celebrity’s face, making it seem as if the speech was actually presented by the celebrity. Today, various free online applications are available to the public, allowing anyone to make their own deepfake video and/or audio. Since its rise in popularity, the technology has been used for positive (e.g., detecting tumors) and negative (e.g., swapping someone’s face into a pornographic video that they did not consent to) use cases (Çolak, 2021). This has brought about the ongoing issue of whether or not deepfake technology should be made illegal.
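The adversarial idea behind GANs can be illustrated with a deliberately tiny sketch of my own (not from any cited source): a one-parameter “generator” learns to shift random noise until a logistic-regression “discriminator” can no longer tell it apart from “real” samples. Real deepfake models use deep neural networks with millions of parameters, but the feedback loop between the two networks is the same.

```python
# Toy illustration of a generative adversarial network (GAN), the technique
# behind deepfakes. A one-parameter "generator" shifts random noise; a
# logistic-regression "discriminator" tries to tell shifted noise apart from
# "real" samples drawn near 4.0. Training pushes generated samples toward the
# real distribution. This is a sketch of the concept, not a real deepfake model.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

REAL_MEAN = 4.0
g_shift = 0.0          # the generator's single parameter
w, b = 0.1, 0.0        # the discriminator's weight and bias
lr = 0.05              # learning rate for both players

for step in range(2000):
    real = rng.normal(REAL_MEAN, 0.5, size=32)
    fake = rng.normal(0.0, 0.5, size=32) + g_shift

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w -= lr * (np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake))
    b -= lr * (np.mean(d_real - 1.0) + np.mean(d_fake))

    # Generator step: move g_shift so the discriminator scores fakes higher.
    d_fake = sigmoid(w * fake + b)
    g_shift -= lr * np.mean((d_fake - 1.0) * w)

# After training, generated samples should cluster near the real mean (4.0).
print(f"learned shift: {g_shift:.2f}")
```

As the two players compete, the generator’s output drifts toward the real data until the discriminator can no longer reliably separate the two, which is exactly why mature deepfakes become hard to detect.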
History
Although data manipulation is much more elaborate today, the idea is not new. Data manipulation has been happening since Ancient Roman times, when people “chiseled names and portraits off stone, permanently deleting a person’s identity and history” (Somers, 2020). With computers and technology (especially AI), the world is only getting more advanced at data manipulation (e.g., apps that allow users to edit how they look in pictures posted online). In 2017, a Reddit user who went by the name “deepfakes” “created a space on the online news and aggregation site, where they shared pornographic videos that used open source face-swapping technology” (Somers, 2020). This Reddit user thereby coined the “deepfake” term that we use today. However, because of the negative use cases associated with its origin, the term often carries a negative connotation.
Stakeholder Analysis
A stakeholder is any person or group of people affected by an issue, whether positively or negatively. Regarding the issue of whether or not deepfake technology should be advanced, the two key stakeholders are those who are pro deepfake technology and those who are against it.
Values. Stakeholders who are pro deepfake technology value that this is an area of AI where advancements can create new ideas and opportunities that would otherwise not be possible. Some areas where this technology can be used for benefit include medicine, accessibility, and education.
Position. Because of the positive use cases that have come from AI-Generated Synthetic Media technology, pro deepfake stakeholders believe that the technology should not be made illegal and should continue to be advanced.
Examples of positive use cases have occurred in areas such as medicine, accessibility, and
education. In the medical field, this technology has been used in medical studies without having to use live test subjects, while still benefiting the greater population with the end results (treatments, medications, etc.); it may be useful in developing treatments without harming animals or humans (Marr, 2022). This technology has also improved accessibility by lessening the gap between the advantaged and disadvantaged. An example is the use of AI-Generated Synthetic Media to create a synthetic voice for patients who cannot speak, giving them the ability to communicate with their friends and family (Jaiman, 2020). In education, AI-Generated Synthetic Media has been used to allow students to view a video of a historical figure delivering an important speech, rather than only reading about it in a textbook, because it was never captured on film.
Claims. One claim used to support this position is the Claim of Value, which states that an action is right or wrong based on what a person values; this can differ depending on what is “normal” in a person’s society. The medical use case mentioned previously values not harming live humans and/or animals in test studies: it is desirable if we can use deepfake technology to develop helpful treatments without harmful effects along the way. The education use case mentioned previously values the quality of a student’s educational experience: deepfake technology provides the opportunity to deliver the same content in a more memorable and engaging way.
Another claim used to support this position is the Claim of Cause, which states that one event is the effect of another. The accessibility use case above connects the use of deepfake technology to the effect of giving patients the ability to communicate with a synthetic voice; they now have one less disadvantage than they did before.
Values. Stakeholders who are against deepfake technology value preventing the spread of misinformation and defamation that can come from its use. They also value preventing criminal activity enabled by deepfake technology, as well as preventing the suspicion that any piece of media could be fake, which can cause chaos and confusion. Technology is advancing at a rapid rate, and deepfake media is quickly becoming more convincing: “In the months and years ahead, deepfakes threaten to grow from an Internet oddity to a widely destructive political and social force. Society needs to act now to prepare itself” (Toews, 2020).
Position. Because of the negative use cases that have come from the use of AI-Generated Synthetic Media technology, stakeholders against deepfake technology believe that the technology should be made illegal and should not continue to be advanced.
The following are some examples of negative use cases. A viral deepfake video previously spread of President Barack Obama appearing to use a swear word to describe President Donald Trump, when he did not actually say this (Toews, 2020).
On New Year’s Day 2019, Ali Bongo, the president of Gabon, spoke in a video to the public in an attempt to reassert his leadership, as the public had not heard from him for quite some time. However, Bongo did not seem natural in the video; suspicions grew that it was fake, and people jumped to the conclusion that he was actually dead or unable to perform his duties. It was never proven whether the video was real or fake, but the military did try to stage a coup shortly afterward, citing the video as evidence that something was wrong (Toews, 2020).
After Mumbai journalist Rana Ayyub wrote an article about an Indian political party in April 2018, she faced retaliation from those who did not agree with it. This came in the form of a deepfake pornographic video that used her face without her consent and was spread on social media. The embarrassment and negative attention she received led to her hospitalization, and she stopped using social media (Jaiman, 2020).
Lastly, in 2020, an attorney from Pennsylvania was tricked into thinking that he was actually talking to his son on the phone, who asked for $9,000 to bail him out of jail, when in fact the voice was a deepfake.
Claims. One claim used to support this position is the Claim of Policy, which states that policies should be put in place to fix an issue in society. Deepfake technology can be used with harmful and malicious intent; therefore, a policy should be put in place to prevent this by making the technology illegal. The use cases above provide examples of how it has caused defamation, misinformation, and fraud.
Argument Question
Should AI-Generated Synthetic Media (deepfake technology) be illegal, thus halting the advancement of the technology?
Stakeholder Arguments
Stakeholder 1 - Pro deepfake technology; the technology should not be made illegal
The pro deepfake technology stakeholders use the Utilitarian ethical framework to argue their position. Utilitarianism was developed by Jeremy Bentham (MacAskill et al., 2022) and judges an act by how it affects everyone: the goal is for the consequences of an act to result in happiness for the greatest number of people possible, and one should aim to help others.
Referring to the medical and education use cases discussed in the corresponding stakeholder’s “Position” section above, the following demonstrates how the tenets of the Utilitarian framework are used to argue the pro deepfake technology position. Medical advancements using deepfake technology have the potential to benefit the larger population: treatments and medications that result from it can help large groups of sick and disadvantaged patients. Educational advancements can teach a new generation of children in a more engaging and memorable way.
According to this stakeholder’s perspective, the correct course of action to take on this
issue is that deepfake technology should not be made illegal and should continue to be advanced.
The developments that come from it may not be possible if we do not continue to learn and
progress in this area. This course of action would be a gain for this stakeholder because it improves the lives of individuals by giving them tools that they may not have had access to before. We also continue to make progress in AI as a whole.
Stakeholder 2 - Against deepfake technology; the technology should be made illegal
The stakeholders against deepfake technology use the Ethical Egoism framework to argue their position. As Sidgwick (1874) discusses in The Methods of Ethics, Ethical Egoism holds that the individual acts in their own interest, doing what will have the most desirable effect for them: “...I have used the term “Egoism” as others have done, to denote a system which prescribes actions as means to the end of the individual’s happiness” (Sidgwick, 1874, p. 72).
The tenets of the Ethical Egoism framework are used to argue this stakeholder’s position against deepfake technology in the following examples. Deepfake videos that target celebrities, politicians, or everyday people can greatly harm the individual: entire reputations can be ruined, and in some cases the individual cannot recover. Deepfake audio can trick individuals into thinking that they are speaking to someone whom they trust and potentially
According to this stakeholder’s perspective, the correct course of action to take on this
issue is that deepfake technology should be made illegal and should not continue to be advanced.
By advancing this technology, we make it progressively easier for users to apply it with malicious intentions. As the technology improves, it will also become harder to distinguish real media from fake (which could be detrimental when reviewing evidence for a court case). If deepfake technology were to continue, it would be a loss for this stakeholder: continuing to advance it will cause public mistrust and confusion about what is real versus fake
media. Individuals are more at risk of being scammed by fake phone calls from someone they
think they know. Individuals are also more at risk of being the target of defamation.
My Position
My position is that deepfake technology should not be made illegal and should continue to advance. With any technology comes the potential for misuse by certain individuals or groups. We cannot halt the advancement of this technology and miss out on the potential benefits and opportunities it can unlock for our society.
My position on this issue aligns with Stakeholder 1 (pro deepfake technology). If we can perform medical studies and tests without using live animals or humans, we receive the benefit of the end result without harming anyone in the process. Students can experience a more engaging education: watching a historical figure deliver a speech themselves (even if it is a deepfake video) may leave a better lasting impression than reading about it in a textbook. Additionally, the synthetic voice for the patient who can’t speak may provide opportunities they wouldn’t have had before due to easier communication.
I don’t think the use of deepfake technology should be illegal or banned as a whole. Rather, the crime that the technology is used for should be charged as it would be today, with deepfake technology treated merely as the tool used to commit the crime. For example, if someone uses deepfake technology to ruin someone’s reputation, they should be charged in a defamation case regardless of the tool they used to carry it out. If someone is using deepfake technology for non-malicious reasons, I don’t think they should receive any kind of punishment. As Castro (2020) states, “... make it unlawful to distribute deepfakes with a malicious intent…However, it is important that lawmakers carefully craft these laws so as not to erode free speech.”
References
Castro, D. (2020, January/February). Deepfakes are on the rise — How should government respond? Government Technology. https://www.govtech.com/policy/deepfakes-are-on-the-rise-how-should-government-respond.html
Çolak, B. (2021, January 19). Legal issues of deepfakes. Institute for Internet and the Just Society.
issues-of-deepfakes
Jaiman, A. (2020, August 14). Positive use cases of synthetic media (aka deepfakes). Towards Data Science.
https://towardsdatascience.com/positive-use-cases-of-deepfakes-49f510056387
Jaiman, A. (2020, August 19). Deepfakes harms & threat modeling. Towards Data Science.
and-threat-modeling-c09cbe0b7883
MacAskill, W., Meissner, D., & Chappell, R.Y. (2022). Introduction to utilitarianism.
https://www.utilitarianism.net/introduction-to-utilitarianism
Marr, B. (2022, January 11). Deepfakes – The good, the bad, and the ugly. Forbes. Retrieved from
https://www.forbes.com/sites/bernardmarr/2022/01/11/deepfakes--the-good-the-bad-and-
the-ugly/?sh=208a35e74f76
Sidgwick, H. (1874). The methods of ethics. https://www.google.com/books/edition/The_Methods_of_Ethics/PQ8SAAAAYAAJ?hl=en&gbpv=0
Somers, M. (2020, July 21). Deepfakes, explained. MIT Sloan. Retrieved September 25, 2022,
from https://mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained
Toews, R. (2020, May 25). Deepfakes are going to wreak havoc on society. We are not prepared. Forbes.
https://www.forbes.com/sites/robtoews/2020/05/25/deepfakes-are-going-to-wreak-havoc-
on-society-we-are-not-prepared/?sh=421a8b987494