
Final Research Essay: Deepfakes and Authenticity in Media

Date: December 8, 2023

Introduction

In the current era of technological advancement, globalization has created a space in which we have access to a wide range of information. With the rise of social media, apps such as TikTok, Instagram, and VSCO allow individuals to share personal information that the public could not otherwise have retrieved. Drawing on the vast knowledge of the internet, artificial intelligence (AI) has taken the world by storm. With the ability to perform human-like tasks, AI takes many different forms in the online world. The most recent and notable is deepfake technology, which manipulates existing data to produce auditory and visual content that mimics human expression. Unrestricted access to the internet, with its vast store of information, makes it extremely easy to create misinformation about a topic, event, or person. This erosion of privacy and trust among the general population creates a layer of fear around AI technology and what it is truly capable of. Given the foundation of trust between the public and the information they receive, the unsupervised use of this technology will continue to widen that gap, creating an overwhelming sense of caution toward information found online. Deepfake technology has great power with extremely dangerous potential effects: it creates divides within the political and social spheres of the world and calls into question the security of one's personal information. Without limitations, it will continue to create uncertainty over whether the information an individual receives is true.

Arguments

The political and corporate effects of deepfake technology detrimentally affect people's trust in the systems they live under and in the figures they rely on. While the lack of trust in media starts with small comments on fake reboots of movies and TV shows, deepfakes can go as far as to affect corporate giants as well as politicians. Indeed, "The term deepfakes was coined after the works of Suwajanakorn et al. [9] in 2017, when they created fake videos of the US president of the time, Barak Obama" (Saif et al., 2022). This technology can be used to imitate anyone with an online presence. Whether the target is the president of a country or an ordinary individual, a person with access to enough digital information can create almost anything. Large media corporations, companies such as Google, Twitter, and Facebook, and outlets such as The Wall Street Journal, Reuters, and The Washington Post (Vizoso et al., 2021) all face the challenge of dealing with the mass spread of misinformation, which affects not only their reputations but also their credibility as reliable sources of information for the public. It is hard to root out what has been planted within these media companies, as "fake news [tries] to imitate news items' formal appearance… [to be] conceived like journalistic pieces are common features of this misinformation strategies" (Vizoso et al., 2021), continuously breaking the trust between corporations and the general population, who can no longer differentiate between what is real and what is fake. Corporations, however, are not the only ones affected by the spread of deepfakes; the popularity of social media makes a battleground of every other citizen as well.

The effects of deepfakes on social media have created a space where personal information is tampered with and exposed at the cost of an individual's image. Privacy is valued in society. Given the individualism of capitalist societies, the importance of confidential information is stressed, and currently "The protection of data privacy has garnered significant attention" (Smith et al., 1996). Through social media, the private information of individuals circulates within the online pool of data. As stated above, this provides a much bigger target for deepfakes, which use the visual and auditory information shared by individuals to create deceptions. The problems that stem from this can be catastrophic, as "[deepfake] technology is used to make several kinds of videos such as funny or pornographic videos of a person involving the voice and video of a person without any authorized use" (Shahzad et al., 2022), with no form of consent. This raises serious ethical issues and casts the victims in a negative light. Deepfake technology's ability to mimic human behaviour has resulted in pornographic images of people who never consented to or participated in those practices. Illustrating the real dangers of easily accessible, unmonitored deepfakes, approximately "ninety-six percent of all deepfake videos online are pornographic, and those depicted in pornographic deepfakes are almost exclusively women" (Kugler et al., 2021), and their creators target an audience that is not itself negatively affected by these creations. Meanwhile, the victims of the fabrications have their images ruined and their professional careers destroyed. From this spread of misinformation through deepfakes comes the question of online security and what is being done to prevent destructive disinformation from undermining the privacy and integrity of individuals.

Security and research focused on deepfakes are now more intense than ever, zeroing in on new and more efficient detection technology. With AI able to produce creations that appear to be human or human-made, there needs to be a focus on limiting unrestricted access to such tools. Whether it is spotting an essay written with ChatGPT, a company using an actor's scan to create digital extras, or any number of other creations, more oversight needs to be built into free access to this technology. The rush for answers is causing "a strong acceleration in multimedia forensics research" (Verdoliva, 2020), from which two categories of detection clues have emerged: the "in-camera processing chain (camera-based clues) or the out-camera processing history (editing-based clues)" (Verdoliva, 2020). These ideas and various mechanisms have been used to distinguish deepfakes within media. Tactics such as examining "Lens distortion, CFA artifacts, Noise levels, Noise patterns, Compression Artifacts and Editing artifacts" (Verdoliva, 2020) come out of research on the distinct traces left at each stage of visual and auditory production with a camera. Technology can create great fear through the misinformation and disinformation that is shared, but research enabling the detection of both photo and video deepfakes can allow society to slowly mend its trust in the truth and the information it is constantly given. This technology can have benefits, but only once the proper precautions are in place.
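The noise-pattern clue mentioned above can be illustrated with a minimal sketch: real camera images carry sensor noise in every pixel, while synthesised or heavily re-processed regions are often unnaturally smooth. The sketch below is a toy illustration of that idea, not a real forensic pipeline; the patches are synthetic and the Laplacian residual is only one crude stand-in for the noise models used in actual multimedia forensics.

```python
import random

def laplacian_residual(img):
    """High-frequency residual via a 4-neighbour Laplacian filter.
    Camera sensors leave characteristic noise in every pixel; regions that
    were synthesised or heavily re-processed often show much weaker noise."""
    h, w = len(img), len(img[0])
    res = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
                   - img[y][x - 1] - img[y][x + 1])
            res.append(lap)
    return res

def noise_energy(img):
    """Mean absolute residual: a crude proxy for local noise level."""
    res = laplacian_residual(img)
    return sum(abs(v) for v in res) / len(res)

random.seed(0)
# Synthetic "camera" patch: flat brightness plus sensor-like Gaussian noise.
camera = [[128 + random.gauss(0, 5) for _ in range(16)] for _ in range(16)]
# Synthetic "manipulated" patch: a smooth gradient with no noise at all,
# standing in for an over-smoothed generated region.
spliced = [[100 + 0.5 * (x + y) for x in range(16)] for y in range(16)]

print(noise_energy(camera) > noise_energy(spliced))  # prints True
```

In practice, forensic detectors compare such residual statistics across regions of a single image, flagging patches whose noise signature differs from the rest of the frame.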

Rebuttal

Deepfake technology is impressive. The ability to create something that appears real from previous data is truly a technological wonder. Proponents of this technology would argue that inhibiting its unregulated use would discriminate against them and their freedom of expression: that there are positive ways to use the technology, and that simply because others have misused it does not mean they will. It must be acknowledged that this technology has many positives and can be used for many interesting projects, as "[deepfakes] are also beginning to be used by creative artists. The rapper Kendrick Lamar released a music video in 2022 (‘The Heart Part 5’) depicting himself rapping but superimposing different influential black men onto his face using deepfake technology" (Murphy et al., 2023), and the technology was used in the Star Wars production The Mandalorian to create a younger version of Luke Skywalker, played by Mark Hamill. Deepfakes give these artists a different creative outlook on their projects and another means of achieving a product. While this technology can provide amazing results, the problem lies in the matter of consent. In both instances mentioned, consent was given by all parties, so those pieces of art were created and used safely and creatively. However, because "creating realistic manipulated media assets may be very easy, provided one can access large amounts of data" (Verdoliva, 2020) and yields a "Realistic product without significant training or expensive equipment" (Kietzmann et al., 2020), far more deepfakes are produced without consent than with it. This lack of communication and permission is what creates fear of and conflict over the use of this technology. It does not take much to fabricate a lie and use it to harm another with malicious intent. For this reason, it is harder to rely on video evidence in court, and matters sometimes reach the point where "video evidence cannot be used in court cases" (Murphy et al., 2023). For these reasons, there needs to be a discussion about restricting freedoms when it comes to AI technology and the dangers behind it.

Conclusion

Technology and AI have advanced beyond our grasp. The easy accessibility of such compelling technology has created a gap in what one can believe with one's own eyes. With deepfake technology, an individual can watch someone say something they never said, and when it looks real, the public will often take it as real. This technology simply needs to be monitored more thoroughly than the current political climate allows. By focusing on responsible practices, advancing detection technologies, and educating individuals about misinformation, this technology could be used for wonderful creative development rather than malicious, destructive intents. Given the nature of globalization, this technology is unavoidable in a society slowly becoming more dependent on online and AI technologies in day-to-day life. A boundary must simply be drawn between full individual freedom and detrimental repercussions for those affected.

References

Karnouskos, S. (2020). Artificial Intelligence in Digital Media: The Era of Deepfakes. IEEE Transactions on Technology and Society, 1(3), 138–147. https://doi.org/10.1109/tts.2020.3001312

Kugler, M. B., & Pace, C. (2021). Deepfake privacy: Attitudes and regulation. SSRN Electronic Journal.

https://doi.org/10.2139/ssrn.3781968

Li, M., & Wan, Y. (2023). Norms or fun? the influence of ethical concerns and perceived enjoyment on

the regulation of deepfake information. Internet Research, 33(5), 1750–1773.

https://doi.org/10.1108/intr-07-2022-0561

Murphy, G., Ching, D., Twomey, J., & Linehan, C. (2023). Face/off: Changing the face of movies with

deepfakes. PLOS ONE, 18(7). https://doi.org/10.1371/journal.pone.0287503

Saif, S., & Tehseen, S. (2022). Deepfake videos: Synthesis and detection techniques – A survey. Journal

of Intelligent & Fuzzy Systems, 42(4), 2989–3009. https://doi.org/10.3233/jifs-210625

Shahzad, H. F., Rustam, F., Flores, E. S., Luís Vidal Mazón, J., de la Torre Diez, I., & Ashraf, I. (2022). A

review of image processing techniques for deepfakes. Sensors, 22(12), 4556.

https://doi.org/10.3390/s22124556

Verdoliva, L. (2020). Media Forensics and DeepFakes: An overview. IEEE Journal of Selected Topics in

Signal Processing, 14(5), 910–932. https://doi.org/10.1109/jstsp.2020.3002101

Vizoso, Á., Vaz-Álvarez, M., & López-García, X. (2021). Fighting deepfakes: Media and internet giants’

converging and diverging strategies against hi-tech misinformation. Media and Communication, 9(1),

291–300. https://doi.org/10.17645/mac.v9i1.3494
