
DEEPFAKE TECHNOLOGY AND ITS IMPLICATIONS ON

CYBERSECURITY IN DIGITAL MEDIA

Lipsa Dash, Assistant Professor, KIIT School of Law, KIIT DU, lipsa.das@kls.ac.in,

Neha Mohanty, 4th Year Student, KIIT School of Law, KIIT DU, 2082085@kls.ac.in,

Reeti Nanda, 4th Year Student, KIIT School of Law, KIIT DU, 2083054@kls.ac.in,

ABSTRACT

Deepfake technology is an advanced application of artificial intelligence that can produce strikingly lifelike fake content, making it increasingly difficult for viewers to distinguish real material from fabricated material. The recent, rapid rise in the use of this technology has become a serious concern in the field of information security. This research examines the significant effects of deepfake technology on cybersecurity in the context of digital media.

This paper focuses on the many effects of deepfake advances on digital media, data security, and cyberspace at large. Using an interdisciplinary approach that combines technology, cybersecurity, and media studies, it examines the shifting threat landscape of the digital environment and how deepfakes affect the integrity of digital content. The study also discusses legal and regulatory issues, the significance of cybersecurity awareness in digital media, and methods for identifying and handling the cybersecurity risks associated with deepfake technology. The paper concludes by emphasizing the need for a multifaceted cybersecurity strategy in light of the spread of deepfake technology. Because deepfakes continue to pose intricate challenges, resolving these problems is essential to preserving the trustworthiness of information in the digital era.

KEYWORDS: Cybersecurity, deepfake technology, Information Technology Act 2000, digital media, artificial intelligence
INTRODUCTION

Deepfakes are computer-generated, incredibly lifelike videos, pictures, or audio clips that alter material to give the impression that someone said or did something they never did. The technology presents serious cybersecurity risks in addition to creative opportunities in marketing and entertainment.1 Both cybersecurity professionals and IT enthusiasts have taken an interest in deepfake technology in recent years. "Deepfake", a blend of "deep learning" and "fake", refers to a technique for replacing a person's face in a video with that of a targeted individual, making it appear as though the targeted person is speaking words originally said by someone else.

The technology behind deepfakes analyses large volumes of data using sophisticated machine-learning methods, including deep neural networks. These programs then generate believable replicas of real people, frequently famous personalities or celebrities, and alter their gestures, speech patterns, and facial expressions to create extremely misleading content. Deepfakes have grown more intricate as the technology has advanced, making them harder for the unaided eye to distinguish.
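To make the mechanism described above concrete, the following is a minimal, purely illustrative sketch (in Python, using the PyTorch library) of the shared-encoder, per-identity-decoder autoencoder idea that underlies many face-swap deepfakes. All layer sizes, the 64x64 input resolution, and the random stand-in data are assumptions for illustration; this toy example does not reproduce any specific system discussed in this paper.

```python
# Toy sketch (PyTorch) of the shared-encoder / two-decoder autoencoder idea
# behind many face-swap deepfakes. All sizes and data are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a small latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a 64x64 face for ONE identity from the shared latent."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder learns identity-agnostic face structure;
# each decoder learns to render one specific person's appearance.
encoder = Encoder()
decoder_source = Decoder()   # trained only on faces of person A
decoder_target = Decoder()   # trained only on faces of person B

# Training (sketch): reconstruct each person's faces through their own decoder.
optimizer = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_source.parameters())
    + list(decoder_target.parameters()),
    lr=1e-4,
)
loss_fn = nn.MSELoss()

faces_a = torch.rand(8, 3, 64, 64)   # stand-in batches; real systems need
faces_b = torch.rand(8, 3, 64, 64)   # thousands of aligned face crops
for _ in range(1):                   # real training runs many epochs
    loss = loss_fn(decoder_source(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_target(encoder(faces_b)), faces_b)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The "swap": encode person A's face, decode with person B's decoder,
# producing B's appearance driven by A's pose and expression.
with torch.no_grad():
    swapped = decoder_target(encoder(faces_a))
```

The design point illustrated here is that the manipulation does not require handcrafted editing: once the shared representation is learned, swapping faces is a single forward pass, which is why convincing fakes can be produced at scale.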

"Deepfake" also describes the practice of fabricating content by superimposing the face of a source person onto a picture or video of a target person. The term originates from a Reddit account called "Deepfake", which claimed to have created a machine-learning-based system for inserting famous faces into adult-themed material.

Deepfakes have been put to many uses, including the creation of fictitious celebrity pornographic content, the dissemination of false information, the impersonation of politicians' voices, financial crime, and more. Although the technology has many legitimate potential applications, it is most often misused. Unlawful applications of deepfake technology have damaging short- and long-term effects on society, and ordinary social media users run a very high risk of falling victim to them.

1
Rutuja (5 February 2024). Deepfake Technology: The Cybersecurity Implications and Defenses. CyberNX Technologies Pvt Ltd. https://www.cybernx.com/b-deepfake-technology-the-cybersecurity-implications-and-defenses
CYBERSECURITY IMPLICATIONS OF DEEPFAKE TECHNOLOGY: SAFEGUARDING AGAINST ADVANCED THREATS

Deepfake technology has become a potent weapon in the toolbox of malicious individuals and hackers in recent years. Deepfakes are incredibly lifelike, artificial intelligence (AI)-generated images, audio, or text that remarkably resemble real people, and they are frequently deployed with malevolent intent.

The year 2024 marks a significant turning point in the field of cybersecurity. Technology is developing at a rapid pace, which has made the threat landscape more dangerous and complex; cyber threats have evolved beyond the conventional techniques of phishing and malware.

CYBERSECURITY IMPLICATIONS:

1) Social Engineering and Phishing Attacks: Deepfake video and audio files can be used by hackers to pose as senior government officials or business leaders, deceiving staff members into divulging private information or carrying out illegal activities.2

2) Damage to Reputation: Deepfakes can be used to produce misleading and damaging content, harming a person's or a company's brand and carrying financial and legal repercussions.

3) Financial Fraud: Deepfake technology can be used to generate believable counterfeit identities for fraudulent transactions, such as hacking bank accounts or applying for loans under false identities.

4) False Information and News: By spreading deceptive data or propagating political lies, deepfakes can intensify disinformation campaigns and increase public scepticism.

2
Singh, A. (2024, February 6). DEEPFAKE AND ITS LEGAL IMPLICATIONS - The Amikus Qriae. The Amikus
Qriae. https://theamikusqriae.com/deepfake-and-its-legal-implications/
5) Manipulation of Evidence: Deepfakes can be used to fabricate audio or video "evidence" or to cast doubt on genuine recordings, confusing and misleading investigators, courts, and the public.
6) Political Deception:
Election Interference: By producing fraudulent video or audio recordings of political leaders, deepfakes could sway public opinion and even affect the results of elections.
Diplomatic Ties: By fabricating records of talks or events, deepfakes have the potential to sour relations between countries.

7) Verification Difficulties:
Trust Issues: The emergence of deepfake technology could erode the credibility of video and audio recordings, making it more challenging to discern authentic media from altered media.
Biometric Security Risks: Biometric authentication systems based on facial or voice recognition may be circumvented by deepfakes.

Deepfakes also pose market-related cybersecurity hazards. The business community has already shown interest in guarding against viral frauds, because deepfakes can influence the price of commodities and securities by, among other things, showing a CEO making false claims about financial losses or liquidation, announcing a fake merger, or appearing to commit a crime.3

Governance Issues: Tackling the ethical and legal ramifications of deepfake technology requires comprehensive regulatory frameworks that balance innovation and freedom of expression against the protection of individual rights and community integrity.

3
Mustak, M., Salminen, J., Mäntymäki, M., Rahman, A., & Dwivedi, Y. K. (2023). Deepfakes: Deceptions,
mitigations, and opportunities. Journal of Business Research, 154, 113368.
https://doi.org/10.1016/j.jbusres.2022.113368
Legislative action, awareness campaigns, and continued research into sophisticated deepfake-detection tools are essential to addressing these cybersecurity consequences. Individuals and businesses alike should adopt strong cybersecurity practices such as multi-factor verification, staff training, and staying informed about emerging threats.
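As an illustration of the multi-factor verification recommended above, the following is a minimal sketch of a time-based one-time-password (TOTP) check in the style of RFC 6238, written with only the Python standard library. The secret, digit count, time step, and drift window are illustrative defaults, not a complete MFA deployment.

```python
# Minimal RFC 6238-style TOTP check using only the Python standard library.
# Illustrates one second factor for "multi-factor verification"; the demo
# secret and parameter choices are illustrative, not a full MFA system.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, at=None):
    """Compute the time-based one-time password for a base32-encoded secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // timestep)
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify_second_factor(secret_b32, submitted_code, window=1):
    """Accept the code for the current step or +/- `window` steps (clock drift)."""
    now = time.time()
    for drift in range(-window, window + 1):
        expected = totp(secret_b32, at=now + drift * 30)
        if hmac.compare_digest(expected, submitted_code):
            return True
    return False

# Usage sketch: after the password check succeeds, require the TOTP code too.
if __name__ == "__main__":
    demo_secret = "JBSWY3DPEHPK3PXP"      # example base32 secret (hypothetical)
    print("current code:", totp(demo_secret))
    print("verified:", verify_second_factor(demo_secret, totp(demo_secret)))
```

Because the code changes every thirty seconds and is derived from a secret the attacker does not hold, a convincing deepfake voice or video alone is not enough to pass this second factor.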

DEEP FAKE TECHNOLOGY AFFECTING THE INTEGRITY OF MEDIA

Given the Internet's reach and speed, and the multitude of news outlets, social media platforms4, and other applications, deepfakes can be produced, disseminated, and circulated to millions of people very quickly. Social media platforms have only recently begun scanning uploaded material for potential deepfakes, and they detect only about two-thirds of them. Malicious deepfakes have been banned, or are about to be banned, on several platforms, including Facebook, Instagram, Twitter, PornHub, Reddit, and TikTok. Because the perpetrators frequently operate anonymously, it can be difficult for victims to hold them accountable.
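As a rough illustration of how such platform-side scanning might work, the sketch below scores sampled video frames with a small binary classifier and flags the video when the average "fake" probability crosses a threshold. The tiny network, the 0.5 threshold, and the random stand-in frames are assumptions for illustration; production detectors use far larger models and face-specific preprocessing, and, as noted above, still miss a substantial share of deepfakes.

```python
# Sketch of platform-side deepfake screening: score individual frames with a
# small binary classifier and flag the video if the average score is high.
# The tiny CNN, the threshold, and the random frames are illustrative only.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Outputs a probability that a 128x128 RGB frame crop is synthetic."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, frames):                        # frames: (N, 3, 128, 128)
        pooled = self.features(frames).flatten(1)     # (N, 64)
        return torch.sigmoid(self.head(pooled)).squeeze(1)  # (N,) fake-probabilities

def screen_video(frames, model, threshold=0.5):
    """Return (flagged, mean_score) for a batch of sampled frames."""
    model.eval()
    with torch.no_grad():
        scores = model(frames)
    mean_score = scores.mean().item()
    return mean_score >= threshold, mean_score

# Usage sketch with random stand-in frames (a real pipeline would decode the
# video, detect and crop faces, and load trained weights first).
model = FrameClassifier()
sampled_frames = torch.rand(16, 3, 128, 128)
flagged, score = screen_video(sampled_frames, model)
print(f"flagged={flagged}, mean fake-probability={score:.2f}")
```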

The ability of deepfake technology to produce incredibly lifelike and convincing fictitious audio or video recordings has important ramifications for media integrity. The following are some of the ways in which deepfake technology is compromising that integrity.

 Misinformation and Disinformation: By producing fraudulent audio or video recordings, deepfakes can be used to disseminate misleading information and sway public opinion. This helps deception propagate and poses a major threat to the media's credibility.

 Political Manipulation: Fake speeches or interviews with political personalities can be produced using deepfake technology for political ends. During elections, this might be used to tarnish individual credibility or sway public opinion.

4
Scott, L. (2024, March 7). Deepfakes a “Weapon against journalism,” analyst says. Voice of America.
https://www.voanews.com/a/deepfakes-a-weapon-against-journalism-analyst-says-/7442897.html
 Identity Impersonation: Deepfakes can be used to imitate specific people, making it challenging for viewers to tell the difference between real and fake content. This raises the issue of identity theft, and those whose identities are stolen may suffer grave repercussions.

 Privacy Concerns: Individuals' personal and professional lives may be harmed by the misuse of the technology, which can be used to fabricate content featuring them without their permission, violating their right to privacy.

 Legal and Ethical Challenges: As deepfake technology proliferates, addressing its possible misuse becomes more important than ever. This presents legal and ethical challenges for content creators, platforms, and regulators, and laws and ethical standards may need to evolve to keep pace with the technology's rapid growth.

 Effect on Journalism and Media Credibility: Deepfakes pose serious difficulties for journalists and media organizations. Verifying the veracity of visual evidence becomes harder as deepfakes grow more convincing, and false material may accidentally be presented as authentic, damaging the trust placed in journalists and news organizations. Public confidence in journalism may decline, making it more difficult to distinguish trustworthy sources from misrepresented information.

SOCIETAL IMPLICATIONS OF DEEPFAKE TECHNOLOGY


A few significant societal effects of deepfake technology are outlined below.

 Misinformation and the Crisis of Trust: Misinformation feeds on itself, weakening public confidence in the media, institutions, and public leaders. The capacity to produce convincing fake material compromises the veracity of information and makes it more challenging for people to distinguish fact from fiction. This crisis of confidence affects public discourse, political processes, and society at large.
 Election Reliability and Political Influence: Deepfakes pose an important danger to political integrity. Malicious actors can use them to sway public opinion, fabricate stories, and affect elections. By disseminating phony audio or video of political figures, deepfakes can erode public confidence in politicians, skew public discourse, and interfere with the democratic process.
 Privacy and Consent: Deepfakes give rise to questions regarding consent and privacy. The capacity to alter and create audio, video, and image content puts people's privacy and control over their own data at risk. Deepfakes can be produced without permission, breaching people's privacy and potentially endangering their wellbeing.5

 Legal and Ethical Concerns: Deepfakes raise difficult moral and legal issues. Existing laws frequently struggle to keep up with the rapidly developing technology, making it difficult to hold people liable for the production and spread of deepfakes. Legislators and legal systems continually struggle to balance the right to free speech against the need to prevent harm and defend people's rights.

5
Gupta, A., Gupta, M., & Chauhan, G. (2022). Era of Deepfake Technology: Threat or Aimble. Journal of Emerging Technologies and Innovative Research, 9(6), 124–125. https://www.jetir.org/papers/JETIRFM06023.pdf
LEGAL STAND AND ISSUES
Deepfake technology has evolved as a potent tool for creating realistic and fraudulent digital
content by manipulating or generating audio, video, or images through artificial intelligence
(AI). While this technology opens up new opportunities in a variety of industries, it also raises
serious legal and cybersecurity problems, particularly in the context of digital media. The paper
examines the legal standing6 and difficulties of deepfake technology, as well as the consequences
for cybersecurity.

Legal Stand:

 Intellectual Property Rights: Deepfake technology frequently involves the unauthorised use of people's likenesses, potentially violating their intellectual property rights. Victims may have legal grounds to sue for defamation, false light, or misuse of identity.

 Privacy Laws: Deepfake material may break privacy regulations by modifying and using personal information without consent. Legislators are working to update privacy laws to address these concerns and give individuals greater control over the use of their likeness in digital media.

 Criminal Activities: The malicious use of deepfake technology for criminal purposes,
such as making fraudulent videos or disseminating misleading information, may result in
legal consequences. Governments are currently discussing legislation to handle the
criminal consequences of deepfakes.

 Defamation and Libel: Deepfake content can be used to generate fake videos or audio recordings that might destroy a person's reputation. Legal action for defamation and libel may arise if the content is spread with the intent to harm someone's reputation.

 Copyright Infringement: Deepfake technology frequently manipulates existing content, raising concerns about copyright infringement. Legal frameworks must adapt to address these problems and develop guidelines for the ethical use of AI-generated content.

Cybersecurity Implications:

6
Singh, A. (2024, February 6). DEEPFAKE AND ITS LEGAL IMPLICATIONS - The Amikus Qriae. The Amikus
Qriae. https://theamikusqriae.com/deepfake-and-its-legal-implications/
 Identity Theft: Deepfake technology poses a significant danger to cybersecurity because it enables sophisticated identity theft. Malicious actors can use manipulated content to impersonate individuals, resulting in fraudulent activity and compromising online security.
 Manipulation of Information: The ability to make realistic-looking fake content
raises worries about information manipulation. Deepfakes can be used to
propagate misinformation, sway public opinion, and even disrupt political
processes, posing a direct danger to cybersecurity and democratic institutions.
 Phishing Attacks: Cybercriminals might use deepfake technology to boost their
phishing attempts. Attackers might trick people into providing sensitive
information or engaging in extremely dangerous behavior by impersonating
reputable individuals or organizations via convincing audio or video messaging.
 Social Engineering: Deepfakes can aid in social engineering attacks by distorting
digital content to gain confidence. Employees or individuals may be duped into
performing acts that risk security, such as transferring payments or disclosing
sensitive information.
 Authentication Challenges: Deepfake technology is challenging standard authentication mechanisms. Systems that rely on facial recognition or voice verification may become insecure, necessitating stronger cybersecurity protections (a minimal sketch of one such protection follows this list).
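One example of such a stronger protection, sketched below purely as an assumption rather than a description of any deployed system, is a challenge-response liveness step layered on top of the existing face or voice match: the user must answer a fresh, unpredictable challenge within a short window, which a pre-rendered deepfake clip cannot anticipate. The phrase list, time limit, and helper names are hypothetical.

```python
# Conceptual sketch of a challenge-response liveness step layered on top of
# face/voice matching, so that a replayed or pre-rendered deepfake clip cannot
# answer a fresh, unpredictable challenge. Phrases, time limit, and the inputs
# `transcribed_speech` / `biometric_match` are illustrative assumptions.
import secrets
import time
from dataclasses import dataclass

CHALLENGE_PHRASES = ["blue falcon seven", "quiet river ninety", "amber kite twelve"]
MAX_RESPONSE_SECONDS = 10  # short window makes off-line fabrication harder

@dataclass
class Challenge:
    phrase: str
    issued_at: float

def issue_challenge():
    """Pick an unpredictable phrase the user must speak on camera."""
    return Challenge(secrets.choice(CHALLENGE_PHRASES), time.time())

def verify_session(challenge, transcribed_speech, biometric_match):
    """Accept only if biometrics match AND the fresh challenge was answered in time."""
    answered_in_time = (time.time() - challenge.issued_at) <= MAX_RESPONSE_SECONDS
    phrase_spoken = challenge.phrase in transcribed_speech.lower()
    return biometric_match and answered_in_time and phrase_spoken

# Usage sketch: `transcribed_speech` would come from a speech-to-text step and
# `biometric_match` from the existing face/voice recognizer.
challenge = issue_challenge()
ok = verify_session(challenge, f"i am reading the phrase {challenge.phrase}", biometric_match=True)
print("session accepted:", ok)
```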

The legal status of deepfake technology and its cybersecurity7 effects in the digital media landscape are complicated and varied. As the technology advances, policymakers, legal professionals, and cybersecurity experts must collaborate to create comprehensive frameworks that address the ethical use of deepfakes, protect individuals' rights, and ensure cybersecurity. Striking a balance between innovation and regulation is critical for realizing the potential benefits of AI while limiting the threats posed by malicious actors in a rapidly evolving digital world.

The current legal framework in India requires significant upgrades to adequately address the difficulties posed by synthetic media. First, existing privacy rules must be amended to specifically target the creation and spread of deepfake content without authorization. These reforms should establish a robust and transparent consent framework that defines when a person's likeness may be used in synthetic media. Imposing severe fines for privacy violations involving deepfakes would also serve as a deterrent. Finally, the legal system should include processes for holding individuals or organizations accountable for the creation and dissemination of harmful deepfake content. Establishing culpability standards and sanctions for those found guilty of using deepfakes for harmful purposes, such as defamation or character assassination, would be critical to preventing misuse of this technology.
7
Singh, A. (2024, February 5). DEEPFAKE AND ITS LEGAL IMPLICATIONS - The Amikus Qriae. The Amikus
Qriae. https://theamikusqriae.com/deepfake-and-its-legal-implications/
CYBERSECURITY CASES INVOLVING DEEPFAKES
In the digital era, the saying "seeing is believing" is becoming less reliable. Recent instances in Kerala and other parts of India have highlighted a growing problem: the increasing number of deepfakes8, hyper-realistic video frauds generated using artificial intelligence, with the potential to deceive individuals, slander public figures, and erode community trust. One of the most serious incidents came from Kerala, where a person was deceived and financially exploited using a deepfake video. Even high-profile figures such as Prime Minister Narendra Modi, actress Rashmika Mandanna, and cricket star Sachin Tendulkar have fallen victim to this harmful technology.
Rashmika Mandanna,9 a well-known actress in the entertainment industry, found herself at the center of a deepfake-induced cybersecurity threat. A deepfake video depicted her engaging in inappropriate behavior and making insulting remarks, and the manipulated material spread rapidly and extensively across numerous internet platforms. This caused immediate harm to her personal and professional reputation, affecting relationships with fans, colleagues, and advertisers. The deepfake video was deliberately uploaded to social media channels, where it quickly gained attention because of its sensational content. As a result, the manipulated content reached a large audience, and the actress faced harsh criticism and public scrutiny. Social media amplification heightened the cybersecurity danger by speeding up the distribution of false information.

Deepfakes are fast becoming a real threat to the uninformed and naive, as demonstrated by the recent case of a senior citizen. According to police, this is one of the first incidents in India of cyber criminals using AI-generated deepfakes for extortion. According to a TOI report, on November 30, criminals extorted a 76-year-old man using a video featuring the face and voice of a retired IPS officer of the Uttar Pradesh Police. The senior citizen ended up making repeated payments to the fraudsters out of fear that the authorities would take action against him for what appeared to be soliciting sex. Arvind Sharma, a Govindpuram (Ghaziabad) resident, had recently received his first smartphone and set up a Facebook account. On November 4, fraudsters initiated contact via a Facebook video call. When he saw a naked image, he hastily ended the call, but the criminals had enough time to record him. Some time later, he received a video call on WhatsApp from a person in police uniform, who threatened to file a complaint against Sharma's father unless he paid. The fraudsters demanded money and threatened to circulate a fabricated video of Sharma to his family. Sharma deposited Rs 5,000 out of fear of embarrassment; he subsequently paid more and more, eventually Rs 74,000 in total, and even had to borrow from the company where he works as a clerk.

In November, a 59-year-old woman in Hyderabad was fooled into transferring Rs 1.4 crore to a fraudulent caller who "mimicked" her nephew living in Canada and claimed he needed the money immediately. Deepfake videos portraying at least four popular actresses have gone viral in recent months. In another case, P S Radhakrishnan, a retired central government employee from Kozhikode, lost Rs 40,000 in July in what was described as a deepfake fraud. Radhakrishnan, 68, had received a video call from someone who "looked like" a former colleague, requesting money for a relative's surgery.

8
Rana, V. (7 March 2024). Deepfakes and Breach of Personal Data – A Bigger Picture. Mondaq. https://www.mondaq.com/india/social-media/1395304/deepfakes-and-breach-of-personal-data--a-bigger-picture

9
The Hindu Bureau (5 March 2024). Delhi Police files FIR in Rashmika Mandanna deepfake case. The Hindu. https://www.thehindu.com/news/cities/Delhi/delhi-police-files-fir-in-rashmika-mandanna-deepfake-case/article67522778.ece

Experts believe that measures for dealing with these crimes will require a comprehensive
strategy, increased public awareness of the exploitation of AI applications, and technology-
enabled policing.

M A Saleem, DGP, Criminal Investigation Department10 (CID), Economic Offences and Special Units, Karnataka, acknowledges this reality. "The tactics used in online financial frauds are becoming increasingly predictable. Criminals are turning to AI applications; there will be more crimes involving identity tampering using images and videos, such as those used to make deepfakes," he says. Experts note that criminals have historically been early adopters of technology; to build effective counter-strategies, it is important to recognize that AI,
like any other technical innovation, is a tool that may be applied to various aspects of human
activity, including crime. According to the 2023 State of Deepfakes study published by US-based
cybersecurity firm Home Security Heroes, it now takes less than 25 minutes and costs $0 to
generate a 60-second deepfake pornographic video using "just one clear face image". According
to the survey, India is sixth on the list of countries most vulnerable to deepfake pornography
(2%), following only South Korea (53%), the United States (20%), Japan (10%), England (6%),
and China (3%).

10
Now, E. (7 March 2024). Part time works scam, AI and audio deepfakes: how to survive cyber crime? The Economic Times. https://economictimes.indiatimes.com/markets/expert-view/part-time-works-scam-ai-and-audio-deepfakes-how-to-survive-cybe-crime/articleshow/100119892.cms?from=mdr

ESTABLISHING LIABILITY

Establishing liability in cases involving deepfake technology and its consequences for cybersecurity in digital media can be difficult because of the dynamic nature of the technology and the number of parties involved. Liability may attach to a variety of entities, including deepfake creators, the platforms hosting the content, and those who distribute it, while individuals or organizations injured by unlawful deepfakes may seek redress. Here are some considerations for establishing liability:

 Deepfake Creators: Individuals or groups responsible for the creation and distribution of
malicious deepfakes may face liability. Legal action may be launched based on
intellectual property violation, privacy invasion, defamation, or fraud, depending on the
nature and impact of the deepfake.
 Hosting Platforms: Social networking platforms, video-sharing websites, and other
internet platforms that host deepfake content may be held liable. Platforms may face
consequences for facilitating the spread of damaging deepfakes and may be required to
install content control and removal mechanisms.
 Technology Developers: Companies or individuals producing deepfake technology may
be held liable if their tools are exploited for unlawful purposes. Legal actions may centre
on the responsibilities of technology developers to create precautions against misuse or to
provide secure authentication procedures.
 Users Spreading Deepfakes: Individuals who deliberately distribute deepfakes with
malicious intent may be held liable. Legal implications could include claims of
defamation, violation of privacy, or fraud.
 Employers or Organizations: If deepfakes are created and distributed in the course of employment, the employer or organization may be held vicariously liable for the employee's actions.
 Legal Frameworks and Regulations: Governments may enact legal frameworks and rules
to allocate responsibility for the creation and spread of deepfakes. This legislation may
specify the roles of technology providers, internet platforms, and persons involved in the
creation and distribution of deepfake content.
 Authentication and Verification Systems: Liability concerns may also centre on the lack of effective authentication and verification procedures for distinguishing authentic from altered content. Technology suppliers may be held liable for failing to implement adequate safeguards against the misuse of their tools (a minimal provenance-check sketch follows this list).
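As a simplified illustration of what such verification procedures could look like, the sketch below has a publisher attach a keyed digest to a media file at release time and lets a recipient confirm that the bytes are unaltered. Real provenance schemes (for example, C2PA-style signed manifests) rely on public-key signatures and signed edit history; the shared-key HMAC used here is only a minimal stand-in.

```python
# Simplified sketch of content authentication: the publisher attaches a keyed
# digest to a media file at release time, and anyone holding the verification
# key can later check that the bytes have not been altered. Real provenance
# schemes use public-key signatures and signed edit history; the shared-key
# HMAC here is just a minimal stand-in.
import hashlib
import hmac
from pathlib import Path

def sign_media(path, key):
    """Publisher side: compute a keyed SHA-256 digest of the file's bytes."""
    return hmac.new(key, path.read_bytes(), hashlib.sha256).hexdigest()

def verify_media(path, key, published_digest):
    """Consumer side: recompute the digest and compare in constant time."""
    return hmac.compare_digest(sign_media(path, key), published_digest)

# Usage sketch with a throwaway file; in practice the digest would be
# distributed alongside the media or embedded in its metadata.
if __name__ == "__main__":
    key = b"demo-signing-key"            # illustrative only; keep real keys secret
    clip = Path("clip.mp4")
    clip.write_bytes(b"original video bytes")
    digest = sign_media(clip, key)
    print("authentic:", verify_media(clip, key, digest))   # True
    clip.write_bytes(b"tampered video bytes")
    print("authentic:", verify_media(clip, key, digest))   # False -> altered content
```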

Liability is frequently established through judicial actions, investigations, and the examination of evidence. In deepfake cases, digital forensics, metadata analysis, and expert testimony may be critical in determining the origin and intent of the altered material. As the legal environment evolves, addressing culpability in deepfake-related cases11 will most likely require collaboration among legal professionals, technologists, and regulators to find effective and equitable solutions.
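As a small illustration of the metadata-analysis step mentioned above, the sketch below hashes an exhibit for chain-of-custody purposes and dumps any embedded EXIF metadata, whose absence or inconsistency can be one early indicator of manipulation. It assumes the third-party Pillow library for EXIF reading; serious video forensics requires dedicated tooling and expert interpretation.

```python
# Minimal evidence-triage sketch: record a cryptographic hash of the file (for
# chain of custody) and dump any embedded EXIF metadata. Assumes the Pillow
# library for EXIF reading; video forensics needs dedicated tools.
import hashlib
import sys
from pathlib import Path

from PIL import Image, ExifTags  # pip install Pillow

def file_sha256(path):
    """Hash the raw bytes so any later alteration of the exhibit is detectable."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def read_exif(path):
    """Return EXIF tags (camera model, timestamps, software, ...) if present."""
    with Image.open(path) as img:
        exif = img.getexif()
    return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    exhibit = Path(sys.argv[1])              # e.g. python triage.py exhibit.jpg
    print("sha256:", file_sha256(exhibit))
    tags = read_exif(exhibit)
    if not tags:
        print("no EXIF metadata found (often stripped by editors and platforms)")
    for name, value in tags.items():
        print(f"{name}: {value}")
```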

11
Zanon, N. B., & Eichenberger, U. (2022, March 4). Deepfakes - New legal challenges due to technological
progress. Lexology. https://www.lexology.com/library/detail.aspx?g=ee8e20ad-a308-4e10-819a-6530d948e443
