
INTRODUCTION

Deepfake is a face-swapping technique in which photographs of a person are utilized by artificial intelligence technology to create digital doppelgängers (look-alikes), which are then placed onto different bodies. Deepfakes created from a single source image are frequently visible as fakes, but ones created from hundreds of photographs or video clips can be rather convincing.
Deepfakes pose a number of systemic socio-political concerns, including civil discourse
manipulation, election meddling, and national security threats, as well as a loss of trust in
journalists and public institutions in general. False endorsements, fraudulent submissions of
documentary proof, loss of creative control over audiovisual output, extortion, harassment,
and reputational damage are just a few examples of the harm that can be done to persons and
businesses.
Deepfakes, on the other hand, have a lot of good uses. Advances in AI-Generated Synthetic
Media, also known as Deepfakes, have obvious advantages in fields including accessibility,
education, film creation, criminal forensics, and artistic expression.

SHOULD THE COPYRIGHT SYSTEM TAKE COGNIZANCE OF DEEP FAKES AND, SPECIFICALLY, IS COPYRIGHT AN APPROPRIATE VEHICLE FOR THE REGULATION OF DEEP FAKES?

Role of copyright laws in the regulation of deep fakes


In general, the law does not change at the same rate as technology. That said, in the case of videos made by deepfake technology, numerous causes of action already available under our current laws may apply (or be expanded) to address the wrongs perpetrated through a person's exploitation or abuse of deepfake technology. A copyright owner has the sole authority to make or duplicate a work in any material form under the Copyright Act. Furthermore, a work's author possesses moral rights, including the right to the work's integrity, which are violated when the work is distorted, mutilated, or otherwise altered. Undoubtedly, some deepfake films will infringe copyright, particularly those derived from copyrighted videos and photographs through alteration and republication.
The Indian viewpoint is one of fair dealing, as defined by Section 52 of the Copyright Act of
1957, which contains an exhaustive list of what is not deemed copyright infringement.
Because deepfakes are not included in this list, it is easier to hold the developer liable.
Furthermore, the Copyright Act of 1957, Section 57(1)(b), provides for the right of integrity
as well as paternity. A copyrighted work is protected against distortion, mutilation, and
modification.
Furthermore, under Section 14 of the Copyright Act of 1957, the author retains the right to make derivative works. Sections 55 and 63 impose civil and criminal penalties for infringement of exclusive rights. Given the current legal position following Myspace Inc. v. Super Cassettes Industries Ltd and Section 79 of the Information Technology Act, 2000, it is also easier to impose liability on intermediaries.
As intellectual property is designed to promote creativity and stimulate further innovation, ownership of this copyright must be extended to the individual who utilises generative adversarial network technology to create the deepfake material. To properly understand this approach, the legal personality of artificial intelligence must be evaluated. The ability to hold rights and discharge obligations is the most significant prerequisite for granting legal personality to any entity. If copyright law is modified to match the demands and standards of different jurisdictions, it can still be an effective tool for regulating deepfakes.

ASSUMING DEEP FAKES SHOULD BENEFIT FROM COPYRIGHT, TO WHOM SHOULD THE COPYRIGHT IN THE DEEP FAKE BELONG?

Issues arising from deep fakes cannot be readily, or solely, evaluated within the copyright law framework. Deep fakes refer to manipulated videos and other digital representations produced by artificial intelligence, creating fabricated visuals and sounds that appear natural. The most profound issues, of personal identity, the right to publicity, the right to privacy, and the ability to control the use of one's image, appear more suited to human rights analysis than to purely or even primarily copyright analysis. Indeed, the question should perhaps be whether copyright should even be accorded to deep fake imagery, rather than to whom copyright in a deep fake should belong. Suppose the deep fake imagery depicts a human subject in a manner or light wholly inconsistent with the subject's life, life's work, or status. In that case, it seems incongruent that this deep fake should be rewarded with copyright protection.
On the other hand, it may be helpful to consider instances where deep fake imagery may deserve copyright protection. In such cases, the copyright might properly belong to the humans from whom the design and algorithm of the AI program that creates the imagery originate. For example, an audiovisual producer may develop an AI program in-house to recreate the image of a deceased actor for use in a new film. The copyright in the resulting deep fake may be accorded to the audiovisual producer. It might also be the case that a deep fake is produced using a commercially available AI algorithm. The human actor uses the AI algorithm to accomplish his creative vision, much like a photographer uses a camera to bring forth his perspective. In this latter case, copyright ownership could be accorded to the human actor employing the AI algorithm as a tool.
Section 52 of the Indian Copyright Act sets out an exhaustive list of what is not deemed copyright infringement. Since deep fakes are not included among these exemptions, the developer can be held liable. A copyrighted work is also protected from modification, distortion, and mutilation, and Sections 55 and 63 enforce civil and criminal liability for violations of these rights.

IF IT IS AN AI-GENERATED DEEP FAKE, CAN ONE INSTITUTE A SUIT OF COPYRIGHT INFRINGEMENT IF THE AI IS FOUND TO HAVE USED WORK THAT WAS NOT IN THE PUBLIC DOMAIN TO PRODUCE THE DEEP FAKE?

DEEPFAKES AND AI
Deepfakes are fabricated images, audio, and videos made using artificial intelligence. The common method of generating deepfake content is the use of generative adversarial networks.1 The genesis of generative adversarial networks can be traced back to 2014. They are algorithmic architectures which primarily consist of two neural networks. While the generator neural network generates new synthesized or fake data, the discriminator neural network tries to detect it. The process is repeated until the discriminator network cannot differentiate between the original and synthesized data. This adversarial training has been considered one of the most interesting developments in the arena of machine learning.

1 Chris Nicholson, A Beginner's Guide to Generative Adversarial Networks (GANs), PATHMIND
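As a rough illustration of the adversarial loop described above, the following sketch trains a toy one-dimensional GAN in Python with NumPy. The generator is a simple affine map and the discriminator a logistic classifier, with gradients worked out by hand; all names, hyperparameters, and the target distribution are illustrative assumptions, not drawn from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1); the generator must learn to imitate it.
def sample_real(n):
    return rng.normal(4.0, 1.0, n)

# Generator G(z) = a*z + b maps noise z ~ N(0, 1) to fake samples.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) scores how "real" a sample looks.
w, c = 0.0, 0.0

lr, batch = 0.05, 64
for step in range(3000):
    # --- Discriminator update: push D(real) toward 1 and D(fake) toward 0 ---
    real = sample_real(batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c
    # --- Generator update (non-saturating loss): push D(fake) toward 1 ---
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    grad_a = np.mean(-(1 - d_fake) * w * z)
    grad_b = np.mean(-(1 - d_fake) * w)
    a -= lr * grad_a
    b -= lr * grad_b

fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print(round(fake_mean, 1))  # should end up close to the real mean of 4.0
```

In a real deepfake pipeline both networks are deep convolutional models trained on large image or video datasets, but the alternating update structure, generator and discriminator improving against each other until fakes become hard to distinguish, is the same.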


Concerns regarding Deepfakes

There are a plethora of concerns revolving around deepfakes. New laws are being framed to stop people from making and distributing them. Deepfake technology can seamlessly stitch anyone in the world into a video or photo they were never a part of. Moreover, the technology is evolving at such a rapid pace that deepfake technologies, whether new automatic computer-graphics or machine-learning systems, can synthesize images and videos ever more quickly.

In fact, "deepfake" has become a catch-all term to describe everything from state-of-the-art videos generated by AI to any image that seems potentially fraudulent. While the addition of AI simplifies the process, it still takes time to yield a believable composite that places a person into an entirely fictional situation. The creator must also manually tweak many of the trained program's parameters to avoid tell-tale blips and artifacts in the image.

The most prominent deepfake examples tend to come out of university labs and the start-ups they seed: for instance, a trending video of David Beckham speaking fluently in nine languages, only one of which he actually speaks, was created with a version of code developed at the Technical University of Munich, Germany. Another common example is the uncanny video released by MIT researchers of former U.S. President Richard Nixon delivering the alternate speech he had prepared for the nation had Apollo 11 failed.

AI-generated deepfakes carry huge ramifications for various entities. A major cause of concern is the detrimental role of deepfakes in supercharging scams: there have been unconfirmed reports of deepfake audio being used in CEO scams to swindle employees into sending money to fraudsters. It is also pertinent to point out the threat deepfakes can pose to governments: they offer an uncontrolled power to influence events, from manipulated videos of politicians standing for election to attempted coups.

Copyright as a regulation of Deepfakes

The United States


The position of deepfakes under copyright law in the United States is ambiguous due to the possibility that deepfakes can be protected under the doctrine of fair use as enshrined in 17 USC §107.2
The purpose and character of use, including commercial nature, the nature of the copyrighted work, the substantiality of the copying, and the impact on the potential market value of the copyrighted work are among the considerations under this provision. The concept of transformative use, first laid out in Campbell v. Acuff-Rose, falls under purpose and character. In that case, it was held that when a new meaning or expression is found in a work, the doctrine of fair use can extend to protect it even if a substantial portion, indeed the very heart, of the copyrighted work is copied.

Transformative use refers to changing the purpose and character of a copyrighted work to create content with new expression, meaning, or message. When a deepfake is created, its purpose and character differ from those of the original copyrighted work, so it arguably will not affect the market value of the original. US courts have also clearly held that even where a large amount is copied, a transformative work can still be protected under fair use.3

It can be said that this liberal stance on transformative use allows the fair use principle to be extended to most deepfake content, regardless of whether it was created in good faith or maliciously.4 This may allow even maliciously created deepfakes to be protected as transformative works under fair use, rendering unavailable protective measures such as notice-and-takedown and intermediary liability under section 512 of the DMCA and section 230 of the Communications Decency Act.5

Deepfakes can be protected under the doctrine of fair use in many circumstances within the United States, as an argument can always be made that the nature of the work is entirely different from the copyrighted work, and thus the probability of it causing any harm to the potential market of the original copyrighted work is
2 Copyright Act of 1976, 17 USC §107.


3 Danielle K. Citron and Robert Chesney, ‘Deep Fakes: A Looming Challenge for Privacy, Democracy, and
National Security’ (2019) 107 California Law Review 1753
4 Patrick Cariou v Richard Prince, 714 F.3d 694 (2013); Rogers v Koons, 960 F.2d 301 (1992); Leibovitz v
Paramount Pictures, 137 F.3d 109 (1998); Seltzer v Green Day, 725 F.3d 1170 (2013); Blanch v Koons, 467
F.3d 244 (2006); Bill Graham Archives v Dorling Kindersley Ltd, 448 F.3d 605 (2006).
5 Communication Decency Act 47 U.S.C. § 230 (1996) (USA)
extremely low. Where deepfake content is offensive and defamatory, other legislation can always be turned to for the imposition of liability. The problem regarding deepfakes and copyright law arises with respect to genuine uses of this technology; in such circumstances it becomes difficult to regulate deepfake content when the fair use doctrine is extended.

India
In India, the principle of fair dealing under Section 52 of the Indian Copyright Act of 1957 (ICA)6 deals with works that are excluded from consideration as infringing works. Unlike the position in the United States, the principle of fair dealing is an exception to copyright infringement: the statute establishes an exhaustive list of actions that are not considered infringement. Although India's position on fair dealing is often criticized as rigid,7 it is convenient for addressing maliciously created deepfakes, because the use of this technology is not included in any of the actions mentioned in Section 52 of the ICA. However, this clause may not protect the use of deepfake technology for genuine purposes.
In addition, as observed in University of Oxford v. Narendra Publishing House,9 Indian courts have begun to adopt the concept of transformative use through the term "review" in Section 52(1)(a)(ii) of the ICA.8 The court incorporated the fair use principle into the concept of fair dealing as an exception protecting certain types of work because it benefits society as a whole.10 The existing Indian precedents on transformative use mainly refer only to works in the category of literary works, and this interpretation cannot readily be applied to deepfakes.
Section 57 of the ICA establishes the rights of paternity and integrity in accordance with the moral rights of the Berne Convention of 1886. When considering deepfakes, the right of integrity provided in Section 57(1)(b) plays a fundamental role, because a deepfake can be regarded as a distortion, mutilation, or modification of a person's work. Sections 55 and 63 of the ICA provide for civil and criminal liability, stipulating damages, injunctions, imprisonment, and fines for offenders. It can be said that these provisions provide sufficient

6 ‘Using Copyright and Licensed Content: Copyright & Fair Use’ (Indian Institute of Management)
7 Ayush Sharma, ‘Indian Perspective of Fair Dealing under Copyright Law: Lex Lata or Lex Ferenda?’ (2009)
14 Journal of Intellectual Property Rights 523, 529.
8 ICA 1957, s 52.
9 University of Oxford v Narendra Publishing House, ILR (2009) 2 Del 221.
10 Super Cassettes Industries Ltd v Mr. Chintamani Rao and Ors (2011) SCC OnLine Del 4712.
deterrence against deepfakes created for malicious purposes, but they do not extend protection to deepfakes created for legitimate purposes.

Under Section 79 of the Information Technology Act of 2000 (IT Act),11 intermediary liability is governed by the position laid down in Myspace Inc. v. Super Cassettes Industries Ltd.12 The Delhi High Court interpreted the provisions of the ICA and the IT Act harmoniously and held that, in cases of copyright infringement, the intermediary is obliged to take down the infringing content after receiving a private notice, even absent a court order. However, because detection technology is still weak, and intermediaries' content-review policies may overlook deepfakes when taking material down, problems related to deepfake detection may still arise.

Distortion, mutilation, and modification of a copyrighted work are prohibited, and Sections 55 and 63 further impose civil and criminal penalties for infringement of exclusive rights. Deep fakes, it is suggested, cannot be regarded solely through the lens of property rights, because they also implicate personal rights.

The assertion of copyright infringement fails to defeat deceptive deep fakes in the United States and other countries whose positions are close to the fair use doctrine. It is therefore more fitting for victims of malicious deep fakes to rely on grounds of privacy, data protection, and online abuse to tackle deep fakes where there is no copyright infringement in that country.

CONCLUSION
Deepfake technology is widely used around the world, and the amount of deepfake content
generated is likely to increase rapidly as new applications are developed to make the
technology more accessible to the general public. Detecting deepfakes remains difficult because no viable detection technology has yet emerged, which in turn affects intermediary liability and notice-and-takedown measures under copyright law.
However, if copyright law is tailored to meet the needs and standards of different

11 The Information Technology Act 2000 No.21 Acts of Parliament 2000 (India).
12 Myspace Inc v Super Cassettes Industries Ltd (2016) SCC OnLine Del 6382.
jurisdictions, it can still be an effective tool for regulating deepfakes. Finally, it is critical to
remember that no one-size-fits-all formula can be created and applied globally.
