Issues arising from deep fakes cannot be quickly evaluated, or evaluated solely, within the copyright law framework. Deep fakes are manipulated videos and other digital representations, produced by artificial intelligence, that create fabricated visuals and sounds appearing natural. The most profound issues, personal identity, the right of publicity, the right to privacy, and the ability to control how one's image is used, seem better suited to human rights analysis than to purely or even primarily copyright analysis. Indeed, the question should perhaps be whether copyright should be accorded to deep fake imagery at all, rather than to whom copyright in a deep fake should belong. Suppose the deep fake imagery depicts a human subject in a manner or light wholly inconsistent with the subject's life, life's work, or status. In that case, it seems incongruous that this deep fake should be rewarded with copyright protection.
On the other hand, it may be helpful to consider instances where deep fake imagery may deserve copyright protection. In such a case, the copyright might properly belong to the humans from whom the design and algorithm of the AI program that creates the imagery originate. For example, an audiovisual producer may develop an AI program in-house to recreate the image of a deceased actor for use in a new film. The copyright in the resulting deep fake may be accorded to the audiovisual producer. It might also be the case that a deep fake is produced using a commercially available AI algorithm. The human actor uses the AI algorithm to accomplish his creative vision, much like a photographer uses a camera to bring forth his perspective. In this latter case, copyright ownership could be accorded to the human actor employing the AI algorithm as a tool.
Section 52 of the Indian Copyright Act sets out an exhaustive list of acts that do not amount to copyright infringement. Since deep fakes are not included in these exemptions, the developer could be held liable. A copyrighted work is protected from modification, distortion, and mutilation, and Sections 55 and 63 impose civil and criminal liability for violations of these rights.
DEEPFAKES AND AI
Deepfakes are fabricated images, audio, and videos made using artificial intelligence. The most common method of generating deepfake content is the use of generative adversarial networks1. The genesis of generative adversarial networks can be traced back to 2014. They are algorithmic architectures consisting primarily of two neural networks: the generator network produces new synthesized, or fake, data, while the discriminator network tries to detect it. The process is repeated until the discriminator can no longer differentiate between original and synthesized data. This adversarial training has been considered one of the most interesting developments in the field of machine learning.
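The generator/discriminator loop described above can be sketched in a few lines of PyTorch. This is an illustrative toy on a one-dimensional distribution, not the architecture of any actual deepfake system; the network sizes, learning rates, and target distribution are arbitrary assumptions chosen only to show the adversarial training pattern.

```python
# Toy generative adversarial network (illustrative sketch only).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps 8-dimensional noise to a 1-D sample.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that a sample is "real".
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(200):
    real = torch.randn(32, 1) * 0.5 + 2.0   # "real" data drawn from N(2, 0.5)
    fake = generator(torch.randn(32, 8))    # synthesized data

    # Discriminator step: label real samples 1 and synthesized samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator output 1 for fakes.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The two networks are trained in alternation: each discriminator update sharpens the detector, and each generator update exploits the current detector, which is the repetition the text describes until fakes become indistinguishable.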
There are a plethora of concerns revolving around deepfakes, and new laws are being framed to stop people from making and distributing them. Deepfake technology can seamlessly stitch anyone in the world into a video or photo they were never a part of. Moreover, the technology is evolving at such a rapid pace that deepfake systems, whether new automatic computer-graphics pipelines or machine-learning models, can synthesize images and videos ever more quickly.
In fact, the term has become a catchall describing everything from state-of-the-art videos generated by AI to any image that seems potentially fraudulent.1 While AI simplifies the process, it still takes time to yield a believable composite that places a person in an entirely fictional situation, and the creator must manually tweak many of the trained program's parameters to avoid tell-tale blips and artifacts in the image.
The most prominent deepfake examples tend to come out of university labs and the start-ups they seed: for instance, a viral video of David Beckham speaking fluently in nine languages, only one of which he actually speaks, was built on code developed at the Technical University of Munich, Germany. Another well-known example is the uncanny video released by MIT researchers of former U.S. President Richard Nixon delivering the alternate speech prepared for the nation had Apollo 11 failed.
AI-generated deepfakes carry huge ramifications for various entities. A major cause of concern is the detrimental role of deepfakes in supercharging scams: there have been unconfirmed reports of deepfake audio being used in CEO scams to swindle employees into sending money to fraudsters. It is also pertinent to point out the threat deepfakes pose to governments, with the potential to influence anything from manipulated videos of politicians standing for election to an attempted coup.
India
In India, the principle of fair dealing under Section 52 of the Indian Copyright Act of 1957 (ICA)6 deals with acts that are excluded from constituting infringement. Unlike the position in the United States, fair dealing is a closed exception to copyright infringement: the law establishes an exhaustive list of acts that are not considered infringement. Although India's position on fair dealing is often criticized as rigid,7 this rigidity is convenient for addressing maliciously created deepfakes, because the use of the technology does not fall within any of the acts listed in Section 52 of the ICA. However, the same provision may not protect the use of deepfake technology for legitimate purposes.
In addition, the Indian courts have begun to read the concept of transformative use into the term "review" in Section 52(1)(a)(ii) of the ICA,8 as observed in University of Oxford v Narendra Publishing House and Ors.9 The courts have incorporated the fair use principle into the concept of fair dealing as an exception protecting certain kinds of works because they benefit society as a whole.10 However, the existing Indian precedents on transformative use mainly concern the category of literary works, and this reasoning cannot readily be extended to deepfakes.
Section 57 of the ICA establishes the rights of paternity and integrity in accordance with the moral rights recognized under the Berne Convention of 1886. When considering deepfakes, the right of integrity provided in Section 57(1)(b) of the ICA plays a fundamental role, because a deepfake can be regarded as a distortion, mutilation, or modification of a person's work. Sections 55 and 63 of the ICA provide for civil and criminal liability, stipulating damages, preventive relief, imprisonment, and fines for offenders. It can be said that these provisions offer sufficient deterrence against deepfakes created for malicious purposes, but they do not extend protection to deepfakes created for legitimate purposes.
6 'Using Copyright and Licensed Content: Copyright & Fair Use' (Indian Institute of Management).
7 Ayush Sharma, 'Indian Perspective of Fair Dealing under Copyright Law: Lex Lata or Lex Ferenda?' (2009) 14 Journal of Intellectual Property Rights 523, 529.
8 ICA 1957, s 52.
9 University of Oxford v Narendra Publishing House, ILR (2009) 2 Del 221.
10 Super Cassettes Industries Ltd v Mr Chintamani Rao and Ors (2011) SCC OnLine Del 4712.
Intermediary liability under Section 79 of the Information Technology Act of 2000 (IT Act)11 was considered in Myspace Inc v Super Cassettes Industries Ltd.12 The Delhi High Court interpreted the provisions of the ICA and the IT Act harmoniously and held that, in cases of copyright infringement, the intermediary is obliged to remove the infringing content after receiving a private notice, even in the absence of a court order. However, because detection technology is still weak, problems related to identifying deepfakes may still arise and undermine intermediaries' content-review policies when removing deepfake content.
In the United States and other countries whose positions are close to the fair use doctrine, an assertion of copyright infringement fails to defeat deceptive deep fakes. Where there is no copyright infringement in such a country, it is therefore more fitting for victims of malicious deep fakes to rely on grounds of privacy, data protection, and online abuse to tackle them.
CONCLUSION
Deepfake technology is widely used around the world, and the amount of deepfake content generated is likely to increase rapidly as new applications make the technology more accessible to the general public. Detecting deepfakes remains difficult, because no viable detection technology has yet been developed, and this in turn affects intermediary liability and notice-and-takedown measures under copyright law. However, if copyright law is tailored to meet the needs and standards of different jurisdictions, it can still be an effective tool for regulating deepfakes. Finally, it is critical to remember that no one-size-fits-all formula can be created and applied globally.
11 The Information Technology Act 2000, No. 21, Acts of Parliament, 2000 (India).
12 Myspace Inc v Super Cassettes Industries Ltd (2016) SCC OnLine Del 6382.