
TOPIC: RESPONSIBLE AI, DEEPFAKES AND RELATED CONCERNS


THE CONTEXT: The deepfakes controversy involving Indian celebrities highlights the urgent need for AI
regulations and safeguards. These technological advancements pose significant risks, driving demands
for legal recourse, vigilance, and the development of AI-based solutions to combat such threats. This
article explains in detail the various aspects of deepfake-related issues from the UPSC perspective.

ABOUT DEEPFAKES TECHNOLOGY

 Deepfakes are a type of digital media manipulation that leverages artificial intelligence (AI) to
alter or replace an individual’s likeness in images, videos, or audio recordings.
 This advanced technology exploits deep learning algorithms – specifically Generative
Adversarial Networks (GANs) – to create highly realistic and convincing representations of
someone else’s face, voice, or behavior.
 Although this technology has been praised for its potential uses in the entertainment and
advertising industries, it also raises significant ethical concerns due to the ease with which
creators can manipulate content.

HOW DOES DEEPFAKE TECHNOLOGY WORK?

The technology involves modifying or creating images and videos using a machine learning technique
called a generative adversarial network (GAN). The AI-driven software detects and learns the subject's
movements and facial expressions from the source material and then duplicates these in another
video or image. To make the deepfake as close to real as possible, creators use a large database of
source images. The dataset is then used by one piece of software to create a fake video, while a second
piece of software detects signs of forgery in it. Through the adversarial interplay of the two, the fake
video is refined until the second piece of software can no longer detect the forgery. This is a form of
"unsupervised learning", in which machine-learning models teach themselves, and it makes the output
difficult for other software to identify as a deepfake. A minimal sketch of this generator-versus-discriminator
loop, under placeholder assumptions, is shown below.
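To make the two-network idea concrete, here is a minimal generator-versus-discriminator training loop in PyTorch. It is only an illustrative sketch of the adversarial setup described above, not a working deepfake system: the tiny networks, the image size, and the random data standing in for real face images are all assumptions chosen for brevity.

```python
# Minimal GAN sketch (PyTorch): a "generator" learns to fool a "discriminator",
# mirroring the two-software setup used to produce deepfakes.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM = 64 * 64, 100  # tiny flattened "images"; real systems use full face frames

generator = nn.Sequential(          # maps random noise to a fake image
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(      # scores how "real" an image looks
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

real_images = torch.rand(512, IMG_DIM) * 2 - 1  # placeholder for a database of source images

for step in range(200):
    batch = real_images[torch.randint(0, 512, (32,))]
    noise = torch.randn(32, NOISE_DIM)
    fakes = generator(noise)

    # 1) Train the discriminator to tell real from fake.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(batch), torch.ones(32, 1)) + \
             loss_fn(discriminator(fakes.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to make the discriminator call its fakes "real".
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fakes), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

Training continues until the discriminator can no longer reliably separate generated samples from real ones, which is why the finished forgeries are hard for other detection software to flag.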

HOW ARE DEEPFAKES DIFFERENT FROM OTHER KINDS OF MANIPULATED MEDIA?

Deepfakes aren't just any fake or misleading images. The fake scenes of Donald Trump being arrested
that circulated shortly before his indictment are AI-generated, but they're not deepfakes. (Images like
these, when combined with misleading information, are commonly referred to as "shallowfakes.") What
separates a deepfake is the degree of human input: the user only gets to decide at the very end of the
generation process whether what was created is what they want; outside of tailoring the training data
and saying "yes" or "no" to what the computer generates after the fact, they don't have any say in how
the computer chooses to make it.

THE CONCERNS REGARDING THE DEEPFAKES

The first cases of malicious use of deepfakes were detected in pornography. According to Sensity AI,
96% of deepfakes are pornographic videos, with over 135 million views on pornographic websites
alone. Deepfake pornography overwhelmingly targets women. Pornographic deepfakes can threaten,
intimidate, and inflict psychological harm. They reduce women to sexual objects, causing emotional
distress and, in some cases, leading to financial loss and collateral consequences like job loss.
 It can depict a person as indulging in antisocial behaviors and saying vile things that they never
did. Even if the victim could debunk the fake via alibi or otherwise, that fix may come too late
to remedy the initial harm.
 It can also cause short-term and long-term social harm and accelerate the already declining
trust in traditional media. Such erosion can contribute to a culture of factual relativism, fraying
the increasingly strained civil society fabric.
 It could be used as a powerful tool by a malicious nation-state to undermine public safety and create
uncertainty and chaos in the target country. Deepfakes can undermine trust in institutions and
diplomacy.
 It can be used by non-state actors, such as insurgent groups and terrorist organisations, to depict
their adversaries as making inflammatory speeches or engaging in provocative actions so as to stir
anti-state sentiments among people.
 Another concern with deepfakes is the liar's dividend: an undesirable truth is dismissed as a
deepfake or fake news. The mere existence of deepfakes lends more credibility to denials.
Leaders may weaponise deepfakes and use fake-news and alternative-facts narratives to dismiss
an actual piece of media and the truth.

SIGNIFICANCE OF AI IN DAILY LIFE

Personal Assistants: AI-powered digital assistants on smartphones like Siri, Google Assistant, or Bixby
can help manage daily tasks, set reminders, provide information, send texts, and more.

Online Shopping: AI is used in online shopping platforms to provide personalized recommendations
based on your browsing and buying history.

Navigation and Traffic: AI is used in apps like Google Maps to analyze real-time traffic data and provide
the fastest routes.

Email Filtering: AI helps filter out spam emails, categorizing incoming emails, and even suggesting
quick replies in some email platforms.

Security and Fraud Detection: AI can identify patterns of fraudulent activity in banking and online
transactions. It can also enhance home security systems through facial recognition technology.

Entertainment: Streaming services like Netflix and Spotify use AI to recommend shows, movies, or
music based on users’ previous viewing or listening habits.

Learning and Education: AI is used in education through personalized learning platforms, which adapt
to a student’s strengths and weaknesses. It’s also used to automate grading, freeing up time for
teachers to spend with students.

HOW TO MAKE AI RESPONSIBLE FOR DEEPFAKES?

ENHANCE DEEPFAKE DETECTION TECHNIQUES
 Develop AI-powered tools that can accurately identify and flag deepfakes (a minimal detection sketch follows this table).
 This can involve analyzing video artifacts, facial features, audio patterns, and other subtle cues that distinguish genuine content from manipulated media.
PROMOTE TRANSPARENCY AND TRACEABILITY
 Implement mechanisms to track the origin and manipulation history of digital content.
 This could involve embedding tamper-evident watermarks or using blockchain technology to create an immutable record of content creation and modification.
IMPLEMENT LEGAL AND REGULATORY FRAMEWORKS
 Develop legal and regulatory frameworks to address the misuse of deepfakes.
 This may involve establishing penalties for creating and distributing misleading or harmful deepfakes, as well as setting standards for content moderation and takedown procedures.
SUPPORT HUMAN OVERSIGHT AND CONTROL
 Ensure that AI systems for deepfake detection remain under human oversight and control.
 This is crucial to prevent AI from arbitrarily censoring or suppressing legitimate content.
PROMOTE MEDIA LITERACY AND CRITICAL THINKING
 Foster media literacy education to empower individuals to critically evaluate information sources, recognize potential biases, and make informed decisions about the content they consume and share.
CONTINUOUSLY MONITOR AND ADAPT
 Establish mechanisms for continuous monitoring and evaluation of deepfake technologies and their potential impacts.
 This includes staying abreast of emerging deepfake techniques and adapting detection and mitigation strategies accordingly.
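As an illustration of the detection point above, the sketch below trains a small convolutional classifier to score individual frames as genuine or manipulated. The network, frame size, and random placeholder data are assumptions made for brevity; real detectors are trained on large labelled corpora and also examine audio and temporal cues.

```python
# Sketch of a frame-level deepfake detector: a small CNN that scores each
# frame as genuine (0) or manipulated (1). Data here is random placeholder
# tensors; a real detector would be trained on labelled video frames.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 3x64x64 -> 16x32x32
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # -> 32x16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),                             # logit: >0 means "fake"
)

optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Placeholder dataset: 64 frames, half labelled real (0), half fake (1).
frames = torch.rand(64, 3, 64, 64)
labels = torch.cat([torch.zeros(32, 1), torch.ones(32, 1)])

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(detector(frames), labels)
    loss.backward()
    optimizer.step()

# Flag a new frame if the predicted probability of manipulation crosses a threshold.
with torch.no_grad():
    prob_fake = torch.sigmoid(detector(torch.rand(1, 3, 64, 64))).item()
print("probability frame is manipulated:", round(prob_fake, 3))
```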

WHAT ARE OTHER COUNTRIES DOING TO COMBAT DEEPFAKES?

The world's first AI Safety Summit, held in 2023 with 28 major countries including the US,
China, and India, agreed on the need for global action to address AI's potential risks. The Bletchley
Park Declaration adopted at the summit acknowledged the risks of intentional misuse and of the loss
of control over AI technologies.
Here are some examples of initiatives being undertaken in various countries:
Germany:
 In 2019, the German government launched the Deepfakes Detection Challenge, a competition
to develop AI-powered tools that can accurately identify deepfakes.
France:

 France is part of the AVATAR project, a European research initiative aimed at developing tools
to detect and counter deepfakes and other forms of synthetic media manipulation.
United Kingdom:
 The UK's Centre for Data Ethics and Innovation (CDEI) has published a report on
deepfakes, outlining the risks and potential harms associated with the technology. The CDEI is
also working with industry partners to develop responsible AI guidelines for deepfake detection
and prevention.
European Union:
 The EU has issued guidelines for the creation of an independent network of fact-checkers to
help analyse the sources and processes of content creation. The EU's code also requires tech
companies including Google, Meta, and X to take measures to counter deepfakes and fake
accounts on their platforms.
United States:
 In the United States, a bipartisan group of lawmakers has introduced the Defending Digital
Democracy Act, which aims to combat disinformation and deepfakes by providing funding for
research and development of detection tools, as well as by strengthening consumer protections
against misleading online content.
Japan:
 Japan has enacted an Act on Countermeasures against Deepfakes, which prohibits the creation
and distribution of deepfakes that are intended to harm individuals or society. The law also
requires platform providers to take steps to remove deepfakes from their platforms.
Australia:
 Australia's National Artificial Intelligence Strategy includes a focus on developing AI-powered
tools for detecting and combating misinformation, including deepfakes.
Tech Companies:
 Big tech companies like Meta and Google have announced measures to address the issue of
deepfake content. However, there are still vulnerabilities in their systems that allow the
dissemination of such content.
 Google has introduced tools for identifying synthetic content, including watermarking and
metadata.
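Google's production watermarking tools are proprietary, but the underlying idea of an invisible, machine-readable mark can be illustrated with a toy least-significant-bit scheme in Python. Everything here (the image size, the tag, the red-channel trick) is an assumption for demonstration; a robust provenance watermark must also survive compression, resizing, and re-encoding, which this sketch would not.

```python
# Toy invisible watermark: hide a short ASCII tag in the least-significant bits
# of an image's red channel, then read it back. Illustrative only -- unlike
# production watermarks, it would not survive re-compression or screenshots.
import numpy as np

def embed(image: np.ndarray, tag: str) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = image[..., 0].flatten()                           # copy of the red channel
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits    # overwrite the lowest bit
    marked = image.copy()
    marked[..., 0] = flat.reshape(image.shape[:2])
    return marked

def extract(image: np.ndarray, length: int) -> str:
    bits = image[..., 0].flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode()

image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in for a generated image
marked = embed(image, "AI-GEN:2023")
print(extract(marked, len("AI-GEN:2023")))                      # -> AI-GEN:2023
```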

THE GOVERNMENT OF INDIA'S ADVISORY TO SOCIAL MEDIA INTERMEDIARIES TO IDENTIFY MISINFORMATION AND DEEPFAKES

 Ensure that due diligence is exercised, and reasonable efforts are made to identify
misinformation and deepfakes, and in particular, information that violates the provisions of
rules and regulations and/or user agreements and
 Such cases are expeditiously actioned against, well within the timeframes stipulated under the
IT Rules 2021, and
 Users are caused not to host such information/content/Deep Fakes and
 Remove any such content when reported within 36 hours of such reporting and
 Ensure expeditious action, well within the timeframes stipulated under the IT Rules 2021, and
disable access to the content / information.

INDIAN LAWS AGAINST DEEPFAKE TECHNOLOGY


India lacks a specific law on deepfake technology; however, several provisions of existing laws are
invoked by the agencies against the culprits.
 Section 66D of the Information Technology Act, 2000 (IT Act):
o The section punishes cheating by personation using a communication device or computer
resource.
o Imprisonment under this section may extend to three years, and the culprit shall also be
liable to a fine which may extend to one lakh rupees.
 Section 66E of the IT Act, 2000:
o It penalises whoever intentionally or knowingly captures, publishes or transmits the
image of a private area of any person without his or her consent, under circumstances
violating the privacy of that person.
o Imprisonment under this section may extend to three years, or a fine not exceeding
two lakh rupees, or both.
 Section 51 of the Indian Copyright Act, 1957:
o It covers the conditions of infringing copyrights without any license granted by the owner.
 Section 66C of the IT Act, 2000:
o Deepfakes can be used for identity theft.
o Hence, the section punishes the act of identity theft with imprisonment of either
description for a term which may extend to three years and a fine which may extend to
one lakh rupees.
 Section 294 of Indian Penal Code, 1860 (IPC):
o Obscene material can be created using deepfakes.
o Hence, the section punishes obscene acts and songs with imprisonment of either
description for a term that may extend to three months, or with a fine, or with both.
 Article 21 of the Constitution of India, 1950:
o Morphing someone else's private content leads to a severe violation of privacy and also
threatens the person's bodily integrity.
o Article 21 covers the Right to Privacy and bodily integrity as its integral parts.

ETHICAL, LEGAL AND SOCIAL CONSIDERATIONS WITH DEEPFAKES

Using deepfake technology unethically can lead to misuse and potential harm, and has legal and social
implications.
 Misuse And Potential Harm
o Deepfake technology has the potential for misuse, resulting in harm to both individuals
and society. As deepfakes become more accessible and convincing, they can be used to
create fake news or disinformation campaigns aimed at manipulating public opinion.
o There is also a risk of deepfakes being used to blackmail or extort individuals by creating
fake content that appears genuine.
o As deepfake algorithms continue evolving, it is essential for marketers and consumers
alike to remain vigilant, especially when analyzing media content online. Online tools
exist that help detect deepfakes, but they are not foolproof yet; therefore, it is crucial
that everyone uses this technology responsibly.
 Legal And Social Implications
o The use of deepfake technology brings along concerns regarding legal and social
implications. The technology has the potential to be used maliciously, resulting in serious
consequences for individuals, businesses, and society as a whole.
o Moreover, using someone’s image or voice without their permission is considered illegal
under many laws.
o As AI-powered deepfake algorithms continue to evolve rapidly with advancements in
machine learning and artificial intelligence technologies, detecting fake media content
has become increasingly challenging.
Solutions for Deepfake Technology:
 Understand the ethical considerations of using deepfakes – be aware of any potential harm or
misuse.
 Use deepfake technology only for legitimate purposes, such as in marketing campaigns, public
service announcements or education.
 Always disclose when a video has been manipulated with deepfake technology.
 Do not use deepfake technology to create fake news or to harm someone’s reputation.
 Be transparent about how you obtained the source material or images used in creating the
deepfake, and get appropriate permissions if necessary.
 Always verify the authenticity of videos before sharing them online to avoid spreading
misinformation inadvertently.
 Stay informed about new developments in deepfake technology that can help you better
understand its impact and potential risks.

THE CHALLENGES

Deepfake technology, which involves using artificial intelligence (AI) to create realistic-looking fake
videos or audio recordings, presents several challenges. Here are some of the key challenges
associated with deepfakes:
Misinformation and Fake News:
 Deepfakes can be used to create convincing videos of individuals saying or doing things they
never did. This can be exploited to spread false information, create fake news, and damage
reputations.
Privacy Concerns:

 Deepfakes can be used to generate fake content by superimposing an individual's face onto
explicit or compromising images or videos, violating their privacy and potentially causing harm
to their personal and professional lives.
Security Risks:
 Deepfake technology poses a security risk as it can be used to impersonate individuals, gaining
unauthorized access to secure systems or conducting social engineering attacks.
Erosion of Trust:
 The widespread use of deepfakes can erode trust in visual and audio evidence, making it more
challenging for society to rely on recorded media as a source of information.
Impact on Journalism:
 Deepfakes can be used to create fabricated interviews or statements from public figures, which
can have a detrimental impact on journalism and the public's ability to discern credible sources.

Way Forward
 Enhance Deepfake Detection Techniques: Develop AI-powered tools that can accurately
identify and flag deepfakes. This can involve analyzing video artifacts, facial features, audio
patterns, and other subtle cues that distinguish genuine content from manipulated media.
 Encourage Responsible AI Development: Encourage AI developers and companies to adopt
responsible AI practices throughout the development lifecycle, incorporating ethical
considerations and risk mitigation strategies from the outset.
 Continuous Monitoring and Adaptation: Establish mechanisms for continuous monitoring and
evaluation of deepfake technologies and their potential impacts. This includes staying abreast
of emerging deepfake techniques and adapting detection and mitigation strategies accordingly.
 Government Regulation: The government will come out with new rules and regulations to
control the spread of deepfake content. It will draw up actionable items within 10 days on ways
to detect deepfakes, prevent their uploading and viral sharing, and strengthen the
reporting mechanism for such content, thus allowing citizens recourse against AI-generated
harmful content on the internet.

The Conclusion:
The expanding use of AI and the rise of deepfake technology demand immediate attention and
strategic interventions. The potential risks posed by deepfakes, especially in influencing public opinion
and creating misinformation, necessitate a robust framework for legal recourse, vigilant monitoring,
and the development of advanced AI-based solutions in the future.

UPSC MAINS PRACTICE QUESTIONS


Q. What is deepfake technology? How can AI be made responsible for deepfakes? Discuss
the ethical, legal, and socio-economic dimensions associated with deepfakes.
Q. Examine the role of international cooperation and the development of responsible AI practices
in preventing the misuse of deepfake technology.

UPSC Civil Services Examination Previous Year Question (PYQ)


Mains
Q. What are the main socio-economic implications arising out of the development of IT industries
in major cities of India? (2022)
Source
 https://www.thehindu.com/news/national/deepfake-alarm-ais-shadow-looms-over-entertainment-industry-after-rashmika-mandanna-speaks-out/article67565970.ece
 https://www.thehindu.com/sci-tech/technology/the-danger-of-deepfakes/article66327991.ece
 https://inc42.com/buzz/pm-modi-raises-concerns-over-deepfakes-calls-for-global-regulation-of-ai/
 https://www.livemint.com/news/deepfakes-major-violation-of-it-law-harm-women-in-particular-rajeev-chandrasekhar-11699358904728.html
 https://www.financialexpress.com/india-news/pm-modi-raises-concern-over-deepfakes-urges-responsible-ai-use/3309807/
 https://www.theguardian.com/technology/2023/nov/17/microsoft-azure-ai-video-deepfakes
 https://www.firstpost.com/tech/pm-modi-cautions-public-against-deepfakes-has-a-stern-warning-for-ai-companies-13397722.html
 https://timesofindia.indiatimes.com/blogs/voices/beyond-imagination-indias-quest-to-harness-generative-ai/
 https://www.thehindu.com/sci-tech/technology/meta-breaks-up-responsible-ai-team-report/article67553243.ece
 https://www.thehindu.com/opinion/op-ed/can-ai-be-ethical-and-moral/article67227803.ece
