

RUSSIA’S INFORMATION WARS – IN THE CONTEXT OF THE RUSSIA-UKRAINE WAR

CYBER LAW 7th SEMESTER PROJECT


Submitted to Rajiv Gandhi National University of Law, Patiala

PROJECT SUPERVISOR -
DR. IVNEET KAUR WALIA
ASSOCIATE PROFESSOR OF LAW,
RGNUL

SUBMITTED BY -
BHAVANDEEP SINGH
ROLL NO. - 20047
2023


ACKNOWLEDGEMENT

This project required a great deal of guidance and feedback, which was duly provided by my project supervisor, Dr. Ivneet Kaur Walia. I would like to take this opportunity to express my gratitude and reverence towards her.

BHAVANDEEP SINGH


TABLE OF CONTENTS

I. INTRODUCTION
II. THE ROLE OF SOCIAL MEDIA AND NATIONAL DISINFORMATION CAMPAIGNS
III. THE IMPACT OF ARTIFICIAL INTELLIGENCE IN ONLINE DISINFORMATION CAMPAIGNS
IV. GOVERNMENT AND SOCIAL MEDIA DISINFORMATION POLICIES
V. LOOKING AHEAD


1. INTRODUCTION

In the lead-up to Russia’s invasion of Ukraine, and throughout the ongoing conflict, social media has served as a battleground for states and non-state actors to spread competing narratives about the war and to portray the conflict on their own terms.1 As the war drags on, these digital ecosystems have become inundated with disinformation. Strategic propaganda campaigns, including those peddling disinformation, are by no means new during warfare, but the shift toward social media as the primary distribution channel is transforming how information warfare is waged, as well as who can participate in ongoing conversations to shape emerging narratives.2

Examining the underlying dynamics of how information and disinformation are affecting the war in Ukraine is crucial to making sense of, and working toward, solutions to the current conflict. To that end, this paper examines three critical components:

- How social media platforms are being leveraged to spread competing national narratives and disinformation;
- The role of artificial intelligence (AI) in promoting, and potentially combating, disinformation; and
- The role of social media companies and government policies in limiting disinformation.

1 Chotiner, I. (2022), “Vladimir Putin’s Revisionist History of Russia and Ukraine”, The New Yorker, https://www.newyorker.com/news/q-and-a/vladimir-putins-revisionist-history-of-russia-and-ukraine (accessed on 1 June 2022).

2 Alliance for Securing Democracy (2022), “War in Ukraine: Dashboard”, https://securingdemocracy.gmfus.org/war-in-ukraine/ (accessed on 16 April 2022).


2. THE ROLE OF SOCIAL MEDIA AND NATIONAL DISINFORMATION CAMPAIGNS

Russia and Ukraine both use social media extensively to portray their versions of unfolding events and to amplify contrasting narratives about the war, including its causes, consequences, and continuation. Government officials, individual citizens, and state agencies have all turned to an array of platforms, including Facebook, Twitter, TikTok, YouTube, and Telegram, to upload information. It is difficult to pinpoint the exact amount of content uploaded by these various actors, but the scale of information being shared on social media about the war is immense. For instance, in just the first week of the war, TikTok videos from a range of sources tagged #Russia and #Ukraine had amassed 37.2 billion and 8.5 billion views, respectively.3

At their core, the narratives presented by Russia and Ukraine are diametrically opposed. Russia frames the war in Ukraine, which Putin insists is a “special military operation,” as a necessary defensive measure in response to the expansion of the North Atlantic Treaty Organization (NATO) into Eastern Europe. Putin also frames the military campaign as necessary to “de-nazify” Ukraine and to end a purported genocide being conducted by the Ukrainian government against Russian speakers. In contrast, Ukraine’s narrative insists the war is one of aggression, emphasizes its history as a sovereign nation distinct from Russia, and portrays its citizens and armed forces as heroes defending themselves from an unjustified invasion.

Ukraine and Russia are not the only state actors interested and engaged in portraying the war on their own terms. Countries such as China and Belarus have also sought to shape perceptions of the conflict, launching coordinated disinformation campaigns on social media platforms. These campaigns have broadly downplayed Russia’s responsibility for the war and have promoted anti-U.S. and anti-NATO posts. The mix of narratives, both true and false, originating from different state actors as well as millions of individual users on social media has enlarged tech platforms’ roles in shaping the dynamics of the war and could influence its outcomes.

3 Bidochko, L. (2022), “Fighting a Hybrid War with Hybrid Means: Zelensky Sanctions Pro-Russia Media and Parties”, Focus Ukraine, Wilson Center, https://www.wilsoncenter.org/blog-post/fighting-hybrid-war-hybrid-means-zelensky-sanctions-pro-russia-media-and-parties (accessed on 16 April 2022).


The scale of information uploaded to social media and the speed with which it proliferates
create novel and complex challenges to combating disinformation campaigns. It is often hard
to identify the origin of a campaign or its reach, complicating efforts to remove false content
in bulk or identify false posts before they reach mass audiences. For example, the still-active “Ghostwriter” disinformation campaign, attributed to the Belarusian government, uses a sophisticated network of proxy servers and virtual private networks (VPNs), which enabled it to evade detection for years. Before the operation was uncovered in July 2021, it effectively
hacked the social media accounts of European political figures and news outlets and spread
fabricated content critical of NATO across Eastern Europe.
campaigns possess makes them exceedingly difficult to detect early and counter effectively.
Russia, in particular, has spent decades developing a propaganda ecosystem of official and
proxy communication channels, which it uses to launch wide-reaching disinformation
campaigns. For instance, “Operation Secondary Infektion,” one of Russia’s longest-running campaigns, has spread disinformation about issues such as the COVID-19 pandemic across over 300 social media platforms since 2014.4

4 Brown, S. (2020), “MIT Sloan Research About Social Media, Misinformation, and Elections”, MIT Sloan, https://mitsloan.mit.edu/ideas-made-to-matter/mit-sloan-research-about-social-media-misinformation-and-elections.

The range of social media platforms in use, and the variation in their availability across
different countries, hinders the ability to coordinate efforts to combat disinformation, while
creating different information ecosystems across geographies. The narratives about the war
emerging on social media take different forms, depending on the platform and the region,
including within Russia and Ukraine. Facebook and Twitter are both banned within Russia’s
borders, but Russian propaganda and disinformation aimed at external audiences still flourishes
on these platforms. Within Russia, YouTube and TikTok are still accessible to everyday
citizens, but with heavy censorship. The most popular social media platform used within Russia
is VKontakte (VK), which hosts 90 percent of internet users in Russia, according to the
company’s self-reported statistics. It was previously available and widely used in Ukraine until
2017, but the Ukrainian government blocked access to VK and other Russian social media such

4 Brown, S. (2020), MIT Sloan Research About Social Media, Misinformation, and Elections,
MIT, https://mitsloan.mit.edu/ideas-made-to-matter/mit-sloan-research-about-social-media-misinformation-
and-elections.

VI
Cyber Law 7th Semester Project

as Yandex in an effort to combat online Russian propaganda. In 2020, Ukrainian president


Volodymyr Zelenskyy extended the ban on VK until 2023, so it has not facilitated
communications between Russians and Ukrainians throughout the war this year.

The government-imposed restrictions placed on these major social media platforms leave
Telegram as the main social media platform currently accessible to both Russians and
Ukrainians. Telegram is an encrypted messaging service created and owned by Russian tech
billionaire Pavel Durov, which is being used in the war for everything from connecting
Ukrainian refugees to opportunities for safe passage to providing near-real-time videos of
events on the battlefield. Critically, in the fight against disinformation, Telegram has no official policies in place to censor or remove content. While some channels on Telegram have been shut down, the company does not release official statements explaining why, and it generally allows the majority of content posted by users to continue circulating, regardless of its nature. This allows Telegram to serve as a mostly unfiltered source of disinformation within Russia and Ukraine, and to reach audiences from which Western social media platforms have been cut off. While Telegram does not filter content like many other platforms, it also does not use an algorithm to boost certain posts, relying instead on direct messaging between users.5 This design makes it difficult for AI tools to effectively amplify disinformation on the platform. In contrast, on platforms such as Twitter and Facebook, AI is further enabling the rapid spread of disinformation about the war.

5 Carvin, S. (2022), “Deterrence, Disruption and Declassification: Intelligence in the Ukraine Conflict”, CIGI, https://www.cigionline.org/articles/deterrence-disruption-and-declassification-intelligence-in-the-ukraine-conflict/.
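
To make the contrast concrete, the short Python sketch below (written for this paper, with invented post data) compares a chronological feed of the kind Telegram channels present with a simple engagement-ranked feed standing in for algorithmic platforms such as Twitter or Facebook. The scoring formula and field names are illustrative assumptions, not any platform’s actual ranking algorithm.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    timestamp: int  # posting time (illustrative units)
    likes: int
    shares: int

posts = [
    Post("Verified frontline report", timestamp=100, likes=120, shares=10),
    Post("Sensational unverified claim", timestamp=50, likes=900, shares=400),
    Post("Official government statement", timestamp=80, likes=300, shares=50),
]

# Chronological ordering (Telegram-style channels): newest first,
# engagement plays no role in what a subscriber sees.
chronological = sorted(posts, key=lambda p: p.timestamp, reverse=True)

# Engagement-ranked ordering (illustrative stand-in for algorithmic
# feeds): posts that provoke the most reactions rise to the top,
# regardless of accuracy.
ranked = sorted(posts, key=lambda p: p.likes + 3 * p.shares, reverse=True)

print([p.text for p in chronological])
print([p.text for p in ranked])
```

Under the ranked ordering, the sensational unverified post moves to the top purely because it attracts reactions, which is why algorithmic amplification, rather than mere hosting, is the focus of the concerns discussed in the next section.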


3. THE IMPACT OF ARTIFICIAL INTELLIGENCE IN ONLINE DISINFORMATION CAMPAIGNS

AI and its constituent techniques, such as algorithms and machine learning, are serving as powerful tools for generating and amplifying disinformation about the Russia-Ukraine war, particularly on social media channels. The underlying algorithms that social media platforms use to determine what content is allowed, and which posts become the most viewed, are driving differences in users’ perceptions of unfolding events. Before the war, there was significant controversy over how social media platforms prioritized and policed content on all kinds of political and social issues. In recent years, both Facebook and YouTube have come under scrutiny from regulators in the U.S. and EU, both for algorithms that appear to prioritize extremist content and for failing to adequately remove disinformation, despite some improvements to automated and human-led moderation procedures.

Throughout the Russia-Ukraine war, similar concerns have arisen across a range of platforms. For example, researchers found that TikTok directed users to false information about the war within 40 minutes of signing up. New users on TikTok were shown videos claiming that a press conference given by Vladimir Putin in March 2022 was “Photoshopped” and that clips from a video game were real footage of the war.6 Likewise, Facebook’s algorithm routinely promoted disinformation about the war, including the conspiracy theory that the U.S. is funding bioweapons in Ukraine. A study by the Centre for Countering Digital Hate (CCDH) found that Facebook failed to label 80 percent of posts spreading this conspiracy theory as disinformation.7

6 Ilyushina, M. (2022), “Putin’s war propaganda becomes ‘patriotic’ lessons in Russian schools”, The Washington Post, https://www.washingtonpost.com/world/2022/03/20/putin-russia-schools-ukraine/ (accessed on 31 May 2022).

7 US Department of State (2022), “Disarming Disinformation: Our Shared Responsibility”, https://www.state.gov/disarming-disinformation/ (accessed on 30 April 2022).

Social media platforms also host popular AI-driven tools for spreading disinformation such as
chatbots and deepfakes. Bots—AI-enabled computer programs that mimic user accounts on
social media networks—are one of the most effective ways that disinformation about the war
spreads. Russia has extensive experience deploying bots effectively; for example, Russian government agencies and their affiliates used them to spread disinformation during the 2016 U.S. election and throughout the COVID-19 pandemic.
Russia continues to use bots, and since the start of the war in Ukraine, Twitter has reported removing at least 75,000 suspected fake accounts linked to Russian bot networks spreading disinformation about Ukraine. However, the scale and speed at which disinformation can be produced and spread using bots make it nearly impossible to monitor or remove all false accounts and posts.
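
Bot removal at this scale necessarily relies on automated behavioral screening rather than manual review. The Python sketch below illustrates, in deliberately simplified form, the kind of red-flag heuristics such screening can involve; the account fields and thresholds are invented for illustration and are not Twitter’s actual detection criteria.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    posts_per_day: float
    account_age_days: int
    followers: int
    following: int

def bot_suspicion_score(acct: Account) -> int:
    """Count simple behavioral red flags associated with automation.
    All thresholds are illustrative assumptions."""
    score = 0
    if acct.posts_per_day > 100:  # posting volume beyond human pace
        score += 1
    if acct.account_age_days < 30:  # recently created account
        score += 1
    if acct.following > 10 * max(acct.followers, 1):  # mass-follow pattern
        score += 1
    return score

accounts = [
    Account("@organic_user", posts_per_day=4, account_age_days=2000,
            followers=250, following=300),
    Account("@suspected_bot", posts_per_day=450, account_age_days=6,
            followers=8, following=4000),
]

for acct in accounts:
    verdict = "flag for review" if bot_suspicion_score(acct) >= 2 else "pass"
    print(f"{acct.handle}: {verdict}")
```

Real detection systems combine many more signals, including network structure and content similarity, and sophisticated operators deliberately tune their accounts to stay below such thresholds, which is one reason complete removal remains out of reach.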
In addition to bots, deepfakes—videos that use AI to create false images and audio of real
people—have circulated online throughout the conflict. Beginning in March 2022, deepfakes
portraying both Vladimir Putin and Volodymyr Zelenskyy giving fabricated statements about
the war have repeatedly appeared on social media. A deepfake of Vladimir Putin declaring
peace widely circulated through Twitter, before being removed, while a deepfake portraying
Volodymyr Zelenskyy circulated on YouTube and Facebook. Beyond deepfakes, experts have
expressed concern that AI could be leveraged for more sophisticated disinformation techniques. These include using AI to better identify targets for disinformation campaigns, as well as using natural language processing (NLP) to produce fake social media posts, articles, and documents that are nearly indistinguishable from those written by humans.
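
To ground this concern, the sketch below shows how little effort fluent machine-generated text now requires, using the open-source Hugging Face transformers library and the small, publicly available GPT-2 model. It is a generic demonstration of the technique class that worries researchers, not a reconstruction of any actual campaign’s tooling.

```python
# Requires: pip install transformers torch
from transformers import pipeline

# Load a small, freely available text-generation model.
generator = pipeline("text-generation", model="gpt2")

# A single short prompt yields several fluent continuations in seconds,
# illustrating how cheaply plausible-sounding posts can be mass-produced.
outputs = generator(
    "Observers near the border reported today that",
    max_length=40,           # total length of prompt plus continuation
    num_return_sequences=3,  # several variants per prompt
    do_sample=True,          # sampling is needed for multiple variants
)
for out in outputs:
    print(out["generated_text"])
```

Larger modern models produce far more convincing output than GPT-2, which is precisely why NLP-driven generation features so prominently in expert assessments of future disinformation risks.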

While AI is contributing to the spread of disinformation across social media, AI tools also show
promise for combating it. The sheer volume of information uploaded to social media daily
makes developing AI tools that can accurately identify and remove disinformation essential.
For example, Twitter users upload over 500,000 posts per minute, well beyond what human
censors can monitor. Social media platforms are beginning to combine human moderators with AI to monitor false information more effectively. Facebook developed an AI tool called SimSearchNet at the start of the COVID-19 pandemic to identify and remove false posts. SimSearchNet relies on human monitors to first identify false posts, and then uses AI to find similar posts across the platform. AI tools are significantly more effective than human content moderators alone. According to Facebook, 99.5 percent of terrorist-related content removals and 98.5 percent of fake-account removals are carried out primarily using AI trained with data from its content-moderation teams. Currently, AI aimed at combating disinformation on social media still relies on both human and computer elements. This limits AI’s ability to detect novel pieces of mis- and disinformation and means that false posts routinely reach large audiences before they are identified and removed. These technical limitations on proactively identifying and removing false information, combined with the scale of information uploaded online, pose a continuing challenge for limiting disinformation on social media in the Russia-Ukraine war and beyond.
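
The human-seed-then-match workflow described above can be approximated in a few lines. The Python sketch below uses scikit-learn’s TF-IDF vectorizer and cosine similarity to flag posts that closely resemble one a human moderator has already labeled false. It is a simplified, text-only analogue of the approach (SimSearchNet itself reportedly matches images), and the similarity threshold is an illustrative assumption.

```python
# Requires: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A post that a human moderator has already labeled as false.
known_false = ["secret labs near the border are producing banned weapons"]

# An incoming stream of posts to screen automatically.
stream = [
    "BREAKING: secret labs near the border producing banned weapons!!",
    "weather forecast predicts heavy rain across the region tomorrow",
    "report: labs near border secretly produce banned weapons, sources say",
]

# Embed the labeled post and the stream in the same TF-IDF space.
vectorizer = TfidfVectorizer().fit(known_false + stream)
seed_vec = vectorizer.transform(known_false)
stream_vecs = vectorizer.transform(stream)

# Flag near-duplicates of the labeled post (threshold is illustrative).
similarities = cosine_similarity(stream_vecs, seed_vec).ravel()
for post, sim in zip(stream, similarities):
    verdict = "FLAG for review" if sim > 0.5 else "pass"
    print(f"{sim:.2f}  {verdict}: {post}")
```

The limitation noted above is visible even in this toy version: the heuristic only catches posts similar to something a human has already labeled, so genuinely novel false claims pass through until a moderator intervenes.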


4. GOVERNMENT AND SOCIAL MEDIA DISINFORMATION POLICIES

Social media companies and governments have enacted a range of policies to limit the spread
of disinformation, but their application has been fragmented, depending on the platform and
geography, with varying effect. The different policies that social media platforms apply, the
extent of their efforts to combat disinformation, and their availability within countries, all help
shape the way the public understands the Russia-Ukraine war.8 Critically, social media
companies are privately controlled, and their interests may or may not align with the interests of various states, including those where the companies are registered and headquartered as well as others.

In the Russia-Ukraine war, social media companies have taken a range of different measures.
Facebook has deployed a network of fact-checkers in Ukraine in an attempt to eliminate
disinformation, and YouTube has blocked channels associated with Russian state media
globally.9 Both of these platforms enacted restrictions beyond the legal requirements under
U.S. and EU sanctions on Russia. In contrast, Telegram and TikTok have taken less significant steps to limit disinformation on their platforms, beyond complying with EU sanctions on Russian state media within the EU. The differences in responses among the platforms reflect
the government and public pressures that varying platforms are subject to. In general, platforms
based in the U.S. have taken stricter stances on limiting Russian disinformation than their
international counterparts, such as Telegram and TikTok. The difference in social media
platforms’ policies, their efforts to limit disinformation, and their geographic access are all
becoming powerful drivers of not only how individuals consume news about the Russia-
Ukraine war globally, but also the narratives—including information, misinformation, and
disinformation—that they are exposed to, and thereby the views that they may adopt.

8 Wardle, C. and H. Derakhshan (2017), Information Disorder: Toward an interdisciplinary framework for research and policy making, Council of Europe report DGI(2017)09, http://tverezo.info/wp-content/uploads/2017/11/PREMS-162317-GBR-2018-Report-desinformation-A4-BAT.pdf.

9 Zakrzewski, C. and G. De Vynck (2022), “The Ukrainian Leader Who Is Pushing Silicon Valley to Stand up to Russia”, The Washington Post, https://www.washingtonpost.com/technology/2022/03/02/mykhailo-fedorov-ukraine-tech/ (accessed on 16 April 2022).


The growing role of social media channels in shaping narratives on geopolitical issues,
including conflicts, is generating pushback from governments, both democratic and autocratic.
This, in turn, has contributed to the trend of some governments placing restrictions on the
public’s use of social media and the internet more generally. For example, Russia has restricted domestic internet activity since 2012, but it intensified its crackdown on dissidents, online dissent, and independent media coverage leading up to, and since, the invasion of Ukraine.
Russia recently passed new laws targeting foreign internet companies, such as the 2019
Sovereign Internet Law and the federal “Landing Law” signed in June 2021. These laws grant
the Russian state extensive online surveillance powers and require foreign internet companies
operating in Russia to open offices within the country. Additionally, Russia has completely
banned Facebook, Twitter, and Instagram within its borders. Ukraine cracked down on online
expression in late 2021, in response to fears that Russia was sponsoring Ukrainian media outlets
and preparing to invade. However, since the beginning of the invasion, Ukraine has turned
toward openly embracing social media as a means to broadcast messages outside its borders
and garner public support for its resistance efforts.

Globally, this follows a number of existing trends, including numerous countries’ regulatory
efforts to enforce digital and data sovereignty. A range of countries are now attempting to
regulate social media outlets and restrict online speech domestically, while using these same
platforms to shape narratives internationally. For instance, China, Iran, and India have all
enacted restrictive legislation on internet and social media use domestically while
simultaneously using social media channels to spread targeted disinformation campaigns
globally.

The effectiveness of governments’ efforts globally to curb access to social media and prevent disinformation, both in the Russia-Ukraine war and overall, has been limited thus far. Regulatory efforts have neither curbed disinformation in robust and systematic ways, nor reined in the role of social media platforms as domains of political polarization and vitriolic social interaction. While governments have been more effective at curbing access to information within their domestic jurisdictions, many individuals can still circumvent
restrictions through virtual private networks (VPNs). These networks allow users to hide the origin of their internet connection and offer access to websites that may be blocked within a specific country’s borders. After Russia’s invasion of Ukraine, VPN downloads within Russia spiked to over 400,000 per day, illustrating the difficulty of completely blocking access to online spaces.


5. LOOKING AHEAD

Technical and regulatory strategies for combating disinformation are evolving rapidly but are
still in their early stages. In modern conflicts, social media platforms control some of the main
channels of information, and their policies can have an outsized effect on public sentiment. In
the Russia-Ukraine conflict, the largest global social media platforms have broadly agreed to
attempt to limit Russian propaganda messages, but they have placed far fewer restrictions on
official content from the Ukrainian government.10 The broad power that social media companies exercise in choosing which voices are amplified during conflicts is driving governments to push for increased control over these channels of information. Among others,
China, Russia, and Iran all have onerous restrictions on what content can be posted online and
have banned most U.S.-based social media companies. Further, both Russia and China are taking measures to move their populations onto domestic social media channels, such as WeChat in China or VKontakte in Russia, which can be heavily censored and subjected to intensive government oversight and interference. The EU and India have also placed regulatory
restrictions on U.S.-based social media platforms, with the intent of developing their own
domestic platforms. These developments create challenges for existing international social media platforms and continue to complicate efforts to fight disinformation. As social media channels become more fragmented, and users are subject to differing policies restricting content, coordinating coherent responses to disinformation across platforms will become increasingly difficult.

10 The Moscow Times (2022), “Billions for propaganda: Budget spending on state media tripled against the backdrop of the war”, https://www.moscowtimes.ru/2022/04/12/milliardi-na-propagandu-rashodi-byudzheta-na-gossmi-podskochili-vtroe-na-fone-voini-a19511 (accessed on 14 May 2022).
