Addressing Deepfake-Enabled Attacks
Using Security Controls

GIAC (GSEC) Gold and RES5500

Author: Jarrod Lynn, jdlynn@protonmail.com
Advisor: Russell Eubanks

Accepted: January 3, 2022


Abstract

Attacks enabled by computer-generated media known as deepfakes are an emergent problem requiring urgent attention from the security community. This paper reframes the understanding of the issue and offers a methodology for handling the problem.

Much of what has been written and publicly discussed about deepfakes has concerned their increasingly realistic portrayals of real people and potentially dangerous consequences such as crime and disinformation. With growing visibility, legislatures have enacted laws, and researchers continue to work on developing technical countermeasures. However, very little attention has been focused on the systemic nature of the problem. At present, there is no technical countermeasure that can effectively defeat deepfake-enabled attacks, due to a unique combination of factors: malicious actors can choose the means of communication, media, and audience, and whether the attack is prerecorded or conducted in real time; deepfake technology is advancing quickly, and the software to create deepfakes is ubiquitous; and as of this writing, not even a simple implementation of counter-deepfake technology has been publicly reported that reliably addresses a single type of deepfake (e.g., prerecorded, real time, audio, video) communicated between devices outside of a controlled setting. These factors combine to prevent meaningful technical countermeasures at present.

This paper provides a background on deepfakes, countermeasures, legislation, use cases, and interrelated threats, as well as an outline of the unique dilemma created by deepfakes. After providing this context, the paper introduces a qualitative methodology that defenders can implement to assist in assessing, planning, training, and responding to deepfake-enabled attacks. The paper explores several case studies using the proposed methodology and concludes with suggested areas for further research.

1. Introduction

Altered images, disinformation, propaganda, social engineering, theft, coercion, prank phone calls—these are a few of the many phenomena, sometimes disparate and sometimes related, that fit within the broad umbrella of deepfake-enabled attacks. An example of applied artificial intelligence (AI), deepfakes have become well-known in the last few years, both for their entertainment value and their malicious uses. Two aspects of deepfakes have been the primary focus of most discussion: first, the most prominent, mesmerizing, and alarming feature of the technology—its dangerous potential to deceive by serving as increasingly realistic digital puppets of actual human beings; and second, the possible malevolent uses of this technology to pursue all sorts of nefarious ends, such as theft, revenge porn, extortion, and disinformation, among others.

Increasingly frequent public examples of deepfakes and growing awareness of the potential for problems around this technology have brought this topic to the forefront, with urgent calls for action in the form of technical countermeasures and legislation.

Despite this attention, thus far there has been relatively little focus on the systemic nature of the growing threat. Further, there has not yet been widescale recognition of the significant problem that exists in addition to the difficulties around technical countermeasures.

At present, there are no sufficient technical countermeasures to address the threat from deepfake-enabled attacks. This stems from several facts: (1) deepfake technology is evolving rapidly; (2) the means by which an attacker transmits a deepfake is flexible and therefore unpredictable—an attacker can transmit a deepfake via any effective channel of communication to which the attacker has access; and (3) deepfakes can be created in various media: audio, video, text, and still image, both real time and prerecorded. These facts leave defenders at a distinct disadvantage.

This paper will provide the reader with a contextual background on deepfakes, including an introduction to the underlying technology, a discussion of technical countermeasures, an overview of relevant legislation, and examples of use cases.

It is important to reframe the discussion around deepfake-enabled attacks at the outset. While the technical problem is undoubtedly real, a technical solution cannot solve it alone. To illustrate, even if a technical solution provided accurate detection of deepfakes on a single device or set of devices, attackers could circumvent that specific device or set of devices and solution. The dispersed nature of the threat leaves defenders with the need to develop a strategy.

This paper's original contribution is a methodology composed of a set of qualitative measures that organizations can take to confront the threat posed by deepfake-enabled attacks. The methodology begins with an original questionnaire that guides organizations in assessing their security posture in light of the threat. The methodology then walks through additional steps and reaches the final tool in the process, which is a truncated version of the NIST Cybersecurity Framework (CSF) (NIST, 2018).

Organizations can use the tools in the methodology for assessing their security posture and responding to incidents as well as for planning and designing scenario-based drills.

After introducing the methodology, the paper explores a series of case studies that demonstrate key issues around deepfake-enabled attacks.

Deepfakes present a dynamic problem, and there is not a one-size-fits-all solution. However, this methodology is proposed as a starting point for security professionals as they design responses to this serious threat. The methodology is meant to work with an organization's existing security programs. It is not rigidly prescriptive, but there is enough common language, including reliance on the well-known NIST Cybersecurity Framework (CSF), that professionals who choose to work with this model will be able to share ideas. Because the threat from deepfake-enabled attacks is new, the methodology will necessarily grow with the learned experience of the security community. The threat from deepfake-enabled attacks is serious, pressing, and on the near horizon, and organizations need to prepare now to meet it. By recognizing the dispersed nature of the threat across devices and media and by realizing that technical countermeasures will not be the solution, at least in the short term, defenders can begin to work on realistic plans that can help prepare their organizations for this approaching challenge.

2. General Background on Deepfakes

2.1 Deepfakes Defined

The term "deepfakes" originated on Reddit in 2017 in reference to videos in which celebrity faces were realistically superimposed on the bodies of actors in pornographic films using artificial intelligence (Beridze & Butcher, 2019). It has since come to refer more broadly to other forms of media created with artificial intelligence. A definition that allows for this more common usage is "Believable media generated by a deep neural network" (Mirsky & Lee, 2020). Similarly, the National Security Commission on Artificial Intelligence (NSCAI) has defined a deepfake as "Computer-generated video or audio (particularly of humans) so sophisticated that it is difficult to distinguish from reality" (NSCAI, 2021). Others have moved away from using the term deepfake to describe this phenomenon. Meredith Somers quotes Henry Ajder of Deeptrace as referring to it as "artificial intelligence-generated synthetic media" (Somers, 2020). For the purposes of this discussion, deepfakes are not limited to prerecorded videos but can include any of the following: audio recordings, still images, text, video only, and any combination of real-time audio-video. The key to whether a given piece of media constitutes a deepfake lies in the technology used to create it.

Prior to a discussion of the technical aspects of deepfakes, consider a few preliminary points on the nature of deepfakes. First, deepfakes represent the first hybrid form of cyberattack that combines the computer and the human. The human victim/perpetrator is inextricably linked to the computer in this attack. The human is attacked as the vector. In other words, the image that is "hacked" is the likeness of a real human. Likewise, the human is also attacked as the detector. This has important ramifications with regard to identity and access management. People are used to being able to detect the identity of another known person. In most situations, the mechanism used to authenticate the identity of a known colleague's likeness is simple familiarity. With deepfakes, it is precisely this natural phenomenon that is undermined. At present, there is no effective means of detecting and preventing this in practical terms. That is, an effective deepfake will circumvent this form of "authentication." In many instances, that is the very purpose of their use.

Second, deepfakes represent the first wave of a new trend. It is tempting to view deepfakes as a discrete problem. However, this is an unduly limited view. While they are indeed a challenge to be dealt with in the short term, they also offer an opportunity.

There is a current trend towards a new version of the internet that involves what is being called "web3" and the so-called metaverse, which includes virtualized reality (Marr, 2022). Likewise, another emergent technology, augmented reality, creates a blended three-dimensional experience for users by overlaying "computer-generated elements" onto the user's field of vision (Johnson, 2020). While fictional, Keiichi Matsuda's 2016 short film "Hyper-Reality" provides a vision of how an augmented reality could eventually look. Matsuda notes that his film endeavors to "explore [the] exciting but dangerous trajectory" in which "physical and virtual realities are becoming increasingly intertwined" (Matsuda, 2016). The entirety of his character's experience as he goes about his life in the film is augmented, primarily by advertisements (Vincent, 2016).

These technologies are the beginning of what is to come in terms of applied artificial intelligence. In thinking about how to approach security around deepfakes, it is worth considering that the paradigm will be shifting towards a situation where the objective reality of what people interact with will not be as tangible as they are accustomed to. In this context, deepfakes are far from mere aberrations and nuisances. Deepfakes present security professionals and organizational leaders with a valuable practical opportunity to begin to work with security while this area remains in its earliest stages.

Third, one use of deepfakes falls within a very broad sphere that includes disinformation, misinformation, and other related concepts. Viewed from this angle, deepfakes can be discussed in the context of digital forgeries, manipulated images, propaganda, and hybrid warfare, among other phenomena. Some examples of how deepfakes are being used uniquely within the overall themes of disinformation and misinformation will be discussed. The most obvious difference between deepfakes and their predecessors in the forms of doctored analogue images and digital forgeries is that deepfakes are created through the use of artificial intelligence.

Finally, the criminal potential of deepfakes is very clear. As evidence, in recent years, numerous leading thinkers and law enforcement authorities have released strong statements related to deepfakes (Federal Bureau of Investigation, 2021). In 2020, the Dawes Centre for Future Crime at University College London rated deepfakes as the "most dangerous crime of the future" (Smith, 2020). Both the FBI and Europol have issued warnings related to deepfakes, with Europol calling for increased investment (Stolton, 2020). Europol recently published a report predicting an expanded role for deepfakes in organized crime (Coker, 2022). Among other trends, the report predicts that deepfakes will be used in document fraud and that a new market will emerge for deepfakes as a service (Europol, 2022). The next section of the paper will look at the technical aspects of deepfakes.


2.2 Technical Aspects


The primary technology used to create deepfakes is a form of machine learning called generative adversarial networks (GANs) (Mirsky & Lee, 2020). GANs work by "pitting neural networks against one another" to "learn" (Giles, 2018). GANs produce various types of deepfakes using large stores of data (images, videos, etc.) of the victim as well as a second set of data with which to compare and generate images (Vincent, 2019).
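
To make the adversarial idea concrete, the following is a minimal sketch of a GAN training loop in PyTorch. The layer sizes, hyperparameters, and random stand-in data are illustrative assumptions only; they are not taken from any deepfake tool discussed in this paper, and real deepfake pipelines train far larger models on curated images of the target.

```python
# Minimal sketch of the adversarial training idea behind GANs (PyTorch).
# Layer sizes, hyperparameters, and the random stand-in data are illustrative only.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 784  # e.g., flattened 28x28 grayscale images

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator step: learn to tell real samples from generated ones.
    fake_images = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to produce samples the discriminator accepts as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# One step on random placeholder "real" data, purely to show the call pattern.
training_step(torch.randn(32, image_dim))
```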

Aside from the need for large data sets, another notable current limitation around deepfake technology is the continued presence of visible artifacts in photos and videos (FBI, 2021). This can be obvious to the human eye—not just to programs designed to detect subtle differences. The "Detect Fakes" project run by the Massachusetts Institute of Technology Media Lab and Applied Face Cognition Lab presents this experientially. The project is an online research study in which users are presented with 32 samples of video, audio, and text and asked whether they believe the sample is real or fake (MIT Media Lab, 2022).

At present, basic limitations around realism and the need for large stores of data can make some of the most damaging implementations of deepfakes more challenging for adversaries. For instance, given the inherent limitations of the technology as it currently stands, it would be difficult for a malicious actor to create a real-time deepfake that gives the impression of coming from an organization's physical location and from within a company's actual network.

However, the technology is evolving quickly. Videos made using GANs are becoming more realistic as problems around digital artifacts are resolved. At the same time, the availability of a variety of powerful applications is growing more widespread (Fowler, 2021). Among the proliferating apps, one problematic example is DeepFaceLive, which allows users to create real-time deepfakes (Anderson, 2021). This program is based on popular deepfake creation software and has strong community support. Many other programs are available to make prerecorded and live deepfakes. Likewise, there are programs to make other sorts of deepfakes. For example, the program "This Person Does Not Exist" allows users to create realistic still images of nonexistent people (Vincent, 2019).

2.3 Countermeasures

There is considerable research on both prevention and detection of deepfakes that parallels developments in deepfakes themselves.

As deepfakes become more adept at subverting both machine and human detection, researchers continue to find new weaknesses and develop new technologies. Among many examples, one such method is to use techniques such as "optical flow fields" involving convolutional neural networks (CNNs) (Caldelli et al., 2021). There are dozens of proposed approaches involving different methods. Yisroel Mirsky and Wenke Lee provide an excellent overview detailing many of the methods in existence as of January 2020 (Mirsky & Lee, 2020). Other examples abound. One promising recent study shows a very high rate of detection, focusing on facial expressions (Ober, 2022). However, Ober quotes one of the authors of this study, Amit Roy-Chowdhury, who notes, "What makes the deepfake research area more challenging is the competition between the creation and detection and prevention of deepfakes which will become increasingly fierce in the future."
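
As a rough illustration of what a flow-based approach involves, the sketch below extracts dense optical-flow statistics from a video using OpenCV. It shows only the general idea of using motion fields as detection features and is not the CNN-based method described by Caldelli et al.; the function name and feature choice are assumptions for demonstration.

```python
# Hedged sketch: dense optical flow between consecutive frames, summarized per frame
# pair. Illustrates the general idea of motion-field features only.
import cv2
import numpy as np

def flow_features(video_path: str) -> np.ndarray:
    """Return [mean, std] of optical-flow magnitude for each consecutive frame pair."""
    cap = cv2.VideoCapture(video_path)
    ok, prev_frame = cap.read()
    if not ok:
        cap.release()
        return np.empty((0, 2))
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    features = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense Farneback optical flow between the previous and current frame.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)
        features.append([magnitude.mean(), magnitude.std()])
        prev_gray = gray
    cap.release()
    return np.array(features)

# A real detector would feed such features (or the raw flow fields themselves)
# to a trained classifier such as a CNN rather than inspecting them directly.
```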

While all of these methods are focused on the detection or prevention of deepfakes, most are not tied to concrete practical applications. Further, none of these methods has suggested a solution that would work across platforms with real-time deepfakes. In other words, if an attacker were to attack a platform that was effectively protected by a given countermeasure, he or she could simply circumvent that countermeasure by directing the exploit (deepfake) via another mode of communication on which the victim does not have the countermeasure deployed. Further, an attacker always has the option of directing a deepfake towards third parties (e.g., the public) using means of communication over which the victim has no control at all.

Beyond forensic detection, studies of prevention have included the use of blockchain and watermarking, for example (Newman, 2019). Authentication can be problematic in that it can be cumbersome or require participation of devices with given software on both sides of the interaction. Again, the attacker can circumvent the countermeasure very easily.

Despite the challenges described above, there are commercial efforts to operationalize deepfake detection (Lomas, 2020). As of this writing, it is not clear how existing solutions would deal with real-time deepfakes outside of controlled environments or attackers who work across different devices and media.

Having looked at the technical aspects, the next step is to consider examples of use cases.

2.4 Use Cases

While the deepfake problem is growing, there are thus far very few publicly documented real-world examples of malicious deepfake-enabled (audio-video) attacks being carried out against organizations. This section will address a handful of publicized examples as well as proofs of concept. Further, this portion of the paper addresses categories of phenomena for which there are many examples: deepfake-enabled static photo attacks, attacks against individuals, and proofs of concept of deepfakes that could be used to conduct successful deepfake-enabled attacks. How and why deepfakes have been used in given instances are germane to the ultimate goals of prevention and mitigation. This includes the types of implementations—prerecorded, real time, audio, and video. It also includes the reasons for which they have been used, such as theft and disinformation, among others.

The prototypical deepfake is the prerecorded video of an individual engaged in compromising activity (Beridze & Butcher, 2019). As noted, programs for the creation of deepfakes are readily available. The continued usage of this technology for illicit purposes has prompted legislation (Clark, 2022). In its most benign form, this technology is used to create amusing videos in which people's faces are swapped with celebrities' (Hirwani, 2021). This technology is of particular interest to organizations because it can be used to coerce employees and create insider threats. It can also be used to create videos that would damage the organization's reputation.

The next category is static deepfake/AI-generated photos. Again, these are ubiquitous. The most common employment of this type of attack in an organizational setting is in conjunction with spear phishing. This risk was highlighted by the use of an AI-generated photo on a false LinkedIn profile associated with an unidentified user who attempted to connect with various individuals in sensitive positions (Satter, 2019).

Thus far, the most successful and damaging publicly disclosed deepfake-enabled attacks against organizations have involved the use of voice cloning. In one attack, thieves tricked an employee into transferring $243,000 in corporate funds by pretending to be the company's CEO (Stupp, 2019). In a second case, thieves stood in for a company director and convinced a bank manager to transfer $35 million in company funds (Brewster, 2021).

While there have been few publicized examples, the recently reported case of French documentary filmmaker Yzabel Dzisky highlights the viability of real-time video deepfakes as a credible threat (Kasapoglu, 2022). Dzisky was reportedly the victim of a romance scam in which the attacker convincingly used real-time deepfakes to disguise his true identity. Although many aspects of the attacker's story and modus operandi fit common patterns for this type of romance scam (an inability to meet in person, unexplained trips, requests to wire cash), the fraudster in this case was able to use real-time deepfake technology to overcome Dzisky's doubts with great effect (Dellinger, 2019). The technology allowed him to play a psychological game in which he told half-truths and made half-confessions about his identity, leading Dzisky to believe that he had been honest at first, when in fact he remained deceitful. The fraudster initially told Dzisky that he was a doctor in Los Angeles. When she began to see through his story, he "admitted" that he was actually Turkish and located in Istanbul. This appears to have been a calculated part of his plan. The name he chose was the same as that of Dzisky's ex-husband. In reality, the fraudster was a young man living in Nigeria. He used the deepfake to concoct two false identities in conjunction with typical social engineering, appeals to emotions, and romance scam tactics. This example shows that real-time deepfakes can be used effectively against individuals.

Moving from a real-world attack to two categories of proof of concept, there are the ubiquitous videos of celebrities being made to say and do ridiculous things. One of the most famous (because it is meant to instruct on this precise point) is a deepfake of former President Obama made by comedian Jordan Peele (Romana, 2018). In addition, there are proofs of concept of real-time deepfakes occurring during live teleconferences. The most famous of these is a "Zoom-bomb" by a real-time deepfake imposter posing as Elon Musk (Greene, 2020). Together, these examples show the viability of the future use of deepfakes for malicious purposes. Real-time deepfakes are extremely dangerous for victims and should give organizations pause. Given the lack of control over the medium of delivery, malicious actors can wreak havoc using a real-time deepfake.

A general point regarding the technology bears repeating. As of now, to be successful, deepfake programs require stores of images for "training" purposes. That is, in order to work, a GAN needs a database of images of the person who will be the "victim" of the deepfake. This is a limitation on the implementation of the deepfake. While there is a possibility that the technology will evolve so that the data required for a successful attack will decrease, the present need highlights one aspect of the market for deepfakes as a service. Fraudsters could use ready-made deepfake packages to target victims.

The examples above show not only that deepfakes can be and have been used in malicious circumstances, but that individuals are vulnerable. Organizations can fall prey in that their employees can be shown individually or collectively to be taking actions or making statements that run counter to organizational interests. Individuals can be vulnerable in that deepfakes can depict them or those close to them engaged in compromising activity.

Also, deepfakes can be used in conjunction with crimes such as virtual kidnappings to coerce and extort victims. In virtual kidnappings, malicious actors call the victim and claim to have kidnapped the victim's loved one (Kushner, 2022). They use recordings or actors to play the role of the loved one. The entire kidnapping is a hoax that relies on social engineering (FBI, 2017). The criminal's goal is typically to obtain money from the victim. The bad actor usually insists that the victim not contact anyone for the duration of the interaction. The criminal relies on the victim's attention during this interaction; if the victim contacts his or her loved one, the con ends. According to reports, there has been some degree of randomness in targeting, in that criminals contact large numbers of people looking for victims who do not hang up. Using deepfakes, criminals could craft highly targeted virtual kidnappings with devastating effectiveness. Young people are particularly vulnerable because they post more online content that can be used to train the GANs behind such deepfakes.

Any instance in which an individual is coerced presents a potential risk for an organization. A bad actor could replace his or her demand for money with a demand for an act against the organization—great or subtle. Once the employee has complied and learns that the kidnapping was only virtual, it raises the question of whether the organization's culture will encourage reporting, especially if the act done by the employee on behalf of the bad actor was subtle enough to go undetected. A smart adversary will compel the victim to engage in small, undetectable acts (relatively low stakes) against the organization by coercing the victim with high-stakes negative consequences—the release of unflattering fake videos, a fake kidnapping, or other acts. A smart adversary who knows that there is no means of reporting such an incident without consequence can also use that against the victim. An employee compromised in such a way may be inclined not to report if there is a guaranteed high cost compared to what he or she perceives to be a relatively low-stakes act. However, an adversary can compromise multiple employees to devastating effect. Deepfake-enabled attacks overlap with insider threat on numerous levels. The next topic is the use of deepfakes in the realm of disinformation and misinformation.


Deepfakes can be used intentionally as disinformation or in the context of misinformation. These dangers are illustrated by a 2019 video of Speaker of the House Nancy Pelosi that was significantly altered to make it appear as though she was intoxicated during a press conference. This video was widely circulated on social media. It remained online and generated significant attention even after being disproven (Denham, 2020). This video is an edge case because it is not a deepfake per se, but rather conventionally manipulated footage. However, it demonstrates the potential use of a deepfake as disinformation in this context (Lima, 2021). While the video was manipulated rather than a deepfake, a convincing deepfake could presumably do the same.

More recently, a deepfake purporting to be a news reporter presenting inflammatory stories about French involvement in Mali circulated on Malian social media (Bennett, 2022). The piece was created using a deepfake program called Synthesia. According to France 24, the fake computer-generated "reporter" claimed that France paid Malian political parties to stay out of certain political activities. He also claimed that a well-known French figure was a spy selling information on Malian military units to jihadists (Thomson, 2022). These stories were false.

Interestingly, there have been other instances in which the mere possibility that deepfakes might be used has become a factor. In April 2021, two Russian men known within Russia as pranksters conducted meetings with multiple European leaders, during which one of the men pretended to be Leonid Volkov, former chief of staff to imprisoned Russian opposition politician Alexei Navalny (Vincent, 2021). Part of the disinformation appears to have been the notion that the call was a deepfake. It later became clear that the video was not a deepfake, but that one of the men was an actor in disguise. This fake deepfake caused quite a stir. Subsequent reporting that there had been a successful deepfake attack against senior EU leaders led to embarrassment and confusion.

This paper was written during the leadup to Russia's invasion of Ukraine, when there were reports and speculation about the potential use of deepfakes in connection with a possible "false flag" operation. As reports came out from the border area between Russia and Ukraine, the ever-present potential for a "false flag" and deepfake caused a great deal of skepticism and doubt around the information being presented (Haltiwanger, 2022). This demonstrates both the powerful potential of this technology (use as part of a false flag) as well as the power created by the possibility of its use (the need to account for the prospect of its use by an adversary). After the invasion, in mid-March 2022, a video purporting to show President Volodymyr Zelensky asking Ukrainians to surrender appeared online. In fact, this was a deepfake. The video itself was not very well done, and social media sites such as Facebook/Meta quickly removed it from their platforms (Saxena, 2022). It is possible to imagine that if the technology had been better, if there had been less reporting around the likelihood of such an incident, or if social media companies had been less proactive about removing the content, the video might have been more influential. That being said, in and of itself, the video may have served a propaganda purpose even in its limited release by contributing to the disinformation environment. Any deepfake purporting to show someone acting against their own interests can have an effect.

It is worth noting that there are at least two examples of candidates for political office intentionally using deepfakes of themselves during political campaigns. While campaigning, Indian politician Manoj Tiwari was the willing subject of a deepfake produced by his own party, in which his words were translated into another language to communicate with voters from that language group (Christopher, 2020). During his 2022 campaign, South Korea's Yoon Suk-yeol also became the willing subject of a deepfake. In an effort to appeal to young voters, his campaign used the video to answer questions from the public online (Jin-kyu, 2022).

In a seminal paper on deepfakes, legal scholars Danielle Citron and Robert Chesney coined the term "liar's dividend" to describe the phenomenon in which wrongdoers can point to the existence of deepfakes to deny having engaged in activity they were clearly caught engaging in (Citron & Chesney, 2019). That is, the fact that there are deepfakes gives malefactors plausible deniability. It seems that there is also something of a reverse liar's dividend at play in the atmosphere of doubt and mistrust around the possibility that someone might use a deepfake, the anticipation thereof, or the suspicion that anyone might have used one. This murky environment can be a strategic weapon of sorts that causes a sense of questioning around all media, and that in and of itself can be an advantage to a malicious actor.

An inadvertent benefit of this atmosphere to bad actors may be that honest people are more likely to pay a ransom or cede to demands than to allow compromising deepfakes to be released. Eventually, when the phenomenon of deepfakes becomes commonplace and denials of legitimate activity by guilty parties also become common, the public will grow weary of the "liar's dividend," and there will be a backlash in which denials are treated as suspect. This will benefit bad actors who may be able to count on people who simply do not want to risk having to defend themselves against deepfakes that are eventually presumed real.

2.5 Why Is This Such a Problem?

The examples above of deepfakes being used by bad actors demonstrate some of
the dangers of deepfake-enabled attacks. However, deepfake-enabled attacks are a
compound problem, and this should be explored in more detail.

Most literature concerning the dangers of deepfakes is limited to the technology around the deepfake itself—e.g., producing, preventing, and detecting GANs. Much of the rest of public discussion focuses on the effects of deepfakes, such as aspects of crime or disinformation.

The first level of this analysis stems from the fact that the technology behind deepfakes is difficult to counter. As discussed, there have been and are ongoing serious efforts to develop technical countermeasures. Broadly speaking, some of the categories include forensic detection after the fact, authentication at the time a message is sent, sending messages through closed systems, and looking for network anomalies.

The second level of this analysis is that deepfakes can be live or prerecorded. Further, a prerecorded deepfake can be transmitted live.

Third, deepfakes can be dispersed. They can be transmitted via any channel of communication that the bad actor chooses, to whomever the bad actor targets. This means that he or she can interact directly via a live deepfake with the intended victim, or address a group live. He or she can prepare a prerecorded deepfake and let it be played by third parties on a social media site, or cause a prerecorded deepfake to be played live at a certain time for a specific person, a group, or a widescale audience. There is no way to predict the means or mode of communication.

Fourth, deepfakes are becoming increasingly realistic, meaning that the digital artifacts that make them look unrealistic today are likely to become less common.

Taken together, these facts mean that current technical countermeasures are not only insufficient to counter the simplest implementation of a deepfake, which is a known point-to-point message, but also have no way at all to confront a more sophisticated implementation, which could include switching from a prerecorded to a real-time fake, changing from a one-on-one conversation to addressing a group, or switching modes of communication. There is no technical countermeasure that can handle these possibilities at present.

It is possible that a given countermeasure that detects deepfakes might be able to detect a prerecorded deepfake that is replayed for that countermeasure. It is less likely, given the current state of technology, that the same countermeasure could detect a deepfake during a live call. However, even if such a countermeasure existed that worked against a live call, it would not work against all deepfake-enabled attacks directed against a given victim because detection software cannot be everywhere at all times at present. This is a major vulnerability.

Next, there is also a significant likelihood of combining a deepfake-enabled attack with a cyberattack. A bad actor could attempt to use a deepfake to gain access to a system or could launch a cyberattack concurrently with or as part of a deepfake-enabled attack—for instance, by using a deepfake as a distraction or by initiating a denial of service, among other scenarios.

The next level of the compound challenge is the interrelation between deepfake-enabled attacks and other major security concerns. Some of the areas with a large overlap or potential for overlap are insider threat, work from home/remote work, and physical security. These areas of overlap create complications, such as the potential for employees who are working remotely to be put in harm's way, manipulated, and coerced within the context of deepfake-enabled attacks.

Note also that deepfakes can potentially be deployed in conjunction with a host of other attacks. For example, this could include social engineering and blackmail, as in the examples in the previous section. Of particular note, there is the possibility of combining a deepfake attack with a demand for payment to prevent release of the compromising deepfake in what has been dubbed a "ransomfake" (Poremba, 2021). Building on this, one can imagine the use of deepfakes as coercive measures in so-called "tiger kidnappings," or conversely and more likely, the use of tiger kidnappings as a means to effect the cooperation of an insider in the implementation of a deepfake (Campbell, 2008). Another possibility is that deepfakes might be incorporated into a virtual kidnapping. With deepfakes becoming more realistic and the technology to make them more ubiquitous, their implementations will become more routine and damaging (Smith, 2022).

When it comes to technical countermeasures, the challenge for defenders will be to find a means that works across platforms to detect and prevent deepfakes—at the sending and receiving ends, and for the viewing public. This is a tall order, and the stakes are high.

This profound challenge demands a solution. The methodology in Section 3 offers some practical suggestions for approaching the problem. Prior to turning to a discussion of the methodology, the final background area to cover is a brief review of law around deepfakes.

2.6 Law and Policy Around Deepfakes

Legislatures around the world are paying increasing attention to artificial intelligence, including deepfakes. Broadly speaking, this involves wider policy issues such as those affecting national security, intellectual property, and privacy rights, as well as legislation around civil and criminal law.

In the United States, there has been a spate of legislation dealing with widescale issues around AI and deepfakes. Among other examples, this has included the development of reporting mechanisms and task forces through the Deepfake Report Act of 2019 (S. 2065), the Deepfake Task Force Act of 2021 (S. 2559), and creation of the National Security Commission on Artificial Intelligence (created through the National Defense Authorization Act of 2019, Pub. L. 115-232).

Much of the harm caused by deepfakes can be addressed by existing civil and criminal law. For instance, victims of defamatory material can sue. Likewise, there are criminal laws at both the state and federal levels dealing with various types of computer crimes, theft, and fraud. However, victims have not been able to seek justice in all cases, such as in situations involving deepfake-enabled revenge porn. Due to these shortcomings, a number of US states have enacted statutes that are purpose-built for deepfakes (Clark, 2022). One such example is Florida's pending Senate Bill 1798 (Florida Senate, 2022). The Florida bill stems in part from the personal experiences of a state senator (Coble, 2022).

International approaches to deepfakes have varied. The European Union's (EU) General Data Protection Regulation (GDPR) does not mention deepfakes. However, it does indirectly afford a degree of civil recourse to victims of malicious deepfakes through some of the rights it confers (Colak, 2021). The EU's pending Artificial Intelligence (AI) Act, proposed in 2021, explicitly mentions deepfakes. It takes a "risk-based approach" to AI that would require notifying users when they are interacting with manipulated media, including deepfakes (Europol, 2022). Neither measure criminalizes deepfakes or would have much of an apparent deterrent effect against deepfake-enabled attacks.

One more international approach that bears mention is China's planned deepfake law, which nominally bans deepfakes made without the consent of those depicted and requires removal of some deepfake apps from online app stores (Qureshi, 2022). It remains to be seen how this will play out.


The legal system is working to respond to this growing threat. However, there is not yet a legal response that sufficiently takes into account the dispersed nature of the threat posed by deepfake-enabled attacks. Legal defenders, like technical defenders, need to appreciate and deal with the compound nature of the problem that is coming. Malicious actors can attack via various modes of communication and devices, speaking to the audience of their choosing, either live or prerecorded. This will require an innovative legal response.
©

3. Methodology

3.1 Overview of the Methodology

Based on the discussion above, it should be clear that deepfake-enabled attacks


are a serious threat. It should also be clear that as of now and likely for the foreseeable
future, no technical measure will be able to address all the eventualities around
deepfakes. Some technologies may be able to deal with some of the instances some of the

Jarrod Lynn, jdlynn@protonmail.com


© 2022 The SANS Institute Author retains full rights.
ts
Addressing Deepfake-Enabled Attacks Using Security Controls 19

gh
Ri
time. However, until there is a technical solution that directly addresses the problem,

ll
Fu
organizations need to develop a plan to confront this emerging threat.

ns
The dynamism and potential destructiveness of deepfake-enabled attacks demands
that security practitioners take a proactive approach in assessing and planning with the

ai
et
specific nature of the threat in mind. In this section, this paper offers suggestions for

rR
confronting the threat posed by deepfake-enabled attacks.

ho
The methodology is comprised of several steps. The first is a wholly original

ut
checklist (Appendix 1) that assists organizational leaders and security practitioners in

,A
assessing their organization’s posture in terms of the threat. The second step is a ranking
te
of threats, to include those identified in the first step, those from known deepfake-enabled
itu

attacks, and those derived from any other relevant source. The next element of the
st

methodology is a truncated version of the NIST Cyber Security Framework (NIST,


In

2018). Inclusion of the framework in the methodology offers a comprehensive,


NS

systematic, and organized way of viewing and discussing security, both within
organizations and between organizations.
SA

Organizations can use the methodology in several ways. As part of a proactive


e
Th

process, they can approach the tools sequentially, beginning their assessment with the
checklist in Appendix 1. Their assessment can continue through to the framework. The
22

framework itself includes a number of recommended steps that cover assessment and
20

planning. Planning is the next step in the overall methodology. Organizations that are
©

being proactive can strengthen their security posture based on specific deepfake-related
threats identified during the assessment process.

The methodology can be used as a training aid. The questions in Appendix 1 and the method for brainstorming to be discussed in Section 3.3 lend themselves well to tabletop exercises. They can also assist planners in coming up with training scenarios. With this in mind, a benefit of the way Appendix 1 and Section 3.3 are set up is that planners can design training scenario injects that emphasize particular points. For example, if there is a concern about scenarios involving employee vulnerability in remote work settings, it would be possible to emphasize this by weighing it more heavily in the answers to the questions in Appendix 1 and Section 3.3.

Finally, the methodology can be used to respond to deepfake-enabled attacks. Ideally, an organization has proactively planned by working through an assessment, adjusted its posture in light of that assessment, trained with specific threats in mind, and readjusted. However, regardless of whether this is the case, the methodology is written in such a way that it can be used by an organization that is facing a deepfake-enabled attack. The obvious caveat here is that an organization dealing with a real-time attack (either an attacker in real time or a prerecorded attack being played live) will not likely have a great deal of time to work with the tools in the methodology, given the nature of the crisis.

3.1.1 Initial Threshold Question



Before an organization begins its analysis, leaders need to decide whether they want to invest time, money, and energy on deepfakes as a threat. The primary question for any organization is whether it will analyze its security in light of the threat posed by deepfakes. If an organization is considering whether to pursue this analysis, it can review the documented real-world cases and proofs of concept above as a threshold test. It is possible that some organizations may be hesitant to view deepfakes and deepfake-enabled attacks as a unique category of problem. Some organizations may view deepfakes as manifestations or subsets of existing problems (e.g., social engineering, blackmail).

3.2 Checklist for Analyzing Risk

If an organization wishes to proceed with the analysis, the recommended first step is a process of reviewing the threat and risk exposure using the checklist for analyzing risk exposure in Appendix 1. This is a series of questions that break down various aspects of deepfakes and deepfake-enabled attacks. This is clearly not an exhaustive list. Rather, it is a qualitative tool, the purpose of which is to allow organizations to consider where risk exposure might lie. The questions are written as though they are about an actual attack. Written this way, the questions allow security personnel and other organizational leaders to consider whether an attack with these characteristics is possible.

This initial analysis is used to brainstorm attack vectors and risk exposure. Nevertheless, security personnel might consider soliciting buy-in and input from other relevant departments. This could include a variety of offices such as legal, compliance, human resources, public relations, and privacy, among others. For example, given the capacity of deepfakes to cause immediate reputational damage, public relations might be able to spot issues related to these specific risks and threats that security and technical personnel cannot see at the outset. This said, it is not necessary to expand the discussion group at the initial brainstorming stage because there is an opportunity to do so within the context of the framework, which follows. Security personnel may wish to keep the discussion streamlined at this point in order to more quickly move the process forward toward a wider internal audience.


Security personnel and any other participating organizational units should consider the questions in light of the various use cases and proofs of concept above as well as any other relevant information about the organization's circumstances. The questionnaire is designed to prompt thought and discussion and elicit answers, not to reach a specific and definitive objective truth. It may well turn out that an organization believes it has exposure to deepfakes from multiple angles. For instance, an organization may see that its employees are vulnerable as individuals and that the organization can also be a target. Likewise, an organization may be the target of malicious actors who are driven by multiple motivations: theft, vandalism, etc. An organization may also be able to draw on relevant real-world experience with other crises.

3.3 Most Likely/Most Dangerous Matrix

Following a review of the checklist, organizations should conduct a second qualitative review, this time working to rank threats. For this, it is helpful to borrow language and methodology from the US Army in evaluating possible enemy courses of action (US Army, 2019). The goal is for organizations to come up with what they believe are the adversary's most likely and most dangerous courses of action. This starts with brainstorming on courses of action. During this process, participating team members simply list whatever they believe to be feasible adversary courses of action. In order to assist with the qualitative assessment of these courses of action, organizations can create a basic chart with x and y axes, with one axis representing likelihood and the other danger. By plotting these risks and threats onto a matrix, organizations will have a visual representation of priorities to confront when it comes to deepfakes.
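
To illustrate the plotting step, the short sketch below places a few hypothetical adversary courses of action on a likelihood/danger chart using Python and matplotlib. The listed courses of action and their scores are invented for demonstration and would be replaced by the output of an organization's own brainstorming session.

```python
# Illustrative sketch: plotting brainstormed adversary courses of action on a
# likelihood/danger matrix. The items and scores below are invented placeholders.
import matplotlib.pyplot as plt

# (likelihood, danger) on a 1-5 scale, as judged during the brainstorming session
courses_of_action = {
    "Voice-clone call requesting wire transfer": (4, 4),
    "Prerecorded video smearing an executive": (3, 3),
    "Real-time deepfake on a vendor video call": (2, 5),
    "Fake LinkedIn persona used for spear phishing": (5, 2),
}

fig, ax = plt.subplots()
for name, (likelihood, danger) in courses_of_action.items():
    ax.scatter(likelihood, danger)
    ax.annotate(name, (likelihood, danger), xytext=(5, 5), textcoords="offset points")

ax.set_xlabel("Likelihood (1 = unlikely, 5 = very likely)")
ax.set_ylabel("Danger (1 = low impact, 5 = severe impact)")
ax.set_title("Most likely / most dangerous courses of action")
ax.set_xlim(0, 6)
ax.set_ylim(0, 6)
plt.show()
```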

This matrix can be used for a variety of purposes, from the allocation of resources to the development of scenarios involving these specific risks that can be run through the framework. While any relevant information can be fed into this matrix, it would be helpful to include, at a minimum, reference to the items in the checklist for analyzing risk. Again, that list is not meant to be prescriptive. Organizations are facing their own threats and risks. However, the list is an effort to begin to compile themes common to deepfake-enabled attacks. It may serve as a useful starting point in assessing risks in this area. The next step for an organization in this process is to turn to the framework. Before moving to the framework, it is worth examining briefly how the zero trust model can apply to this discussion of deepfake-enabled attacks (Forrester Research, 2019).


3.4 Zero Trust Mindset


While zero trust is intended to apply to computer networks, and that model can certainly be brought to bear on aspects of the deepfake problem, there is something general that can be extracted from the philosophy underlying zero trust. Essentially, zero trust rests on the notion that no element can be trusted—all hardware, connections, users, etc. must be continually verified, given the lowest level of privilege necessary, and otherwise treated with the utmost skepticism. This applies to everything that touches the system, whether or not it is owned by the organization and whether or not it is within the organization's defined perimeter. In the case of deepfakes, it would be a good idea to apply this same concept to human interactions.

In pursuing this zero trust line of thinking, a question that arises is whether certain transactions or roles (e.g., financial, leadership) should be flagged as high risk for scrutiny when it comes to deepfakes. While there is no right answer and each organization needs to consider these questions based on its own circumstances, there are some inherent dangers in over-relying on automatically heightened scrutiny. Zero trust's primary benefit in this context is to apply a level of skepticism to all roles and interactions, since deepfakes are a hybrid attack in which people are part of the attack. The benefits of applying the zero trust mindset become clearer as organizations begin to consider how the framework applies to their circumstances. The next step in the discussion is the framework itself.

3.5 Modified Framework



These recommendations are based on the NIST Cybersecurity Framework (CSF) (NIST, 2018). This analysis assumes that a given organization is using the CSF. As a first step, organizations should review the CSF, including the introductory sections, whether or not the organization already uses it. An organization not currently using the framework should attempt to understand its security posture in the context of the CSF. There is some precedent for adapting the CSF to a specific use (Barker et al., 2022).

The framework (Appendix 2) as modified draws from all of the CSF's five functions, 15 of its 23 categories, and 49 of its 108 subcategories. Specifically, 12 subcategories are from the identify function, 17 from protect, three from detect, 11 from respond, and six from recover.

The framework and overall methodology have been developed with a focus on deepfake-enabled attacks. The 49 subcategories included in the framework reflect those CSF areas that are most relevant, considering the various issues identified around deepfakes. Each of the functions and many of the subcategories have unique implications for deepfake-enabled attacks, some of which will become clear in the case studies.

A goal for further research is to develop a column in the modified framework titled "deepfake relevance" that will list more information on how and why the given subcategory applies to deepfake-enabled attacks. The idea for this column, specifically, would be akin to the "ransomware application" column in NIST 8374 (Barker et al., 2022).

ns
Prior to moving into the case studies, note that in discussing the cases, reference

ai
et
to the categories and subcategories will include standard notation of CSF sections, which

rR
reflect the function, category, and subcategory. For instance, “(ID.AM-3)” corresponds

ho
with the third subcategory under “identify, asset management,” which is “organizational
communication and data flows are mapped.” Throughout the case studies, the

ut
,A
subcategories will be cited this way. They are all from the NIST CSF.
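To make the notation easier to work with, the short sketch below (Python, illustration only; the record layout and the "deepfake_relevance" note are assumptions of this paper, while the subcategory wording is quoted from the NIST CSF) shows one way an organization could track the modified framework's subcategories along with the planned "deepfake relevance" annotations.

    # Hypothetical record format for tracking modified-framework subcategories.
    # The identifier and description come from the NIST CSF; the
    # "deepfake_relevance" note is the proposed column and is not part of the CSF.
    modified_framework = {
        "ID.AM-3": {
            "function": "Identify",
            "category": "Asset Management",
            "description": "Organizational communication and data flows are mapped",
            "deepfake_relevance": "Knowing normal communication flows helps spot and "
                                  "contain impersonated traffic.",
        },
    }

    def parse_csf_id(csf_id):
        """Split a CSF identifier such as 'ID.AM-3' into its function code,
        category code, and subcategory number."""
        function_and_category, subcategory = csf_id.split("-")
        function_code, category_code = function_and_category.split(".")
        return function_code, category_code, int(subcategory)

    if __name__ == "__main__":
        print(parse_csf_id("ID.AM-3"))                        # ('ID', 'AM', 3)
        print(modified_framework["ID.AM-3"]["description"])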
Having discussed several modes of analysis, including the checklist, most likely/most dangerous course of action analysis, zero trust, and the framework, the paper will now look at several case studies.

4. Case Studies
Organizations can use the checklist for analyzing risk exposure in Appendix 1 and the most likely/most dangerous matrix to develop hypothetical fact patterns for tabletop exercises and training scenarios.

In considering how to use these tools in designing scenarios, it is possible to say that there are varying levels of complication around deepfake attacks. This corresponds in some ways with the most likely/most dangerous analysis. Whether an organization is designing scenarios or considering its actual risk profile, planners can write out a variety of potential adversary courses of action and then plot them on the axes. Planners can then combine a number of features to create the type of attack they wish to train with.

For example, a prerecorded deepfake circulated internally within an organization, not accompanied by any other avenues of attack, that does not hold credibility within the organization could be relatively low impact. On the opposite end of the spectrum, a live deepfake, directed outwards from the organization (or inwards with significant effect on operations), launched in conjunction with other avenues of attack such as a denial of service, with a highly detrimental and credible message, would be very high impact.
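A minimal sketch of this plotting step follows (Python; the example courses of action and the 1 to 5 scores are invented for illustration and would in practice come from the planning team's own judgment). Sorting the candidates by likelihood and by impact surfaces the scenarios that sit closest to the "most likely" and "most dangerous" corners of the matrix.

    # Hypothetical scoring of adversary courses of action for the
    # most likely / most dangerous matrix. Scores run 1 (low) to 5 (high)
    # and are assigned by planners, not computed automatically.
    courses_of_action = [
        {"name": "Prerecorded internal deepfake, no other attack vectors",
         "likelihood": 4, "impact": 2},
        {"name": "Live outward-facing deepfake combined with denial of service",
         "likelihood": 2, "impact": 5},
        {"name": "Voice deepfake of a manager requesting release of funds",
         "likelihood": 4, "impact": 4},
    ]

    def most_likely(coas):
        """Sort candidate courses of action by likelihood, highest first."""
        return sorted(coas, key=lambda c: c["likelihood"], reverse=True)

    def most_dangerous(coas):
        """Sort candidate courses of action by impact, highest first."""
        return sorted(coas, key=lambda c: c["impact"], reverse=True)

    if __name__ == "__main__":
        print("Most likely:", most_likely(courses_of_action)[0]["name"])
        print("Most dangerous:", most_dangerous(courses_of_action)[0]["name"])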

If designing a scenario, planners can take these fact patterns and build around them. If an organization has these facts as its actual operating picture, a next step would be to apply them to the framework.

Planners can use this methodology to come up with virtually any possible scenario by changing business type, attack vector, attacker motivation, and other factors. For example, scenarios can focus on types or levels of threats. Likewise, one could develop scenarios that are very specific to a type of business with specific characteristics and a particular risk/threat profile.

The first case study demonstrates an organization working through all stages of the methodology. The remaining five case studies show organizations working only with the framework and only while they are experiencing a crisis. In terms of scenario design, the scenarios in this section tend towards cases on the higher-impact side because they are illustrative of important concepts around deepfakes. Entities come from across industry sectors, facing attackers with a variety of motivations, using various types of deepfakes and attack vectors, and relying on witting and unwitting third-party participants/victims. Taken together, this set of fact patterns offers an opportunity to review a broad array of eventualities and touches on a number of concepts from the methodology and framework.

4.1 Case Study of Full Cycle of Methodology



The organization in this example is a privately held regional chain of convenience stores that handles its own distribution. It operates throughout most of one state as well as sections of adjoining states that are functionally part of the same region. This chain is the de facto trusted brand in the region, despite the existence of national brands in the market. This company employs approximately 10,000 people. As part of a national industry association, the company follows industry-standard cybersecurity best practices. In line with its forward-leaning strategy, it has decided to implement the recommendations around deepfakes.

This organization's first step is to go through the checklist in Appendix 1. Given that the organization is reviewing this checklist proactively, it assesses its risk exposure. It must consider where it is most vulnerable to the types of known threats from deepfake-enabled attacks. This series of questions results in a number of possibilities, and the organization plots them against the matrix of most likely/most dangerous threats. During this stage of planning, members of the team also begin to cross-reference their work to framework sections. The organization's review identifies the need to strengthen and validate internal communication across multiple bands in emergencies/contingencies, to engage in company-wide training regarding deepfake identification, to bolster and test response and recovery plans (PR.IP-9 and PR.IP-10), and to ensure that its public relations strategy is prepared for the variety of deepfake contingencies it has identified (RS.CO-1 and RC.CO-1). During its preparation, the company identifies its greatest vulnerabilities as the potential for compromise of customer or employee data, any attack that jeopardizes its compliance with PCI DSS standards, and, overall, anything that damages its reputation and the trust it has among the public of the region.

As part of this planning, the organization can also use the framework to guide its preparation. Numerous subcategories from the identify (ID) and protect (PR) functions of the framework can aid the organization in proactive development of its defensive strategy. For example, (PR.AT-4) ensures that senior executives understand their roles and responsibilities.

It is possible to discuss this organization's use of the methodology in the planning process in greater detail; however, for the purposes of this section, the analysis will stop here.

Despite having upgraded its security in light of the threat of deepfakes, the company ends up the victim of a deepfake-enabled attack. The chain's executive office receives an email with an attachment (deemed safe) of a video showing individuals taking barrels from a truck that is clearly marked with the company's logo and dumping what appears to be some type of industrial material from the barrels into a lake. While the video plays, there is an audio overlay of what seems to be a phone call between an unknown third party and a deepfake of the unmistakable and very well-known voice of a member of the executive team, discussing the dumping of waste in a local nature preserve and expressing disdain for the local populace. The email delivering the deepfake could be accompanied by a demand for ransom (a "ransomfake") or might just be a preview of the fact that the deepfake will be released to the public. This implicates framework sections related to public relations (RC.CO-2) and information sharing with external stakeholders (RS.CO-5). Perhaps the sender might demand customer data as ransom. If the organization has been thorough in planning based on the most likely/most dangerous threats it identified, it will have plans to address these eventualities.

This example showcases the entire methodology, from the assessment questions through the initial application of the framework after a deepfake-enabled attack.

The purposes of the next five examples are to demonstrate a broad view of the threat landscape, identify salient points in each case, and cover a number of the framework's points.

4.2 Case Study – Financial Institution


An employee of a large multinational financial institution receives a call from what he or she believes to be his or her manager. Nothing about the interaction raises any concerns for the employee. The manager asks the employee to take certain actions with regard to an account that would result in the release of funds, actions that until one year ago would have been routinely permitted based only on the verbal authorization of this manager by phone.

A number of subcategories in the framework are useful both in preparing for and responding to such a situation. In terms of preparation, organizations should train their employees on the threat and recommended responses using known threat examples (PR.AT-1). Organizations should consider internal and external threats (ID.RA-3), business impacts (ID.RA-4), and risk responses (ID.RA-6). In light of known threats of the type described in articles cited above, financial institutions are undoubtedly instituting additional checks and verifications prior to the release of funds. As part of its preparation, the organization would need to continue to stay abreast of any new information related to this type of scam through organizations such as the FS-ISAC. Staying up to date on strategic information through industry groups is a best practice that should be adopted across the board when preparing to mitigate deepfakes.
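To picture what such an additional check might look like, the sketch below (Python; the callback directory, threshold, and channel names are hypothetical and are not drawn from any cited institution's actual procedure) refuses to act on a verbal fund-release request until it has been confirmed through a separately registered channel.

    # Hypothetical out-of-band confirmation rule for fund-release requests.
    # The callback directory and threshold are illustrative only; a real
    # institution would define these in its own payment-control procedures.
    CALLBACK_DIRECTORY = {
        "manager_alice": "+1-555-0100",  # pre-registered number, not caller ID
    }
    VERBAL_REQUEST_LIMIT = 0             # 0 means every verbal request is confirmed

    def release_funds(requester_id, amount, channel, confirmed_out_of_band):
        """Approve a release only if the request did not arrive over an
        impersonable channel, or it has been confirmed via a callback to the
        number on file for the requester."""
        if channel in {"phone", "video_call"} and amount > VERBAL_REQUEST_LIMIT:
            if requester_id not in CALLBACK_DIRECTORY:
                return False             # no registered callback path, refuse
            return confirmed_out_of_band
        return True

    if __name__ == "__main__":
        # A voice request alone is not enough; the callback must have happened.
        print(release_funds("manager_alice", 35_000_000, "phone", False))  # False
        print(release_funds("manager_alice", 35_000_000, "phone", True))   # True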

This scenario makes the employee into an unwitting insider threat. Awareness and training are the keys to ensuring that an employee does not fall prey. Organizations should have procedures in place instructing employees on how to respond in such instances, including internal communication and verification. These procedures should be drilled. Organizations should consider reporting mechanisms for employees who might get drawn into this kind of scam. Depending on its nature, the organization should consider whether it can allow a no-fault system for immediate reporting. There should also be a system for the employee to report the incident contemporaneously with little to no notice to the malicious actor. Any such reporting system should take into account that an actor might have live video communication with the victim.



4.3 Case Study – Publicly Held Corporation


The next scenario involves a publicly held corporation. This case is a two-part attack. In the first part, an attacker gains access by using a deepfake to pose as a member of the sysadmin team. The attacker then convinces personnel to take actions that lead to a serious degradation or outright halt of internal communication, such as a denial of service. In the second stage, the attacker makes a public announcement while posing as a member of the company's leadership. The announcement is intended to further the attacker's goal, whether that is to manipulate stock prices, achieve an activist purpose, or otherwise.

As in the first scenario, employee training (PR.AT-1) is key. In both of the first two scenarios, the relevance of a zero trust mindset is clear. Employees will not typically or naturally question what appear to be routine instructions from superiors to release funds or from sysadmins to take certain actions with regard to information systems. Absent a technical countermeasure that alerts employees to the presence of computer-generated images or that prevents deepfakes, employees need to be aware of the possibility that a given image might be a deepfake. Likewise, organizations need to implement processes that safeguard against the possibility that an attacker might successfully trick employees, or that mitigate damage if an attacker is able to make contact with employees through the use of a deepfake.

An employee's own senses would typically be his or her means of verifying identity in this case. If employees are unable to rely chiefly on their own senses and on the integrity of information systems, organizations must develop other reliable means. An important part of an organization's preparation for this type of eventuality is to have mapped out communication flows (ID.AM-3). With the organization's ability to communicate internally and externally compromised, it is important that the organization execute its response plan (RS.RP-1) and that personnel know their roles and the order of operations (RS.CO-1). Organizations should consider how they can establish reliable means of identity and access management, in particular authentication, for their backup communications after a successful deepfake compromise.
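One simple, pre-arranged option is sketched below (Python; the HMAC challenge-response scheme and the names used here are assumptions offered for illustration, not a recommendation of a specific product). A shared secret distributed during preparation lets personnel confirm that they are talking to the real incident coordinator over the backup channel, even when familiar voices and primary systems can no longer be trusted.

    import hashlib
    import hmac
    import secrets

    # Hypothetical challenge-response for authenticating people on a backup
    # channel. The shared secret is distributed out of band during preparation
    # (for example, inside the sealed response plan), never over the channel
    # being verified.

    def issue_challenge():
        """The verifying party sends a random, single-use challenge."""
        return secrets.token_hex(16)

    def respond(shared_secret, challenge):
        """The party proving identity returns an HMAC of the challenge."""
        return hmac.new(shared_secret, challenge.encode(), hashlib.sha256).hexdigest()

    def verify(shared_secret, challenge, response):
        """Constant-time comparison of the expected and received responses."""
        expected = respond(shared_secret, challenge)
        return hmac.compare_digest(expected, response)

    if __name__ == "__main__":
        secret = b"distributed-in-advance-out-of-band"
        challenge = issue_challenge()
        answer = respond(secret, challenge)       # computed by the real coordinator
        print(verify(secret, challenge, answer))  # True only with the shared secret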


The moment communication breaks down, the organization is in crisis mode. Ideally, employees would prevent stage one from becoming successful. However, if they do not, the key is for the organization to have effective backup communication plans in place, preventing or mitigating the success of the attacker's outward-facing message.



4.4 Case Study – Critical Infrastructure


An employee of a critical infrastructure facility is contacted at home by malicious actors who claim to have kidnapped his daughter. The employee hears what sounds like his daughter in the background. The attackers tell the employee that he must not make any effort to contact anyone, including attempting to call his daughter's phone, or they will harm her. They then issue instructions, which include taking actions that, if followed, will lead to damage to the facility and potentially to the surrounding community.

This is an example of a virtual kidnapping augmented by a deepfake of the employee's daughter's voice.

This scenario also involves an insider, this time a witting insider, although one who is under duress. As in the first scenario, the organization should consider insider threats in light of deepfake-enabled attacks. Likewise, as in the previous examples, training should include the possibility of this type of attack. Employees should be aware of the potential for being targeted with this type of material (PR.AT-1). Organizations should consider that, between remote work and the increasing prevalence of this technology, the risk of employees being targeted by a "ransomfake" or other forms of AI-related extortion is growing rapidly. With that in mind, organizations should find ways to build employee reporting of such incidents into their response plans (PR.IP-9) to attempt to undermine the potential impact of virtual kidnapping and extortion directed at employees.

4.5 Case Study – Media and Nation-State Disinformation


An international media conglomerate receives an unsolicited video purporting to show an interview with a divisive figure in an ongoing conflict. By its very nature, this video is newsworthy. However, the individual makes some particularly strong statements that are guaranteed to anger parties to the conflict and that may aggravate the already very fraught situation. The media organization is unable to immediately verify the authenticity of the video, but it came through normal channels in the country of origin, and it appears to be authentic. It is not, however.

This scenario involves the posting and proliferation ("going viral") of inflammatory deepfakes that are designed to stoke passions around a political issue. In particular, the example here is of deepfakes meant to fan flames around a conflict situation. However, this could easily be applied to other divisive issues in which graphic representations or strong statements by a party would be likely to create a strong visceral response among members of the audience, affecting their attitudes or inciting them directly to action.

There are several ways in which the media scenario could play out. Probably the most likely is that the media could be fed disinformation to run as though it is news. The second most likely scenario is that the media itself could be spoofed (as in the previously cited example of France24). Third, many media organizations run online fora where members of the public post unmoderated or lightly moderated content, on which it would be very easy for someone to post a deepfake.

In the first scenario, the media organization is an unwitting partner in promulgating disinformation. In the second, the organization is the victim of a deepfake attack. The third example is more complicated. Focusing on the first example, the obvious point is that media should have a process of vetting information: stories, incoming video, interviewees, etc. The existence of deepfakes does not obviate the media's inherent obligation to ascertain the authenticity of the information it puts out. That being said, it is very likely that deepfakes will eventually be presented as true on air, even with established media organizations.


Notwithstanding other relevant sections of the framework, for the purposes of this scenario, emphasis should be placed on governance (ID.GV-1 through ID.GV-4). Governance includes items such as establishing cybersecurity policy, coordinating roles and responsibilities internally and externally, and managing legal and regulatory requirements around cybersecurity, including those related to privacy and civil liberties (NIST, 2018). Clearly, media organizations need to be highly attuned to this threat. Their staff must be able to vet deepfakes better than most. The public relations consequences of running deepfakes as truth are very high for the media.

In the scenario described at the outset, if the media organization were to publish the video, later determined to have been false, this would obviously become a major response and recovery problem.

4.6 Case Study – Small Municipality, Research Institution, or State Election Infrastructure

A final scenario involves either a small municipality, a research institution, or a state's election infrastructure. The institution is the target of an attack in which attackers release a prerecorded deepfake showing workers engaged in some type of unethical behavior in the case of the municipality or research institution, or tampering with ballots in the election example. While this fact pattern can be expanded to include potential avenues of attack such as compromised insiders, the key element for the purposes of this paper is to consider the end goal in a situation like this, in which an organization that relies on public trust has been the target of an attack whose sole purpose is to undermine that trust. The framework subcategories discussed in previous sections apply equally here. An important point to emphasize in this section is recovery. Each of the organizations mentioned in this scenario stands to suffer greatly from the reputational damage described. If the attack takes place, part of the recovery process will be managing public relations (RC.CO-1), repairing reputation (RC.CO-2), and communicating with internal and external stakeholders about recovery efforts (RC.CO-3). More so than in most cyberattacks, deepfakes intentionally target reputation. The recovery effort may need to be more intense. If the public is convinced that members of an organization have truly done something wrong, it may be very difficult to repair the organization's reputation. Mere proof that the video was in fact fake may not always be enough to overcome realistic images.



4.7 Wrap-Up
The scenarios above include a number of organizations dealing with witting and unwitting insider threats, real-time and prerecorded deepfakes, and adversaries who attack for a variety of reasons. The methodology can provide a good starting point for organizations looking to review their security postures in light of this serious threat. Organizations would be well-advised to consider the unique risks that deepfakes might pose to their particular businesses by examining the ways in which deepfakes can be used, comparing them to existing attack vectors, and considering existing mitigations they have in place. The methodology described in this paper can help organizations to identify gaps in their technical and administrative controls.
5. Discussion

The case studies offered an opportunity to walk through the methodology much in the way an organization might with its own fact patterns. It is worth noting at this point that once an organization has decided to adopt this methodology, the various pieces or tools begin to work in concert as an ongoing cycle. For instance, the checklist at Appendix 1 and most likely/most dangerous matrix can be part of the risk assessment the framework discusses in the identify, risk assessment (ID.RA) category.

Some common themes that arise from the case studies are that organizations should have ways of communicating during an incident, should know their information flows, and should proactively develop public relations plans. In addition, training and intelligence are key.


Organizations need to consider that deepfake attacks can hit any form of communication through any medium. An attack can be directed at anyone. Employees can be targeted as a group or as individuals, whether working in the office, on the road, or from home or other remote locations. The general public can also be targeted by attackers purporting to be company employees or claiming to show images of company employees engaged in compromising activity. Deepfake messages can be prerecorded or in real time, and can be audio, video, audio-video, or text only. They can be transmitted through any means of communication, meaning they can come through any application or communications system. Finally, attackers can act for any of the reasons that traditionally motivate malicious actors: ego, money, activism, terrorism, espionage, vandalism, etc.

As organizations begin to consider the problem of deepfake-enabled attacks in terms of the framework, they should look at the systems that require protection. Deepfake-enabled attacks can affect organizational information systems narrowly (e.g., organizationally owned systems), more broadly (personally owned devices connected to the network), or most broadly (employees communicating completely outside the organization's systems). Likewise, this could include communications with the public or anyone else external to the organization. In all cases, these attacks will in some way implicate communications involving the organization.

This has implications for how organizations consider the scope of information flows. For example, a denial of service could range from the inability of one or several employees in non-critical roles to access the network, at the low end, to the inability of anyone in the company to communicate with the public via corporate information systems, at the opposite end of the spectrum.

As organizations consider their information flows in the context of deepfakes, they should also think about the well-known "CIA triad," that is, confidentiality, integrity, and availability. All three of these can and will be affected by deepfake-enabled attacks. However, the one most affected is integrity. Perhaps less obvious, but equally affected, is availability. Naturally, if a bad actor is taking up airtime and sitting in for a given person, that affects availability. In some instances, this may be by design. In others, it may simply be a helpful side effect for an attacker. Some attackers may intend to include a denial of service in order to frustrate the victim's ability to counter the attacker's message, so that the attacker can effectively further his or her plan to steal or otherwise move forward. Finally, and to a lesser degree, confidentiality can be affected. This would happen, for instance, when an attacker manages to convince an unwitting insider to provide privileged information, among other examples. Depending on factors such as the nature of the information, the jurisdiction in which this occurs, and an organization's industry, such a situation could have devastating consequences.

All three of these items can be addressed through appropriate planning and implementation of aspects of the framework. Up front, note that organizations must be realistic and cognizant of the problem and its effects on their systems in terms of the CIA triad. This can inform a proper evaluation of information flows, responsibilities, response plans, and recovery in various possible scenarios.

Another important consideration for organizations as they review the framework is the interrelation between deepfake-enabled attacks and other security considerations such as physical security, insider threats, and remote work. With this in mind, organizations should refer to their plans for these threats and work to integrate them with their deepfake response strategy.

Additionally, because of the reputational harm associated with deepfakes, it will always be better not to become the victim of a deepfake than to have to argue against the authenticity of one. Organizations should keep this in mind in designing their strategies for confronting deepfakes.

Finally, it is important to keep in mind that deepfake-enabled attacks are not yet widespread, and the damage they can cause is not yet widely documented. As these attacks take place moving forward, the security community can work this learned experience into the methodology.

6. Areas for Further Research and Ideas for Practice


Given that the topic of deepfakes is emerging, there are many areas for further research. There is a tremendous amount of research in the area of technical countermeasures. It would be good to see increased collaboration between researchers who are working on machine learning and the security personnel who will be implementing the tools. Given the lack of practical solutions, this may help.

Another general area for further research would be the interrelation between deepfake-enabled attacks and physical security, insider threats, and remote work.

In terms of this specific project, two next steps include building out the framework to include the "deepfake relevance" column and cross-referencing the framework to other frameworks and compliance systems.

As noted, deepfakes are only one aspect of the wider, growing area of AI/ML and
the so-called metaverse. It is very important to consider security in this area now, as it
remains in its earliest stages. As with the development of any software or hardware, it is
far better to build security into the product than to treat security as an afterthought.
Deepfakes are an entrée to this area that presents a wide-open opportunity.

7. Conclusion

This is a transitional time. Deepfake attacks are in their relative infancy. The overall reality they represent in terms of AI/ML is new. This means that the technology is not yet settled, either for attackers or defenders. The problem is that attackers have the upper hand. As always, attackers only have to get it right some of the time. The problem of deepfake-enabled attacks is approaching quickly, and the stakes are exceptionally high, as is evidenced by the voice cloning cases this paper covered. The security community should not wait for this problem to arrive at its doorstep before it acts. Real-time deepfakes are a particular concern. The fact that they can be effected from any location on any device at any time and combined with other attacks, such as denial of service, should be alarming to security personnel and executives alike. Smart adversaries will preposition resources for complex attacks. They will attempt to attack even when the technology is not optimal. The suggestions in this paper are not meant to replace technical measures. Rather, they are an effort to begin to develop a way of acting and thinking about this burgeoning problem proactively and effectively, using available security controls. One strength of the methodology is that it includes venerable existing systems. This should allow it to remain flexible enough to encompass new technologies as they emerge. The "deepfake relevance" column on the framework will grow with lessons learned. As of now, there is not a deep body of experience dealing with deepfake-enabled attacks. What appears to be the case, based on the technology, trends, and guidance of experts, is that the security community will see AI-powered cyberattacks, including deepfakes, proliferate. A goal of this research is to assist CISOs and other network defenders as they prepare their organizations to meet this new threat. No one knows with certainty what the landscape will look like once deepfake-enabled attacks begin to make an impact. Based on current trends, it appears that this will happen soon. When it does, no one will be able to deny it.

The views expressed in this article are my own and not those of the Department of State or the US government.

This article is intended as general educational information, not as legal advice with respect to any specific situation. If the reader needs legal advice on a specific situation or issue, the reader should consult with an attorney.

References

Anderson, M. (2021, August 8). Real-Time DeepFake Streaming with DeepFaceLive. UniteAI. https://www.unite.ai/real-time-deepfake-streaming-with-deepfacelive/

Barker, W., Fisher, W., Scarfone, K., & Souppaya, M. (2022, February). Ransomware Risk Management: A Cybersecurity Framework Profile (NISTIR 8374). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.IR.8374

Bennett, C. (2022, January 10). Fake Videos Using Robotic Voices and Deepfakes Circulate in Mali. https://observers.france24.com/en/tv-shows/truth-or-fake/20220110-truth-or-fake-debunked-mali-robot-voices-deepfakes

Beridze, I., & Butcher, J. (2019, August). When Seeing Is No Longer Believing. Nature Machine Intelligence, Vol. 1, 332–334. https://doi.org/10.1038/s42256-019-0085-5

Brady, M., Howell, G., Franklin, J., Sames, C., Schneider, M., Snyder, J., & Weitzel, D. (2021, March). Cybersecurity Framework Election Infrastructure Profile (Draft NISTIR 8310). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.IR.8310-draft

Brewster, T. (2021, October 14). Fraudsters Cloned Company Director's Voice in $35 Million Bank Heist, Police Find. Forbes. https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions

Caldelli, R., Galteri, L., Amerini, I., & Del Bimbo, A. (2021, June). Optical Flow based CNN for detection of unlearnt deepfake manipulations. Pattern Recognition Letters, 146, 31–37. https://doi.org/10.1016/j.patrec.2021.03.005

Campbell, D. (2008, January 28). The Tiger Kidnapping. The Guardian. https://www.theguardian.com/uk/2008/jan/28/ukcrime.duncancampbell2

Christopher, N. (2020, February 18). We've Just Seen the First Use of Deepfakes in an Indian Election Campaign. Vice. https://www.vice.com/en/article/jgedjb/the-first-use-of-deepfakes-in-indian-election-by-bjp

Clark, K. 'Deepfakes' Emerging Issue in State Legislatures. State Net Capitol Journal. Retrieved May 4, 2022, from https://www.lexisnexis.com/en-us/products/state-net/news/2021/06/04/Deepfakes-Emerging-Issue-in-State-Legislatures.page

Coble, S. (2022, January 27). Florida Considers Deepfake Ban. https://www.infosecurity-magazine.com/news/florida-considers-deepfake-ban/

Coker, J. (2022, April 28). Europol: Deepfakes Set to Be Used Extensively in Organized Crime. https://www.infosecurity-magazine.com/news/europol-deepfakes-organized-crime/

Citron, D., & Chesney, R. (2019). Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. 107 California Law Review 1753. https://scholarship.law.bu.edu/faculty_scholarship/640

Colak, B. (2021, January 19). Disinformation: Legal Issues of Deepfakes. Institute for Internet and the Just Society. https://www.internetjustsociety.org/legal-issues-of-deepfakes

Deepfake Report Act of 2019, S. 2065, 116th Cong. (2019). https://www.congress.gov/bill/116th-congress/senate-bill/2065

Deepfake Task Force Act of 2021, S. 2559, 117th Cong. (2021). https://www.congress.gov/bill/117th-congress/senate-bill/2559

Dellinger, AJ. (2019, November 25). Anatomy of a Scam: Nigerian Romance Scammer Shares Secrets. Forbes. https://www.forbes.com/sites/ajdellinger/2019/11/25/anatomy-of-a-scam-nigerian-romance-scammer-shares-secrets/

Denham, H. (2020, August 3). Another Fake Video of Pelosi Goes Viral on Facebook. Washington Post. https://www.washingtonpost.com/technology/2020/08/03/nancy-pelosi-fake-video-facebook/

EUROPOL. (2022, April 28). Facing Reality? Law Enforcement and the Challenge of Deepfakes. https://www.europol.europa.eu/cms/sites/default/files/documents/Europol_Innovation_Lab_Facing_Reality_Law_Enforcement_And_The_Challenge_Of_Deepfakes.pdf

Federal Bureau of Investigation (FBI). (2017, October 16). Virtual Kidnapping, A New Twist on a Frightening Scam. https://www.fbi.gov/news/stories/virtual-kidnapping

Federal Bureau of Investigation (FBI). (2021, March 10). Private Industry Notification: Malicious Actors Almost Certainly Will Leverage Synthetic Content for Cyber and Foreign Influence Operations. https://www.ic3.gov/Media/News/2021/210310-2.pdf

Florida Senate, Minority Office. (2022, November 15). Leader Book Advances Legislation Targeting Cyber Trafficking [Press Release]. https://www.flsenate.gov/Media/PressReleases/Show/4098

Forrester Research. (2019). Five Steps to Zero Trust Security.

Fowler, G. (2021, March 25). Anyone with an iPhone Can Now Make Deepfakes. We Aren't Ready for What Happens Next. Washington Post. https://www.washingtonpost.com/technology/2021/03/25/deepfake-video-apps/

Giles, M. (2018, February 21). The GANfather: The Man Who's Given Machines the Gift of Imagination. MIT Technology Review. https://www.technologyreview.com/2018/02/21/145289/the-ganfather-the-man-whos-given-machines-the-gift-of-imagination/

Greene, T. (2020, April 21). Watch: Fake Elon Musk Zoom-bombs Meeting Using Real-time Deepfake AI. https://thenextweb.com/news/watch-fake-elon-musk-zoom-bombs-meeting-using-real-time-deepfake-ai

Haltiwanger, J. (2022, February 3). US Says Russia Planned to Use a "Graphic" Fake Video with Corpses and Actors to Justify an Invasion of Ukraine. Business Insider. https://www.businessinsider.com/us-says-russia-planned-fake-video-create-pretext-ukraine-invasion-2022-2

Hirwani, P. (2021, May 27). Scarily Authentic New Deep Fake of Tom Cruise Attracts Millions of Views. The Independent. https://www.independent.co.uk/celebrity-news/tom-cruise-deep-fake-tik-tok-b1853256.html

Jin-kyu, Kang. (2022, February 13). Deepfake Democracy: South Korean Candidate Goes Virtual for Votes. https://www.france24.com/en/live-news/20220214-deepfake-democracy-south-korean-candidate-goes-virtual-for-votes

John S. McCain National Defense Authorization Act for Fiscal Year 2019, Public Law 115-232, 115th Cong. (2018). https://www.congress.gov/115/plaws/publ232/PLAW-115publ232.pdf

Johnson, D. (2020, December 4). What Is Augmented Reality? Here's What You Need to Know About the 3D Technology. Business Insider. https://www.businessinsider.com/what-is-augmented-reality

Kasapoglu, C. (2022, February 9). Me enamoré de un 'deepfake' de un sitio de citas que me estafó [I fell in love with a 'deepfake' from a dating site that scammed me]. BBC News Mundo. https://www.bbc.com/mundo/noticias-60326052

Kushner, D. (2022, March 20). 'We Have Your Daughter': The Terrified Father Paid the Ransom. Then He Found His Kid Where He Least Expected Her. Business Insider. https://www.businessinsider.com/virtual-kidnappers-scamming-terrified-parents-out-of-millions-fbi-2022-3#

Lima, C. (2021, August 6). The Technology 202: As Senators Zero In on Deepfakes, Some Experts Fear Their Focus Is Misplaced. Washington Post. https://www.washingtonpost.com/politics/2021/08/06/technology-202-senators-zero-deepfakes-some-experts-fear-their-focus-is-misplaced/

Lomas, N. (2020, September 14). Sentinel Loads Up With $1.35M in the Deepfake Detection Arms Race. TechCrunch. https://techcrunch.com/2020/09/14/sentinel-loads-up-with-1-35m-in-the-deepfake-detection-arms-race/

Marr, B. (2022, February 22). The Important Difference Between Web3 and the Metaverse. Forbes. https://www.forbes.com/sites/bernardmarr/2022/02/22/the-important-difference-between-web3-and-the-metaverse/

Matsuda, K. (2016). Hyper-Reality. http://hyper-reality.co/

Mirsky, Y., & Lee, W. (2020, January). The Creation and Detection of Deepfakes: A Survey. ACM Computing Surveys, Vol. 1, No. 1, Article 1, at page 1:3. https://arxiv.org/pdf/2004.11138.pdf

MIT Media Lab and Applied Face Cognition Lab. (2022). Detect Fakes. https://detectfakes.media.mit.edu/ (No author is listed, but Matt Groh is listed on a linked site as the "project contact.")

National Institute of Standards and Technology. (2018, April 16). Framework for Improving Critical Infrastructure Cybersecurity (Version 1.1). https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.04162018.pdf

National Security Commission on Artificial Intelligence (NSCAI). (2021, March 1). Final Report. https://reports.nscai.gov/final-report/table-of-contents/

Newman, L. (2019, May 28). To Fight Deepfakes, Researchers Built a Smarter Camera. Wired. https://www.wired.com/story/detect-deepfakes-camera-watermark/

Ober, H. (2022, May 3). New Method Detects Deepfake Videos With Up To 99% Accuracy. UC Riverside News. https://news.ucr.edu/articles/2022/05/03/new-method-detects-deepfake-videos-99-accuracy

Poremba, S. (2021, July 20). Deep Fakes: The Next Big Threat. Security Boulevard. https://securityboulevard.com/2021/07/deepfakes-the-next-big-threat/

Qureshi, S. (2022, January 29). China Prepares to Crack Down on Deepfakes. Jurist. https://www.jurist.org/news/2022/01/china-cyberspace-regulator-issues-draft-rules-on-deep-fakes/

Romano, A. (2018, April 18). Jordan Peele's Simulated Obama PSA Is a Double-Edged Warning Against Fake News. Vox. https://www.vox.com/2018/4/18/17252410/jordan-peele-obama-deepfake-buzzfeed

Satter, R. (2019, June 13). Experts: Spy Used AI-Generated Face to Connect with Targets. AP. https://apnews.com/article/ap-top-news-artificial-intelligence-social-platforms-think-tanks-politics-bc2f19097a4c4fffaa00de6770b8a60d

Saxena, A. (2022, March 17). "Despicable Zelensky deepfake ordering Ukrainians to 'lay down arms' taken offline." Express. https://www.express.co.uk/news/world/1581928/ukraine-volodymyr-zelensky-deepfake-video-ont

Smith, A. (2020, August 5). Deepfakes Are the Most Dangerous Crime of the Future, Researchers Say. The Independent. https://www.independent.co.uk/life-style/gadgets-and-tech/news/deepfakes-dangerous-crime-artificial-intelligence-a9655821.html

Smith, A. (2022, February 17). Deepfake Faces Are Even More Trustworthy Than Real People, Study Warns. The Independent. https://www.independent.co.uk/tech/deepfake-faces-real-ai-trustworthy-b2017202.html

Somers, M. (2020, July 21). "Deepfakes, explained." Ideas Made to Matter – Cybersecurity, MIT Sloan School of Management. https://mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained

Stolton, S. (2020, November 20). EU Police Recommend New Online 'Screening Tech' to Catch Deepfakes. Euractiv. https://www.euractiv.com/section/digital/news/eu-police-recommend-new-online-screening-tech-to-catch-deepfakes/

Stupp, C. (2019, August 31). Fraudsters Used AI to Mimic CEO's Voice in Unusual Cybercrime Case. Wall Street Journal. https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402

Thomson, D. (2022, January 31). Truth or Fake – Deepfake News Videos Circulate in Mali Amid Tensions with France. France 24. https://www.france24.com/en/tv-shows/truth-or-fake/20220131-deepfake-news-videos-circulate-in-mali-amid-tensions-with-france

US Army. (2019, March). ATP 2-01.3, Intelligence Preparation of the Battlefield. https://home.army.mil/wood/application/files/8915/5751/8365/ATP_2-01.3_Intelligence_Preparation_of_the_Battlefield.pdf

Vincent, J. (2016, May 20). This Six-Minute Short Film Plunges You into an Augmented Reality Hellscape. The Verge. https://www.theverge.com/2016/5/20/11719244/hyper-reality-augmented-short-film

Vincent, J. (2019, February 15). TL;DR: ThisPersonDoesNotExist.com Uses AI to Generate Endless Fake Faces. The Verge. https://www.theverge.com/tldr/2019/2/15/18226005/ai-generated-fake-people-portraits-thispersondoesnotexist-stylegan

Vincent, J. (2021, April 30). 'Deepfake' That Supposedly Fooled European Politicians Was Just a Look-Alike, Say Pranksters. The Verge. https://www.theverge.com/2021/4/30/22407264/deepfake-european-polticians-leonid-volkov-vovan-lexus
Appendix 1 – Checklist for Assessing Risk Exposure

1 – Who is the target of the deepfake? (As distinguished from the motivation and purpose, which may well be tied to a different target for the overall attack.)

Individual
Organization
Individual within Organization
Third party (Coercion/Tiger kidnapping/etc.)

2 – Who is the victim of the overall crime? This is distinguished from the target of the deepfake. They may be one and the same. However, the victim of the crime is related to the attacker's motivation and purpose. The attacker may have multiple targets and victims.

3 – What is the physical location of the affected person or people within the organization (subjects and intended audience of the deepfake – the audience question does not apply in cases involving intended widespread public disclosure)?

In office
Remote (organization-controlled/travel/etc.)
Personal residences

4 – What type of device(s) and means of communication is the deepfake being transmitted to?

Phone (audio)
Text message (SMS)
Smartphone (various – apps, etc.)
Zoom/other commercial VTC
Proprietary VTC
Chat app (WhatsApp, Signal, etc.)
Social media (Facebook, YouTube, TikTok, Instagram)
Broadcast media (news organizations, television, radio, etc.)

5 – What type of deepfake is it?

Real-time (audio/video/audio-video)
Pre-recorded (audio/video/audio-video)
Text
Photo

6 – Is there a second level of transmission?

For instance, if it is a pre-recorded video, is the attacker's intent that it be played on live news media?

7 – What is the attacker's motivation?

Theft, espionage, vandalism, activism, terrorism, ego/"bragging rights," other?

8 – Is there insider involvement?

Witting? Unwitting? Coerced?

9 – If insider involvement is coerced, how is the insider being coerced?

Directly?
Threats to loved ones?
Virtual kidnapping?

10 – Is the attacker gaining access through the use of other hacker tools/exploitation means? E.g., is the attacker accessing internal corporate networks as part of the attack, either to conduct the deepfake itself or to accomplish another portion of the plan?

If the attacker has used other hacker tools/exploits, did he or she access the network through user involvement such as phishing/spearphishing, social engineering, or a client-side attack? Was the exploit part of the attack itself or incidental?

11 – If not answered yes in ten above, is social engineering involved?

12 – How many layers are involved in the attack?

For example, is the deepfake itself the extent of the attack? Is the deepfake a means to an end (e.g., the way to get a party to commit a follow-on crime)? Or is there a second-wave deepfake?

13 – Will the organization's systems' availability be affected? Will the overall system integrity be affected (beyond the individual message)? Will there be any form of outage? Will systems be taken offline directly or indirectly?

14 – Who is the intended audience of the message? Internal to the organization? External?

15 – Will the attacker offer the organization an opportunity to prevent release of the deepfake?

E.g., through the payment of ransom or adherence to another demand?

16 – Will the attack involve any form of physical interaction with systems or personnel?

17 – Is there any indication that organizational systems have been breached?

18 – Is there any indication that confidential data may be affected by the attack?

This may trigger breach reporting requirements depending on an organization's industry, the nature of the data affected, and/or the jurisdictions involved. The organization's counsel and privacy or data protection team should be involved in any discussion on this question.

Additional questions for organizations to consider:

1 – Are there any recent successful or unresolved incidents involving any of the following: social engineering, ransomware, insider threats, kidnappings, physical security of facilities or personal residences, blackmail, threats against personnel or facilities, or any other unexplained or suspicious incidents that bear consideration here?

2 – Does the organization engage in any activity that closely mirrors any of the specific known or likely scenarios? (This may seem like an obvious point, but these known/most likely scenarios are the known and most likely because they are the low-hanging fruit of this area and are likewise the low-hanging fruit for us as security practitioners.)

3 – Does the organization have an insider threat program in place?

If so, is there a mechanism by which employees are able to report in-progress situations involving duress or coercion? Are employees encouraged to come forward when they are the targets or victims of blackmail or other kinds of external pressure?

4 – Does the organization have a physical security program in place for remote workers?

Appendix 2 – Truncated NIST Cybersecurity Framework

This table is derived from material in: National Institute of Standards and Technology, Framework for Improving Critical Infrastructure Cybersecurity (Version 1.1), April 16, 2018, available at https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.04162018.pdf

Function: Identify (ID) (12 Subcategories)
Develop an organizational understanding to manage cybersecurity risk to systems, people, assets, data, and capabilities. The activities in the Identify Function are foundational for effective use of the Framework. Understanding the business context, the resources that support critical functions, and the related cybersecurity risks enables an organization to focus and prioritize its efforts, consistent with its risk management strategy and business needs. (NIST CSF Page 7)

Category: Asset Management (ID.AM): The data, personnel, devices, systems, and facilities that enable the organization to achieve business purposes are identified and managed consistent with their relative importance to organizational objectives and the organization's risk strategy.
ID.AM-3: Organizational communication and data flows are mapped

Category: Business Environment (ID.BE): The organization's mission, objectives, stakeholders, and activities are understood and prioritized; this information is used to inform cybersecurity roles, responsibilities, and risk management decisions.
ID.BE-4: Dependencies and critical functions for delivery of critical services are established
ID.BE-5: Resilience requirements to support delivery of critical services are established for all operating states (e.g. under duress/attack, during recovery, normal operations)

Category: Governance (ID.GV): The policies, procedures, and processes to manage and monitor the organization's regulatory, legal, risk, environmental, and operational requirements are understood and inform the management of cybersecurity risk.
ID.GV-1: Organizational cybersecurity policy is established and communicated
ID.GV-2: Cybersecurity roles and responsibilities are coordinated and aligned with internal roles and external partners
ID.GV-3: Legal and regulatory requirements regarding cybersecurity, including privacy and civil liberties obligations, are understood and managed
ID.GV-4: Governance and risk management processes address cybersecurity risks

Category: Risk Assessment (ID.RA): The organization understands the cybersecurity risk to organizational operations (including mission, functions, image, or reputation), organizational assets, and individuals.
ID.RA-2: Cyber threat intelligence is received from information sharing forums and sources
ID.RA-3: Threats, both internal and external, are identified and documented
ID.RA-4: Potential business impacts and likelihoods are identified
ID.RA-5: Threats, vulnerabilities, likelihoods, and impacts are used to determine risk
ID.RA-6: Risk responses are identified and prioritized

Category: Risk Management Strategy (ID.RM): The organization's priorities, constraints, risk tolerances, and assumptions are established and used to support operational risk decisions

Category: Supply Chain Risk Management (ID.SC)

Function: Protect (PR) (17 Subcategories)
Develop and implement appropriate safeguards to ensure delivery of critical services. The Protect Function supports the ability to limit or contain the impact of a potential cybersecurity event. (NIST CSF Page 7)

Category: Identity Management, Authentication and Access Control (PR.AC): Access to physical and logical assets and associated facilities is limited to authorized users, processes, and devices, and is managed consistent with the assessed risk of unauthorized access to authorized activities and transactions.
PR.AC-1: Identities and credentials are issued, managed, verified, revoked, and audited for authorized devices, users and processes
PR.AC-2: Physical access to assets is managed and protected
PR.AC-3: Remote access is managed
PR.AC-4: Access permissions and authorizations are managed, incorporating the principles of least privilege and separation of duties
PR.AC-5: Network integrity is protected (e.g., network segregation, network segmentation)
PR.AC-6: Identities are proofed and bound to credentials and asserted in interactions
PR.AC-7: Users, devices, and other assets are authenticated (e.g. single-factor, multi-factor) commensurate with the risk of the transaction (e.g. individuals' security and privacy risks and other organizational risks)

Category: Awareness and Training (PR.AT): The organization's personnel and partners are provided cybersecurity awareness education and are trained to perform their cybersecurity-related duties and responsibilities consistent with related policies, procedures, and agreements.
PR.AT-1: All users are informed and trained
PR.AT-2: Privileged users understand their roles and responsibilities
PR.AT-3: Third-party stakeholders (e.g., suppliers, customers, partners) understand their roles and responsibilities
PR.AT-4: Senior executives understand their roles and responsibilities
PR.AT-5: Physical and cybersecurity personnel understand their roles and responsibilities

Category: Data Security (PR.DS): Information and records (data) are managed consistent with the organization's risk strategy to protect the confidentiality, integrity, and availability of information.

Information Protection Processes and Procedures (PR.IP): Security policies (that address purpose, scope, roles, responsibilities, management commitment, and coordination among organizational entities), processes, and procedures are maintained and used to manage protection of information systems and assets.
  PR.IP-8: Effectiveness of protection technologies is shared
  PR.IP-9: Response plans (Incident Response and Business Continuity) and recovery plans (Incident Recovery and Disaster Recovery) are in place and managed
  PR.IP-10: Response and recovery plans are tested
  PR.IP-11: Cybersecurity is included in human resources practices (e.g., deprovisioning, personnel screening)
  PR.IP-12: A vulnerability management plan is developed and implemented

Detect (DE) (3 Subcategories): Develop and implement appropriate activities to identify the occurrence of a cybersecurity event. The Detect Function enables timely discovery of cybersecurity events. (NIST CSF Page 7)

Anomalies and Events (DE.AE): Anomalous activity is detected and the potential impact of events is understood

Security Continuous Monitoring (DE.CM): The information system and assets are monitored to identify cybersecurity events and verify the effectiveness of protective measures.
  DE.CM-1: The network is monitored to detect potential cybersecurity events
  DE.CM-2: The physical environment is monitored to detect potential cybersecurity events
  DE.CM-3: Personnel activity is monitored to detect potential cybersecurity events

Detection Processes (DE.DP): Detection processes and procedures are maintained and tested to ensure awareness of anomalous events.

Respond (RS) (11 Subcategories): Develop and implement appropriate activities to take action regarding a detected cybersecurity incident. The Respond Function supports the ability to contain the impact of a potential cybersecurity incident. (NIST CSF Page 8)

Response Planning (RS.RP): Response processes and procedures are executed and maintained, to ensure response to detected cybersecurity incidents.
  RS.RP-1: Response plan is executed during or after an incident

Communications (RS.CO): Response activities are coordinated with internal and external stakeholders (e.g., external support from law enforcement agencies).
  RS.CO-1: Personnel know their roles and order of operations when a response is needed
  RS.CO-2: Incidents are reported consistent with established criteria
  RS.CO-3: Information is shared consistent with response plans
  RS.CO-4: Coordination with stakeholders occurs consistent with response plans
  RS.CO-5: Voluntary information sharing occurs with external stakeholders to achieve broader cybersecurity situational awareness

Analysis (RS.AN): Analysis is conducted to ensure effective response and support recovery activities.
  RS.AN-2: The impact of the incident is understood
  RS.AN-4: Incidents are categorized consistent with response plans
  RS.AN-5: Processes are established to receive, analyze and respond to vulnerabilities disclosed to the organization from internal and external sources (e.g., internal testing, security bulletins, or security researchers)

Mitigation (RS.MI): Activities are performed to prevent expansion of the event, mitigate its effects, and resolve the incident.

Improvements (RS.IM): Organizational response activities are improved by incorporating lessons learned from current and previous detection/response activities
  RS.IM-1: Response plans incorporate lessons learned
  RS.IM-2: Response strategies are updated

Recover (RC) (6 Subcategories): Develop and implement appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired due to a cybersecurity incident. The Recover Function supports timely recovery to normal operations to reduce the impact from a cybersecurity incident. (NIST CSF Page 8)

Recovery Planning (RC.RP): Recovery processes and procedures are executed and maintained to ensure restoration of systems or assets affected by cybersecurity incidents.
  RC.RP-1: Recovery plan is executed during or after a cybersecurity incident

Improvements (RC.IM): Recovery planning and processes are improved by incorporating lessons learned into future activities.
  RC.IM-1: Recovery plans incorporate lessons learned
  RC.IM-2: Recovery strategies are updated

Communications (RC.CO): Restoration activities are coordinated with internal and external parties (e.g., coordinating centers, Internet Service Providers, owners of attacking systems, victims, other CSIRTs, and vendors).
  RC.CO-1: Public relations are managed
  RC.CO-2: Reputation is repaired after an incident
  RC.CO-3: Recovery activities are communicated to internal and external stakeholders as well as executive and management teams
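
The mapping above is intended for qualitative planning, but a defender could also track it in a lightweight, machine-readable form. The short Python sketch below is illustrative only: the selected subcategory identifiers, the status labels, and the report_gaps helper are hypothetical examples of how an organization might record its own assessment, not part of the NIST CSF or of the methodology presented in this paper.

# Minimal, hypothetical sketch: tracking coverage of deepfake-relevant
# NIST CSF v1.1 subcategories from this appendix. Status labels and the
# report_gaps() helper are illustrative assumptions, not CSF artifacts.

from typing import Dict

# Subcategory ID -> implementation status
# ("implemented", "partial", or "not_implemented")
csf_deepfake_controls: Dict[str, str] = {
    # Identify (ID)
    "ID.RA-3": "partial",          # internal and external threats documented
    "ID.RA-4": "not_implemented",  # business impacts and likelihoods identified
    # Protect (PR)
    "PR.AC-7": "partial",          # authentication commensurate with transaction risk
    "PR.AT-1": "implemented",      # all users informed and trained
    # Detect (DE)
    "DE.CM-3": "not_implemented",  # personnel activity monitored
    # Respond (RS)
    "RS.RP-1": "implemented",      # response plan executed during or after an incident
    # Recover (RC)
    "RC.CO-2": "partial",          # reputation repaired after an incident
}

def report_gaps(controls: Dict[str, str]) -> None:
    """Print the subcategories that are not yet fully implemented."""
    for subcategory, status in sorted(controls.items()):
        if status != "implemented":
            print(f"{subcategory}: {status}")

if __name__ == "__main__":
    report_gaps(csf_deepfake_controls)

A plain structure like this can be kept under version control alongside response and recovery plans and revisited during tabletop exercises, so that coverage of deepfake-relevant subcategories is reviewed as plans are updated (consistent with RS.IM-1 and RC.IM-1).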