
Journal of Strategic Studies

ISSN: 0140-2390 (Print) 1743-937X (Online) Journal homepage: https://www.tandfonline.com/loi/fjss20

A matter of time: On the transitory nature of cyberweapons

Max Smeets

To cite this article: Max Smeets (2018) A matter of time: On the transitory nature of
cyberweapons, Journal of Strategic Studies, 41:1-2, 6-32, DOI: 10.1080/01402390.2017.1288107

To link to this article: https://doi.org/10.1080/01402390.2017.1288107

Published online: 16 Feb 2017.


AMOS PERLMUTTER PRIZE ESSAY

A matter of time: On the transitory nature of cyberweapons
Max Smeets
Department of Politics and International Relations (DPIR), University of Oxford, Oxford,
United Kingdom

ABSTRACT
This article examines the transitory nature of cyberweapons. Shedding light on this
highly understudied facet is important both for grasping how cyberspace affects
international security and policymakers’ efforts to make accurate decisions regard-
ing the deployment of cyberweapons. First, laying out the life cycle of a cyberwea-
pon, I argue that these offensive capabilities are different both in ‘degree’ and in
‘kind’ from other weapons regarding their temporary ability to cause harm or
damage. Second, I develop six propositions which indicate that not only technical
features, inherent to the different types of cyber capabilities – that is, the type of
exploited vulnerability, access and payload – but also offender and defender
characteristics explain differences in transitoriness between cyberweapons.
Finally, drawing out the implications, I reveal that the transitory nature of cyber-
weapons benefits great powers, changes the incentive structure for offensive cyber
cooperation and induces a different funding structure for (military) cyber programs
compared with conventional weapon programs. I also note that the time-depen-
dent dynamic underlying cyberweapons potentially explains the limited deploy-
ment of cyberweapons compared to espionage capabilities.

KEYWORDS Cyberweapons; cyberspace; transitory nature; patching; Stuxnet; cybersecurity

The impressive growth of cyberspace has brought a new type of weapon: the
cyberweapon. A cyberweapon is a capability designed to access a
computer system or network in order to damage or harm living or material entities.1
Whereas conventional weapons are generally characterised by their ‘multiple-
use-ability’ or ‘permanent’ nature, cyberweapons are unique in that they are
‘transitory’ in nature, that is, they have a short-lived or temporary ability to
effectively cause harm or damage to living or material entities.2 The nuclear
bombs developed at the height of the Cold War could still obliterate cities in one
blow.3 Likewise, the Kalashnikovs mass-produced in the early 1950s could still kill

CONTACT Max Smeets Max.Smeets@Politics.ox.ac.uk


1
As cyberweapons contain non-physical elements, it makes more sense to talk about a cyberweapon as
a capability rather than a tool or instrument.
2
One should not confuse the concept with the notion that the effects of the weapon are potentially
temporary in nature.
3
Indeed, the most recently developed nuclear weapons of the United States are about 25 years old.
© 2017 Informa UK Limited, trading as Taylor & Francis Group
people. Even some of the earliest weapons used in conflict – including the
socket axe, the chariot, the spear and the sickle-sword – could be lethal today.
In contrast, the cyberweapons produced today are unlikely to have any impact
in a few years’ – or even less – time.
Although this dimension of cyberweapons has been overlooked for a
long time, there is a growing awareness of its consequences for interna-
tional security. Recent efforts have focused on how the transitory nature of
cyberweapons affects the incentive structure for deployment.4 Some scho-
lars have also paid attention to how the transitory nature of cyberweapons
changes the incentives for investing in these capabilities.5
Current research, however, fails to clarify what influences the temporary ability
of cyberweapons to cause harm or damage. The central objective of this research
is therefore to move towards a more well-considered understanding of the issue.
I aim to address the question: in what sense are cyberweapons transitory?
The article has three main motivations. First, I aim to enhance the con-
ceptual clarity of the cyber studies field. Just like mutual and shared under-
standings of values are considered to be the essential building blocks of any
society, so are mutual and shared understandings of concepts considered to
be the foundation of any academic discipline. Unpacking the concept of
transitoriness therefore involves more than mere logomachy: it permits
more effective knowledge accumulation and facilitates the security dialogue
between and across the academic communities undertaking cyber research,
establishing common ground for discussion among those with
disparate views.6 Second, scholars who have aimed to understand the
implications of the cyber danger for international society have repeatedly
focused on certain attributes of cyberspace as their starting point of analysis.
Topics discussed ad nauseam include the notion that cyberspace has radically
increased the speed, volume and range of communications of both state
and non-state actors, or that it obscures the identity and
location of actors, causing a problem of attribution. Although these works

4
According to Libicki, the transitory nature of cyberweapons leads to less trigger-happy actors: ‘like
surprise, it is best saved for when it is most needed’. Krepinevich directly opposes this view, arguing
that it creates a “use-it-or-lose-it dynamic” and might encourage a cyber power to launch an attack
before its advantage is lost. Axelrod and Iliev reconcile the contrasting views and argue that the
degree to which a cyberweapon incentivises a ‘use-it-or-lose-it dynamic’ or a ‘waiting-for-the-right-
moment dynamic’ depends on the type of capability and whether the stakes remain constant. See:
Martin C. Libicki, Conquest in Cyberspace: National Security and Information Warfare (Cambridge:
Cambridge University Press 2007), 87; Andrew Krepinevich, ‘Cyber Warfare: a “nuclear option”?’,
Center for Strategic and Budgetary Assessments, 2012, <http://www.csbaonline.org/wp-content/
uploads/2012/08/CSBA_Cyber_Warfare_For_Web_1.pdf>; Robert Axelrod and Rumen Iliev, ‘Timing
of cyber conflict’, PNAS, 111/4 (2014), 1298–1303.
5
According to Gartzke, it reduces the incentives to invest in ‘cyberwar assets’. Erik Gartzke, ‘The
Myth of Cyberwar’, International Security, 38/2 (2013), 41–73, 59–60. Also see: James A. Lewis,
‘Conflict and Negotiation in Cyberspace’, The Technology and Public Policy Program, 2013, <http://
csis.org/files/publication/130208_Lewis_ConflictCyberspace_Web.pdf>.
6
For a similar point see: David A. Baldwin, ‘The Concept of Security’, Review of International Studies 23
(1997), 5–26.
have (generally) provided enlightening accounts of the makeup and role of
cyberspace, we should not continue to focus perpetually on the same
narrow set of questions. Too many fundamental issues within this inchoate
field of studies are still assumed or overlooked. Leaving out important
factors in our understanding of the cyber issue makes it more likely that the
causal influence of dimensions already identified is under- or
overestimated.7
point of this initiative as it is relevant for a number of fundamental debates
within the field of International Relations.8 Third, states are gradually coming
to terms with the cyber perils and are establishing guidelines, policies and
institutions to deal with this (potentially) transformative technology. This
research aims to ensure policymakers can enhance the accuracy of decisions
regarding the deployment of cyberweapons. It also adds to our understand-
ing to what degree cyberweapons require any extraordinary analysis or
authority to which non-cyberspace military planners are not already accustomed.
A better analytical understanding of the transitory nature of cyberweapons
can lead to better decision-making on deployment. And knowing why
cyberweapons are transitory also makes it possible to find ways to
counteract the process, increasing resource efficiency.
This article consists of three parts. Part I clarifies the meaning of the
concept of transitoriness, and subsequently assesses to what degree the
transitoriness of cyberweapons is a unique phenomenon. I argue that in
principle any weapon can be put on a spectrum ranging from highly
permanent to highly transitory as the effectiveness of a certain capability
to cause harm or damage inherently declines over time as the defence
catches up with the offence. It is the malleability of cyberspace, and the
‘window of exposure’ it creates, that causes cyberweapons to be on the
far end of the transitoriness spectrum – creating a ‘difference of degree’.
Yet, the security patching process of software vulnerabilities indicates that
these cyberweapons in particular are also ‘different in kind’ when it comes to
their short-lived ability to cause harm or damage. The corrective process which
takes place after a software vulnerability is exploited prevents successful
exploitation not only against one system but against any system whose administrator

7
In statistics, this would be called omitted-variable bias.
8
The work of the National Academy of Sciences is a good example of this trend. It states that cyberweapons
have three characteristics that differentiate them from traditional kinetic weapons. First, ‘they are
easy to use with high degrees of anonymity and with plausible deniability, making them well suited
for covert operations and for instigating conflict between other parties’. Second, they ‘are more
uncertain in the outcomes they produce, making it difficult to estimate deliberate and collateral
damage’. And, third, they ‘involve a much larger range of options and possible outcomes, and may
operate on time scales ranging from tenths of a second to years, and at spatial scales anywhere from
“concentrated in a facility next door” to “globally dispersed”’. The study leaves out any discussion of the
notion of transitoriness. See: William A. Owens, Kenneth W. Dam and Herbert S. Lin (eds.), ‘Excerpts
from Technology, Policy, Law and Ethics Regarding U.S. Acquisition and Use of Cyberattack
Capabilities’, National Research Council, 2009; S-1, Section 1.4 and 2.1.
uploads the patch. The fact that advances in the cyber defence of one (vendor)
can be relatively effortlessly adopted by others creates a defence for all.
Part II considers the follow up question, which is: why is there a difference in
transitoriness between cyberweapons? The short-lived nature of cyberweapons
to cause harm or damage is influenced by a number of technical properties.
First, this research finds that cyberweapons exploiting software vulnerabilities
are more transitory relative to capabilities exploiting hardware and network
vulnerabilities. Second, I aver that cyberweapons exploiting closed-access
systems are more likely to be transitory than cyberweapons exploiting open-access
systems. Third, I aver that a cyberweapon causing a high level of visible harm
and/or damage is more likely to be transitory. Yet, I find that the transitoriness
of cyberweapons has a political dimension as well. This research finds that more
capable offensive actors are able to significantly reduce the short-lived nature
of cyberweapons. Certain offensive actors have a wider variety of zero-day
exploits at their disposal, enabling more targeted attacks and thus reducing the
chances of discovery. There are also asymmetries with respect to the ability to
test and retest cyberweapons before actual deployment. The time-consuming
process of developing cyberweapons leads to constant attempts to reuse
computer code designed to exploit zero-day vulnerabilities, even though this
significantly decreases the chances of successful penetration of the targeted
system – particularly when a patch is already made available. Offensive actors
will also have to make trade-offs in the deployment of a cyberweapon. The
principal consideration concerns the number of targets, demanding a
balancing act between potential short-term gains and long(er)-term effectiveness.
Finally, cyber defence, inherently unable to escape the law of diminishing
marginal returns, also affects the transitoriness of a deployed cyberweapon, as it can cause
delays in the discovery, disclosure and patching of a vulnerability.
Part III concludes and draws out the implications of thinking about
cyberweapons in the way proposed in this article. It reveals that the transi-
tory nature of cyberweapons benefits great powers, changes the incentive
structure for offensive cyber cooperation and induces a different financial
funding structure for cyber programs (compared with conventional weapon
programs). It also provides a potential reason for the limited deployment of
cyberweapons compared to espionage tools.

Part I: Is it old wine in a new bottle?


Scholars do not talk about cyberweapons being ‘transitory’, but use other
terms instead to describe the time-dependent dynamics underlying cyber-
weapons. Two concepts have been used most prominently: ‘single use’ and
‘perishability’. Both concepts, however, capture the scientific properties
and practices of cyberweapons poorly. The concept of ‘single use’ points out
that, with a cyberattack, once the zero-day vulnerability – that is, a
vulnerability undisclosed to the public – has been exploited and becomes
known to the public, the weapon loses its utility (hence, the term single use).9
In reality, however, after a zero-day vulnerability is exploited, it takes on
average 312 days before patches are installed and vulnerabilities closed.10
This means that the weapon can generally still be used after the first strike,
only the likelihood of successfully penetrating the target system significantly
decreases.11 In addition, the concept of ‘perishability’ borrows its meaning from
the marketing literature.12 In this case, perishability refers to services that
cannot be produced and stockpiled (inventoried) before consumption: they
exist only at the time of their production.13 Yet, a cyberweapon can be
‘stored’.14 After a zero-day vulnerability becomes known to a cyber-power
and a code is developed, it can be said to be ‘on the shelf’.
Robert Axelrod and Rumen Iliev use two concepts to gauge the time-depen-
dent dynamic of cyberweapons – these are stealth and persistence.15 Stealth
concerns the probability that if you use a cyberweapon (or, a ‘resource’ in their
more general terms) in t = 0, it will still be usable in the next time period, t = 1.
Persistence denotes the probability that if you do not use a cyberweapon in t = 0, it
will still be usable in the next time period, t = 1. The distinction is useful insofar
as the authors correctly appreciate that there are different time dynamics at play
before and after use of the weapon (see discussion below on the window of
vulnerability). The concept of ‘stealth’, however, requires reassessment – or at
least a note of caution. In normal discussions on low observable technology,
stealth refers to the ability to operate without discovery. Yet, in the context of the
time dependence of cyberweapons it should refer not only to the time a weapon operates
without being discovered but also to the time it takes to develop a patch to close
the exploited vulnerability.
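Axelrod and Iliev's two probabilities lend themselves to a back-of-the-envelope calculation. The sketch below is a minimal illustration, not their model: it assumes a constant per-period survival probability (the numbers are invented) and computes the expected number of periods a capability remains usable.

```python
# Illustrative sketch of the stealth/persistence idea: a capability survives
# each period with a constant probability (stealth if used, persistence if
# held back), so its expected remaining lifetime follows a geometric
# distribution. The probabilities below are invented for illustration.

def expected_remaining_periods(survival_prob: float) -> float:
    """Expected number of further periods the capability stays usable,
    given a constant per-period survival probability p: the sum over
    k >= 1 of k * p**k * (1 - p), which simplifies to p / (1 - p)."""
    if not 0.0 <= survival_prob < 1.0:
        raise ValueError("survival probability must be in [0, 1)")
    return survival_prob / (1.0 - survival_prob)

stealth = 0.5       # survival probability if the weapon is used
persistence = 0.9   # survival probability if it stays 'on the shelf'

# Using the weapon 'burns' it far faster than keeping it shelved.
print(expected_remaining_periods(stealth))
print(expected_remaining_periods(persistence))
```

Under these invented numbers, a shelved weapon is expected to remain usable roughly nine times longer than one that has already been used, which is the asymmetry the two concepts are meant to capture.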
The term ‘transitoriness’ is used here to capture the overall essence of
this attribute of cyberweapons. Transitory derives from the Latin term
transitōrius, which means having or allowing a passageway.16 In current
meaning, the adjective refers to ‘[e]xisting or lasting only a short time; short-

9
Lewis, ‘Conflict and Negotiation in Cyberspace’; Krepinevich, Cyber Warfare
10
Leyla Bilge and Tudor Dumitras, ‘Before we knew it: an empirical study of zero-day attacks in the real
world’, CCS’12, Oct. 2012; Ratinder Kaur and Maninder Singh, ‘A Survey on Zero-Day Polymorphic
Worm Detection Techniques’, IEEE Communications surveys & tutorials 16/3 (2014).
11
A more detailed discussion on this issue will be provided below.
12
Gartzke, ‘The Myth of Cyberwar’
13
Andrew Sweeting, ‘Equilibrium Price Dynamics in Perishable Goods Markets: The Case of Secondary
Markets for Major League Baseball Tickets’, NBER, Working Paper 14505, (2008)
14
As a former U.S. executive at a defence contractor said to a reporter from Reuters: ‘My job was to
have 25 zero-days on a USB stick, ready to go’. See: Joseph Menn, ‘Special Report: U.S. Cyber war
Strategy Fear of Blowback’, Reuters, May 2013, <http://www.reuters.com/article/2013/05/10/us-usa-
cyberweapons-specialreport-idUSBRE9490EL20130510>.
15
Axelrod and Iliev, ‘Timing of Cyber Conflict’; it follows the model developed in: Robert Axelrod, ‘The
Rational Timing of Surprise’, World Politics 31/2 (1979), 228–246.
16
Collins English Dictionary (online), ‘transitory’, <http://www.collinsdictionary.com/dictionary/
English>.
lived or temporary’.17 In line with its earlier meaning it can be said that the
term in this context underlines the specific period of transition in which an
adversary can successfully get through the defence passage of a target. The
transitoriness of a weapon refers to the short-lived or temporary ability to
effectively cause harm or damage. Hence, in relation to cyberweapons it
refers to the temporary ability to access a computer system or network to
cause harm or damage to living and material entities.
It seems there are grounds to question the extent to which the transitoriness
of cyberweapons is actually a novel phenomenon. After all, we do not wage war
anymore with the same weapons used in ancient times. Indeed, the chariots
have been replaced (with many intermediate steps) by tanks, aircraft carriers and
fighter planes. And the bows and arrows have been replaced by highly effective
handguns, squad automatic weapons, rocket launchers and sniper rifles.
This observation on the ‘evolution’ in the use of arms in warfare is potentially
deceiving. The reason most weapons are replaced is that a more
effective weapon was developed. Due to technological advancements, the new
weapon might be easier to use, more cost-efficient or able to cause more harm or
damage.18 Another reason might be that the weapon’s ability to
cause harm or damage has declined to an insignificant degree owing to new defence
mechanisms put in place by the target. Although the latter occurs less
frequently, it is this aspect to which the transitoriness of weapons refers.
That said, the difference between cyberweapons and conventional
weapons seems mostly one of degree rather than kind, as a time-dependent
dynamic underlies every weapon. Indeed, Bill Clinton
remarked in San Francisco in 1999: ‘the whole history of conflict can be seen
in part as the race of defensive measures to catch up with offensive
capabilities. That is what, we’re doing in dealing with the computer chal-
lenges today [. . .]. It is very important that the American people, without
panic, be serious and deliberate about them, because it is the kind of
challenge that we have faced repeatedly’.19
Theoretically, any weapon can be put on a spectrum ranging from highly
permanent to highly transitory as the effectiveness of a certain tool to cause
harm or damage inherently reduces over time. An example of a highly perma-
nent weapon is the knife – a tool which any member of the Special Forces still
wears today for the challenges of the battlefield. For nuclear weapons, states
have established costly programs to maintain a reliable capability.20 In that sense,

17
Random House Webster’s Unabridged Dictionary (online), ‘transitory’, <http://dictionary.reference.
com/browse/transitory>.
18
John Keegan, A History of Warfare (Random House: London 1994)
19
Philip E. Auerswald, Christian Duttweiler, and John Garofano, Clinton’s Foreign Policy: A Documentary
Record (The Hague: Kluwer Law International 2003), 73.
20
The United States Department of Energy, for example, has set up the ‘Stockpile Stewardship and
Management Program’ with the aim of maintaining a reliable stockpile using various simulations and
applications from the scientific community to deal with the particular issue of an aging capability.
cyberweapons are exceptional in that they belong to the most transitory
group, as their ability to cause harm effectively declines relatively
quickly. Cyberweapons from this perspective are unique merely in that there is
the potential of a quick adaptation of defence measures in cyberspace rendering
the specific weapon ineffective.21 The main question we thus have to address is:
What is the main reason cyberweapons are so short-lived?
The basic underlying cause of the rapid offence–defence cycle of cyberweapons
is that cyberspace is more malleable than the other domains. The term is often considered to be
synonymous with the notion that cyberspace is ‘man-made’, mentioned in
numerous cyber defence strategies and perpetuated by numerous scholars.22
Yet, the two concepts should not be conflated as one is meaningful for our
understanding of cyberspace and the transitoriness of cyberweapons whereas
the other is not. As General Michael Hayden, former director of the National
Security Agency (NSA) and the Central Intelligence Agency (CIA), observes: ‘the
other domains are natural, created by God and this one is the creation of man’.23
The problem with the ‘man-made’ notion is that other domains of warfare are also
to some degree produced, formed or made by humans – like tunnels, roads and
train tracks in the domain of land – and cyberspace does have natural compo-
nents too – like its heavy reliance on electromagnetic waves.24 Instead, what is
important to stress is that the man-made constructions in other domains are
more difficult for their owners to change (to enhance defence systems) compared to
cyberspace.25 Indeed, at least technically, cyberspace can more easily be changed
to reduce the effects of a certain cyberweapon. Hence, as Libicki indicates, ‘the
task in defending the network is [therefore] not so much to manoeuvre better or
apply more firepower in cyberspace but to change the particular features of one’s
own portion of cyberspace itself so that it is less tolerant of attack’.26

21
The literature on International Relations and military history contains numerous references to the
offensive or defensive balance of military technology. Yet, these discussions have focused on the
degree to which the offense has an advantage over the defence – and the strategic implications of it.
Scholars rarely focus on how specific defence measures catch up on offensive measures. For an
overview see: Jack S. Levy, ‘The Offensive/Defensive Balance of Military Technology: A Theoretical and
Historical Analysis’, International Studies Quarterly 28 (1984), 219–238.
22
According to Shachtman and Singer, ‘[c]yberspace is a man-made domain of technological commerce
and communication, not a geographical chessboard of competing alliances’. Noah Shachtman and
Peter W. Singer, ‘The Wrong War: The Insistence on Applying Cold War Metaphors to Cybersecurity Is
Misplaced and Counterproductive’, Brookings Institute, Aug. 2011, <http://www.brookings.edu/
research/articles/2011/08/15-cybersecurity-singer-shachtman>. For cyber strategies, see for example:
Presidency of the Council of Ministers Italy, ‘National Strategic Framework for the Security of
Cyberspace’, Dec. 2013, <http://www.sicurezzanazionale.gov.it/sisr.nsf/wp-content/uploads/2014/02/
italian-national-strategic-framework-for-cyberspace-security.pdf>.
23
Michael V. Hayden, ‘The Future of Things Cyber’, Strategic Studies Quarterly 5/1 (2011)
24
See Dorothy E. Denning, ‘Rethinking the Cyber Domain and Deterrence’, JFQ 77 (2015).
25
As Robert Bartlett states, ‘[a]nyone who understands how to read and write code is capable of
rewriting the instructions that define the possible’. Robert Bartlett, ‘Developments in the Law-The
Law of Cyberspace’, Harvard Law Review 112/1574 (1999), 1635.
26
Martin C. Libicki, ‘Cyberspace Is Not a Warfighting Domain’, A Journal of Law and Policy for the
Information Society 8/2 (2012), 326; As the discussion below on the window of vulnerability indicates,
these ‘corrections’ can take place both before and after a cyberattack.
The malleability of cyberspace offers, in the words of Bruce Schneier, a unique
‘window of exposure’ for cyberattacks to be effective.27 Schneier’s conceptual
framework offers a useful starting point for understanding the life cycle of a
cyberweapon’s ability to effectively cause harm or damage to a living or material
entity.28 The discussion below therefore largely draws upon Schneier’s original
conceptualisation. For analytical clarity, however, the framework provided here is
slightly more specific – adding a number of additional events to the life cycle. I
also shy away from using Schneier’s term ‘phases’, as the events described
in the cycle do not always have to occur in this specific order.29
First, t_vulnerability is the date the vulnerability is introduced, meaning that a
bug is introduced in a program’s source code, design or in the operating
systems used by such programs, which is subsequently released and deployed.
Robert Dacey estimates that there are as many as 20 flaws per thousand
source lines of code,30 making vulnerability introduction basically a
daily occurrence. Second, t_discovery is the earliest date of exploit discovery by
state actors, actors in the underground economy, hacktivists or other
actors.31 Depending on who discovered the exploit, news about it starts to
spread (or it might not be disclosed at all). Third, t_exploit is the first time an
exploit for the vulnerability is created which can be used to conduct
cyberattacks. Fourth, t_awareness is the earliest date that the vendor becomes
aware of the vulnerability. The vendor can learn about the vulnerability
either by discovering it through testing or through third-party reporting.
Depending on the vendor’s risk assessment, it assigns a priority for devel-
oping a patch. Fifth, t_disclosure is the date that information on the
vulnerability is reported on a public channel, published by a trusted and
(independent) author.32 Unsurprisingly, research has provided empirical

27
Bruce Schneier, ‘Crypto-Gram’, Sep. 2000, <https://www.schneier.com/crypto-gram/archives/2000/
0915.html>.
28
Others have referred to the ‘window of exposure’ as the ‘lifecycle of a vulnerability’. I use these terms
interchangeably (including the term ‘life cycle of a cyberweapon’s effectiveness’ as well). Stefan
Frei, Bernhard Tellenbach, and Bernhard Plattner, ‘0-Day Patch: Exposing Vendors (In)security
Performance’, BlackHat Europe, 2008, <https://www.blackhat.com/presentations/bh-europe-08/Frei/
Whitepaper/bh-eu-08-frei-WP.pdf>; Adrian Pauna and Konstantinos Moulinos, ‘Window of exposure. .
. a real problem for SCADA systems? Recommendations for Europe on SCADA patching’, European
Union Agency for Network and Information Security Publication, Dec. 2013.
29
Yet, some ordering is determined. The introduction of the vulnerability always precedes (or coincides
with) the time of its exploitation. And the release of a patch can only occur after the vendor has become
aware of the vulnerability. For a more detailed discussion see the recent study of: Antonio Nappa,
Richard Johnson, Leyla Bilge, Juan Caballero, Tudor Dumitras, ‘The Attack of the Clones: A Study of
the Impact of Shared Code on Vulnerability Patching’, IEEE Symposium on Security and Privacy, 2015.
30
Robert F. Dacey, ‘Information security progress made, but challenges remain to protect federal
systems and the nation’s critical infrastructures’, Government Accountability Office, 2003, <http://
world.std.com/~goldberg/daceysecurity.pdf>.
31
Sam Ransbotham, Sabyasachi Mitra, and Jon Ramsey, ‘Are Markets for Vulnerabilities Effective?’ ICIS,
2008, <http://aisel.aisnet.org/cgi/viewcontent.cgi?article=1192&context=icis2008>.
32
Frei, Tellenbach, and Plattner note that the disclosure of a vulnerability has been defined in a number
of ways, ranging from ‘made public to wider audience’, ‘made public through forums or by vendor’,
‘made public by anyone before vendor releases a patch’. See Frei, Tellenbach, and Plattner. ‘0-Day
Patch’.
evidence that a vulnerability disclosure increases the number of attacks per
host.33 Last, t_patch is the first date that a patch is released by the vendor or
originator to fix, patch or workaround the exploitation of the vulnerability.
Fixes and patches can range from being very simple – such as a straightfor-
ward instruction for certain configuration changes – to highly complex –
such as the provision of signatures for intrusion prevention systems or
antivirus tools.34 Timing the application of security patches is a difficult
balancing act. As security researchers indicate, ‘[p]atch too early, [without
doing local patch testing in aid of immediate deployment], and one might
be applying a broken patch that will actually cripple the system’s
functionality.35 Patch too late, and one is at risk from penetration by an
attacker exploiting a hole that is publicly known’.36 According to Shipley,
‘the time window between identification of a vulnerability and creation of
an exploit has shrunk dramatically over the years’.37
From this point onwards, the hosts that have applied the patch are no
longer susceptible to the exploit.38 Similar to the way normal technology
spreads through society, we can distinguish between different types of patch
adopters. There are ‘early adopters’: those hosts which apply the patch
immediately and are no longer susceptible to the exploit. Next come
the ‘early majority’ and ‘late majority’ of hosts, which have an average adoption
rate. And, finally, there are the ‘laggards’, last to adopt the patch. Laggards
typically do not take security seriously.39
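The adopter categories can be caricatured with a one-parameter diffusion model. This is a deliberate simplification under an invented uniform daily adoption rate, not an empirical claim: real populations split into the uneven groups described above.

```python
# Deliberately simplified patch-diffusion sketch: assume every unpatched
# host applies the patch with the same fixed probability each day, so the
# susceptible population decays geometrically after the patch release.
# The host count and daily rate below are invented for illustration.

def expected_unpatched(total_hosts: int, daily_rate: float, days: int) -> float:
    """Expected number of hosts still unpatched `days` after release."""
    return total_hosts * (1.0 - daily_rate) ** days

hosts = 100_000
rate = 0.02  # 2% of the remaining unpatched hosts patch per day

after_month = expected_unpatched(hosts, rate, 30)
after_average = expected_unpatched(hosts, rate, 312)  # the 312-day average above

print(round(after_month))    # still tens of thousands of exploitable hosts
print(round(after_average))  # a small but non-zero 'laggard' population
```

Even this crude geometric decay shows why the tail matters: under the invented rate, a sizeable population remains exploitable for months after the fix exists, which is precisely the tendency fast-spreading worms exploit.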
Worms exploit the tendency that patches are generally not immediately
applied – that is, that there are so few early adopters. The speedily written
Witty worm, with a wiper payload, serves as an excellent illustration. eEye
Digital Security, acquired in 2012 by BeyondTrust, discovered a vulnerability
in software products made by Internet Security Systems (ISS) on 8 March
2004. The following day a patch version was released by ISS. On the 18

33 Ashish Arora, Ramayya Krishnan, Anand Nandkumar, Rahul Telang and Yubao Yang, 'Impact of Vulnerability Disclosure and Patch Availability – An Empirical Analysis', Workshop on the Economics of Information Security, 2004; Hasan Cavusoglu, Huseyin Cavusoglu, and Srinivasan Raghunathan, 'Efficiency of Vulnerability Disclosure Mechanisms to Disseminate Vulnerability Knowledge', IEEE Transactions on Software Engineering 33/3 (2007), 171–185.
34 Frei and Plattner, '0-Day Patch'.
35 Patches must be tested before being applied to the production environment, to make sure that a patch works properly and does not conflict with other applications in the system. Hasan Cavusoglu, Huseyin Cavusoglu, and Jun Zhang, 'Security Patch Management: Share the Burden or Share the Damage?', Management Science 54/4 (2008), 657–670.
36 Steve Beattie, Seth Arnold, Crispin Cowan, Perry Wagle, and Chris Wright, 'Timing the Application of Security Patches for Optimal Uptime', LISA XVI, Nov. 2002; also see: Ross Anderson and Tyler Moore, 'The Economics of Information Security', Science 314/5799 (2006), 610–613.
37 Greg Shipley, 'Painless (well, almost) patch management procedures', Network Computing 2004, <http://www.networkcomputing.com/showitem.jhtml?docid=1506f1>.
38 In an average week, vendors and security organisations announce around 150 vulnerabilities along with information on how to fix them. Cavusoglu, Cavusoglu, and Zhang, 'Security Patch Management'.
39 Gary Armstrong, Stewart Adam, Sara Denize and Philip Kotler, Principles of Marketing (Melbourne: Pearson 2015).
THE JOURNAL OF STRATEGIC STUDIES 15

March, eEye Digital Security published a detailed description of the flaws in
the software. Just 36 hours after public disclosure, on the evening of 19
March, the network attack worm Witty was released into the wild.40
Figure 1 summarises the life cycle of a cyberweapon.41 An attack taking
place between t_exploit and t_disclosure is called a zero-day attack.42 Notice that the
events shown in the figure each signify a specific point in time (i.e. the earliest date
of release, exploitation, etc.). 't_patch', however, is the exception, as the adoption
of the patch takes place over a (longer) period of time. The dotted line
therefore attempts to indicate that follow-on attacks could continue
depending on how quickly a certain host patches the exploit.
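The ordering of events in the life cycle can be sketched as follows; the dates are hypothetical, and only their sequence matters:

```python
# Sketch of the life-cycle events discussed above, with hypothetical dates.
from datetime import date

events = {
    "t_vulnerability": date(2020, 1, 1),   # flaw introduced into the code
    "t_discovery":     date(2020, 3, 1),   # attacker discovers the flaw
    "t_exploit":       date(2020, 3, 10),  # working exploit exists
    "t_disclosure":    date(2020, 6, 1),   # vulnerability publicly disclosed
    "t_patch":         date(2020, 6, 15),  # patch released (adoption drags on)
}

def classify_attack(t_attack):
    """An attack between t_exploit and t_disclosure is a zero-day attack."""
    if t_attack < events["t_exploit"]:
        return "no working exploit yet"
    if t_attack < events["t_disclosure"]:
        return "zero-day attack"
    if t_attack < events["t_patch"]:
        return "attack during the disclosure-to-patch window"
    return "follow-on attack (only unpatched hosts exposed)"

print(classify_attack(date(2020, 4, 1)))   # zero-day attack
```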
The life cycle described above, following Schneier's earlier conception, has
been specific to a cyberweapon exploiting software vulnerabilities. In principle,
however, the framework can be applied to hardware and network vulnerabilities
as well, with a few qualifications. In the case of hardware vulnerabilities, the first
three events – t_vulnerability, t_discovery and t_exploit – are collapsed when a chip is
purposefully manipulated to introduce a backdoor or kill switch.43 The patching
process is also different, as the vulnerability, hardwired into a certain device, is
more difficult to eradicate. In the case of network and protocol vulnerabilities,
the general framework can be applied – depending on both the function and
domain of use, however, patching can be highly complex.
Also, the security patching process of software vulnerabilities indicates that
these cyberweapons in particular are qualitatively different from conventional
weapons in terms of their short-lived ability to cause harm or damage.
Though patches can be distributed in a number of ways,44 the corrective
process which takes place after a software vulnerability is exploited does not
only prevent successful exploitation of one system but of any system whose
administrator applies the patch. Hence, the threat capability is mitigated for

40
The worm was also special in that it was first time a worm was released in the wild through a bot
network of about 100 infected machines. It meant that every available host was very quickly infected.
Bruce Schneier, ‘The Witty Worm a New Chapter in Malware’, Computer World, Jun. 2014, <http://
www.computerworld.com/article/2565119/malware-vulnerabilities/the-witty-worm–a-new-chapter-
in-malware.html>.
41
Most research assumes a linear model for the cyberweapon life cycle, which is critiqued in more
recent scholarship. As the goal of this article is not to estimate the exploitation of a vulnerability or
the average deployment of a patch, I do not make any unnecessary assumptions about this. William
A. Arbaugh, William L. Fithen, and John McHugh, ‘Windows of vulnerability: A case study analysis’,
IEEE Computer 33/12 (2000); Hamed Okhravi and David Nicol, ‘Evaluation of patch management
strategies’, International Journal of Computational Intelligence: Theory and Practice 3/2(2008), 109–117;
Terry Ramos, ‘The Laws of Vulnerabilities’, RSA Conference, Feb. 2006.
42
Bilge and Dumitras, ‘Before We Knew It’.
43
The three events are not collapsed if the chipmaker itself has introduced the backdoor. Although still
unconfirmed, this was likely the case with backdoor in a computer chip used in military systems and
aircraft, discovered by two experts from Cambridge University. See: Charles Arthur, ‘Cyberattack
concerns raised over Boeing 787 chip’s “back door”’, The Guardian, May 2012, <http://www.theguar
dian.com/technology/2012/may/29/cyber-attack-concerns-boeing-chip>.
44
Package management systems can offer various degrees of patch automation. Completely automatic
updates are still rife with problems, so are not widely adopted. Cavusoglu, Cavusoglu and Zhang,
‘Security Patch Management’.
16 M. SMEETS

Figure 1. Life cycle of a cyberweapon’s effectiveness.

all targets; if a cyberweapon exploiting a certain vulnerability is used against
one target, it loses its effectiveness against other targets. In that respect, a
cyberattack against one is perhaps not an attack against all, but a cyberattack
against one does create a cyber-defence for all. This is fundamentally different
from conventional weapons, for which a similar dynamic does not exist.
If a knight in the Middle Ages built a strong castle with high curtain walls,
ramparts, machicolations, flanking towers and special gateway defences, it
might offer his people a safe retreat against invasions. Yet it does not
mean that all other nobility in the region (and in the world) could subsequently
'upgrade' their fortifications to a similar standard without much effort.
Figure 1 indicates that the life cycle of vulnerabilities is subject to three delays:
(i) the awareness delay, (ii) the patching delay and (iii) the adaptation delay.45 In
these three periods, it is expected that different actors or organisations are targeted.
In the first period, before there is any awareness of the existence of the exploitable
vulnerability, the adversary carefully chooses a prime target in order to maximise
the gain of its developed cyberweapon. In the second period, when the exploit
becomes known to the public, a competitive, free-for-all situation arises involving
various participants, in which the selectivity of targets is reduced (as long as the
attack offers favourable gains to the attacker). In the third period, in which certain
actors have failed to adopt new security measures, a situation of 'grab what you
can grab' emerges in which 'laggards' are attacked as the last vulnerable targets.
Finally, the patching dynamic discussed in the figure indicates that the type
of decay function underlying cyberweapons differs from that of conventional
weapons as well. Conventional weapons' ageing is generally modelled as a gradual
(log-linear) deterioration. This type of function, however, does not hold for
cyberweapons. Instead, the ability to cause harm or damage remains constant
for a certain period of time, but rapidly declines at t_awareness.46
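The contrast between the two decay functions can be sketched with a toy model; the functional forms and parameters are illustrative assumptions rather than empirical estimates:

```python
# Toy decay model: gradual deterioration for conventional weapons versus
# a constant-then-collapse profile for cyberweapons.  Decay rates and
# the awareness date are illustrative assumptions.
import math

def conventional_effectiveness(t, decay_rate=0.05):
    """Gradual (log-linear) deterioration with age."""
    return math.exp(-decay_rate * t)

def cyberweapon_effectiveness(t, t_awareness=10, post_decay=0.9):
    """Roughly constant until t_awareness, then a rapid collapse."""
    if t < t_awareness:
        return 1.0
    return math.exp(-post_decay * (t - t_awareness))

# Before awareness the cyberweapon is fully effective while the
# conventional weapon has already aged:
print(round(cyberweapon_effectiveness(5), 2))    # 1.0
print(round(conventional_effectiveness(5), 2))   # 0.78
# Shortly after awareness the relationship reverses sharply:
print(round(cyberweapon_effectiveness(15), 2))   # 0.01
print(round(conventional_effectiveness(15), 2))  # 0.47
```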

45 For a more detailed discussion of the degree to which the time between these different events determines the security risk exposure, see: Frei and Plattner, '0-Day Patch'.
46 Hence, I would argue that a better way to model the transitory nature of a cyberweapon is similar to the Black-Scholes model of options pricing in finance. A cyberweapon is analogous to what is called an 'American option'. The value of usage could be modelled as a 'Brownian Motion' with random crashes representing the use of the weapon by others.

Part II: The determinants of transitoriness

Having identified what makes cyberweapons strongly transitory, we can now
delve into the second-order question: what makes one cyberweapon more
transitory than another? In other words, what increases the likelihood of delays
in the (i) awareness, (ii) patching and/or (iii) adaptation of a vulnerability?
Addressing this question requires first of all a diagnosis of the technical
properties of cyberweapons. In economics, there is something of a cottage
industry of research identifying the permanent and transitory components
of market indicators such as stock prices,47 macroeconomic fluctuations48 or
(even) overqualification.49 In the same way that these studies look at the
persistence of a certain indicator, we can derive three propositions on the
permanent and transitory technical components of cyberweapons.50
Second, an analysis of how actors' actions affect the transitory nature of
cyberweapons is required. We need to untangle to what degree the transitory
nature of cyberweapons is first and foremost a technical attribute or whether it
has a political dimension as well. International Relations scholars generally
think of technology as a material, apolitical and exogenous influence on inter-
national society. Yet, as constructivist approaches to international relations
have pointed out, technologies have both social origins and social effects,
and should therefore be endogenised. The practices of actors might affect the
transitoriness of cyberweapons as well.51 To elucidate which practices affect
the overall dynamics of cyberweapons, the last three propositions therefore
pertain to offender and defender characteristics.

Proposition 1: Cyberweapons exploiting software vulnerabilities are more
transitory than capacities exploiting hardware and network vulnerabilities.
47 John H. Cochrane, 'Permanent and Transitory Components of GNP and Stock Prices', The Quarterly Journal of Economics 109/1 (1994), 241–265.
48 John Y. Campbell and N. Gregory Mankiw, 'Permanent and Transitory Components in Macroeconomic Fluctuations', NBER Working Paper 2169 (1987).
49 Christa Frei and Alfonso Sousa-Poza, 'Overqualification: Permanent or Transitory?', Applied Economics 44/14 (2012).
50 In developing these propositions I follow Lin in asserting that a successful cyberattack requires three elements: (a) a vulnerability, (b) access to the vulnerability (i.e. an access vector) and (c) a payload to be executed (i.e. malicious code). Herr's application has usefully clarified that the three conditions for cyberattacks can be successfully translated into 'cyberweapons', using a variety of examples. Herbert S. Lin, 'Escalation Dynamics and Conflict Termination in Cyberspace', Strategic Studies Quarterly 6/3 (2012), 46–70; Herbert S. Lin, 'Offensive Cyber Operations and the Use of Force', Journal of National Security Law and Policy 4/63 (2010), 63–86; Owens, Dam, and Lin, 'Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities'; see: Trey Herr, 'PrEP: A Framework for Malware & Cyber Weapons', Cyber Security and Research Institute (2014), 2.
51 Geoffrey L. Herrera, Technology and International Transformation: The Railroad, the Atom Bomb, and the Politics of Technological Change (Albany: State University of New York Press 2006).

In the most basic terms, a vulnerability is a specific weakness in a computer
system or network that an attacker can use to compromise it. Jang-
Jaccard and Nepal distinguish between three types of vulnerabilities: (i) hard-
ware, (ii) software and (iii) network infrastructure and protocol vulnerabilities.52
First, a hardware-based cyberweapon alters the physical elements that comprise
a computer system and/or network. Hardware-based cyberweapons typically
derive from the inauthentic or illegal clones of hardware which can
be found on the IT market.53 Second, a software-based cyberweapon exploits a
certain weakness or defect in the code of a computer program.54 Currently, most
cyberweapons utilise vulnerabilities in application or system software.55 Third, a
network infrastructure and protocol-based cyberweapon exploits vulnerabilities
in the network infrastructure and its protocols. A network protocol is a set of
rules and conventions that governs the communication between network
devices.56 After all, computers can only communicate with each other if they
speak the same language.57 There are a variety of layers of network protocols.
The most frequent network attacks occur by exploiting the limitations of
commonly used network protocols such as the Internet Protocol, the
Transmission Control Protocol or the Domain Name System (DNS).58
The proposition is based on the notion that, ceteris paribus, software vulner-
abilities have a higher chance of being detected (awareness delay), and that
patches for them are easier for the vendor to produce (patching delay) and for
the target to adopt (adaptation delay). On the awareness delay: for software-level
attacks, many security patches, intrusion detection tools and antivirus scanners
exist to detect malicious attacks periodically. By contrast, many hardware-based
attacks are able to escape such detection, considering that only few hardware
detection tools exist. As Adee writes, '[a]lthough commercial chip makers
routinely and exhaustively test chips with hundreds of millions of logic gates,
they can't afford to

52 Julian Jang-Jaccard and Surya Nepal, 'A Survey of Emerging Threats in Cybersecurity', Journal of Computer and System Sciences 80/5 (2014), 973–993.
53 Ramesh Karri, Jeyavijayan Rajendran, Kurt Rosenfeld, Mark Tehranipoor, 'Trustworthy Hardware: Identifying and Classifying Hardware Trojans', Computer 43/10 (2010), 39–46; also, as the authors indicate, IT companies often buy untrusted hardware from websites or resellers which may contain malicious hardware-based Trojans.
54 Definition adopted from: Jaziar Radianti and Jose J. Gonzalez, 'Understanding Hidden Information Security Threats: The Vulnerability Black Market', Proceedings of the 40th Hawaii International Conference on System Sciences, 2007.
55 Most common software vulnerabilities happen as a result of exploiting software bugs in (i) the memory, (ii) user input validation, (iii) race conditions and (iv) user access privileges. See: Katrina Tsipenyuk, Brian Chess, and Gary McGraw, 'Seven Pernicious Kingdoms: A Taxonomy of Software Security Errors', Security and Privacy 3/6 (2005), 81–84.
56 See JaeSeung Song, Cristian Cadar, and Peter Pietzuch, 'SYMBEXNET: Testing Network Protocol Implementations with Symbolic Execution and Rule-Based Specifications', IEEE Transactions on Software Engineering 40/7 (2013), 695–709.
57 Florida Center for Instructional Technology, 'Chapter 2: What is a Protocol?', 2013, <http://fcit.usf.edu/network/chap2/chap2.htm>.
58 Jang-Jaccard and Nepal, 'A Survey of Emerging Threats in Cybersecurity'.

inspect everything'.59 The fact that network protocols are becoming increasingly
complex raises similar issues. Song, Cadar and Pietzuch note that '[t]he complex-
ity of network protocols makes errors difficult to detect, even for well-studied and
mature protocols: errors may only manifest themselves after complex sequences
of network packets. For example, DNS server implementations that are vulner-
able to cache poisoning attacks only exhibit problems in specific scenarios'.60
On the patching and adaptation delays: not only are hardware vulnerabilities
more difficult to detect, they are also more difficult to patch, other than by
replacing the hardware. Indeed, hardware generally cannot be updated after
deployment, short of wholesale replacement, whereas software can be updated
by uploading new code – often even remotely.61 It also often takes a considerable
amount of time for a network vulnerability to be closed. For example, when Steve
Bellovin, then working for AT&T Bell Laboratories, found a number of important
security flaws in the DNS system, he delayed the publication of the vulnerability
for a number of years until a fix was available.62 Moreover, the way network
protocols are set up, requiring confirmation by both sender and receiver, means
that patching them is often not a prompt process.
A fourth vulnerability, not mentioned by Jang-Jaccard and Nepal, is at least as
important: the human vulnerability. The notion that the person holding the
information is generally the weakest link in any computer system has been
aptly described in various hacking accounts. Kevin Mitnick describes social
engineering as a 'craft' using a mix of deception, influence and persuasion.63
The effectiveness of spear phishing is often astonishing, even for penetrating
some of the most resilient computer systems.64 Little data are available, however,
on human susceptibility to cyberattacks, and on how humans gain awareness
and learn, to further substantiate this claim.65

59 Adee, 'The Hunt for the Kill Switch'. It means that when chips are tested, the focus is on how well a chip performs the functions it is designed for. It is impossible to check for the infinite possible issues that are not specified. It is also an incredibly laborious process to test every chip.
60 Song, Cadar, Pietzuch, 'SYMBEXNET: Testing Network Protocol Implementations with Symbolic Execution and Rule-Based Specifications'.
61 Gedare Bloom, Eugen Leontie, Bhagirath Narahari, Rahul Simha, 'Chapter 12: Hardware and Security: Vulnerabilities and Solutions', in Sajal K. Das, Krishna Kant and Nan Zhang (eds.), Handbook on Securing Cyber-Physical Critical Infrastructure (Waltham: Morgan Kaufmann 2012).
62 Schneier, 'Crypto-Gram'; The Economist, 'It's about time: Escalating cyber-attacks', Feb. 2014, <http://www.economist.com/blogs/babbage/2014/02/escalating-cyber-attacks>.
63 Kevin Mitnick, The Art of Deception (Hoboken: John Wiley & Sons 2002), paraphrased from the introduction; also see: Kevin Mitnick and William L. Simon, The Art of Intrusion: The Real Stories Behind the Exploits of Hackers, Intruders, & Deceivers (Hoboken: John Wiley & Sons 2005).
64 As 'the Grugq', a well-known security researcher/hacker, writes: 'Give a man an 0day and he'll have access for a day, teach a man to phish and he'll have access for life'. The Grugq, 'Twitter', 2016, <https://twitter.com/thegrugq>.
65 For an interesting recent analysis aiming to establish a rigorous data-driven approach see: V. S. Subrahmanian, Michael Ovelgönne, Tudor Dumitras, B. Aditya Prakash, 'Chapter 4: The Global Cyber-Vulnerability Report', in V. S. Subrahmanian, Michael Ovelgönne, Tudor Dumitras, B. Aditya Prakash (eds.), Terrorism, Security and Computation (New York: Springer 2015).

Proposition 2: Cyberweapons exploiting closed-access systems are more
likely to be transitory than cyberweapons exploiting open-access systems.

The attack vector of a cyberweapon refers to the route used by attackers
to get into the computer system. It is common to distinguish between
two types of access paths. First, there are remote-access system attacks, in
which the cyberweapon is launched at some distance from the adversary
computer or network of interest. The canonical example of a remote-
access attack is that of an adversary computer attacked through the access
path provided by the Internet.66 Second, there are closed-access (i.e. air-
gapped) system attacks in which, as Kello writes, the cyberweapon
employed launches an attack against a computer system 'not interdepen-
dent at logical or information layers'.67 In this case, the cyberweapon
accesses an adversary computer or network through the local installation
of hardware or software functionality by 'friendly parties' in close proximity
to the computer or network of interest.68 It can, of course, also happen
through exploiting the ignorant human element.
Herb Lin notes that anti-radiation missiles are often set up in such a
way that they 'home in on the emissions of adversary radar systems;
once the radar shuts down, the missile aims at the last known position
of the radar'.69 Similar considerations sometimes apply to the deploy-
ment of cyberweapons. As Lin states, '[u]nder such circumstances, a
successful cyberattack on the adversary computer may require speed
to establish an access path and use a vulnerability before the computer
goes dark and makes establishing a path difficult or impossible'.70 Open
systems usually have a broad availability set, meaning that they have
more than just one entry point (even if this is unintentional). Hence, it is
less likely that the access vector an attacker is using to exploit the
system will be shut down.

Proposition 3: A cyberweapon causing a high level of visible damage and/
or harm is more likely to be transitory.

66 Indeed, public websites are considered to be the low-hanging fruit as they generally run on generic server software and are connected to the Internet; even relatively unskilled individuals can launch a website defacement attack. See: Symantec Corporation, 'Internet Security Threat Report 2014', 2014, <http://www.symantec.com/content/en/us/enterprise/other_resources/b-istr_main_report_v19_21291018.en-us.pdf>.
67 Lucas Kello, 'Cyber Disorders: Rivalry and Conflict in a Global Information Age', Presentation, International Security Program Seminar Series, Belfer Center for Science and International Affairs, Harvard Kennedy School, May 2012, <http://belfercenter.hks.harvard.edu/files/kello-isp-cyber-disorders.pdf>.
68 Owens, Dam, and Lin, 'Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities'.
69 Ibid.
70 Ibid.

The payload is the 'raison d'être' of a cyberweapon, as Herr and
Rosenzweig note.70 It is the part of the capability which causes the
harm or damage. Payloads greatly differ in nature and sophistication –
they can also be programmed to conduct multiple actions.
Differences in payload primarily influence the likelihood of either
extending or reducing the time period between the moment a vulnerabil-
ity is exploited (t_exploit) and the earliest date a vendor becomes aware of
the vulnerability (t_awareness). A cyberweapon which visibly affects the target
system is inherently more likely to be discovered than a weapon whose
effects are less visible. The notion of 'visibility' requires some unpacking, as
it refers here to two aspects: first, the observable damage or harm caused
to the target; second, the complexity of the causal chain that ultimately results
in the infliction of damage or harm to the target.71 Following this principle,
it is also expected that cyber espionage tools, all else being equal, are
more permanent in nature than cyberweapons. Furthermore, ironically,
'bad' cyberweapons – weapons designed to cause harm or damage but
unable to do so due to a bug in the code – are also less time-
dependent, due to the lower chance of discovery.

Proposition 4: The reduction of transitoriness requires significant material and
other resources, which certain types of actors are likelier to have than others.

Actors significantly differ in their capacity to develop cyberweapons. Certain
actors have a wider variety of zero-day exploits at their disposal, enabling more
targeted attacks and thus reducing the chances of discovery. In fact, a series of
reports have offered insights into how governments are buying up the market of
zero-day vulnerabilities – which can sell for anywhere from thousands to millions
of dollars – to gain a strategic advantage in this new area of contestation.
In this 'business of zero-days', the United States is considered to be the main
consumer.72 'Even as the U.S. government confronts rival powers over wide-
spread Internet espionage', as Joseph Menn writes, 'it has become the biggest
71 After all, the indirect path might lead defending actors to confuse a kinetic attack for a cyberattack, or accidental for purposeful harm. Part of the reason why the cyber revolution is a bone of contention is the indirect path by which a cyberweapon potentially causes harm or damage. As Rid writes, 'the actual use of cyber force is to be a far more complex and mediated sequence of causes and consequences that ultimately result in violence and casualties'. In those Cassandra-esque scenarios in which a cyberweapon inflicts a lot of material damage or people suffer serious injuries or are killed, 'the causal chain that links somebody pushing a button to somebody else being hurt is mediated, delayed and permeated by chance and friction'. Thomas Rid, 'Cyber War Will Not Take Place', Journal of Strategic Studies 35/1 (2012), 5–32, 9.
72 Kim Zetter, 'Hacking Team Leak Shows How Secretive Zero-Day Exploit Sales Work', Wired (Jul. 2015), <http://www.wired.com/2015/07/hacking-team-leak-shows-secretive-zero-day-exploit-sales-work/>; Andy Greenberg, 'Shopping for Zero-Days: A Price List For Hackers' Secret Software Exploits', Forbes Magazine, Mar. 2012, <http://www.forbes.com/sites/andygreenberg/2012/03/23/shopping-for-zero-days-an-price-list-for-hackers-secret-software-exploits/>.

buyer in a burgeoning gray market where hackers and security firms sell tools for
breaking into computers'.73
There are also asymmetries with respect to the ability to test and retest
cyberweapons before actual deployment. Stuxnet provides, again, a good case
in point, given the resources and effort poured into the development of this
capacity. The planning of the cyberweapon started during George W. Bush's second
term, and it was eventually developed in close collaboration between the NSA and
the secret Israeli Unit 8200.74 The complexity of the worm meant that thorough
testing was required to see whether the 'bug' could do what it was intended to
do. The United States therefore had to produce its own P-1s, perfect replicas of
the centrifuge variant used by the Iranians at Natanz. According to Sanger, at first, small-
scale tests were conducted on borrowed centrifuges stored at the Oak Ridge
National Laboratory in Tennessee, which had been taken from Muammar
Qaddafi in late 2003 when he gave up his nuclear program.75 The tests grew in size and
sophistication, with parts obtained from various small factories around the world. As
David Sanger reports, at some point the United States was 'even testing the
malware against mock-ups of the next generation of centrifuges the Iranians
were expected to deploy, called IR-2s, and successor models, including some the
Iranians still are struggling to construct'.76
The time-consuming process of developing cyberweapons leads to constant
attempts to reuse computer code designed to exploit zero-day vulnerabilities,
even though this significantly decreases the chances of successfully penetrating
the targeted system. Cyber commands make a constant trade-off between the
skills and resources required to develop new computer code, and the odds of
successfully penetrating targeted systems. In that sense, great powers – with
dedicated cyber organisations and a high number of personnel, both military
and civilian – have a clear advantage. The need to 'reuse' old vulnerabilities –
found at the later end of the window-of-exposure spectrum – is less urgent. For
small and middle powers, it is difficult to have an assembly line of cyberweapon
production running like clockwork, ensuring a cycle in which, when one cyber-
weapon becomes ineffective, the next can be put to use if necessary.
Finally, attackers go to great pains to integrate various evasion and persis-
tence techniques into their cyber capacities to stretch the discovery delay
period.77 This feature is particularly prominent in cyberespionage and

73 Joseph Menn, 'Special Report: U.S. cyberwar strategy stokes fear of blowback', Reuters, May 2013, <http://www.reuters.com/article/us-usa-cyberweapons-specialreport-idUSBRE9490EL20130510>.
74 David E. Sanger, 'Obama Order Sped Up Wave of Cyberattacks Against Iran', The New York Times, Jun. 2012, <http://www.nytimes.com/2012/06/01/world/middleeast/obama-ordered-wave-of-cyberattacks-against-iran.html?_r=0>.
75 David E. Sanger, Confront and Conceal: Obama's Secret Wars and Surprising Use of American Power (New York: Crown Publishing 2012), 197.
76 Ibid., 198.
77 One can, for example, think of polymorphic malware, binary archives or domain generation algorithm techniques. For a more detailed discussion, see: Fortinet, 'Head-First into the Sandbox', 2014, <https://www.fortinet.com/sites/default/files/whitepapers/Head_First_into_the_Sandbox.pdf>.

surveillance capacities, given the purpose of those tools – see, for example,
Finspy (2011), Blue Termite (2013) and Black Energy (2013). Although not much
is known about it, the spyware Turla is also an interesting case in this respect.
The malware was discovered in 2014 and is still active today, using satellite
internet connections to hide its command-and-control servers.

Proposition 5: The transitoriness of cyberweapons is affected by the way
offensive actors deploy them.

Offensive actors will also have to make trade-offs in the deployment of a
cyberweapon. The most basic consideration concerns the number of targets.
Consider a cyberweapon which exploits a zero-day vulnerability found in 1000+
computer systems. The offender could use the cyberweapon to target all 1000+
computer systems.78
Another option would be to employ the cyberweapon only against the more
important targets.79 An interesting case was reported by Dan Goodin a few years
ago: 'In 2009, one or more prestigious researchers received a CD by mail that
contained pictures and other materials from a recent scientific conference they
attended in Houston. The scientists didn't know it then, but the disc also delivered
a malicious payload developed by a highly advanced hacking operation that had
been active since at least 2001. The CD, it seems, was tampered with on its way
through the mail'; the package was intercepted in transit, its contents were
booby-trapped and then sent on to its original destination.80 It turned out
to be the work of members of the so-called Equation Group, most likely part of
the NSA. The group has the capability to rewrite firmware in a secret section
within hard drives which is considered resistant to even military-grade wiping
and reformatting, as David Gilbert reports.81 Despite this powerful capability,
Kaspersky writes that '[t]he Equation group's HDD firmware reprogramming
module is extremely rare. During our research, we've only identified a few victims
who were targeted by this module. This indicates that it is probably only kept for
the most valuable victims or for some very unusual circumstances'.82 The way

78 See, for example, Animal Farm, which targeted around 3001–5000 systems. Kaspersky Lab's Global Research & Analysis Team, 'Animals in the APT Farm', Securelist, Mar. 2015, <https://securelist.com/blog/research/69114/animals-in-the-apt-farm/>.
79 For sparingly used espionage capacities, see CozyDuke, Wild Neutron, miniFlame, Regin and SabPub.
80 Dan Goodin, 'How "omnipotent" hackers tied to NSA hid for 14 years – and were found at last', Ars Technica, 16 Feb. 2015, <http://arstechnica.com/security/2015/02/how-omnipotent-hackers-tied-to-the-nsa-hid-for-14-years-and-were-found-at-last/>; Kaspersky Lab's Global Research & Analysis Team, 'Houston, we have a problem', SecureList, Feb. 2015, <https://securelist.com/blog/research/68750/equation-the-death-star-of-malware-galaxy>.
81 David Gilbert, 'Equation Group: Meet the NSA "gods of cyber espionage"', International Business Times, Feb. 2015, <http://www.ibtimes.co.uk/equation-group-meet-nsa-gods-cyber-espionage-1488327>.
82 Kaspersky Lab, 'Equation Group: Questions and Answers', Feb. 2015, <https://securelist.com/files/2015/02/Equation_group_questions_and_answers.pdf>; the exact number of victims is difficult to establish due to the self-destruct mechanism built into the capability.

Equation used their capability, combined with its technical dexterity, makes it one
of the most persistent cyber resources in existence.
Again, Stuxnet is a good example given its accuracy. Stuxnet searches
for and affects only a particular model of programmable logic controller match-
ing the characteristics of Natanz's nuclear enrichment facilities. If a computer
system does not match, Stuxnet removes itself from that machine after it has
replicated itself to other vulnerable computer systems.83 General Michael
Hayden, former director of the NSA and CIA, observes that the attack was
'incredibly precise'. [. . .] 'Although it was widely propagated, it was designed to
trigger only in very carefully defined, discreet circumstances' – not acknowl-
edging that the United States was behind the attack, but stating that it had been
launched by a 'responsible nation'.84 Clearly, the agog approach will likely reduce
the window between t_exploit and t_awareness more quickly than the more
subtle application. The vendor might also see more urgency in developing a
patch in the first scenario (reducing the time between t_exploit and t_patch). Hence, in
deployment, offensive actors have to make trade-offs between potential
short-term gains and longer-term effectiveness.85
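The self-limiting propagation logic described above can be sketched schematically (all names, fingerprints and values below are hypothetical illustrations, not drawn from the actual Stuxnet code):

```python
# Illustrative sketch of a self-limiting propagation routine of the kind
# described for Stuxnet: replicate first, deliver the payload only when the
# host matches a narrow target fingerprint, otherwise remove itself.
# All names and values are hypothetical.

TARGET_FINGERPRINT = {
    "plc_model": "S7-315",              # only one controller model is of interest
    "process_signature": "centrifuge",  # matching the targeted facility
}

def host_profile(host):
    """Return the observable configuration of an infected host (stub)."""
    return host.get("profile", {})

def propagate(neighbours):
    """Replicate to reachable systems before deciding whether to stay."""
    for n in neighbours:
        n["infected"] = True

def run(host, neighbours):
    propagate(neighbours)
    profile = host_profile(host)
    if all(profile.get(k) == v for k, v in TARGET_FINGERPRINT.items()):
        return "deliver-payload"  # carefully defined circumstances met
    host["infected"] = False      # self-removal keeps the footprint small
    return "self-removed"
```

The ordering matters for the trade-off discussed above: replicating before self-removal maximises reach, but every additional infection widens the window in which the capability can be noticed.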
Next to the number of targets, the choice of the type of target matters
as well. Duqu 2.0, an updated version of the infamous 2011 Duqu
malware platform, illustrates this aspect. Duqu 2.0 is a highly sophisticated
strain of malware which exists almost solely in the memory of the computer to
evade detection. The attackers, however, decided to use the capability to intrude
into the internal network of Kaspersky Lab. The attackers gambled in attacking a world-
class security company. And lost. Duqu 2.0 was discovered by the Lab while
testing a new technology designed to detect advanced persistent threats. As
experts from Kaspersky remarked on the bet: '[o]n one hand, it almost surely
means the attack will be exposed – it's very unlikely that the attack will go
unnoticed. So the targeting of security companies indicates that either they are
very confident they won't get caught, or perhaps they don't care much if they
are discovered and exposed'.86

83. Eric Byres, Andrew Ginter, and Joel Langill, 'How Stuxnet Spreads – A Study of Infection Paths in Best Practice Systems', Feb. 2011, <http://www.abterra.ca/papers/how-stuxnet-spreads.pdf>.
84. Ben Flanagan, 'Former CIA chief speaks out on Iran Stuxnet attack', The National, Dec. 2011, <http://www.thenational.ae/business/industry-insights/technology/former-cia-chief-speaks-out-on-iran-stuxnet-attack>.
85. A quote from an interview with a hacker illustrates the point. 'The Grugq' explains why he has no contracts with the Russian Government or other Russian actors: 'Selling a bug to the Russian mafia guarantees it will be dead in no time, and they pay very little money'. He continues: 'Russia is flooded with criminals. They monetize exploits in the most brutal and mediocre way possible, and they cheat each other heavily'. See: Andy Greenberg, 'Shopping For Zero-Days: A Price List For Hackers' Secret Software Exploits', Forbes, Mar. 2012, <http://www.forbes.com/sites/andygreenberg/2012/03/23/shopping-for-zero-days-an-price-list-for-hackers-secret-software-exploits/>.
86. Kaspersky Lab's Global Research & Analysis Team, 'The Mystery of Duqu 2.0: a sophisticated cyberespionage actor returns', Securelist, Jun. 2015, <https://securelist.com/blog/research/70504/the-mystery-of-duqu-2-0-a-sophisticated-cyberespionage-actor-returns/>.
THE JOURNAL OF STRATEGIC STUDIES 25

There is evidence that the NSA consciously makes these 'bets'. A leaked top-
secret presentation provided by Snowden has revealed 'FoxAcid', the NSA's code-
name for what it refers to as an 'exploit orchestrator'. FoxAcid is a system which
matches target computer systems with different types of attack.87 What is
especially remarkable about the system is that it saves the most valuable exploits
for the most important targets. As Schneier observes, 'Low-value exploits are run
against technically sophisticated targets where the chance of detection is high.
[NSA's Office of Tailored Access Operations] maintains a library of exploits, each
based on a different vulnerability in a system. Different exploits are authorised
against different targets, depending on the value of the target, the target's
technical sophistication, the value of the exploit and other considerations'.88
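Schneier's description of FoxAcid's selection logic can be sketched as a simple value-based matching rule (a hypothetical illustration; the exploit names, scores and target categories are invented, not taken from the leaked material):

```python
# Hypothetical sketch of value-based exploit selection: against targets where
# detection risk is high, prefer the least valuable exploit that still works,
# so that the most valuable exploits are preserved for the most important
# targets. Names and scores are illustrative assumptions.

EXPLOIT_LIBRARY = [
    {"name": "zero-day-A",    "value": 9, "works_against": {"hardened", "standard"}},
    {"name": "known-bug-B",   "value": 3, "works_against": {"standard"}},
    {"name": "old-exploit-C", "value": 1, "works_against": {"standard", "legacy"}},
]

def select_exploit(target_class, detection_risk):
    """Pick an exploit for a target, trading exploit value against exposure."""
    candidates = [e for e in EXPLOIT_LIBRARY if target_class in e["works_against"]]
    if not candidates:
        return None
    if detection_risk == "high":
        # sophisticated target: burn the least valuable usable exploit
        return min(candidates, key=lambda e: e["value"])
    # low-risk target: the best exploit can be used with little exposure
    return max(candidates, key=lambda e: e["value"])
```

The rule captures the trade-off in the quoted passage: what is risked against a hard target is whichever capability the offender can most afford to lose.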

Proposition 6: The transitory nature of a cyberweapon is strongly influenced by the capability of the defensive actors.

Cyber defence is subject to the law of diminishing marginal returns. As David Baldwin
famously put it, talking about security more generally: it 'is only one of many
policy objectives competing for scarce resources and subject to the law of
diminishing returns'.89 Hence, the value of an increment of cybersecurity will
vary from one actor to another. Resources will only be allocated to cybersecurity
– to ensure that delays in awareness, patching and adaptation are minimised – as
long as the marginal return is greater for it than for other uses of those resources.90
Cyber defence is also subject to the organisational processes of institutions, as existing
institutional bureaucracy affects a target's ability to effectively minimise the
'window of exposure' to a cyberattack.91 Comprehensive cybersecurity measures
are not always taken because of the substantial resources and time required to
implement them in (large) organisations.92
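Baldwin's marginal-return logic can be made concrete with a toy allocation model: a defender keeps spending on cybersecurity only while each extra unit yields more than its best alternative use (the return figures below are purely illustrative):

```python
# Toy model of Baldwin's point: each additional unit spent on cybersecurity
# yields less than the previous one (diminishing returns), so a rational actor
# stops once the marginal gain drops below the best alternative use of that
# unit. All numbers are illustrative assumptions.

def marginal_security_return(units_already_spent):
    """Diminishing returns: each extra unit is worth half the previous one."""
    return 16 / (2 ** units_already_spent)

def allocate(budget, alternative_return=3):
    """Spend on cybersecurity while it beats the alternative, up to budget."""
    spent = 0
    while spent < budget and marginal_security_return(spent) > alternative_return:
        spent += 1
    return spent
```

Because `alternative_return` differs across actors, so does the stopping point: the same security increment is worth buying for one actor and not for another, which is exactly why defensive capability, and hence a weapon's transitoriness, varies by target.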
A discussion of the 'defender' characteristics of a cyberattack is, how-
ever, inherently complicated by the fact that, as Bruce Schneier observes,
'for the most part, the size and shape of the window of exposure is not
under the control of any central authority'. Indeed, with interconnectivity
being the very essence of cyberspace, a complex overlapping structure
exists, consisting of various clusters of authority and responsibility. In most

87. Bruce Schneier, 'How the NSA Attacks Tor/Firefox Users With QUANTUM and FOXACID', Schneier on Security, Oct. 2013, <https://www.schneier.com/blog/archives/2013/10/how_the_nsa_att.html>.
88. Ibid.
89. Baldwin, 'The Concept of Security', 20.
90. Ibid., 21.
91. Graham T. Allison and Philip Zelikow, Essence of Decision: Explaining the Cuban Missile Crisis (New York: Pearson Education 1999); James G. March and Herbert A. Simon, Organizations (New York: John Wiley and Sons 1958).
92. In recent years a shift has slowly occurred, however. A report from McKinsey indicates that cybersecurity has now become much more of a 'CEO-level issue'. Tucker Bailey, James Kaplan, and Chris Rezek, 'Why senior leaders are the front line against cyberattacks', McKinsey Insights, Jun. 2014, <http://www.mckinsey.com/insights/business_technology/why_senior_leaders_are_the_front_line_against_cyberattacks>.
NCCS documents, cybersecurity is therefore presented as a 'shared con-
cern' with numerous 'stakeholders', including local and federal authorities,
private-sector actors and society more generally. That said, as numerous
studies on 'cyber capacity' and 'cyber readiness' indicate, some (national)
security clusters are better able to defend themselves against cyber threats
than others.93 A state actor that pours significant resources into cyber defence
is more likely to discover an exploit and render the weapon ineffective.

Conclusion
This article has revealed that the transitory nature of cyberweapons is both a
technical and a social product. The technical dimensions of a cyber
capability provoke a certain usage, creating incentives either to use it early
or to play the waiting game. As a product affected by social dynamics, the
characteristics and actions of actors can affect the life cycle of a cyberwea-
pon's ability to cause harm or damage. The 'curse of transitoriness'
can be beaten through careful development and deployment, as the
propositions developed in this article indicate.94
If my findings concerning the transitory nature of cyberweapons are
correct, this implies that, in contrast to the view of scholars that cyberspace
empowers weaker actors in the international system, cyberweapons are
actually for the strong. The transitoriness of cyberweapons means that
constant (re)investment is required to sustain a standing offensive
capability. Weaker powers also have great difficulty 'beating the curse of
transitoriness', as they have fewer resources to test and retest their
capabilities. Finally, when offensive actors have invested significant resources
in a cyberweapon, they are incentivised not to attack highly capable actors,
since the chances of exploit discovery are higher.
The nature of cyberweapons' transitoriness also reduces the incentives
for offensive cyber cooperation. As a cyberweapon is significantly more
likely to be ineffective after use, the international sharing of
offensive cyber capabilities is less likely to occur than with other weapon
systems. Mutual benefits only arise when states hold similar views on
the (i) timing, (ii) target, and (iii) proportionality of the cyberattack. The
paradox of cyberweapons is that, although technically they can be (rela-
tively) effortlessly replicated, their transitoriness changes the incentive
93. See, for example: RSA, 'Cybersecurity Poverty Index', 2015, <https://www.emc.com/collateral/ebook/rsa-cybersecurity-poverty-index-ebook.pdf>; Booz Allen Hamilton and The Economist Intelligence Unit, 'Cyber Power Index: Findings and Methodology', 2011, <http://www.boozallen.com/media/file/Cyber_Power_Index_Findings_and_Methodology.pdf>; United Nations Institute for Disarmament Research, 'The Cyber Index: International Security Trends and Realities', United Nations Publications, 2013, <http://www.unidir.org/files/publications/pdfs/cyber-index-2013-en-463.pdf>.
94. Inherently, what is a 'curse' to the offensive actor might be a 'blessing' to the defender.
structure of actors and turns weapons into indivisible goods.95 This aspect
also affects the dynamic between allied great powers and small/middle powers,
as it has become more difficult for less-capable states to offer a specialised
contribution to a larger coalition.
In addition, the unique decay function of cyberweapons, as laid out in
this article, implies that offensive cyber programs potentially require a
different funding set-up compared with conventional weapon programs.
For conventional weapon programs, (government) institutions can come
up with a relatively good cost estimate of what is required to maintain a
certain capability; a typical budget proposal would say 'in X years' time, the
following capability needs to be replaced/upgraded. Hence, we project to
spend . . .' As a cyberweapon's decay function is characterised by 'random
crashes', more flexible budgets (and hiring procedures) are recommended to
cope with potentially prompt fluctuations in overall capability.
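The budgeting contrast can be illustrated with a toy simulation: conventional replacement costs fall on a known schedule, whereas a cyberweapon may be 'burned' in any given year with some probability, producing lumpy, hard-to-forecast spending (all figures and probabilities are illustrative assumptions):

```python
import random

# Toy comparison of budget profiles. A conventional system is replaced on a
# fixed schedule; a cyberweapon can be 'burned' in any year with some
# probability (exploit discovered, patched), forcing immediate reinvestment.
# All costs and probabilities are illustrative assumptions.

def conventional_spending(years, replace_every=5, cost=100):
    """Predictable profile: a replacement cost every `replace_every` years."""
    return [cost if (y % replace_every == 0) else 0 for y in range(1, years + 1)]

def cyber_spending(years, burn_probability=0.3, cost=100, rng=None):
    """Stochastic profile: reinvestment whenever the capability is burned."""
    rng = rng or random.Random(0)  # seeded for a reproducible illustration
    return [cost if rng.random() < burn_probability else 0
            for _ in range(years)]
```

Over many runs the average annual outlays may be comparable, but the cyber profile has no predictable replacement year, which is the case for flexible budgets and hiring procedures made above.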
Finally, the number of cyber incidents we have witnessed to date is almost
incomprehensibly large.96 Yet very few of those incidents concern sophisticated
cyberattacks that aim to cause harm or damage. Indeed, most advanced
persistent threats concern espionage capabilities rather than cyberweapons.
Scholars tend to argue that this is the result of cyberweapons' limited strategic
usage. Following the propositions developed in this article, however, a different
explanation comes to the fore. Cyberweapons are generally part of a larger collection
of capabilities – sharing vulnerability exploits, propagation techniques and/or
other features. Stuxnet's 'father', for example, is supposed to be the USB worm Fanny,
and Stuxnet has also been linked to the espionage platforms Duqu, Flame, Gauss and Duqu
2.0.97 Using a capability which is likely to be discovered early – that is, a
cyberweapon causing visible harm or damage – runs the risk that other capabil-
ities are soon exposed as well; not least because cybersecurity firms are develop-
ing special detection tools in an attempt to uncover the whole cluster of capabilities.98 In
other words, costly multi-year cyber programs are susceptible to a low return on
investment if capabilities are used with a destructive payload. This also
means that we can expect states to delink their intelligence capabilities from
their warfare capabilities in the future, to minimise losses in capability following
detection.

95. This dynamic might even exist within a state, as government institutions with different organisational missions might develop separate offensive cyber capability programs.
96. According to Verizon's 2015 Data Breach Investigations Report, more than 317 million new pieces of malicious software were created last year. That is about ten new pieces of malware each second of every day. Verizon, 'Data Breach Investigations Report', 2015, <http://www.verizonenterprise.com/DBIR/>.
97. Boldizsár Bencsáth, 'Duqu, Flame, Gauss: Followers of Stuxnet', RSA Conference Europe 2012, <http://www.rsaconference.com/writable/presentations/file_upload/br-208_bencsath.pdf>.
98. In the case of Fanny and Stuxnet, see: Kaspersky Lab's Global Research & Analysis Team, 'A Fanny Equation: "I am your father, Stuxnet"', Securelist, Feb. 2015, <https://securelist.com/blog/research/68787/a-fanny-equation-i-am-your-father-stuxnet/>.
Acknowledgements
For written comments on early drafts, the author is indebted to Graham Fairclough,
Trey Herr, Lucas Kello, Joseph Nye Jr., Taylor Roberts, James Shires, and an anon-
ymous reviewer. An earlier version of this paper was presented at ISA, Atlanta (2016),
and the IR Colloquium at the University of Oxford (2016).

Disclosure statement
No potential conflict of interest was reported by the author.

Notes on contributor
Max Smeets is a lecturer in Politics at Keble College, University of Oxford, and a D.Phil
Candidate in International Relations at St. John’s College, University of Oxford. He
was previously a visiting research scholar at Columbia University SIPA and Sciences
Po CERI. Max's current research focuses on the proliferation of cyberweapons. More
at: http://maxsmeets.com

Bibliography
Allison, Graham T. and Philip Zelikow, Essence of Decision: Explaining the Cuban
Missile Crisis (New York: Pearson Education 1999).
Anderson, Ross and Tyler Moore, ‘The Economics of Information Security’, Science
314/5799 (2006), 610–13. doi:10.1126/science.1130992
Arbaugh, William A., William L. Fithen, and John McHugh, 'Windows of Vulnerability:
A Case Study Analysis', IEEE Computer 33/12 (2000), 52–58.
Armstrong, Gary, Stewart Adam, Sara Denize, and Philip Kotler, Principles of Marketing
(Melbourne: Pearson 2015).
Arora, Ashish, Ramayya Krishnan, Anand Nandkumar, Rahul Telang, and Yubao Yang,
‘Impact of Vulnerability Disclosure and Patch Availability - An Empirical Analysis’,
Workshop on the Economics of Information Security (Harvard University 2004).
Arthur, Charles, ‘Cyber-attack concerns raised over Boeing 787 chip’s “back door”’,
The Guardian, May 2012, <http://www.theguardian.com/technology/2012/may/29/
cyber-attack-concerns-boeing-chip>.
Auerswald, Philip E., Christian Duttweiler, and John Garofano, Clinton’s Foreign Policy:
A Documentary Record (The Hague: Kluwer Law International 2003).
Axelrod, Robert, ‘The Rational Timing of Surprise’, World Politics 31/2 (1979), 228–46.
doi:10.2307/2009943
Axelrod, Robert and Rumen Iliev, ‘Timing of Cyber Conflict’, Proceedings of the National
Academy of Sciences 111/4 (2014), 1298–303. doi:10.1073/pnas.1322638111
Bailey, Tucker, James Kaplan, and Chris Rezek, ‘Why senior leaders are the front line
against cyberattacks’, McKinsey Insights, June 2014, <http://www.mckinsey.com/
insights/business_technology/why_senior_leaders_are_the_front_line_against_
cyberattacks>.
Baldwin, D A., ‘The Concept of Security’, Review of International Studies 23 (1997), 5–
26, 20. doi:10.1017/S0260210597000053
Bartlett, Robert, 'Developments in the Law – The Law of Cyberspace', Harvard Law Review 112/1574 (1999), 1635.
Beattie, Steve, Seth Arnold, Crispin Cowan, Perry Wagle, and Chris Wright, ‘Timing the
Application of Security Patches for Optimal Uptime’, LISA XVI, November 2002.
Bencsáth, Boldizsár, ‘Duqu, Flame, Gauss: Followers of Stuxnet’, RSA Conference
Europe, 2012, <http://www.rsaconference.com/writable/presentations/file_
upload/br-208_bencsath.pdf>.
Bilge, Leyla and Tudor Dumitras, ‘Before We Knew It, An Empirical Study of Zero-Day
Attacks In The Real World‘, CCS, October 2012.
Bloom, Gedare, Eugen Leontie, Bhagirath Narahari, and Rahul Simha, ‘Chapter 12:
Hardware and Security: Vulnerabilities and Solutions’, in Sajal K. Das, Krishna Kant,
and Nan Zhang (eds.), Handbook on Securing Cyber-Physical Critical Infrastructure
(Waltham: Morgan Kaufmann 2012).
Byres, Eric, Andrew Ginter, and Joel Langill, ‘How Stuxnet Spreads – A Study of
Infection Paths in Best Practice Systems’, February 2011, <http://www.abterra.ca/
papers/how-stuxnet-spreads.pdf>.
Campbell, John Y. and N. Gregory Mankiw, ‘Permanent and Transitory Components
in Macroeconomic Fluctuations‘, NBER, 2169 (1987).
Cavusoglu, Hasan, Huseyin Cavusoglu, and Srinivasan Raghunathan, ‘Efficiency of
Vulnerability Disclosure Mechanisms to Disseminate Vulnerability Knowledge’,
IEEE Transactions on Software Engineering 33/3 (2007), 171–85. doi:10.1109/
TSE.2007.26
Cavusoglu, Hasan, Huseyin Cavusoglu, and Jun Zhang, ‘Security Patch Management:
Share the Burden or Share the Damage?’, Management Science 54/4 (2008), 657–
70. doi:10.1287/mnsc.1070.0794
Cochrane, J H., ‘Permanent and Transitory Components of GNP and Stock Prices’, The
Quarterly Journal of Economics 109/1 (1994), 241–65. doi:10.2307/2118434
Collins English Dictionary (online), ‘transitory’, <http://www.collinsdictionary.com/dic
tionary/English>.
Symantec Corporation, 'Internet Security Threat Report 2014', 2014, <http://www.symantec.com/content/en/us/enterprise/other_resources/b-istr_main_report_v19_21291018.en-us.pdf>.
Dacey, Robert F., ‘Information security progress made, but challenges remain to
protect federal systems and the nation’s critical infrastructures’, Government
Accountability Office, 2003, <http://world.std.com/~goldberg/daceysecurity.pdf>.
Denning, Dorothy E., 'Rethinking the Cyber Domain and Deterrence', JFQ 77 (2015).
Economist, The, ‘It’s about time: Escalating cyber-attacks‘, February 2014, <http://
www.economist.com/blogs/babbage/2014/02/escalating-cyber-attacks>.
Flanagan, Ben, ‘Former CIA chief speaks out on Iran Stuxnet attack’, The National,
December 2011, <http://www.thenational.ae/business/industry-insights/technol
ogy/former-cia-chief-speaks-out-on-iran-stuxnet-attack>.
Florida Center for Instructional Technology, ‘Chapter 2: What is a Protocol?’, 2013,
<http://fcit.usf.edu/network/chap2/chap2.htm>.
Fortinet, 'Head-First into the Sandbox', 2014, <https://www.fortinet.com/sites/default/files/whitepapers/Head_First_into_the_Sandbox.pdf>.
Frei, Christa and Alfonso Sousa-Poza, ‘Overqualification: Permanent or Transitory?’,
Applied Economics 44 (2012), 1837–47. doi:10.1080/00036846.2011.554380
Frei, Stefan, Bernhard Tellenbach, and Bernhard Plattner, ‘0-Day Patch: Exposing
Vendors (In)security Performance’, BlackHat Europe, 2008, <https://www.blackhat.
com/presentations/bh-europe-08/Frei/Whitepaper/bh-eu-08-frei-WP.pdf>.
Gartzke, Erik, ‘The Myth of Cyberwar: Bringing War in Cyberspace Back Down to
Earth’, International Security 38/2 (2013), 41–73, 59-60. doi:10.1162/ISEC_a_00136
Gilbert, David, 'Equation Group: Meet the NSA "gods of cyber espionage"', International Business Times, February 2015, <http://www.ibtimes.co.uk/equation-group-meet-nsa-gods-cyber-espionage-1488327>.
Goodin, Dan, 'How "omnipotent" hackers tied to NSA hid for 14 years – and were found at last', Ars Technica, 16 February 2015, <http://arstechnica.com/security/2015/02/how-omnipotent-hackers-tied-to-the-nsa-hid-for-14-years-and-were-found-at-last/>.
Greenberg, Andy, ‘Shopping for Zero-Days: A Price List For Hackers’ Secret Software
Exploits‘, Forbes Magazine, March 2012, <http://www.forbes.com/sites/andygreen
berg/2012/03/23/shopping-for-zero-days-an-price-list-for-hackers-secret-software-
exploits/>.
Booz Allen Hamilton and The Economist Intelligence Unit, 'Cyber Power Index: Findings and Methodology', 2011, <http://www.boozallen.com/media/file/Cyber_Power_Index_Findings_and_Methodology.pdf>.
Hayden, Michael V., ‘The Future of Things Cyber’, Strategic Studies Quarterly 5/1 (2011), 3-7.
Herr, Trey, ‘PrEP: A Framework for Malware & Cyber Weapons’, Cyber Security and
Research Institute, (2014).
Herrera, Geoffrey L., Technology and International Transformation: The Railroad, the
Atom Bomb, and the Politics of Technological Change (Albany: State University of
New York Press 2006).
Jang-Jaccard, Julian and Surya Nepal, ‘A Survey of Emerging Threats in
Cybersecurity’, Journal of Computer and System Sciences 80/5 (2014), 973–93.
doi:10.1016/j.jcss.2014.02.005
Karri, Ramesh, Jeyavijayan Rajendran, Kurt Rosenfeld, and Mark Tehranipoor,
‘Trustworthy Hardware: Identifying and Classifying Hardware Trojans’, Computer
43/10 (2010), 39–46. doi:10.1109/MC.2010.299
Kaspersky Lab’s Global Research & Analysis Team, ‘Animals in the APT Farm’,
Securelist, March 2015, <https://securelist.com/blog/research/69114/animals-in-
the-apt-farm/>.
Kaspersky Lab’s Global Research & Analysis Team, ‘Houston, we have a problem‘,
SecureList, February 2015, <https://securelist.com/blog/research/68750/equation-
the-death-star-of-malware-galaxy>.
Kaspersky Lab’s Global Research & Analysis Team, ‘The Mystery of Duqu 2.0: a sophisti-
cated cyberespionage actor returns‘, June 2015, Securelist, <https://securelist.com/blog/
research/70504/the-mystery-of-duqu-2-0-a-sophisticated-cyberespionage-actor-
returns/>.
Kaspersky Lab’s Global Research & Analysis Team, ‘A Fanny Equation: “I am your
father, Stuxnet” ‘, Securelist, February 2015, <https://securelist.com/blog/research/
68787/a-fanny-equation-i-am-your-father-stuxnet/>.
Kaur, Ratinder and Maninder Singh, ‘A Survey on Zero-Day Polymorphic Worm
Detection Techniques’, IEEE Communications Surveys & Tutorials 16/3 (2014),
1520–49. doi:10.1109/SURV.2014.022714.00160
Keegan, John, A History of Warfare (London: Random House 1994).
Kello, Lucas, ‘Cyber Disorders: Rivalry and Conflict in a Global Information Age’,
Presentation, International Security Program Seminar Series, Belfer Center for
Science and International Affairs, Harvard Kennedy School May 2012, <http://
belfercenter.hks.harvard.edu/files/kello-isp-cyber-disorders.pdf>.
Krepinevich, Andrew, 'Cyber Warfare: A "Nuclear Option"?', Center for Strategic and Budgetary Assessments, 2012, <http://www.csbaonline.org/wp-content/uploads/2012/08/CSBA_Cyber_Warfare_For_Web_1.pdf>.
Kaspersky Lab, 'Equation Group, Questions and Answers', February 2015, <https://securelist.com/files/2015/02/Equation_group_questions_and_answers.pdf>.
Levy, J S., ‘The Offensive/Defensive Balance of Military Technology: A Theoretical and
Historical Analysis’, International Studies Quarterly 28 (1984), 219–38. doi:10.2307/
2600696
Lewis, James A., ‘Conflict and Negotiation in Cyberspace’, The Technology and Public
Policy Program, 2013, <http://csis.org/files/publication/130208_Lewis_
ConflictCyberspace_Web.pdf>.
Libicki, Martin C., Conquest in Cyberspace: National Security and Information Warfare
(Cambridge: Cambridge University Press 2007).
Libicki, Martin C., ‘Cyberspace Is Not a Warfighting Domain’, A Journal of Law and
Policy for the Information Society 8/2 (2012), 326.
Lin, Herbert S., ‘Offensive Cyber Operations and the Use of Force’, Journal of National
Security Law and Policy 4/63 (2010), 63–86.
Lin, Herbert S., ‘Escalation Dynamics and Conflict Termination in Cyberspace’,
Strategic Studies Quarterly 6/3 (2012), 46–70.
March, James G. and Herbert A. Simon, Organizations (New York: John Wiley and Sons
1958).
Menn, Joseph, ‘Special Report: U.S. Cyber war Strategy Fear of Blowback’, Reuters,
May 2013, <http://www.reuters.com/article/2013/05/10/us-usa-cyberweapons-spe
cialreport-idUSBRE9490EL20130510>.
Mitnick, Kevin, The Art of Deception (Hoboken: John Wiley & Sons 2002).
Mitnick, Kevin and William L. Simon, The Art of Intrusion: The Real Stories Behind the Exploits of Hackers, Intruders, & Deceivers (Indianapolis: Wiley 2005).
Nappa, Antonio, Richard Johnson, Leyla Bilge, Juan Caballero, and Tudor Dumitras,
‘The Attack of the Clones: A Study of the Impact of Shared Code on Vulnerability
Patching’, IEEE Symposium on Security and Privacy, San Jose, CA, 2015.
Okhravi, Hamed and David Nicol, ‘Evaluation of Patch Management Strategies’,
International Journal of Computational Intelligence: Theory and Practice 3/2
(2008), 109–17.
Owens, William A., Kenneth W. Dam, and Herbert S. Lin (eds.), ‘Excerpts from
Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of
Cyberattack Capabilities‘, National Research Council, 2009.
Pauna, Adrian and Konstantinos Moulinos, ‘Window of exposure. . . a real problem
for SCADA systems? Recommendations for Europe on SCADA patching‘,
European Union Agency for Network and Information Security Publication,
December 2013.
Presidency of the Council of Ministers Italy, ‘National Strategic Framework for the
Security of Cyberspace‘, December 2013, <http://www.sicurezzanazionale.gov.it/
sisr.nsf/wp-content/uploads/2014/02/italian-national-strategic-framework-for-
cyberspace-security.pdf>.
Radianti, Jaziar and Jose. J. Gonzalez, ‘Understanding Hidden Information Security
Threats: The Vulnerability Black Market’, Proceedings of the 40th Hawaii
International Conference on System Sciences, Hawaii, 2007.
Ramos, Terry, ‘The Laws of Vulnerabilities,’ RSA Conference, February 2006.
Random House Webster’s Unabridged Dictionary (online), ‘transitory,’< http://diction
ary.reference.com/browse/transitory>.
Ransbotham, Sam, Sabyasachi Mitra, and Jon Ramsey, ‘Are Markets for Vulnerabilities
Effective?’, ICIS 2008, <http://aisel.aisnet.org/cgi/viewcontent.cgi?article=
1192&context=icis2008>.
Rid, Thomas, ‘Cyber War Will Not Take Place’, Journal of Strategic Studies 35/1 (2012),
5–32. doi:10.1080/01402390.2011.608939
RSA, ‘Cybersecurity Poverty Index‘, 2015, <https://www.emc.com/collateral/ebook/
rsa-cybersecurity-poverty-index-ebook.pdf>.
Sanger, David E., Confront and Conceal: Obama’s Secret Wars and Surprising use of
American Power (New York: Crown Publishing 2012).
Sanger, David E., ‘Obama Order Sped Up Wave of Cyberattacks Against Iran’, The New
York Times, June 2012, <http://www.nytimes.com/2012/06/01/world/middleeast/
obama-ordered-wave-of-cyberattacks-against-iran.html?_r=0>.
Schneier, Bruce, ‘Crypto-Gram‘, September 2000, <https://www.schneier.com/crypto
gram/archives/2000/0915.html>.
Schneier, Bruce, ‘How the NSA Attacks Tor/Firefox Users With QUANTUM and
FOXACID’, Schneier on Security, October 2013, <https://www.schneier.com/blog/
archives/2013/10/how_the_nsa_att.html>.
Schneier, Bruce, 'The Witty Worm: A New Chapter in Malware', Computerworld, June 2004, <http://www.computerworld.com/article/2565119/malware-vulnerabilities/the-witty-worm--a-new-chapter-in-malware.html>.
Shachtman, Noah and Peter W. Singer, ‘The Wrong War: The Insistence on Applying
Cold War Metaphors to Cybersecurity Is Misplaced and Counterproductive’,
Brookings Institute, August 2011, <http://www.brookings.edu/research/articles/
2011/08/15-cybersecurity-singer-shachtman>.
Shipley, Greg, 'Painless (well, almost) patch management procedures', Network Computing, 2004, <http://www.networkcomputing.com/showitem.jhtml?docid=1506f1>.
Song, JaeSeung, Cristian Cadar, and Peter Pietzuch, 'SYMBEXNET: Testing Network Protocol Implementations with Symbolic Execution and Rule-Based Specifications', IEEE Transactions on Software Engineering 40/7 (2014), 695–709. doi:10.1109/TSE.2014.2323977
Subrahmanian, V. S., Michael Ovelgönne, Tudor Dumitras, and B. Aditya Prakash, 'Chapter 4: The Global Cyber-Vulnerability Report', in V.S. Subrahmanian, Michael Ovelgönne, Tudor Dumitras, and B. Aditya Prakash (eds.), Terrorism, Security and Computation (New York: Springer 2015).
Sweeting, Andrew, ‘Equilibrium Price Dynamics in Perishable Goods Markets: The
Case of Secondary Markets for Major League Baseball Tickets‘, NBER, Working
Paper 14505, (2008).
The Grugq, ‘Twitter’, 2016, <https://twitter.com/thegrugq>.
Tsipenyuk, Katrina, Brian Chess, and Gary McGraw, ‘Seven pernicious kingdoms: A
taxonomy of software security errors’, IEEE Security and Privacy Magazine 3/6
(2005), 81–84. doi:10.1109/MSP.2005.159
United Nations Institute for Disarmament Research, ‘The Cyber Index: International
Security Trends and Realities‘, United Nations Publications, 2013, <http://www.
unidir.org/files/publications/pdfs/cyber-index-2013-en-463.pdf>.
Verizon, ‘Data Breach Investigations Report‘, 2015, <http://www.verizonenterprise.
com/DBIR/>.
Zetter, Kim, ‘Hacking Team Leak shows How Secretive Zero-Day Exploit Sales Work’,
Wired, (July 2015), <http://www.wired.com/2015/07/hacking-team-leak-shows-
secretive-zero-day-exploit-sales-work/>.