Max Smeets
To cite this article: Max Smeets (2018) A matter of time: On the transitory nature of
cyberweapons, Journal of Strategic Studies, 41:1-2, 6-32, DOI: 10.1080/01402390.2017.1288107
ABSTRACT
This article examines the transitory nature of cyberweapons. Shedding light on this highly understudied facet is important both for grasping how cyberspace affects international security and for policymakers' efforts to make accurate decisions regarding the deployment of cyberweapons. First, laying out the life cycle of a cyberweapon, I argue that these offensive capabilities differ both in 'degree' and in 'kind' from other weapons with regard to their temporary ability to cause harm or damage. Second, I develop six propositions which indicate that not only technical features inherent to the different types of cyber capabilities – that is, the type of exploited vulnerability, access and payload – but also offender and defender characteristics explain differences in transitoriness between cyberweapons. Finally, drawing out the implications, I reveal that the transitory nature of cyberweapons benefits great powers, changes the incentive structure for offensive cyber cooperation and induces a different funding structure for (military) cyber programs compared with conventional weapon programs. I also note that the time-dependent dynamic underlying cyberweapons potentially explains the limited deployment of cyberweapons compared to espionage capabilities.
The impressive growth of cyberspace has brought a new type of weapon: the
cyberweapon. A cyberweapon is a capability designed to access a computer system or network to damage or harm living or material entities.1 Whereas conventional weapons are generally characterised by their 'multiple-use-ability' or 'permanent' nature, cyberweapons are unique in that they are 'transitory' in nature; that is, they have a short-lived or temporary ability to effectively cause harm or damage to living or material entities.2 The nuclear bombs developed at the height of the Cold War could still wipe out cities in one blow.3 The Kalashnikovs mass-produced in the early 1950s could still kill people. Even some of the earliest weapons used in conflict – including the socket axe, the chariot, the spear and the sickle-sword – could be lethal today. In contrast, the cyberweapons produced today are unlikely to have any impact in a few years' time – or even less.
Although this dimension of cyberweapons has been overlooked for a long time, there is a growing awareness of its consequences for international security. Recent efforts have focused on how the transitory nature of cyberweapons affects the incentive structure for deployment.4 Some scholars have also paid attention to how the transitory nature of cyberweapons changes the incentives for investing in these capabilities.5
Current research, however, fails to clarify what influences the temporary ability of cyberweapons to cause harm or damage. The central objective of this research is therefore to move towards a better-considered understanding of the issue. I aim to address the question: in what sense are cyberweapons transitory?
The article has three main motivations. First, I aim to enhance the conceptual clarity of the cyber studies field. Just as mutual and shared understandings of values are considered to be the essential building blocks of any society, so mutual and shared understandings of concepts are considered to be the foundation of any academic discipline. Unpacking the concept of transitoriness therefore involves more than mere logomachy: it permits more effective knowledge accumulation and facilitates the security dialogue between and across the academic communities undertaking cyber research, establishing a common ground for discussion between those with disparate views.6 Second, scholars who have aimed to understand the
implications of the cyber danger for international society have repeatedly focused on certain attributes of cyberspace as their starting point of analysis. Topics discussed ad nauseam include the notion that cyberspace has radically increased the speed, volume and range of communications of both state and non-state actors, or that it obscures the identity and location of actors, causing a problem of attribution. Although these works
4
According to Libicki, the transitory nature of cyberweapons leads to less trigger-happy actors: 'like surprise, it is best saved for when it is most needed'. Krepinevich directly opposes this view, arguing that it creates a 'use-it-or-lose-it' dynamic and might encourage a cyber power to launch an attack before its advantage is lost. Axelrod and Iliev reconcile the contrasting views and argue that the degree to which a cyberweapon incentivises a 'use-it-or-lose-it' dynamic or a 'waiting-for-the-right-moment' dynamic depends on the type of capability and whether the stakes remain constant. See:
Martin C. Libicki, Conquest in Cyberspace: National Security and Information Warfare (Cambridge:
Cambridge University Press 2007), 87; Andrew Krepinevich, ‘Cyber Warfare: a “nuclear option”?’,
Center for Strategic and Budgetary Assessments, 2012, <http://www.csbaonline.org/wp-content/
uploads/2012/08/CSBA_Cyber_Warfare_For_Web_1.pdf>; Robert Axelrod and Rumen Iliev, ‘Timing
of cyber conflict’, PNAS, 111/4 (2014), 1298–1303.
5
According to Gartzke, it reduces the incentives to invest in 'cyberwar assets'. Erik Gartzke, 'The Myth of Cyberwar', International Security, 38/2 (2013), 41–73, 59–60. Also see: James A. Lewis,
‘Conflict and Negotiation in Cyberspace’, The Technology and Public Policy Program, 2013, <http://
csis.org/files/publication/130208_Lewis_ConflictCyberspace_Web.pdf>.
6
For a similar point see: David A. Baldwin, ‘The Concept of Security’, Review of International Studies 23
(1997), 5–26.
8 M. SMEETS
7
In statistics, this would be called omitted-variable bias.
8
The work of the National Academy of Sciences is a good example of this trend. It states that cyberweapons have three characteristics that differentiate them from traditional kinetic weapons. First, 'they are easy to use with high degrees of anonymity and with plausible deniability, making them well suited for covert operations and for instigating conflict between other parties'. Second, they 'are more uncertain in the outcomes they produce, making it difficult to estimate deliberate and collateral damage'. And, third, they 'involve a much larger range of options and possible outcomes, and may operate on time scales ranging from tenths of a second to years, and at spatial scales anywhere from "concentrated in a facility next door" to globally dispersed'. The study leaves out any discussion of the
notion of transitoriness. See: William A. Owens, Kenneth W. Dam and Herbert S. Lin (eds.), ‘Excerpts
from Technology, Policy, Law and Ethics Regarding U.S. Acquisition and Use of Cyberattack
Capabilities’, National Research Council, 2009; S-1, Section 1.4 and 2.1.
THE JOURNAL OF STRATEGIC STUDIES 9
uploads the patch. The fact that advances in the cyber defence of one (vendor)
can be relatively effortlessly adopted by others creates a defence for all.
Part II considers the follow-up question: why is there a difference in transitoriness between cyberweapons? The short-lived ability of cyberweapons to cause harm or damage is influenced by a number of technical properties. First, this research finds that cyberweapons exploiting software vulnerabilities are more transitory than capabilities exploiting hardware and network vulnerabilities. Second, I aver that cyberweapons exploiting closed-access systems are more likely to be transitory than cyberweapons exploiting open-access systems. Third, I aver that a cyberweapon causing a high level of visible harm and/or damage is more likely to be transitory. Yet, I find that the transitoriness
of cyberweapons has a political dimension as well. This research finds that more
capable offensive actors are able to significantly reduce the short-lived nature
of cyberweapons. Certain offensive actors have a wider variety of zero-day
exploits at their disposal, enabling more targeted attacks and thus reducing the
chances of discovery. There are also asymmetries with respect to the ability to
test and retest cyberweapons before actual deployment. The time-consuming process of developing cyberweapons leads to constant attempts to reuse computer code designed to exploit zero-day vulnerabilities, even though reuse significantly decreases the chances of successful penetration of the targeted system – particularly when a patch has already been made available. Offensive actors also have to make trade-offs in the deployment of a cyberweapon. The principal consideration concerns the number of targets, demanding a balancing act between potential short-term gains and long(er)-term effectiveness. Finally, cyber defence, inherently unable to escape the law of diminishing marginal returns, also affects the transitoriness of a deployed cyberweapon, as it can cause delays in the discovery, disclosure and patching of a vulnerability.
Part III concludes and draws out the implications of thinking about
cyberweapons in the way proposed in this article. It reveals that the transi-
tory nature of cyberweapons benefits great powers, changes the incentive
structure for offensive cyber cooperation and induces a different funding structure for cyber programs (compared with conventional weapon
programs). It also provides a potential reason for the limited deployment of
cyberweapons compared to espionage tools.
9
Lewis, ‘Conflict and Negotiation in Cyberspace’; Krepinevich, ‘Cyber Warfare’.
10
Leyla Bilge and Tudor Dumitras, ‘Before we knew it: an empirical study of zero-day attacks in the real world’, CCS’12, Oct. 2012; Ratinder Kaur and Maninder Singh, ‘A Survey on Zero-Day Polymorphic
Worm Detection Techniques’, IEEE Communications surveys & tutorials 16/3 (2014).
11
A more detailed discussion on this issue will be provided below.
12
Gartzke, ‘The Myth of Cyberwar’
13
Andrew Sweeting, ‘Equilibrium Price Dynamics in Perishable Goods Markets: The Case of Secondary
Markets for Major League Baseball Tickets’, NBER, Working Paper 14505, (2008)
14
As a former U.S. executive at a defence contractor said to a reporter from Reuters: ‘My job was to
have 25 zero-days on a USB stick, ready to go’. See: Joseph Menn, ‘Special Report: U.S. Cyberwar Strategy Stokes Fear of Blowback’, Reuters, May 2013, <http://www.reuters.com/article/2013/05/10/us-usa-
cyberweapons-specialreport-idUSBRE9490EL20130510>.
15
Axelrod and Iliev, ‘Timing of Cyber Conflict’; It follows the model developed in: Robert Axelrod, ‘The
Rational Timing of Surprise’, World Politics 31/2 (1979), 228–246.
16
Collins English Dictionary (online), ‘transitory’, <http://www.collinsdictionary.com/dictionary/
English>.
lived or temporary’.17 In line with its earlier meaning, it can be said that the term in this context underlines the specific period of transition in which an adversary can successfully get through the defences of a target. The
transitoriness of a weapon refers to the short-lived or temporary ability to
effectively cause harm or damage. Hence, in relation to cyberweapons it
refers to the temporary ability to access a computer system or network to
cause harm or damage to living and material entities.
It seems there are grounds to question the extent to which the transitoriness
of cyberweapons is actually a novel phenomenon. After all, we do not wage war
anymore with the same weapons used in ancient times. Indeed, the chariots
have been replaced (with many intermediate steps) by tanks, aircraft carriers and
fighter planes. And the bows and arrows have been replaced by highly effective
handguns, squad automatic weapons, rocket launchers and sniper rifles.
This observation on the ‘evolution’ in the use of arms in warfare is potentially deceptive. Most weapons are replaced because a more effective weapon has been developed. Due to technological advancements, the new weapon might be easier to use, more cost-efficient or able to cause more harm or damage.18 Another reason might be that the weapon’s ability to cause harm or damage has declined to an insignificant degree due to new defence mechanisms put in place by the target. Although the latter occurs less frequently, it is this aspect to which the transitoriness of weapons refers.
That said, it seems that the difference between cyberweapons and con-
ventional weapons is mostly one of degree rather than kind as a time-
dependent dynamic seems to underlie every weapon. Indeed, Bill Clinton
remarked in San Francisco in 1999: ‘the whole history of conflict can be seen
in part as the race of defensive measures to catch up with offensive
capabilities. That is what we’re doing in dealing with the computer challenges today [. . .]. It is very important that the American people, without
panic, be serious and deliberate about them, because it is the kind of
challenge that we have faced repeatedly’.19
Theoretically, any weapon can be placed on a spectrum ranging from highly permanent to highly transitory, as the effectiveness of a given tool to cause harm or damage inherently diminishes over time. An example of a highly permanent weapon is the knife – a tool which any member of the Special Forces still
wears today for the challenges of the battlefield. For nuclear weapons, states
have established costly programs to maintain a reliable capability.20 In that sense,
17
Random House Webster’s Unabridged Dictionary (online), ‘transitory’, <http://dictionary.reference.
com/browse/transitory>.
18
John Keegan, A History of Warfare (Random House: London 1994)
19
Philip E. Auerswald, Christian Duttweiler, and John Garofano, Clinton’s Foreign Policy: A Documentary
Record (The Hague: Kluwer Law International 2003), 73.
20
The United States Department of Energy, for example, has set up the ‘Stockpile Stewardship and Management Program’ with the aim of maintaining a reliable stockpile, using various simulations and applications from the scientific community to deal with the particular issue of an aging capability.
cyberweapons are exceptional in that they belong to the most transitory group: their ability to cause harm declines relatively quickly. From this perspective, cyberweapons are unique merely in that defence measures in cyberspace can potentially adapt quickly, rendering the specific weapon ineffective.21 The main question we thus have to address is: what is the main reason cyberweapons are so short-lived?
The basic underlying cause of the rapid offence–defence cycle of cyberweapons is that cyberspace is more malleable than other domains. The term is often considered to be
synonymous with the notion that cyberspace is ‘man-made’, mentioned in
numerous cyber defence strategies and perpetuated by numerous scholars.22
Yet, the two concepts should not be conflated as one is meaningful for our
understanding of cyberspace and the transitoriness of cyberweapons whereas
the other is not. As General Michael Hayden, former director of the National
Security Agency (NSA) and the Central Intelligence Agency (CIA), observes: ‘the
other domains are natural, created by God and this one is the creation of man’.23
The problem with the ‘man-made’ notion is that other domains of warfare are also to some degree produced, formed or made by humans – like tunnels, roads and train tracks in the domain of land – and cyberspace has natural components too – like its heavy reliance on electromagnetic waves.24 Instead, what is important to stress is that the man-made constructions in other domains are more difficult for their owners to change (to enhance defence systems) compared to cyberspace.25 Indeed, at least technically, cyberspace can more easily be changed
to reduce the effects of a certain cyberweapon. Hence, as Libicki indicates, ‘the
task in defending the network is [therefore] not so much to manoeuvre better or
apply more firepower in cyberspace but to change the particular features of one’s
own portion of cyberspace itself so that it is less tolerant of attack’.26
21
The literature on International Relations and military history contains numerous references to the
offensive or defensive balance of military technology. Yet, these discussions have focused on the
degree to which the offence has an advantage over the defence – and the strategic implications of it.
Scholars rarely focus on how specific defence measures catch up on offensive measures. For an
overview see: Jack S. Levy, ‘The Offensive/Defensive Balance of Military Technology: A Theoretical and
Historical Analysis’, International Studies Quarterly 28 (1984), 219–238.
22
According to Shachtman and Singer, ‘[c]yberspace is a man-made domain of technological commerce
and communication, not a geographical chessboard of competing alliances’. Noah Shachtman and
Peter W. Singer, ‘The Wrong War: The Insistence on Applying Cold War Metaphors to Cybersecurity Is
Misplaced and Counterproductive’, Brookings Institute, Aug. 2011, <http://www.brookings.edu/
research/articles/2011/08/15-cybersecurity-singer-shachtman>. For cyber strategies, see for example:
Presidency of the Council of Ministers Italy, ‘National Strategic Framework for the Security of
Cyberspace’, Dec. 2013, <http://www.sicurezzanazionale.gov.it/sisr.nsf/wp-content/uploads/2014/02/
italian-national-strategic-framework-for-cyberspace-security.pdf>.
23
Michael V. Hayden, ‘The Future of Things Cyber’, Strategic Studies Quarterly 5/1 (2011)
24
See Dorothy E. Denning, ‘Rethinking the Cyber Domain and Deterrence’, JFQ 77 (2015).
25
As Robert Bartlett states, ‘[a]nyone who understands how to read and write code is capable of
rewriting the instructions that define the possible’. Robert Bartlett, ‘Developments in the Law-The
Law of Cyberspace’, Harvard Law Review 112/1574 (1999), 1635.
26
Martin C. Libicki, ‘Cyberspace Is Not a Warfighting Domain’, A Journal of Law and Policy for the
Information Society 8/2 (2012), 326; As the discussion below on the window of vulnerability indicates,
these ‘corrections’ can take place both before and after a cyberattack.
27
Bruce Schneier, ‘Crypto-Gram’, Sep. 2000, <https://www.schneier.com/crypto-gram/archives/2000/
0915.html>.
28
Others have referred to the ‘window of exposure’ as the ‘lifecycle of a vulnerability’. I use these terms interchangeably (including the term ‘the life cycle of a cyberweapon’s effectiveness’ as well). Stefan
Frei, Bernhard Tellenbach, and Bernhard Plattner, ‘0-Day Patch: Exposing Vendors (In)security
Performance’, BlackHat Europe, 2008, <https://www.blackhat.com/presentations/bh-europe-08/Frei/
Whitepaper/bh-eu-08-frei-WP.pdf>; Adrian Pauna and Konstantinos Moulinos, ‘Window of exposure. .
. a real problem for SCADA systems? Recommendations for Europe on SCADA patching’, European
Union Agency for Network and Information Security Publication, Dec. 2013.
29
Yet, some ordering is determined. The introduction of the vulnerability always precedes (or is equal to)
the time of exploitation of it. And the release of a patch can only occur after the vendor has become
aware of the vulnerability. For a more detailed discussion see the recent study of: Antonio Nappa,
Richard Johnson, Leyla Bilge, Juan Caballero, Tudor Dumitras, ‘The Attack of the Clones: A Study of
the Impact of Shared Code on Vulnerability Patching’, IEEE Symposium on Security and Privacy, 2015.
30
Robert F. Dacey, ‘Information security progress made, but challenges remain to protect federal
systems and the nation’s critical infrastructures’, Government Accountability Office, 2003, <http://
world.std.com/~goldberg/daceysecurity.pdf>.
31
Sam Ransbotham, Sabyasachi Mitra, and Jon Ramsey, ‘Are Markets for Vulnerabilities Effective?’ ICIS,
2008, <http://aisel.aisnet.org/cgi/viewcontent.cgi?article=1192&context=icis2008>.
32
Frei, Tellenbach, and Plattner note that the disclosure of a vulnerability has been defined in a number
of ways, ranging from ‘made public to wider audience’, ‘made public through forums or by vendor’,
‘made public by anyone before vendor releases a patch’. See Frei, Tellenbach, and Plattner. ‘0-Day
Patch’.
33
Ashish Arora, Ramayya Krishnan, Anand Nandkumar, Rahul Telang and Yubao Yang, ‘Impact of
Vulnerability Disclosure and Patch Availability – An Empirical Analysis’, Workshop on the
Economics of Information Security, 2004; Hasan Cavusoglu, Huseyin Cavusoglu, and Srinivasan
Raghunathan, ‘Efficiency of Vulnerability Disclosure Mechanisms to Disseminate Vulnerability
Knowledge’, IEEE Transactions on Software Engineering 33/3 (2007),171–185.
34
Frei, Tellenbach, and Plattner, ‘0-Day Patch’.
35
Patches must be tested before being applied to the production environment to make sure that they work properly and do not conflict with other applications in the system. Hasan Cavusoglu,
Huseyin Cavusoglu, and Jun Zhang, ‘Security Patch Management: Share the Burden or Share the
Damage?’ Management Science 54/4 (2008), 657–670.
36
Steve Beattie, Seth Arnold, Crispin Cowan, Perry Wagle, and Chris Wright, ‘Timing the Application of
Security Patches for Optimal Uptime’, LISA XVI, Nov. 2002; Also see: Ross Anderson and Tyler Moore,
‘The Economics of Information Security’, Science 314/5799 (2006), 610–613.
37
Greg Shipley, ‘Painless (well, almost) patch management procedures’, Network Computing 2004, <http://www.networkcomputing.com/showitem.jhtml?docid=1506f1>.
38
In an average week, vendors and security organisations announce around 150 vulnerabilities along
with information on how to fix them. Cavusoglu, Cavusoglu, and Zhang, ‘Security Patch
Management’.
39
Gary Armstrong, Stewart Adam, Sara Denize, Philip Kotler, Principles of Marketing, (Pearson:
Melbourne 2015).
40
The worm was also special in that it was the first time a worm was released in the wild through a bot network of about 100 infected machines. It meant that every available host was very quickly infected. Bruce Schneier, ‘The Witty Worm: A New Chapter in Malware’, Computer World, Jun. 2004, <http://
www.computerworld.com/article/2565119/malware-vulnerabilities/the-witty-worm–a-new-chapter-
in-malware.html>.
41
Most research assumes a linear model for the cyberweapon life cycle, which is critiqued in more
recent scholarship. As the goal of this article is not to estimate the exploitation of a vulnerability or
the average deployment of a patch, I do not make any unnecessary assumptions about this. William
A. Arbaugh, William L. Fithen, and John McHugh, ‘Windows of vulnerability: A case study analysis’,
IEEE Computer 33/12 (2000); Hamed Okhravi and David Nicol, ‘Evaluation of patch management
strategies’, International Journal of Computational Intelligence: Theory and Practice 3/2(2008), 109–117;
Terry Ramos, ‘The Laws of Vulnerabilities’, RSA Conference, Feb. 2006.
42
Bilge and Dumitras, ‘Before We Knew It’.
43
The three events are not collapsed if the chipmaker itself has introduced the backdoor. Although still unconfirmed, this was likely the case with a backdoor in a computer chip used in military systems and
aircraft, discovered by two experts from Cambridge University. See: Charles Arthur, ‘Cyberattack
concerns raised over Boeing 787 chip’s “back door”’, The Guardian, May 2012, <http://www.theguar
dian.com/technology/2012/may/29/cyber-attack-concerns-boeing-chip>.
44
Package management systems can offer various degrees of patch automation. Completely automatic updates are still rife with problems and so are not widely adopted. Cavusoglu, Cavusoglu and Zhang,
‘Security Patch Management’.
45
For a more detailed discussion on the degree to which the time between these different effects
determines the security risk exposure, see: Frei, Tellenbach, and Plattner, ‘0-Day Patch’.
46
Hence, I would argue that a better way to model the transitory nature of a cyberweapon is one similar to the Black–Scholes model of options pricing in finance. A cyberweapon is analogous to what is called an ‘American option’. The value of usage could be modelled as a Brownian motion with random crashes representing the use of the weapon by others.
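As a minimal numerical illustration of this analogy, the value of an unused cyberweapon could be simulated as a geometric Brownian motion subject to random ‘crashes’ when another actor deploys the same exploit. This sketch is my own, not the author’s model, and every parameter value (drift, volatility, crash rate and severity) is hypothetical.

```python
import math
import random

def simulate_weapon_value(v0=1.0, drift=0.0, vol=0.2,
                          crash_rate=0.5, crash_loss=0.8,
                          steps=250, dt=1 / 250, seed=42):
    """Simulate the usable value of a cyberweapon over one notional year.

    The value follows a geometric Brownian motion. With Poisson-like
    arrivals (probability crash_rate * dt per step), a 'crash' occurs:
    another actor uses the same exploit, a patch follows, and the value
    drops by the fraction crash_loss. All parameters are illustrative.
    """
    rng = random.Random(seed)
    v = v0
    path = [v]
    for _ in range(steps):
        # Brownian-motion step for the value of usage.
        v *= math.exp((drift - 0.5 * vol ** 2) * dt
                      + vol * math.sqrt(dt) * rng.gauss(0, 1))
        # Random crash: the exploit is burned by a third party.
        if rng.random() < crash_rate * dt:
            v *= (1 - crash_loss)
        path.append(v)
    return path

path = simulate_weapon_value()
print(f"value after one year: {path[-1]:.3f}")
```

As with an American option, the holder’s problem is choosing when to ‘exercise’ (deploy) before a crash erodes the remaining value; the simulation merely makes that time-dependence concrete.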
47
John H. Cochrane, ‘Permanent and Transitory Components of GNP and Stock Prices’, The Quarterly
Journal of Economics 109/1 (1994), 241–265.
48
John Y. Campbell, N. Gregory Mankiw, ‘Permanent and Transitory Components in Macroeconomic
Fluctuations’, NBER, 2169 (1987).
49
Christa Frei and Alfonso Sousa-Poza, ‘Overqualification: permanent or transitory?’, Applied Economics
44/14 (2012).
50
In developing these propositions I follow Lin in asserting that a successful cyberattack requires three elements: (a) a vulnerability, (b) access to the vulnerability (i.e. access vector) and (c) a payload to be
executed (i.e. malicious code). Herr’s application has usefully clarified that the three conditions for
cyberattacks can be successfully translated into ‘cyberweapons’, using a variety of examples. Herbert
S. Lin, ‘Escalation Dynamics and Conflict Termination in Cyberspace’, Strategic Studies Quarterly 6/3
(2012), 46–70; Herbert S. Lin, ‘Offensive Cyber Operations and the Use of Force’, Journal of National
Security Law and Policy 4/63(2010), 63–86; Owens, Dam, and Lin, ‘Technology, Policy, Law, and Ethics
Regarding U.S. Acquisition and Use of Cyberattack Capabilities’; See: Trey Herr, ‘PrEP: A Framework
for Malware & Cyber Weapons’, Cyber Security and Research Institute, (2014), 2.
51
Geoffrey L. Herrera, Technology and International Transformation: The Railroad, the Atom Bomb,and the
Politics of Technological Change (Albany: State University of New York Press 2006).
52
Julian Jang-Jaccard and Surya Nepal, ‘A Survey of Emerging Threats in Cybersecurity’, Journal of
Computer and System Sciences 80/5 (2014), 973–993.
53
Ramesh Karri, Jeyavijayan Rajendran, Kurt Rosenfeld, Mark Tehranipoor, ‘Trustworthy hardware:
Identifying and classifying hardware Trojans’, Computer 43/10 (2010), 39–46; Also, as the authors
indicate, IT companies often buy untrusted hardware from websites or resellers which may also
contain malicious hardware-based Trojans.
54
Definition adopted from: Jaziar Radianti, and Jose. J. Gonzalez, ‘Understanding Hidden Information
Security Threats: The Vulnerability Black Market’, Proceedings of the 40th Hawaii International
Conference on System Sciences, 2007.
55
The most common software vulnerabilities result from software bugs in (i) the memory, (ii) user input validation, (iii) race conditions and (iv) user access privileges. See: Katrina
Tsipenyuk, Brian Chess, and Gary McGraw, ‘Seven pernicious kingdoms: A taxonomy of software
security errors’, Security and Privacy 3/6 (2005), 81–84.
56
See JaeSeung Song, Cristian Cadar, and Peter Pietzuch, ‘SYMBEXNET: Testing Network Protocol
Implementations with Symbolic Execution and Rule-Based Specifications’, IEEE Transactions on
Software Engineering 40/7 (2013), 695–709.
57
Florida Center for Instructional Technology, ‘Chapter 2: What is a Protocol?’, 2013, <http://fcit.usf.
edu/network/chap2/chap2.htm>.
58
Jang-Jaccard and Nepal, ‘A survey of emerging threats in cybersecurity’.
inspect everything’.59 The fact that network protocols are becoming increasingly
complex raises similar issues. Song, Cadar and Pietzuch note that ‘[t]he complex-
ity of network protocols makes errors difficult to detect, even for well-studied and
mature protocols: errors may only manifest themselves after complex sequences
of network packets. For example, DNS server implementations that are vulner-
able to cache poisoning attacks only exhibit problems in specific scenarios’.60
On the patching and adoption delay: not only are hardware vulnerabilities more difficult to detect, they are also more difficult to patch, other than by replacing the hardware. Indeed, hardware generally cannot be updated after deployment, short of wholesale replacement, whereas software can be updated by uploading new code – often even remotely.61 It also often takes a considerable amount of time for a network vulnerability to close. For example, when Steve Bellovin, then working for AT&T Bell Laboratories, found a number of important security flaws in the DNS system, he delayed the publication of the vulnerability for a number of years until a fix was available.62 Also, the way network protocols are set up, requiring confirmation by both sender and receiver, means that closing a vulnerability is often not a prompt process.
A fourth vulnerability, not mentioned by Jang-Jaccard and Nepal, is at least as
important: the human vulnerability. The notion that the person holding the
information is generally the weakest link in any computer system has been
aptly described in various hacking accounts. Kevin Mitnick describes social
engineering as a ‘craft’ using a mix of deception, influence and persuasion.63
The effectiveness of spear phishing is often astonishing, even for penetrating some of the most resilient computer systems.64 Little data are, however, available on human susceptibility to cyberattacks, and on how people gain awareness and learn, to further substantiate this claim.65
59
Adee, ‘The Hunt for the Kill Switch’; It means that when chips are tested, the focus is on how well they perform the functions they are designed for. It is impossible to check for the infinite possible issues that are not specified. It is also an incredibly laborious process to test every chip.
60
Song, Cadar, Pietzuch, ‘SYMBEXNET: Testing Network Protocol Implementations with Symbolic
Execution and Rule-Based Specifications’.
61
Gedare Bloom, Eugen Leontie, Bhagirath Narahari, Rahul Simha, ‘Chapter 12: Hardware and Security:
Vulnerabilities and Solutions’, in Sajal K. Das, Krishna Kant and Nan Zhang (eds.), Handbook on
Securing Cyber-Physical Critical Infrastructure (Waltham: Morgan Kaufmann 2012).
62
Schneier, ‘Crypto-Gram’; The Economist, ‘It’s about time: Escalating cyber-attacks’, Feb. 2014, <http://
www.economist.com/blogs/babbage/2014/02/escalating-cyber-attacks>.
63
Kevin Mitnick, The Art of Deception (Hoboken: John Wiley & Sons 2002), paraphrased from: introduction; also see: Kevin Mitnick and William L. Simon, The Art of Intrusion: The Real Stories Behind the Exploits of Hackers, Intruders, & Deceivers (Hoboken: John Wiley & Sons 2005).
64
As ‘the Grugq’, a well-known security researcher/hacker, writes: ‘Give a man an 0day and he’ll have
access for a day, teach a man to phish and he’ll have access for life’. The Grugq, ‘Twitter’, 2016,
<https://twitter.com/thegrugq>.
65
For an interesting recent analysis aiming to establish a rigorous data-driven approach see: V. S.
Subrahmanian, Michael Ovelgönne, Tudor Dumitras, B. Aditya Prakash, ‘Chapter 4, The Global Cyber-
Vulnerability Report’, in V.S. Subrahmanian, Michael Ovelgonne, Tudor Dumitras, B. Aditya Prakash
(eds.), Terrorism, Security and Computation (Springer: New York City 2015).
66
Indeed, public websites are considered to be the low-hanging fruit as they generally run on generic
server software and are connected to the Internet; even relatively unskilled individuals can launch a
website defacement attack. See: Symantec Corporation, ‘Internet Security Threat Report 2014’, 2014,
<http://www.symantec.com/content/en/us/enterprise/other_resources/b-istr_main_report_v19_
21291018.en-us.pdf>.
67
Lucas Kello, ‘Cyber Disorders: Rivalry and Conflict in a Global Information Age’, Presentation,
International Security Program Seminar Series, Belfer Center for Science and International Affairs,
Harvard Kennedy School, May 2012, <http://belfercenter.hks.harvard.edu/files/kello-isp-cyber-disor
ders.pdf>.
68
Owens, Dam, Lin, ‘Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of
Cyberattack Capabilities’.
69
Ibid.
70
Ibid.
THE JOURNAL OF STRATEGIC STUDIES 21
71
After all, the indirect path might lead defending actors to mistake a kinetic attack for a cyberattack, or
accidental for purposeful harm. Part of the reason why the cyber revolution is a bone of contention is the
indirect path by which a cyberweapon potentially causes harm or damage. As Rid writes, ‘the actual use
of cyber force is to be a far more complex and mediated sequence of causes and consequences that
ultimately result in violence and casualties’. In those Cassandra-esque scenarios in which a cyberweapon
inflicts a lot of material damage or people suffer serious injuries or are killed, ‘the causal chain that links
somebody pushing a button to somebody else being hurt is mediated, delayed and permeated by chance
and friction’. Thomas Rid, ‘Cyber War Will Not Take Place’, Journal of Strategic Studies 35/1 (2012), 5–32, 9.
72
Kim Zetter, ‘Hacking Team Leak shows How Secretive Zero-Day Exploit Sales Work’, Wired (Jul. 2015),
<http://www.wired.com/2015/07/hacking-team-leak-shows-secretive-zero-day-exploit-sales-work/>;
Andy Greenberg, ‘Shopping for Zero-Days: A Price List For Hackers’ Secret Software Exploits’, Forbes
Magazine, Mar. 2012, <http://www.forbes.com/sites/andygreenberg/2012/03/23/shopping-for-zero-
days-an-price-list-for-hackers-secret-software-exploits/>.
buyer in a burgeoning gray market where hackers and security firms sell tools for
breaking into computers’.73
There are also asymmetries with respect to the ability to test and retest
cyberweapons before actual deployment. Stuxnet again provides a good case
in point, given the resources and effort poured into the development of this
capacity. The planning of the cyberweapon started during George W. Bush's second
term, and it was eventually developed in close collaboration between the NSA and
Israel's secret Unit 8200.74 The complexity of the worm meant that thorough
testing was required to see whether it could do what it was intended to
do. The United States therefore had to produce its own P-1s, perfect replicas of
the centrifuge variant used by the Iranians at Natanz. According to Sanger, small-
scale tests were at first conducted on borrowed centrifuges stored at the Oak Ridge
National Laboratory in Tennessee, which had been taken from Muammar
Qaddafi in late 2003 when he gave up his program.75 The tests grew in size and
sophistication, with parts obtained from various small factories around the world. As
David Sanger reports, at some point the United States was ‘even testing the
malware against mock-ups of the next generation of centrifuges the Iranians
were expected to deploy, called IR-2s, and successor models, including some the
Iranians still are struggling to construct’.76
The time-consuming process of developing cyberweapons leads to constant
attempts to reuse computer code designed to exploit zero-day vulnerabilities,
even though reuse significantly decreases the chances of successfully penetrating
the targeted system. Cyber commands make a constant trade-off between the
skills and resources required to develop new computer code and the odds of
successfully penetrating targeted systems. In that sense, great powers – with
dedicated cyber organisations and a high number of personnel, both military
and civilian – have a clear advantage: the need to ‘reuse’ old vulnerabilities –
found at the later end of the window-of-exposure spectrum – is less urgent. For
small and middle powers, it is difficult to keep an assembly line of cyberweapon
production running like clockwork, ensuring a cycle in which, when one cyber-
weapon becomes ineffective, the next can be put to use if necessary.
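This trade-off can be illustrated with a back-of-the-envelope expected-value comparison. The sketch below is not from the article; the function and all numbers (success probabilities, payoffs, costs) are hypothetical, chosen only to show how a cheap, partially exposed exploit can still look attractive to a resource-constrained actor.

```python
# Illustrative sketch (not from the article): the reuse trade-off as a
# simple expected-value comparison. All numbers are hypothetical.

def expected_value(p_success: float, payoff: float, cost: float) -> float:
    """Expected net gain of deploying a capability."""
    return p_success * payoff - cost

# A fresh zero-day: high development cost, high odds of penetration.
new_exploit = expected_value(p_success=0.9, payoff=100.0, cost=60.0)

# A reused, partially exposed exploit: cheap, but patching has cut the odds.
reused_exploit = expected_value(p_success=0.3, payoff=100.0, cost=5.0)

# A well-resourced actor can afford the first option; a smaller power may be
# pushed toward the second, accepting lower odds of penetration.
print(f"new: {new_exploit:.1f}, reused: {reused_exploit:.1f}")
```

On these invented numbers the fresh zero-day wins, but only narrowly; shave the development budget and the ranking flips, which is the article's point about small and middle powers.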
Finally, attackers go to great pains to integrate various evasion and persis-
tence techniques into their cyber capacities to stretch the discovery delay
period.77 This feature is particularly prominent in cyberespionage and
73
Joseph Menn, ‘Special Report: U.S. cyberwar strategy stokes fear of blowback’, Reuters, May 2013,
<http://www.reuters.com/article/us-usa-cyberweapons-specialreport-idUSBRE9490EL20130510>.
74
David E. Sanger, ‘Obama Order Sped Up Wave of Cyberattacks Against Iran’, The New York Times, Jun.
2012, <http://www.nytimes.com/2012/06/01/world/middleeast/obama-ordered-wave-of-cyberattacks-against-iran.html?_r=0>.
75
David E. Sanger, Confront and Conceal: Obama’s Secret Wars and Surprising use of American Power
(New York: Crown Publishing 2012), 197.
76
Ibid., 198.
77
One can, for example, think of polymorphic malware, binary archives or domain generation algorithm
techniques. For a more detailed discussion, see: Fortinet, ‘Head-First into the Sandbox’, 2014, <https://
www.fortinet.com/sites/default/files/whitepapers/Head_First_into_the_Sandbox.pdf>.
surveillance capacities, given the purpose of those tools – see, for example,
FinSpy (2011), Blue Termite (2013) and BlackEnergy (2013). Although not much
is known about it, the spyware Turla is also an interesting case in this respect.
The malware was discovered in 2014 and is still active today, using satellite
internet connections to hide its command-and-control servers.
78
See, for example, Animal Farm, which targeted around 3001–5000 systems. Kaspersky Lab’s Global Research
& Analysis Team, ‘Animals in the APT Farm’, Securelist, Mar. 2015, <https://securelist.com/blog/
research/69114/animals-in-the-apt-farm/>.
79
For sparingly used espionage capacities see CozyDuke, Wild Neutron, miniFlame, Regin, and SabPub.
80
Dan Goodin, ‘How “omnipotent” hackers tied to NSA hid for 14 years – and were found at last’, Ars
Technica, (16 Feb. 2015), <http://arstechnica.com/security/2015/02/how-omnipotent-hackers-tied-to-
the-nsa-hid-for-14-years-and-were-found-at-last/>; Kaspersky Lab’s Global Research & Analysis Team,
‘Houston, we have a problem’, SecureList, Feb. 2015, <https://securelist.com/blog/research/68750/
equation-the-death-star-of-malware-galaxy>.
81
David Gilbert, ‘Equation Group: Meet the NSA “gods of cyber espionage”’, International Business
Times, Feb. 2015, <http://www.ibtimes.co.uk/equation-group-meet-nsa-gods-cyber-espionage-
1488327>.
82
Kaspersky Lab, ‘Equation Group, Questions and Answers’, Feb. 2015, <https://securelist.com/files/
2015/02/Equation_group_questions_and_answers.pdf>. The exact number of victims is difficult to
establish due to the self-destruct mechanism built into the capability.
The selective way in which Equation used its capability, combined with its technical
dexterity, makes it one of the most persistent cyber resources in existence.
Again, Stuxnet is a good example given its accuracy. Stuxnet searches
for and affects only a particular model of programmable logic controller match-
ing the characteristics of the Natanz nuclear enrichment facility. If a computer
system does not match, Stuxnet removes itself from that machine
after it has replicated itself to other vulnerable computer systems.83 General
Michael Hayden, former director of the NSA and CIA, observes that the attack was
‘incredibly precise’. [. . .] ‘Although it was widely propagated, it was designed to
trigger only in very carefully defined, discreet circumstances’ – not acknowl-
edging that the United States was behind the attack, but stating that it had been
launched by a ‘responsible nation’.84 Clearly, the aggressive approach will close
the window between t_exploit and t_awareness more quickly than the more
subtle application. Also, the vendor might see more urgency in developing a
patch in the first scenario (reducing the time between t_exploit and t_patch). Hence, in
deployment, offensive actors have to trade off potential
short-term gains against long(er)-term effectiveness.85
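The timing notation used above (t_exploit, t_awareness, t_patch) lends itself to a simple worked illustration. The following sketch is mine, not the article's, and the event times are invented; it only shows how an aggressive deployment shortens the window of exposure relative to a subtle one.

```python
# A minimal sketch of the 'window of exposure' timeline. The event times
# (in days since first exploitation) are hypothetical, chosen only to
# illustrate how deployment style shifts the window.

def exposure_window(t_exploit: float, t_awareness: float, t_patch: float) -> dict:
    """Lengths of the intervals separating first exploitation, defender
    awareness, and the availability of a patch."""
    return {
        "exploit_to_awareness": t_awareness - t_exploit,
        "exploit_to_patch": t_patch - t_exploit,
    }

# Aggressive, widely propagated deployment: discovery and patching come fast.
aggressive = exposure_window(t_exploit=0, t_awareness=30, t_patch=45)

# Subtle, narrowly targeted deployment: the same exploit stays usable longer.
subtle = exposure_window(t_exploit=0, t_awareness=400, t_patch=430)

# The aggressive approach closes both intervals far sooner.
assert aggressive["exploit_to_patch"] < subtle["exploit_to_patch"]
```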
Next to the number of targets, the choice of target type inherently
matters as well. Duqu 2.0, an updated version of the infamous 2011 Duqu
malware platform, illustrates this aspect. Duqu 2.0 is a highly sophisticated
strain of malware which exists almost exclusively in the memory of the computer
to avoid detection. The attackers, however, decided to use the capacity to intrude into the
internal network of Kaspersky Lab. The attackers gambled in attacking a world-
class security company. And lost. Duqu 2.0 was discovered by the Lab while
testing a new technology designed to detect advanced persistent threats. As
experts from Kaspersky remarked on the bet: ‘[o]n one hand, it almost surely
means the attack will be exposed – it’s very unlikely that the attack will go
unnoticed. So the targeting of security companies indicates that either they are
very confident they won’t get caught, or perhaps they don’t care much if they
are discovered and exposed’.86
83
Eric Byres, Andrew Ginter, and Joel Langill, ‘How Stuxnet Spreads – A Study of Infection Paths in Best
Practice Systems’, Feb. 2011, <http://www.abterra.ca/papers/how-stuxnet-spreads.pdf>.
84
Ben Flanagan, ‘Former CIA chief speaks out on Iran Stuxnet attack’, The National, Dec. 2011, <http://
www.thenational.ae/business/industry-insights/technology/former-cia-chief-speaks-out-on-iran-stux
net-attack>.
85
A quote from an interview with a hacker illustrates the point. ‘The Grugq’ explains why he has no
contracts with the Russian Government or other Russian actors: ‘Selling a bug to the Russian mafia
guarantees it will be dead in no time, and they pay very little money’. He continues saying that:
‘Russia is flooded with criminals. They monetize exploits in the most brutal and mediocre way
possible, and they cheat each other heavily’. See: Andy Greenberg, ‘Shopping For Zero-Days: A Price
List For Hackers’ Secret Software Exploits’, Mar. 2012, <http://www.forbes.com/sites/andygreenberg/
2012/03/23/shopping-for-zero-days-an-price-list-for-hackers-secret-software-exploits/>.
86
Kaspersky Lab’s Global Research & Analysis Team, ‘The Mystery of Duqu 2.0: a sophisticated
cyberespionage actor returns’, Securelist, Jun. 2015, <https://securelist.com/blog/research/70504/
the-mystery-of-duqu-2-0-a-sophisticated-cyberespionage-actor-returns/>.
There is evidence that the NSA consciously makes these ‘bets’. A leaked top-
secret presentation provided by Snowden has revealed ‘FoxAcid’, the NSA’s code-
name for what it refers to as an ‘exploit orchestrator’. FoxAcid is a system which
matches target computer systems with different types of attack.87 What is
especially remarkable about the system is that it saves the most valuable exploits
for the most important targets. As Schneier observes, ‘Low-value exploits are run
against technically sophisticated targets where the chance of detection is high.
[NSA’s Office of Tailored Access Operations] maintains a library of exploits, each
based on a different vulnerability in a system. Different exploits are authorised
against different targets, depending on the value of the target, the target’s
technical sophistication, the value of the exploit and other considerations’.88
87
Bruce Schneier, ‘How the NSA Attacks Tor/Firefox Users With QUANTUM and FOXACID’, Schneier on
Security, Oct. 2013, <https://www.schneier.com/blog/archives/2013/10/how_the_nsa_att.html>.
88
Ibid.
89
Baldwin, ‘The Concept of Security’, 20.
90
Ibid., 21.
91
Graham T. Allison and Philip Zelikow, Essence of Decision: Explaining the Cuban Missile Crisis (Pearson
Education 1999); James G. March and Herbert A. Simon, Organizations (New York: John Wiley and Sons
1958).
92
In recent years a shift has slowly occurred, however. A report from McKinsey indicates that cybersecurity has
now become much more of a ‘CEO-level issue’. Tucker Bailey, James Kaplan, and Chris Rezek, ‘Why senior
leaders are the front line against cyberattacks’, McKinsey Insights, Jun. 2014, <http://www.mckinsey.com/
insights/business_technology/why_senior_leaders_are_the_front_line_against_cyberattacks>.
Conclusion
This article has revealed that the transitory nature of cyberweapons is a
technical as well as a social product. The technical dimensions of a cyber
capability provoke a certain usage, creating incentives either to use it early
or to play the waiting game. As a product affected by social dynamics, the
characteristics and actions of actors can affect the life cycle of a cyberwea-
pon’s effectiveness to cause harm or damage. The ‘curse of transitoriness’
can be beaten through careful development and deployment, as the
propositions developed in this article indicate.94
If my findings concerning the transitory nature of cyberweapons are
correct, it implies that, in contrast to the view of scholars that cyberspace
empowers weaker actors in the international system, cyberweapons are
actually for the strong. The transitoriness of cyberweapons means that
constant (re)investment is required to develop and sustain an offensive
capability. Also, weak powers have great difficulty ‘beating the curse of
transitoriness’ with fewer resources to test and retest their capabilities.
Finally, when offensive actors have invested significant resources in a
cyberweapon, they are incentivised not to attack highly capable actors,
considering that the chances of exploit discovery are higher.
The nature of cyberweapons’ transitoriness also reduces the incentives
for offensive cyber cooperation. As the likelihood of a cyberweapon’s inef-
fectiveness after use significantly increases, the international sharing of
offensive cyber capabilities is less likely to occur compared to other weapon
systems. Mutual benefits only arise when states are similar in their view on
the (i) timing, (ii) target and (iii) proportionality of the cyberattack. The
paradox of cyberweapons is that, although technically they can be (rela-
tively) effortlessly replicated, their transitoriness changes the incentive
93
See, for example: RSA, ‘Cybersecurity Poverty Index’, 2015, <https://www.emc.com/collateral/ebook/
rsa-cybersecurity-poverty-index-ebook.pdf>; Booz Allen Hamilton and The Economist Intelligence
Unit, ‘Cyber Power Index: Findings and Methodology’, 2011, <http://www.boozallen.com/media/
file/Cyber_Power_Index_Findings_and_Methodology.pdf>; United Nations Institute for
Disarmament Research, ‘The Cyber Index: International Security Trends and Realities’, United
Nations Publications, 2013, <http://www.unidir.org/files/publications/pdfs/cyber-index-2013-en-463.
pdf>.
94
Inherently, what is a ‘curse’ to the offensive actor might be a ‘blessing’ to the defender.
structure of actors and turns weapons into indivisible goods.95 This aspect
also affects the dynamic between allied great and small/middle powers, as it has
become more difficult for less-capable states to offer a specialised contribu-
tion to the larger coalition.
In addition, the unique decay function of cyberweapons, as laid out in
this article, implies that offensive cyber programs potentially require a
different funding set-up compared with conventional weapon programs.
For conventional weapon programs, (government) institutions can come
up with a relatively good cost estimate of what is required to maintain a
certain capability; a typical budget proposal would say ‘in X years’ time, the
following capability needs to be replaced/upgraded. Hence, we project to
spend . . .’ As a cyberweapon’s decay function is characterised by ‘random
crashes’, more flexible budgets (and hiring procedures) are recommended to
cope with potentially prompt fluctuations in overall capability.
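The budgeting contrast can be made concrete with a small simulation. This sketch assumes, purely for illustration, that a cyberweapon's effective lifetime follows an exponential distribution with a notional mean of two years; neither the model nor the numbers come from the article.

```python
# Hypothetical sketch of why 'random crashes' complicate budgeting. A
# conventional capability is replaced on a fixed schedule; here a
# cyberweapon's effective lifetime is modelled as an exponential random
# variable with a notional mean of 2 years. Numbers are illustrative only.
import random

random.seed(42)

MEAN_LIFETIME_YEARS = 2.0
N_RUNS = 10_000

# Simulated lifetimes until the capability 'crashes' (is exposed or patched).
lifetimes = [random.expovariate(1.0 / MEAN_LIFETIME_YEARS) for _ in range(N_RUNS)]

mean_life = sum(lifetimes) / N_RUNS
share_under_one_year = sum(t < 1.0 for t in lifetimes) / N_RUNS

# The mean matches the cycle a planner might budget around, yet a large
# share of capabilities fail well before it - hence the case for flexible
# budgets rather than fixed replacement schedules.
print(f"mean lifetime: {mean_life:.2f} years; "
      f"failed within a year: {share_under_one_year:.0%}")
```

Under this assumed distribution, even though the average lifetime equals the planned two-year cycle, roughly four in ten capabilities would die in their first year, which is the kind of prompt fluctuation a fixed budget cannot absorb.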
Finally, the number of cyber incidents we have witnessed to date is almost
incomprehensible.96 Yet very few of those incidents concern sophisticated
cyberattacks with the aim of causing harm or damage. Indeed, most advanced
persistent threats concern espionage capabilities rather than cyberweapons.
Scholars tend to argue that this is the result of cyberweapons’ limited strategic
usage. Following the propositions developed in this article, however, a different
explanation comes to the fore. Cyberweapons are generally part of a larger collection
of capabilities – sharing vulnerability exploits, propagation techniques and/or
other features. Stuxnet’s ‘father’, for example, is supposed to be the USB worm Fanny,
and Stuxnet has also been linked to the espionage platforms Duqu, Flame, Gauss and Duqu
2.0.97 Using a capability which is likely to be discovered early – that is, a
cyberweapon causing visible harm or damage – runs the risk that other capabil-
ities are soon exposed as well; not least because cybersecurity firms are establish-
ing special detection tools in an attempt to uncover the cluster of capabilities.98 In
other words, costly multi-year cyber programs are susceptible to a low return on
investment in case capabilities are used with a destructive payload. What this also
means is that we can expect states to delink their intelligence capabilities from
their warfare capabilities in the future to minimise losses in capability following
detection.
95
This dynamic might even exist within a state as government institutions, with different organisational
missions, might be developing separate offensive cyber capability programs.
96
According to Verizon’s 2015 Data Breach Investigations Report, more than 317 million new pieces of
malicious software were created last year. That is about ten new pieces of malware each second of
every day. Verizon, ‘Data Breach Investigations Report’, 2015, <http://www.verizonenterprise.com/
DBIR/>.
97
Boldizsár Bencsáth, ‘Duqu, Flame, Gauss: Followers of Stuxnet’, RSA Conference Europe 2012, <http://
www.rsaconference.com/writable/presentations/file_upload/br-208_bencsath.pdf>.
98
In the case of Fanny and Stuxnet see: Kaspersky Lab’s Global Research & Analysis Team, ‘A Fanny
Equation: “I am your father, Stuxnet”’, Securelist, Feb. 2015, <https://securelist.com/blog/research/
68787/a-fanny-equation-i-am-your-father-stuxnet/>.
Acknowledgements
For written comments on early drafts, the author is indebted to Graham Fairclough,
Trey Herr, Lucas Kello, Joseph Nye Jr., Taylor Roberts, James Shires, and an anon-
ymous reviewer. An earlier version of this paper was presented at ISA, Atlanta (2016),
and the IR Colloquium at the University of Oxford (2016).
Disclosure statement
No potential conflict of interest was reported by the author.
Notes on contributor
Max Smeets is a lecturer in Politics at Keble College, University of Oxford, and a D.Phil
Candidate in International Relations at St. John’s College, University of Oxford. He
was previously a visiting research scholar at Columbia University SIPA and Sciences
Po CERI. Max’s current research focuses on the proliferation of cyberweapons. More
at: http://maxsmeets.com
Bibliography
Allison, Graham T. and Philip Zelikow, Essence of Decision: Explaining the Cuban
Missile Crisis (New York: Pearson Education 1999).
Anderson, Ross and Tyler Moore, ‘The Economics of Information Security’, Science
314/5799 (2006), 610–13. doi:10.1126/science.1130992
Arbaugh, William A., William L. Fithen, and John McHugh, ‘Windows of Vulnerability:
A Case Study Analysis’, IEEE Computer 33/12 (2000), 52–58.
Armstrong, Gary, Stewart Adam, Sara Denize, and Philip Kotler, Principles of Marketing
(Melbourne: Pearson 2015).
Arora, Ashish, Ramayya Krishnan, Anand Nandkumar, Rahul Telang, and Yubao Yang,
‘Impact of Vulnerability Disclosure and Patch Availability - An Empirical Analysis’,
Workshop on the Economics of Information Security (Harvard University 2004).
Arthur, Charles, ‘Cyber-attack concerns raised over Boeing 787 chip’s “back door”’,
The Guardian, May 2012, <http://www.theguardian.com/technology/2012/may/29/
cyber-attack-concerns-boeing-chip>.
Auerswald, Philip E., Christian Duttweiler, and John Garofano, Clinton’s Foreign Policy:
A Documentary Record (The Hague: Kluwer Law International 2003).
Axelrod, Robert, ‘The Rational Timing of Surprise’, World Politics 31/2 (1979), 228–46.
doi:10.2307/2009943
Axelrod, Robert and Rumen Iliev, ‘Timing of Cyber Conflict’, Proceedings of the National
Academy of Sciences 111/4 (2014), 1298–303. doi:10.1073/pnas.1322638111
Bailey, Tucker, James Kaplan, and Chris Rezek, ‘Why senior leaders are the front line
against cyberattacks’, McKinsey Insights, June 2014, <http://www.mckinsey.com/
insights/business_technology/why_senior_leaders_are_the_front_line_against_
cyberattacks>.
Baldwin, David A., ‘The Concept of Security’, Review of International Studies 23 (1997),
5–26. doi:10.1017/S0260210597000053
Gartzke, Erik, ‘The Myth of Cyberwar: Bringing War in Cyberspace Back Down to
Earth’, International Security 38/2 (2013), 41–73. doi:10.1162/ISEC_a_00136
Gilbert, David, ‘Equation Group: Meet the NSA “gods of cyber espionage”’,
International Business Times, February 2015, <http://www.ibtimes.co.uk/equation-
group-meet-nsa-gods-cyber-espionage-1488327>.
Goodin, Dan, ‘How “omnipotent” hackers tied to NSA hid for 14 years—and were
found at last’, Ars Technica, (February 16, 2015), <http://arstechnica.com/security/
2015/02/how-omnipotent-hackers-tied-to-the-nsa-hid-for-14-years-and-were-
found-at-last/>.
Greenberg, Andy, ‘Shopping for Zero-Days: A Price List For Hackers’ Secret Software
Exploits‘, Forbes Magazine, March 2012, <http://www.forbes.com/sites/andygreen
berg/2012/03/23/shopping-for-zero-days-an-price-list-for-hackers-secret-software-
exploits/>.
Booz Allen Hamilton and The Economist Intelligence Unit, ‘Cyber Power Index:
Findings and Methodology’, 2011, <http://www.boozallen.com/media/file/Cyber_
Power_Index_Findings_and_Methodology.pdf>.
Hayden, Michael V., ‘The Future of Things Cyber’, Strategic Studies Quarterly 5/1 (2011), 3-7.
Herr, Trey, ‘PrEP: A Framework for Malware & Cyber Weapons’, Cyber Security and
Research Institute, (2014).
Herrera, Geoffrey L., Technology and International Transformation: The Railroad, the
Atom Bomb, and the Politics of Technological Change (Albany: State University of
New York Press 2006).
Jang-Jaccard, Julian and Surya Nepal, ‘A Survey of Emerging Threats in
Cybersecurity’, Journal of Computer and System Sciences 80/5 (2014), 973–93.
doi:10.1016/j.jcss.2014.02.005
Karri, Ramesh, Jeyavijayan Rajendran, Kurt Rosenfeld, and Mark Tehranipoor,
‘Trustworthy Hardware: Identifying and Classifying Hardware Trojans’, Computer
43/10 (2010), 39–46. doi:10.1109/MC.2010.299
Kaspersky Lab’s Global Research & Analysis Team, ‘Animals in the APT Farm’,
Securelist, March 2015, <https://securelist.com/blog/research/69114/animals-in-
the-apt-farm/>.
Kaspersky Lab’s Global Research & Analysis Team, ‘Houston, we have a problem‘,
SecureList, February 2015, <https://securelist.com/blog/research/68750/equation-
the-death-star-of-malware-galaxy>.
Kaspersky Lab’s Global Research & Analysis Team, ‘The Mystery of Duqu 2.0: a sophisti-
cated cyberespionage actor returns‘, June 2015, Securelist, <https://securelist.com/blog/
research/70504/the-mystery-of-duqu-2-0-a-sophisticated-cyberespionage-actor-
returns/>.
Kaspersky Lab’s Global Research & Analysis Team, ‘A Fanny Equation: “I am your
father, Stuxnet” ‘, Securelist, February 2015, <https://securelist.com/blog/research/
68787/a-fanny-equation-i-am-your-father-stuxnet/>.
Kaur, Ratinder and Maninder Singh, ‘A Survey on Zero-Day Polymorphic Worm
Detection Techniques’, IEEE Communications Surveys & Tutorials 16/3 (2014),
1520–49. doi:10.1109/SURV.2014.022714.00160
Keegan, John, A History of Warfare (London: Random House 1994).
Kello, Lucas, ‘Cyber Disorders: Rivalry and Conflict in a Global Information Age’,
Presentation, International Security Program Seminar Series, Belfer Center for
Science and International Affairs, Harvard Kennedy School May 2012, <http://
belfercenter.hks.harvard.edu/files/kello-isp-cyber-disorders.pdf>.
Krepinevich, Andrew, ‘Cyber Warfare: A “Nuclear Option”?’, Center for Strategic and
Budgetary Assessments, 2012, <http://www.csbaonline.org/wp-content/uploads/
2012/08/CSBA_Cyber_Warfare_For_Web_1.pdf>.
Kaspersky Lab, ‘Equation Group, Questions and Answers’, February 2015, <https://
securelist.com/files/2015/02/Equation_group_questions_and_answers.pdf>.
Levy, Jack S., ‘The Offensive/Defensive Balance of Military Technology: A Theoretical and
Historical Analysis’, International Studies Quarterly 28 (1984), 219–38. doi:10.2307/
2600696
Lewis, James A., ‘Conflict and Negotiation in Cyberspace’, The Technology and Public
Policy Program, 2013, <http://csis.org/files/publication/130208_Lewis_
ConflictCyberspace_Web.pdf>.
Libicki, Martin C., Conquest in Cyberspace: National Security and Information Warfare
(Cambridge: Cambridge University Press 2007).
Libicki, Martin C., ‘Cyberspace Is Not a Warfighting Domain’, I/S: A Journal of Law and
Policy for the Information Society 8/2 (2012), 326.
Lin, Herbert S., ‘Offensive Cyber Operations and the Use of Force’, Journal of National
Security Law and Policy 4/63 (2010), 63–86.
Lin, Herbert S., ‘Escalation Dynamics and Conflict Termination in Cyberspace’,
Strategic Studies Quarterly 6/3 (2012), 46–70.
March, James G. and Herbert A. Simon, Organizations (New York: John Wiley and Sons
1958).
Menn, Joseph, ‘Special Report: U.S. cyberwar strategy stokes fear of blowback’, Reuters,
May 2013, <http://www.reuters.com/article/2013/05/10/us-usa-cyberweapons-spe
cialreport-idUSBRE9490EL20130510>.
Mitnick, Kevin, The Art of Deception (Hoboken: John Wiley & Sons 2002).
Mitnick, Kevin and William L. Simon, The Art of Intrusion: The Real Stories Behind the
Exploits of Hackers, Intruders, & Deceivers (Hoboken: John Wiley & Sons 2005).
Nappa, Antonio, Richard Johnson, Leyla Bilge, Juan Caballero, and Tudor Dumitras,
‘The Attack of the Clones: A Study of the Impact of Shared Code on Vulnerability
Patching’, IEEE Symposium on Security and Privacy, San Jose, CA, 2015.
Okhravi, Hamed and David Nicol, ‘Evaluation of Patch Management Strategies’,
International Journal of Computational Intelligence: Theory and Practice 3/2
(2008), 109–17.
Owens, William A., Kenneth W. Dam, and Herbert S. Lin (eds.), ‘Excerpts from
Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of
Cyberattack Capabilities‘, National Research Council, 2009.
Pauna, Adrian and Konstantinos Moulinos, ‘Window of exposure… a real problem
for SCADA systems? Recommendations for Europe on SCADA patching’,
European Union Agency for Network and Information Security Publication,
December 2013.
Presidency of the Council of Ministers Italy, ‘National Strategic Framework for the
Security of Cyberspace‘, December 2013, <http://www.sicurezzanazionale.gov.it/
sisr.nsf/wp-content/uploads/2014/02/italian-national-strategic-framework-for-
cyberspace-security.pdf>.
Radianti, Jaziar and Jose. J. Gonzalez, ‘Understanding Hidden Information Security
Threats: The Vulnerability Black Market’, Proceedings of the 40th Hawaii
International Conference on System Sciences, Hawaii, 2007.
Ramos, Terry, ‘The Laws of Vulnerabilities,’ RSA Conference, February 2006.
Random House Webster’s Unabridged Dictionary (online), ‘transitory’, <http://diction
ary.reference.com/browse/transitory>.
Ransbotham, Sam, Sabyasachi Mitra, and Jon Ramsey, ‘Are Markets for Vulnerabilities
Effective?’, ICIS 2008, <http://aisel.aisnet.org/cgi/viewcontent.cgi?article=
1192&context=icis2008>.
Rid, Thomas, ‘Cyber War Will Not Take Place’, Journal of Strategic Studies 35/1 (2012),
5–32. doi:10.1080/01402390.2011.608939
RSA, ‘Cybersecurity Poverty Index‘, 2015, <https://www.emc.com/collateral/ebook/
rsa-cybersecurity-poverty-index-ebook.pdf>.
Sanger, David E., Confront and Conceal: Obama’s Secret Wars and Surprising use of
American Power (New York: Crown Publishing 2012).
Sanger, David E., ‘Obama Order Sped Up Wave of Cyberattacks Against Iran’, The New
York Times, June 2012, <http://www.nytimes.com/2012/06/01/world/middleeast/
obama-ordered-wave-of-cyberattacks-against-iran.html?_r=0>.
Schneier, Bruce, ‘Crypto-Gram‘, September 2000, <https://www.schneier.com/crypto
gram/archives/2000/0915.html>.
Schneier, Bruce, ‘How the NSA Attacks Tor/Firefox Users With QUANTUM and
FOXACID’, Schneier on Security, October 2013, <https://www.schneier.com/blog/
archives/2013/10/how_the_nsa_att.html>.
Schneier, Bruce, ‘The Witty Worm: A New Chapter in Malware’, Computerworld, June
2004, <http://www.computerworld.com/article/2565119/malware-vulnerabilities/
the-witty-worm–a-new-chapter-in-malware.html>.
Shachtman, Noah and Peter W. Singer, ‘The Wrong War: The Insistence on Applying
Cold War Metaphors to Cybersecurity Is Misplaced and Counterproductive’,
Brookings Institute, August 2011, <http://www.brookings.edu/research/articles/
2011/08/15-cybersecurity-singer-shachtman>.
Shipley, Greg, ‘Painless (well, almost) patch management procedures’, Network Computing,
2004, <http://www.networkcomputing.com/showitem.jhtml?docid=1506f1>.
Song, JaeSeung, Cristian Cadar, and Peter Pietzuch, ‘SYMBEXNET: Testing Network
Protocol Implementations with Symbolic Execution and Rule-Based Specifications’,
IEEE Transactions on Software Engineering 40/7 (2014), 695–709. doi:10.1109/
TSE.2014.2323977
Subrahmanian, V. S., Michael Ovelgönne, Tudor Dumitras, and B. Aditya Prakash,
‘Chapter 4, The Global Cyber-Vulnerability Report’, in V.S. Subrahmanian, Michael
Ovelgönne, Tudor Dumitras, and B. Aditya Prakash (eds.), Terrorism, Security and
Computation (New York: Springer 2015).
Sweeting, Andrew, ‘Equilibrium Price Dynamics in Perishable Goods Markets: The
Case of Secondary Markets for Major League Baseball Tickets‘, NBER, Working
Paper 14505, (2008).
The Grugq, ‘Twitter’, 2016, <https://twitter.com/thegrugq>.
Tsipenyuk, Katrina, Brian Chess, and Gary McGraw, ‘Seven pernicious kingdoms: A
taxonomy of software security errors’, IEEE Security and Privacy Magazine 3/6
(2005), 81–84. doi:10.1109/MSP.2005.159
United Nations Institute for Disarmament Research, ‘The Cyber Index: International
Security Trends and Realities‘, United Nations Publications, 2013, <http://www.
unidir.org/files/publications/pdfs/cyber-index-2013-en-463.pdf>.
Verizon, ‘Data Breach Investigations Report‘, 2015, <http://www.verizonenterprise.
com/DBIR/>.
Zetter, Kim, ‘Hacking Team Leak shows How Secretive Zero-Day Exploit Sales Work’,
Wired, (July 2015), <http://www.wired.com/2015/07/hacking-team-leak-shows-
secretive-zero-day-exploit-sales-work/>.