
DebateUS!
NATO Cyber Coop Aff

Cyber Neg
Status Quo Solves

NATO developing cyber protection capabilities now

Valori, 4-12, 22, Professor Giancarlo Elia Valori is a world-renowned Italian economist and international relations
expert, who serves as the President of the International World Group. In 1995, the Hebrew University of
Jerusalem dedicated the Giancarlo Elia Valori chair of Peace and Regional Cooperation. Prof. Valori also holds
chairs for Peace Studies at Yeshiva University in New York and at Peking University in China. Among his many
honors from countries and institutions around the world, Prof. Valori is an Honorable of the Academy of Science
at the Institute of France, as well as Knight Grand Cross and Knight of Labor of the Italian Republic. How the US
has developed its cyber warfare – part 4, https://www.israeldefense.co.il/en/node/54214
In April 2021 the US Air Force Research Laboratory awarded Lockheed Martin a 12.8 million dollar contract for the Defense Experiment in Commercial
Space-Based Internet (DEUCSI) program. The DEUCSI project hopes to form a flexible, high-bandwidth, high-availability Air Force communication and data
sharing capability by taking full advantage of commercial outer space-based Internet networks. The project consists of three phases, namely, using
satellites and commercial demonstration terminals to establish connections between multiple Air Force sites; expanding user terminals to multiple
locations and various types of platforms to extend the range of connections; and conducting specialized tests and experiments to solve special military
space-based needs that cannot be met by Internet providers. The Air Force also announced in May that the Advanced Battle Management System (ABMS)
program will enter a new phase of development, moving from a focus based on testing and rapid technology development to a more traditional focus on
deploying combat capabilities. The move marks the transition of ABMS into a full-fledged procurement program, moving from a largely theoretical and
developmental state to one that involves specialized equipment procurement and more hands-on testing. The Air Force Rapid Capabilities Office (RCO) has
created a new capability matrix for ABMS, which includes six categories: 1. security processing; 2. connectivity; 3. data management; 4. applications; 5.
sensor integration; 6. integrated effects. The Air Force plans to use more contracted tools to leverage commercial technologies, infrastructure and proven
applications to get ABMS off the ground in a secure military digital network environment. NATO is developing new cloud technologies to
establish technical standards in the field and ensure interoperability among Member States. The current cloud
technology project that has attracted much attention is the Firefly system, developed by French company Thales. The system will deploy
NATO's first deployable scenario-level defence cloud capability and enable its own forces to receive, analyse
and transmit data between static headquarters and in real time across theatres of operation. Firefly uses an all-in-one system architecture, including application management, IT networking and security, and hence it represents a holistic approach to deployable
command and control resources for the Atlantic Alliance. Firefly is designed to provide command and control services to NATO response forces and enable
collaboration between static and deployed users in support of major joint operations (MJO) or smaller joint operations (SJO). The
Firefly system
will provide eight deployable communication and information points of presence (DPOPs) to provide
communication services with NATO command and deployed force applications and information services.
Firefly will integrate and interact with existing NATO information and communication systems and provide
countries and partners with Federated Mission Networking (FMN) connectivity for operations, missions and
exercises so as to communicate effectively. Specific Firefly services include: communication services,
infrastructure services, business support services, and staging and deployment environments.
No Cyber Attacks

Cyber attacks on Ukraine limited and largely failed

Evan Dyer, 5-22, 22, Russia's dreaded cyberwarriors seem to be struggling in Ukraine,
https://www.cbc.ca/news/politics/russia-ukraine-cyber-cyberwar-1.6455055
One day after Russian tanks broke through Ukrainian border posts on February 24, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) issued
a rare "Shields Up" alert warning that "every organization — large and small — must be prepared to respond to disruptive cyber activity." The expectation
was that Russia would attack not only Ukraine but also Ukraine's western allies. For some reason, that hasn't really happened in a big way. "We haven't
seen anything that we can directly attribute to Russia turning its sights to Canada," Sami Khoury, head of the Canadian Centre for Cyber Security, told CBC
News. "There's been probably spillover effects in some cases, but we haven't seen anything that is directly targeted at the Canadian infrastructure or
Canadian ecosystem." Instead, Russia has found itself being hacked — in one instance with embarrassing results that surely must have
marred President Vladimir Putin's Victory Day extravaganza. As RuTube, Russia's version of YouTube, was taken down by hackers, YouTube itself remained
online in Russia and continued sharing videos demonstrating Ukraine's dominance of the information space in this war. Hacktivist groups such as Network
Battalion 65 have stolen reams of emails and data from Russian government and corporate sites. In March, for the first time ever, more Russian email
credentials were leaked online than those of any other nation. Russian hackers even failed to disrupt voting in the Eurovision Song Contest. (Ukraine won.)
Just as Russia's armoured divisions entered this conflict with a fearsome reputation that turned out to be
wildly overblown, the reach of Moscow's cyber legions may have been overestimated. And just as Russia's war has
diminished the reputation of Russian arms, it might also lead to a reassessment of nations' relative strengths in the virtual world. Fearing the worst
Ukraine had every reason to expect the worst. Online attacks have been happening there since war began in 2014. A Russian "persistent threat group"
known as Sandworm was behind a December 2015 attack on the Ukrainian electrical grid that caused widespread power outages. A year later, in
December 2016, the Ukrainian financial system was targeted by the Black Energy malware attack which also caused power cuts in Kyiv. Then in June 2017, the same group struck
again with a powerful new malware called Petya, causing chaos at government ministries, forcing banks to close, jamming telecom networks and again
disrupting Ukraine's electrical grid. Airports and railways were affected and Chernobyl's radiation monitoring system went offline. Ukrainian and western
officials blamed the attacks on Russia's GRU (main intelligence directorate) and SVR (foreign intelligence service). Last year, Ukraine's SBU security service
reported it had "neutralized" an average of four cyberattacks per day. So it was widely assumed that an army of bots would act as vanguard for any real
invasion by attempting to cut power and communications, clog transportation links and generally sow confusion. Russia did try something modest along
those lines. In mid-January, a cyberattack hit about 70 Ukrainian
government websites hours after talks between Russia and NATO failed to produce the concessions the Kremlin was hoping for. "All information about you
has become public, be afraid and expect the worst," said a pop-up screen message. "This is for your past, present and future." It repeated familiar Kremlin
tropes about Nazis and persecution of Russian-speakers. In addition to hitting government and military sites, the distributed denial of service (DDOS)
attacks also targeted two banks, shutting down ATMs and credit card transactions. Hack and attack Russia launched another cyberattack
on Ukraine on the day of the invasion with a piece of malware called Hermetic Wiper that targeted hard drives. Last week, the Canadian government
accused the Russian military of having "directly targeted the Viasat KA-SAT satellite Internet service in Ukraine" in February. The U.K. government says the
attack also hit collateral targets such as central European wind farms. But the trains continued to run and the Ukrainian
government continued to function. The attack was much less damaging than the 2007 attack on Estonia, or the attacks that preceded the
2008 invasion of Georgia. Ali Dehghantanha, Canada Research Chair in Cybersecurity and Threat Intelligence at the University of Guelph, said Russia may
have underused its offensive cyber capabilities because it was confident of a swift military victory. But Ukraine
is also better defended after
years of successive attacks, he added. "Because of their previous story with Russia," said Dehghantanha, "going back
to the time of the conflict in Crimea, Ukraine — with the support of Western allies — did a very good job in
protecting its physical infrastructure this time." Western involvement Those western partners include Canada's digital counter-
espionage agency, the Communications Security Establishment. "While we can't speak about specific operations, we can confirm that CSE has been
tracking cyber threat activity associated with the current crisis," the CSE's Ryan Foreman told CBC News. "CSE has been sharing valuable cyber threat
intelligence with key partners in Ukraine and continues to work with the Canadian Armed Forces in support of Ukraine." CSE also has to worry about
Canada's own assets, of course. For years, major
cyberattacks on North American assets have been landing with some regularity. CISA has compiled a long list of American online assets it sees as coveted
targets for Russia's disruption and theft operations, including "COVID-19 research, governments, election organizations, health care and pharmaceutical,
defence, energy, video gaming, nuclear, commercial facilities, water, aviation, and critical manufacturing." "Russia has significant cyber capabilities and a
demonstrated history of using them irresponsibly. This includes SolarWinds cyber compromise, COVID-19 vaccine development, Georgia's democratic
process and NotPetya malware," Foreman told CBC. Shotgun tactics Dehghantanha said state-sponsored hackers are now shifting away from building the
most sophisticated malware to employing more of a scattergun approach — one that involves installing simpler backdoors into a wide range of less well-
defended infrastructure targets.

Low risk of significant cyber attacks

Jeane Kirkpatrick Visiting Research Fellow, American Enterprise Institute, 4-25, 22, The Russian cyber threat is
here to stay and NATO needs to understand it, https://www.aei.org/op-eds/the-russian-cyber-threat-is-here-to-
stay-and-nato-needs-to-understand-it/
Since the Russian invasion of Ukraine, the Biden administration has escalated warnings about likely Russian cyber-attacks on American infrastructure and
business. More worrying still, cyber alarmists like Senate Intelligence Committee Chairman Mark Warner, D-Va., have suggested that cyber-attacks from
the Kremlin could be acts of war that trigger NATO’s collective defense. This sky-is-falling delusion, particularly from leaders with access to
classified intelligence, is at best counterproductive and at worst dangerous. Cyber-attacks are rarely acts of
war, and treating them as if they are undermines NATO’s ability to deal with real threats short of cyber war. NATO has only invoked Article 5 – which
triggers a collective response – once and that was after the 9/11 attacks. Cyber-attacks are unlikely to destroy buildings and
kill thousands in an instant. While collective defense extends to cyberspace, few operations could realistically be a
cause for war. This would include cyber-attacks resulting in death or damage like traditional military operations or coordinated
assaults that take the power grid or entire economic sectors offline. These scenarios are unlikely though: such attacks require far too
much time, funding, manpower, and control. Instead, most attacks temporarily overwhelm servers
with traffic, deny network access, hold computers hostage, and steal or delete data.

At best, minor cyber attacks against Finland and Sweden

INES KAGUBARE AND ELLEN MITCHELL - 05/14/22, Finland, Sweden’s NATO moves prompt fears of Russian
cyberattacks, https://thehill.com/policy/cybersecurity/3488518-finland-swedens-nato-moves-prompt-fears-of-
russian-cyber-attacks/

Finland and Sweden’s move to join NATO has raised concerns about potential cyber retaliation from Russia,
which sees the expansion of the alliance as a direct threat. While it is too early to judge how Russia might try to use its cyber capabilities against Finland,
Sweden or other NATO members, including the U.S., experts said it will likely launch unsophisticated and small-scale
cyberattacks as a form of protest against the expansion. Such attacks would not have the severity of cyber efforts Moscow launched
against Ukraine amid the Russian invasion of that country. “I think it’s unlikely that Russia will launch the types of cyberattacks
against Finland and Sweden like it did with Ukraine, primarily because the aims are different,” said Jason Blessing, a
fellow at the American Enterprise Institute. Blessing said that since Russia has no intention, at least for the moment, to invade Finland or Sweden, it may
use different cyber tactics than it did with Ukraine to get its message across. He added that it’s likely that Russia will launch unsophisticated types of
attacks including website defacement and distributed denial-of-service attacks to disrupt its enemies’ networks rather than starting full-scale cyber
warfare. The process would likely move far more swiftly than previous bids into the alliance, as NATO Secretary General Jens Stoltenberg said last month
that both nations would be welcomed into the organization should they decide to join and could quickly become members. The potential additions to
NATO would be significant as both countries have long avoided military alliances and sought neutrality. Finland, which shares an 830-mile border with
Russia, last fought the Kremlin in 1944 when it was the Soviet Union. And Sweden has not had a military alliance for more than 200 years, choosing instead
to cooperate with NATO. The prospect of retaliation is a real worry for Finland and Sweden. On Friday, a Finnish transmission system operator announced
that a Russian energy company would be cutting off its electricity imports to Finland beginning Saturday.

No systemic risk – big tech is patching up holes.


Chung ’21 [Ingrid; August 30; writer; National Review, “Big Tech Is Doing the Right Thing on Cybersecurity,”
https://www.nationalreview.com/corner/big-tech-is-doing-the-right-thing-on-cybersecurity/; KP]
President Joe Biden recently met with Big Tech executives to discuss how to improve cybersecurity after recent cyberattacks in which government
software contractor Solarwinds and oil pipeline Colonial Pipeline were targeted. Leading tech corporations, including IBM,
Google, and Amazon, will all try to improve cybersecurity by investing in the training of personnel in this field and
upgrading their respective encryption and security systems. Microsoft has also committed to investing $150 million in upgrades for
cybersecurity systems of government agencies. Big Tech may not always do the right thing, but these plans to enhance cybersecurity are
certainly something that we can all stand behind.

In recent years, as the Internet has become increasingly influential and indispensable, cybersecurity has, correspondingly, become an increasingly
prominent threat to not only citizens’ privacy but also to national security. Former national-security adviser John Bolton explained the significance of
cybersecurity to national defense in a recent National Review article, in which he characterized threats from cyberspace as “a multiplicity of hidden, ever-
changing threats.” A recent report by the Heritage Foundation raised concern over espionage, trading of secrets, and the disruption of military commands
and communication potentially being conducted in the cyber domain.

The effective regulation of cyberspace, a relatively new front for modern warfare characterized by its elusiveness and lack of boundaries, is sometimes
challenging. Laxness in cybersecurity, however, has often led to catastrophic consequences. For instance, the WannaCry Ransomware Cyber Attack in
2017, in which files in affected computer systems were locked until ransom was paid for their decryption, affected approximately 200,000 computers in
150 countries and led to enormous financial costs. Victims of the cyber-extortion scheme included entities from government agencies such as the English
National Health Service to major international corporations such as Boeing.

It is well established that both the state and leading tech corporations have a legitimate interest in enhancing cybersecurity.
The government is responsible for engaging in national defense in the cyber domain and tech corporations are obligated to protect the privacy of their
users, whose personal information is often entrusted to them.

Big Tech’s plans to cooperate with the government to improve cybersecurity through financial investments appear to be
promising. While it may be difficult to predict the effectiveness of such investments, the fact that Big Tech and the government are placing
the enhancement of cybersecurity close to the top of their agenda and are committing to coordinated efforts is
good news. Big Tech, with its financial prowess derived from the sheer size of the industry, and a unique relationship with the
use of cyberspace, is uniquely positioned to materially contribute to state-led efforts to secure cyberspace. Furthermore,
investing in education on cybersecurity of employees may also be useful in raising awareness and amplifying the industry’s collective concern over capacity
to combat cyberattacks in the long run.

Nobody launches catastrophic cyber attacks


James Andrew Lewis 20, Senior Vice President and Director of the Technology Policy Program at the Center for
Strategic and International Studies, “Dismissing Cyber Catastrophe”, Center for Strategic and International
Studies, 8/17/2020, https://www.csis.org/analysis/dismissing-cyber-catastrophe
A catastrophic cyberattack was first predicted in the mid-1990s. Since then, predictions of a catastrophe have appeared regularly and have entered the
popular consciousness. As a trope, a cyber catastrophe captures our imagination, but as analysis, it remains entirely imaginary and is of
dubious value as a basis for policymaking. There has never been a catastrophic cyberattack.

To qualify as a catastrophe, an event must produce damaging mass effect, including casualties and destruction. The fires that swept across California last
summer were a catastrophe. Covid-19 has been a catastrophe, especially in countries with inadequate responses. With man-made actions, however, a
catastrophe is harder to produce than it may seem, and for cyberattacks a catastrophe requires organizational and technical skills most actors still do not
possess. It requires planning, reconnaissance to find vulnerabilities, and then acquiring or building attack tools—things that require resources and
experience. To achieve mass effect, either a few central targets (like an electrical grid) need to be hit or multiple targets would
have to be hit simultaneously (as is the case with urban water systems), something that is itself an operational challenge.

It is easier to imagine a catastrophe than to produce it. The 2003 East Coast blackout is the archetype for an attack
on the U.S. electrical grid. No one died in this blackout, and services were restored in a few days. As electric production
is digitized, vulnerability increases, but many electrical companies have made cybersecurity a priority. Similarly, at
water treatment plants, the chemicals used to purify water are controlled in ways that make mass releases difficult. In any case, it would take a massive
amount of chemicals to poison large rivers or lakes, more than most companies keep on hand, and any release would quickly be diluted.
More importantly, there are powerful strategic constraints on those who have the ability to launch catastrophe
attacks. We have more than two decades of experience with the use of cyber techniques and operations for coercive and criminal purposes and have a
clear understanding of motives, capabilities, and intentions. We can be guided by the methods of the Strategic Bombing Survey, which used interviews and
observation (rather than hypotheses) to determine effect. These methods apply equally to cyberattacks. The conclusions we can draw from this are:

- Nonstate actors and most states lack the capability to launch attacks that cause physical damage at any level, much less a catastrophe. There
have been regular predictions every year for over a decade that nonstate actors will acquire these high-end cyber capabilities in two or three
years in what has become a cycle of repetition. The monetary return is negligible, which dissuades the skilled cybercriminals (mostly Russian
speaking) who might have the necessary skills. One mystery is why these groups have not been used as mercenaries, and this may reflect either
a degree of control by the
Russian state (if it has forbidden mercenary acts) or a degree of caution by criminals.
- There is enough uncertainty among potential attackers about the United States’ ability to attribute that
they are unwilling to risk massive retaliation in response to a catastrophic attack. (They are perfectly willing to
take the risk of attribution for espionage and coercive cyber actions.)
- No one has ever died from a cyberattack, and only a handful of these attacks have produced physical
damage. A cyberattack is not a nuclear weapon, and it is intellectually lazy to equate them to nuclear weapons. Using a tactical nuclear
weapon against an urban center would produce several hundred thousand casualties, while a strategic nuclear exchange would cause tens of
millions of casualties and immense physical destruction. These are catastrophes that some hack cannot duplicate. The shadow of nuclear war
distorts discussion of cyber warfare.
- State use of cyber operations is consistent with their broad national strategies and interests. Their
primary emphasis is on espionage and political coercion. The United States has opponents and is in conflict with
them, but they have no interest in launching a catastrophic cyberattack since it would certainly produce an
equally catastrophic retaliation. Their goal is to stay below the “use-of-force” threshold and undertake
damaging cyber actions against the United States, not start a war.
This has implications for the discussion of inadvertent escalation, something that has also never occurred. The concern over escalation
deserves a longer discussion, as there are both technological and strategic constraints that shape and limit risk in cyber operations, and the absence
of inadvertent escalation suggests a high degree of control for cyber capabilities by advanced states. Attackers,
particularly among the United States’ major opponents for whom cyber is just one of the tools for confrontation, seek to avoid actions that could
trigger escalation.

The United States has two opponents (China and Russia) who are capable of damaging cyberattacks. Russia has demonstrated
its attack skills on the Ukrainian power grid, but neither Russia nor China would be well served by a similar
attack on the United States. Iran is improving and may reach the point where it could use cyberattacks to cause major damage, but it would only do
so when it has decided to engage in a major armed conflict with the United States. Iran might attack targets outside the United States and its allies with
less risk and continues to experiment with cyberattacks against Israeli critical infrastructure. North Korea has not yet developed this kind of capability.

One major failing of catastrophe scenarios is that they discount the robustness and resilience of modern economies.
These economies present multiple targets and configurations; they are harder to damage through cyberattack
than they look, given the growing (albeit incomplete) attention to cybersecurity; and experience shows that people
compensate for damage and quickly repair or rebuild. This was one of the counterintuitive lessons of the Strategic Bombing Survey.
Pre-war planning assumed that civilian morale and production would crumple under aerial bombardment. In fact, the opposite occurred. Resistance
hardened and production was restored.1

This is a short overview of why catastrophe is unlikely. Several longer CSIS reports go into the reasons in some detail. Past performance may not
necessarily predict the future, but after 25 years without a single catastrophic cyberattack, we should invoke the concept cautiously, if at all. Why, then, is it raised so often?

No cyberwar---answers all their scenarios.


Jeremy Rabkin & John Yoo 17. Rabkin is a Professor of Law at the Antonin Scalia Law School, George Mason
University; Yoo is currently the Emanuel S. Heller Professor of Law at the University of California, Berkeley.
09/12/2017. “CHAPTER 6 Cyber Weapons.” Striking Power: How Cyber, Robots, and Space Weapons Change the
Rules for War, Encounter Books.

It is possible for a “virus” to disable the hardware elements of a network, as happened in the Shamoon attack. The effects of
such an attack are costly, especially if they crash electric power supplies or delete important government data. But those well-known costs will
encourage governments and corporations to back up valuable data in several places and build redundancies into vital
control systems. Such safeguards would mean cyber attacks cause temporary inconvenience, but are not likely to
cause widespread, permanent damage. If an attacker wants to turn off the lights everywhere, there are easier
ways than cyber-based attacks. Alarms over shutting down computer networks overlook their resiliency.
Computers are immensely complicated and hence inherently temperamental. Designers of computer systems have always
known that. At any one time, some computers in commercial networks may be experiencing technical difficulties—as
air travelers know from experience trying to acquire boarding passes from “self-help” kiosks. Network designers build their systems to
work even when significant portions of the hardware and software go offline. Such resiliency would pose a
serious obstacle to the success of a cyber attack. As new risks become known, network engineers will build in
more robust defenses. Finally, even if nations could build cyber weapons that could shut down networks on a
large scale, they may never use them. Such a weapon could be equally dangerous for the attacker as for the defender
if its effects spread beyond the target system. The more networked an attacker’s economy and military, the
more exposed it will be to such harms. Even if the attacker could deploy a prophylactic defense for its own computers, it would still need
those computers to communicate with external networks in other countries. A world paralyzed by computer problems would prevent
the attacking nation from reaping the benefits of the Internet. Unless it were prepared to isolate itself from the
world economy for a lengthy period of time, a nation would not likely deploy an all-destructive cyber weapon.
To think of cyber as a weapon of mass destruction is like noticing that a laptop computer is light enough to
swing, while also encased in unyielding metal, and then to conclude that a laptop computer is well suited to
deploy as a war club. That conclusion is not demonstrably false. But it misses the main point. The most
attractive aspect of cyber operations from a tactical standpoint is that they can be customized, allowing attacks
to be highly focused and ratcheted up or dialed back, according to circumstances. Their most effective use is
when they are used for espionage and covert action goals, rather than strategic strikes. Their military value will
come as an aid to other forms of hostilities, such as diplomatic and economic pressure or kinetic attacks. Cyber
weapons have far more value as a more precisely tuned means of coercion between nations, rather than as a
weapon of mass destruction.

No scenario for cyber escalation.


Erica D. Borghard 19, Assistant Professor in the Army Cyber Institute at the United States Military Academy at
West Point, and Shawn W. Lonergan, Assistant Professor of International Relations in the Department of Social
Science at USMA, “Cyber Operations as Imperfect Tools of Escalation”, Strategic Studies Quarterly, Fall 2019, p.
123-124

However, there are important empirical reasons to suspect that the risks of cyber escalation may be exaggerated.
Specifically, if cyberspace is in fact an environment that (perhaps even more so than others) generates severe escalation
risks, why has cyber escalation not yet occurred? Most interactions between cyber rivals have been
characterized by limited volleys that have not escalated beyond nuisance levels and have been largely
contained below the use-of-force threshold.5 For example, in a survey of cyber incidents and responses between 2000 and 2014,
Brandon Valeriano et al. find that “rivals tend to respond only to lower-level [cyber] incidents and the response tends to
check the intrusion as opposed to seek escalation dominance. The majority of cyber escalation episodes are at
a low severity threshold and are non-escalatory. These incidents are usually ‘tit-for- tat’ type responses within
one step of the original incident.”6 Even in the two rare examples in which states employed kinetic force in
response to adversary cyber operations—the US counter-ISIL drone campaign in 2015 and Israel’s airstrike against Hamas cyber operatives in 2019— the
use of force was circumscribed and did not escalate the overall conflict (not to mention that force was used against nonstate
adversaries with limited potential to meaningfully escalate in response to US or Israeli force).7

We posit that cyber escalation has not occurred because cyber operations are poor tools of escalation. In particular, we argue that this
stems from key characteristics of offensive cyber capabilities that limit escalation through four
mechanisms. First, retaliatory offensive cyber operations may not exist at the desired time of employment.
Second, even under conditions where they may exist, their effects are uncertain and often relatively limited.
Third, several attributes of offensive cyber operations generate important tradeoffs for decision-makers that
may make them hesitant to employ capabilities in some circumstances. Finally, the alternative of cross-domain
escalation—responding to a cyber incident with noncyber, kinetic instruments—is unlikely to be chosen except under rare
circumstances, given the limited cost-generation potential of offensive cyber operations. In this article, we define cyber
escalation and then explore the implications of the technical features and requirements for offensive cyber operations. We also consider potential
alternative or critical responses to each of these logics. Finally, we evaluate the implications for US policy making.

Cyber attacks on Ukraine limited and largely failed

Evan Dyer, 5-22, 22, Russia's dreaded cyberwarriors seem to be struggling in Ukraine,
https://www.cbc.ca/news/politics/russia-ukraine-cyber-cyberwar-1.6455055
One day after Russian tanks broke through Ukrainian border posts on February 24, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) issued
a rare "Shields Up" alert warning that "every organization — large and small — must be prepared to respond to disruptive cyber activity." The expectation
was that Russia would attack not only Ukraine but also Ukraine's western allies. For some reason, that hasn't really happened in a big way. "We haven't
seen anything that we can directly attribute to Russia turning its sights to Canada," Sami Khoury, head of the Canadian Centre for Cyber Security, told CBC
News. "There's been probably spillover effects in some cases, but we haven't seen anything that is directly targeted at the Canadian infrastructure or
Canadian ecosystem." Instead, Russia has found itself being hacked — in one instance with embarrassing results that surely must have
marred President Vladimir Putin's Victory Day extravaganza. As RuTube, Russia's version of YouTube, was taken down by hackers, YouTube itself remained
online in Russia and continued sharing videos demonstrating Ukraine's dominance of the information space in this war. Hacktivist groups such as Network
Battalion 65 have stolen reams of emails and data from Russian government and corporate sites. In March, for the first time ever, more Russian email
credentials were leaked online than those of any other nation. Russian hackers even failed to disrupt voting in the Eurovision Song Contest. (Ukraine won.)
Just as Russia's armoured divisions entered this conflict with a fearsome reputation that turned out to be
wildly overblown, the reach of Moscow's cyber legions may have been overestimated. And just as Russia's war has
diminished the reputation of Russian arms, it might also lead to a reassessment of nations' relative strengths in the virtual world. Fearing the worst
Ukraine had every reason to expect the worst. Online attacks have been happening there since war began in 2014. A Russian "persistent threat group"
known as Sandworm was behind a December 2015 attack on the Ukrainian electrical grid that caused widespread power outages. A year later, in
December 2016, the Ukrainian financial system was targeted by the Black Energy malware attack which also caused power cuts in Kyiv. Electronic
espionage agency getting major funding boost to ward off cyber attacks Foreign minister decries sexual violence in Ukraine; top commander highlights
information warfare Canadian intelligence agency calls out false Russian claim that Ukraine is harvesting organs Then in June 2017, the same group struck
again with a powerful new malware called Petya, causing chaos at government ministries, forcing banks to close, jamming telecom networks and again
disrupting Ukraine's electrical grid. Airports and railways were affected and Chernobyl's radiation monitoring system went offline. Ukrainian and western
officials blamed the attacks on Russia's GRU (main intelligence directorate) and SVR (foreign intelligence service). Last year, Ukraine's SBU security service
reported it had "neutralized" an average of four cyberattacks per day. So it was widely assumed that an army of bots would act as vanguard for any real
invasion by attempting to cut power and communications, clog transportation links and generally sow confusion. Russia did try something modest along
those lines. A laptop screen displays a warning message in Ukrainian, Russian and Polish that appeared on the official website of the Ukrainian Foreign
Ministry after a massive cyberattack on January 14, 2022. (Valentyn Ogirenko/Illustration/Reuters) In mid-January, a cyberattack hit about 70 Ukrainian
government websites hours after talks between Russia and NATO failed to produce the concessions the Kremlin was hoping for. "All information about you
has become public, be afraid and expect the worst," said a pop-up screen message. "This is for your past, present and future." It repeated familiar Kremlin
tropes about Nazis and persecution of Russian-speakers. In addition to hitting government and military sites, the distributed denial of service (DDOS)
attacks also targeted two banks, shutting down ATMs and credit card transactions. Hack and attack Russia
launched another cyberattack
on Ukraine on the day of the invasion with a piece of malware called Hermetic Wiper that targeted hard drives. Last week, the Canadian government
accused the Russian military of having "directly targeted the Viasat KA-SAT satellite Internet service in Ukraine" in February. The U.K. government says the
attack also hit collateral targets such as central European wind farms. But the trains continued to run and the Ukrainian
government continued to function. The attack was much less damaging than the 2007 attack on Estonia, or the attacks that preceded the
2008 invasion of Georgia. Ali Dehghantanha, Canada Research Chair in Cybersecurity and Threat Intelligence at the University of Guelph, said Russia may
have underused its offensive cyber capabilities because it was confident of a swift military victory. But Ukraine
is also better defended after
years of successive attacks, he added. "Because of their previous story with Russia," said Dehghantanha, "going back
to the time of the conflict in Crimea, Ukraine — with the support of Western allies — did a very good job in
protecting its physical infrastructure this time." Western involvement Those western partners include Canada's digital counter-
espionage agency, the Communications Security Establishment. "While we can't speak about specific operations, we can confirm that CSE has been
tracking cyber threat activity associated with the current crisis," the CSE's Ryan Foreman told CBC News. "CSE has been sharing valuable cyber threat
intelligence with key partners in Ukraine and continues to work with the Canadian Armed Forces in support of Ukraine." CSE also has to worry about
Canada's own assets, of course. A message demanding money appears on the monitor of a payment terminal at a branch of Ukraine's state-owned bank
Oschadbank in Kyiv after Ukrainian institutions were hit by a wave of cyber attacks on June 27, 2017. (Valentyn Ogirenko/Reuters) For years, major
cyberattacks on North American assets have been landing with some regularity. CISA has compiled a long list of American online assets it sees as coveted
targets for Russia's disruption and theft operations, including "COVID-19 research, governments, election organizations, health care and pharmaceutical,
defence, energy, video gaming, nuclear, commercial facilities, water, aviation, and critical manufacturing." "Russia has significant cyber capabilities and a
demonstrated history of using them irresponsibly. This includes SolarWinds cyber compromise, COVID-19 vaccine development, Georgia's democratic
process and NotPetya malware," Foreman told CBC. Shotgun tactics Dehghantanha said state-sponsored hackers are now shifting away from building the
most sophisticated malware to employing more of a scattergun approach — one that involves installing simpler backdoors into a wide range of less well-
defended infrastructure targets.

No catastrophic cyberattacks---25 years of deterrence empirics prove they stay low-level and
non-escalatory.
Lewis 20---senior vice president and director of the Technology Policy Program at the Center for Strategic and
International Studies). Lewis, James. 2020. “Dismissing Cyber Catastrophe.” Center for Strategic & International
Studies. August 17, 2020. https://www.csis.org/analysis/dismissing-cyber-catastrophe.

A catastrophic cyberattack was first predicted in the mid-1990s. Since then, predictions of a catastrophe have appeared
regularly and have entered the popular consciousness. As a trope, a cyber catastrophe captures our imagination, but as analysis, it remains entirely imaginary
and is of dubious value as a basis for policymaking. There has never been a catastrophic cyberattack. To qualify as a catastrophe, an event must
produce damaging mass effect, including casualties and destruction. The fires that swept across California last summer
were a catastrophe. Covid-19 has been a catastrophe, especially in countries with inadequate responses. With man-made actions, however, a catastrophe is harder
to produce than it may seem, and for cyberattacks a catastrophe requires organizational and technical skills most actors still do

not possess. It requires planning, reconnaissance to find vulnerabilities, and then acquiring or building attack tools—things that require resources and experience. To achieve mass effect, either a
few central targets (like an electrical grid) need to be hit or multiple targets would have to be hit simultaneously (as is the case with urban water systems), something that is itself an operational challenge. It
is easier to imagine a catastrophe than to produce it. The 2003 East Coast blackout is the archetype for an attack on the U.S. electrical grid. No one died in this blackout, and services were restored in a few days. As electric production is digitized, vulnerability increases, but many electrical companies have made cybersecurity a priority. Similarly, at water treatment plants, the chemicals used to purify water are controlled in ways that make mass releases difficult. In any case, it would take a massive amount of chemicals to poison large rivers or lakes, more than most companies keep on hand, and any release would quickly be diluted. More importantly, there are powerful strategic constraints on those who have the ability to launch catastrophe attacks. We have more than two decades of experience with the use of cyber techniques and operations for coercive and criminal purposes and have a clear understanding of motives, capabilities, and intentions. We can be guided by the methods of the Strategic Bombing Survey, which used interviews and observation (rather than hypotheses) to determine effect. These methods apply equally to cyberattacks. The
conclusions we can draw from this are: Nonstate actors and most states lack the capability to launch attacks that cause physical damage at any level, much less a catastrophe. There have been regular
predictions every year for over a decade that nonstate actors will acquire these high-end cyber capabilities in two or three years in what has become a cycle of repetition. The monetary return is negligible,
which dissuades the skilled cybercriminals (mostly Russian speaking) who might have the necessary skills. One mystery is why these groups have not been used as mercenaries, and this may reflect either a degree of control by the Russian state (if it has forbidden mercenary acts) or a degree of caution by criminals. There is enough uncertainty among potential attackers about the United States’ ability to attribute that they are unwilling to risk massive retaliation in response to a catastrophic attack. (They are perfectly willing to take the risk of attribution for espionage and coercive cyber
actions.) No one has ever died from a cyberattack, and only a handful of these attacks have produced physical damage. A cyberattack is not a nuclear weapon, and
it is intellectually lazy to equate them to nuclear weapons. Using a tactical nuclear weapon against an urban center would produce several
hundred thousand casualties, while a strategic nuclear exchange would cause tens of millions of casualties and immense physical destruction. These are catastrophes that some hack cannot duplicate. The

shadow of nuclear war distorts discussion of cyber warfare. State use of cyber operations is consistent with their broad national strategies and interests. Their primary emphasis is on espionage and political coercion. The United States has opponents and is in
conflict with them, but they have no interest in launching a catastrophic cyberattack since it would certainly produce an equally catastrophic retaliation.

Their goal is to stay below the “use-of-force” threshold and undertake damaging cyber actions against the United States, not start a war. This has implications for the discussion of inadvertent
escalation, something that has also never occurred. The concern over escalation deserves a longer discussion, as there are both technological and strategic constraints that shape
and limit risk in cyber operations, and the absence of inadvertent escalation suggests a high degree of control for cyber capabilities by advanced states. Attackers, particularly among the United States’ major opponents for whom cyber is just one of the tools for confrontation,
seek to avoid actions that could trigger escalation. The United States has two opponents (China and Russia) who are capable of damaging cyberattacks.
Russia has demonstrated its attack skills on the Ukrainian power grid, but neither Russia nor China would be well served
by a similar attack on the United States. Iran is improving and may reach the point where it could use cyberattacks to cause major damage, but it would only do so when it
has decided to engage in a major armed conflict with the United States. Iran might attack targets outside the United States and its allies with less risk and continues to experiment with cyberattacks against

Israeli critical infrastructure. North Korea has not yet developed this kind of capability. One major failing of catastrophe scenarios is that they discount the robustness and resilience of modern economies. These economies present multiple targets and configurations; they are harder to damage through cyberattack than they look, given the growing (albeit incomplete) attention to cybersecurity; and experience shows that people
compensate for damage and quickly repair or rebuild. This was one of the counterintuitive lessons of the Strategic Bombing Survey. Pre-war planning assumed
that civilian morale and production would crumple under aerial bombardment. In fact, the opposite occurred. Resistance hardened and production was restored.1 This is a short overview of why catastrophe

is unlikely. Several longer CSIS reports go into the reasons in some detail. Past performance may not necessarily predict the future, but after 25 years without a single catastrophic cyberattack, we should invoke the concept cautiously, if at all. Why then is it raised so often? Some of the explanation for the emphasis on cyber
catastrophe is hortatory. When the author of one of the first reports (in the 1990s) to sound the alarm over cyber catastrophe was asked later why he had warned of a cyber Pearl

Harbor when it was clear this was not going to happen, his reply was that he hoped to scare people into action. "Catastrophe is nigh; we must act" was possibly a reasonable
strategy 22 years ago, but no longer. The resilience of historical events to remain culturally significant must be taken into account for an objective assessment of cyber warfare, and this will require the

United States to discard some hypothetical scenarios. The long experience of living under the shadow of nuclear annihilation still shapes

American thinking and conditions the United States to expect extreme outcomes. American thinking is also
shaped by the experience of 9/11, a wrenching attack that caught the United States by surprise. Fears of another 9/11
reinforce the memory of nuclear war in driving the catastrophe trope, but when applied to cyberattack, these scenarios do not track with operational requirements or the nature of opponent strategy and
planning. The contours of cyber warfare are emerging, but they are not always what we discuss. Better policy will require greater objectivity.

Cyber-attacks are good---key to enhanced precision in crisis bargaining.


Rabkin, 17 — *Jeremy A. Rabkin; PhD, Harvard University; Professor of Law at the Antonin Scalia Law School, George Mason University.
Professor Rabkin serves on the Board of Directors of the U.S. Institute of Peace (originally appointed by President George W. Bush in 2007, then
appointed for a second term by President Barack Obama and reconfirmed by the Senate in 2011). He also serves on the Board of Academic Advisers
of the American Enterprise Institute and on the Board of Directors of the Center for Individual Rights, a public interest law firm based in
Washington, D.C. **John Yoo; Emanuel Heller Professor of Law and director of the Korea Law Center, the California Constitution Center, and the
Law School’s Program in Public Law and Policy. (2017; “Striking Power: How Cyber, Robots, and Space Weapons Change the Rules for War;” Ch. 1—
We Must Think Anew; //GrRv)

Instead, we question the idea that nations should look to formal treaties and rules to produce lasting limits on war.
Despite the recent deterioration in the Syrian civil war, nation-states have generally refrained from the use of chemical
weapons against each other since the end of World War I. They have followed the Geneva Conventions on prisoners of war, though not consistently. Nations
have observed other norms in the breach, chief among them the immunity of the civilian population and resources from attack. World War II not only
saw the aerial bombing of cities and the nuclear attacks on Japan, but the years since have seen precision targeting of terrorists off the battlefield, attacks on urban
infrastructure, and the acceptance of high levels of collateral damage among civilians. International lawyers and diplomats may proclaim that nations follow universal
rules, either because of morality or a sense of legal obligation, but the record of practice tells a far different story. Efforts
to impose more specific
and demanding rules, such as limiting targeted drone attacks, banning cyber attacks, or requiring human control of robotic weapons,
will similarly fail because they cannot take into account unforeseen circumstances, new weapons and
military situations, and the immediate exigencies of war. Just as new technology led to increases in economic productivity, so too has
it allowed nations to make war more effectively.

Nations will readily adhere to humanitarian standards when they gain a benefit that outweighs the cost,
as when protecting enemy prisoners of war secures reciprocal protection for a nation's own soldiers taken captive by the enemy. Limitations on the use of
weapons will follow a similar logic. Nations will be most inclined to respect legal restraints on new weapons when their use by both sides would
leave no one better off or would provide little advantage. Cyber and robotic weapons do not bear the same features as the weapons
where legal bans have succeeded, as with use of poison gas on the battlefield. Cyber and robotic weapons need not inflict
unnecessary suffering out of proportion to their military advantages, as do poisoned bullets or blinding lasers. Rather,
these weapons improve the precision of force and thereby reduce human death and destruction in war.
Nor have these new weapons technologies yet sparked a useless arms race. Nuclear weapons eventually
became opportune for arms control because larger stockpiles provided marginal, if any, benefits due to the
destructive potential of each weapon and the deterrence provided by even a modest arsenal. Mutual reductions
could leave both sides in the same position as they were before the agreement. Today, the marginal cost of nuclear weapons for the U.S. and Russia so outweighs their
marginal benefit that it is not even clear that a binding international agreement is needed to reduce their arsenals. Russia, for example, reduced its arsenal below New
START's ceilings of 1,550 nuclear warheads and 700 strategic launchers even before the U.S. approved the deal. 45 The United States likely would have reduced its forces
to those levels even if the Senate had refused to consent to the treaty, a position the executive branch also took in 2002 with the Treaty of Moscow's deep reduction in
nuclear weapons. Today's
new weapons do not yet bear these characteristics. The marginal gains in deploying
these weapons will likely be asymmetric across nations insofar as some nations will experience much
greater gains in military capability by developing cyber and drone technology. Put differently, prohibition or
regulation of these new weapons will not have equal impacts on rival nations. Indeed, we do not even now have enough
information to understand which nations will benefit and which will not, which makes any form of international ban even less likely.

No large-scale cyberattacks---attribution deters AND complexity overwhelms despite technical advancements.
Miguel Alberto N. Gomez, 11-6-2018 - senior researcher at the Center for Security Studies at ETH Zurich; "In
Cyberwar, There Are Some (Unspoken) Rules," Foreign Policy, https://foreignpolicy.com/2018/11/06/in-
cyberwar-there-are-some-unspoken-rules-international-law-norms-north-korea-russia-iran-stuxnet/
Unlike conventional instruments, cyberoperations do not come with a return address. Technical evidence such as an IP address provides victims with a
possible source but not necessarily the identity of the attacker. Furthermore, the presence of certain artifacts does not confirm the intent of the aggressor.
Malicious code for use in espionage can just as well be employed as a first step for later, more damaging operations. Taken together, these factors would
seem to encourage instability within cyberspace, as Wheeler argues. However, when viewed through the lens of preexisting strategic interactions and
interests, the opposite may in fact be true.

Attribution becomes less of an obstacle when judgments are informed by tactical and strategic analysis. For instance, the appearance of individuals in unmarked uniforms carrying modern Russian weaponry in Ukraine was attributable to Russia, given the characteristics of these individuals as well as the surrounding context that preceded their appearance.

Different actors behave in a distinct manner that allows analysts—private threat assessment organizations and national
intelligence services alike—to identify and classify individuals and groups. When analyzed alongside the prevailing political, economic, and
military environment, both the identity and intent of the supposedly nonattributable actor usually become clearer. The
intent of those deploying Stuxnet limited the pool of suspects to those with both the intent and the capabilities to execute this operation. Without the
benefit of anonymity, aggressors are less inclined to engage in activities that significantly alter the current
military balance for fear of provoking the opposite party.
For example, the long-running series of defacements and denial-of-service operations between India and Pakistan reflects this dynamic. Given the stable
nature of this rivalry, both sides have opted for a tit-for-tat approach with respect to disruptive behavior. The defacement of an Indian website is met with
the defacement of a corresponding Pakistani website in a matter of days with neither side opting for a more vigorous response to the provocations of the
other.

The aftermath of Stuxnet prompted Iran to act more aggressively in cyberspace in the years following its discovery, but Tehran’s operations did not do
much damage. Furthermore, with
states reserving the right to respond with conventional military means to
cyberthreats, the necessity for restraint becomes even greater.

Because decision-makers know the risks, cybercapable states routinely punch below their weight or decide to employ
cyberoperations in a limited manner. A review of cyberoperations from 2006 to 2016 highlights that despite the advancements of
numerous actors, operations capable of causing physical damage are limited. More recently, the announcement that the United States
would deter Russia from interfering in its midterm elections by calling it out rather than using more aggressive means underlines this point.

This applies to the targeting of critical infrastructure, such as power grids and water treatment facilities, managed by
industrial control systems that are demonstrated to be vulnerable to relatively simple exploits. When a state decides to target industrial
controls, it does so with a specific intent that is informed by its strategic objectives. These objectives are discernible through
its actions in other domains. These constraints even apply to perceived rogue states such as North Korea.
A review of North Korean cyberoperations from 2008 to 2014 illustrates that most of Pyongyang’s attacks caused low-level disruptions to private and
nonmilitary systems of adversaries, which include the United States, Japan, and South Korea. In addition, these often coincided with significant historical,
political, or military events. The same is true in the case of Iran. The nature and timing of these incidents is telling, as a similar pattern is observed with
respect to the physical domain. North Korean behavior, barring its invasion of South Korea in 1950, has not been severe enough to invite a massive
response. Provocations such as missile tests or the shelling of a South Korean-held island have invited international condemnation or a limited military
response—but no more and with limited impact on North Korean behavior.

Although the 2014 Sony Pictures hack, which leaked confidential information and later involved physical threats against cinemas that screened The
Interview, may appear to be a departure from this behavior, the operation did not disrupt the current strategic balance between North Korea and its
adversary, in this case the United States. Nor did the U.S. government seem to think the hack merited a more vigorous response other than the recent
complaint filed by the U.S. Justice Department. For the most part, the intent of the Sony hack appears to have been meant to signal the North Korean
regime’s displeasure through a display of its prowess in cyberspace but no more. Even
with the more recent WannaCry ransomware
attack, its effects, while broad in scope, had no lasting strategic implications that might have resulted in
escalation.

Countries that have invested significant resources in cyberspace don’t lack the ability to act more effectively within this domain. They
are making a
conscious decision to rely on less sophisticated operations based on their strategic calculus—the same calculus
that leads a government such as North Korea’s to employ violent rhetoric and limited military operations to signal
its displeasure without risking direct confrontation.

If critical industrial control systems are so easily compromised, one would expect governments to target these
vulnerable systems more frequently rather than resort to mere disruption. While reports do suggest that North Korea has
the capability to disrupt critical infrastructure such as power grids, acting on this is another matter altogether—much in the same way
that having significant conventional military power does not merit its immediate use. There would be grave consequences.

Cybercriminals and script kiddies may see in these vulnerable systems an opportunity for profit or mischief. But attributional
analysis that looks
beyond technological features and includes tactical and strategic attributes can help distinguish between state-associated and independent criminal actors.
There is a vast body of experience in dealing with cases of cybercrime. While the corresponding institutions and legislation are far from perfect, they do
offer a course of action if actors are classified under this category. Subjecting state-associated actors to this form of punishment, however, may not be as
effective in deterring malicious behavior in this domain. Previous indictments against Chinese hackers appear to have had limited effect in deterring
economic espionage. It is too early to tell if recent legal actions against North Korea, Russia, and China will have any noticeable effects in cyberspace.

Wheeler correctly presents cyberspace as a vulnerable domain that continues to lack a set of norms that regulates aggressive tendencies. But that
doesn’t mean that state actors will immediately take the opportunity to fully exploit this situation to further
their interests. They are acutely aware of the consequences of overly aggressive cyberoperations and therefore
actively attempt to limit the impact of their activities by either narrowing the scope of their operations or resorting to
techniques that do minimal damage and are easily contained.

Adequate health care, housing, education, and clean water and air are increasingly out of reach for large
sections of the population, even in wealthy countries in North America and Europe, while transportation is becoming more difficult in
the United States and many other countries due to irrationally high levels of dependency on the automobile and disinvestment in public transportation.
Urban structures are more and more characterized by gentrification and segregation, with cities becoming the playthings of the well-to-do while
marginalized populations are shunted aside. About half a million people, most of them children, are homeless on any given night in the United States.14
New York City is experiencing a major rat infestation, attributed to warming temperatures, mirroring trends around the world.15

In the United States and other high-income countries, life expectancy is in decline, with a remarkable resurgence of Victorian
illnesses related to poverty and exploitation. In Britain, gout, scarlet fever, whooping cough, and even scurvy are now resurgent, along with tuberculosis.
With inadequate enforcement of work health and safety regulations, black lung disease has returned with a vengeance in U.S. coal country.16 Overuse
of antibiotics, particularly by capitalist agribusiness, is leading to an antibiotic-resistance crisis, with the dangerous growth of
superbugs generating increasing numbers of deaths, which by mid-century could surpass annual cancer deaths, prompting the World
Health Organization to declare a “global health emergency.” 17 These dire conditions, arising from the workings of the system, are
consistent with what Frederick Engels, in the Condition of the Working Class in England, called “social murder.”18

At the instigation of giant corporations, philanthrocapitalist foundations, and neoliberal governments, public education has been restructured around
corporate-designed testing based on the implementation of robotic common-core standards. This is generating massive databases on the student
population, much of which are now being surreptitiously marketed and sold.19 The corporatization and privatization of education is feeding the
progressive subordination of children’s needs to the cash nexus of the commodity market. We are thus seeing a dramatic return of Thomas Gradgrind’s
and Mr. M’Choakumchild’s crass utilitarian philosophy dramatized in Charles Dickens’s Hard Times: “Facts are alone wanted in life” and “You are never to
fancy.”20 Having been reduced to intellectual dungeons, many of the poorest, most racially segregated schools in the United States
are mere pipelines for prisons or the military.21

More than two million people in the United States are behind bars, a higher rate of incarceration than any other country in the
world, constituting a new Jim Crow. The total population in prison is nearly equal to the number of people in Houston, Texas, the fourth largest
U.S. city. African Americans and Latinos make up 56 percent of those incarcerated, while constituting only about 32 percent of the U.S. population. Nearly
50 percent of American adults, and a much higher percentage among African Americans and Native Americans, have an immediate family member who
has spent or is currently spending time behind bars. Both black men and Native American men in the United States are nearly three times, Hispanic men
nearly two times, more likely to die of police shootings than white men.22 Racial divides are now widening across the entire planet.

Violence against women and the expropriation of their unpaid labor, as well as the higher level of exploitation of their paid labor,
are integral to the way in which power is organized in capitalist society—and how it seeks to divide rather than unify the
population. More than a third of women worldwide have experienced physical/sexual violence. Women’s bodies, in particular, are objectified, reified, and
commodified as part of the normal workings of monopoly-capitalist marketing.23

The mass media-propaganda system, part of the larger corporate matrix, is now merging into a social media-based propaganda system that
is more porous and seemingly anarchic, but more universal and more than ever favoring money and power. Utilizing modern
marketing and surveillance techniques, which now dominate all digital interactions, vested interests are able to tailor their messages, largely unchecked, to
individuals and their social networks, creating concerns about “fake news” on all sides.24 Numerous business entities promising technological
manipulation of voters in countries across the world have now surfaced, auctioning off their services to the highest bidders.25 The elimination of net
neutrality in the United States means further concentration, centralization, and control over the entire Internet by monopolistic service providers.

Elections are increasingly prey to unregulated “dark money” emanating from the coffers of corporations and the
billionaire class. Although presenting itself as the world’s leading democracy, the United States, as Paul Baran and Paul
Sweezy stated in Monopoly Capital in 1966, “is democratic in form and plutocratic in content.”26 In the Trump administration, following
a long-established tradition, 72 percent of those appointed to the cabinet have come from the higher corporate echelons, while others have been drawn
from the military.27
War, engineered by the United States and other major powers at the apex of the system, has become perpetual in strategic oil
regions such as the Middle East, and threatens to escalate into a global thermonuclear exchange. During the Obama
administration, the United States was engaged in wars/bombings in seven different countries—Afghanistan, Iraq, Syria, Libya,
Yemen, Somalia, and Pakistan.28 Torture and assassinations have been reinstituted by Washington as acceptable instruments of war against
those now innumerable individuals, group networks, and whole societies that are branded as terrorist. A new Cold War and nuclear arms
race is in the making between the United States and Russia, while Washington is seeking to place roadblocks to
the continued rise of China. The Trump administration has created a new space force as a separate branch of the military in an attempt to
ensure U.S. dominance in the militarization of space. Sounding the alarm on the increasing dangers of a nuclear war and of climate destabilization, the
distinguished Bulletin of Atomic Scientists moved its doomsday clock in 2018 to two minutes to midnight, the closest since 1953, when it marked the
advent of thermonuclear weapons.29

Increasingly severe economic sanctions are being imposed by the United States on countries like Venezuela and Nicaragua, despite their democratic
elections—or because of them. Trade
and currency wars are being actively promoted by core states, while racist barriers
against immigration continue to be erected in Europe and the United States as some 60 million refugees and internally displaced
peoples flee devastated environments. Migrant populations worldwide have risen to 250 million, with those residing in high-income countries constituting
more than 14 percent of the populations of those countries, up from less than 10 percent in 2000. Meanwhile, ruling circles and wealthy countries seek to
wall off islands of power and privilege from the mass of humanity, who are to be left to their fate.30

More than three-quarters of a billion people, over 10 percent of the world population, are chronically malnourished.31 Food
stress in the United States keeps climbing, leading to the rapid growth of cheap dollar stores selling poor quality and toxic food. Around forty million
Americans, representing one out of eight households, including nearly thirteen million children, are food insecure.32 Subsistence farmers are being
pushed off their lands by agribusiness, private capital, and sovereign wealth funds in a global depeasantization process that constitutes the greatest
movement of people in history.33 Urban overcrowding and poverty across much of the globe is so severe that one can now reasonably refer to a “planet
of slums.”34 Meanwhile, the world housing market is estimated to be worth up to $163 trillion (as compared to the value of gold mined over all recorded
history, estimated at $7.5 trillion).35

The Anthropocene epoch, first ushered in by the Great Acceleration of the world economy immediately after the Second World War, has
generated enormous rifts in planetary boundaries, extending from climate change to ocean acidification, to the
sixth extinction, to disruption of the global nitrogen and phosphorus cycles, to the loss of freshwater, to the
disappearance of forests, to widespread toxic-chemical and radioactive pollution.36 It is now estimated that 60 percent of
the world’s wildlife vertebrate population (including mammals, reptiles, amphibians, birds, and fish) have been wiped out since 1970, while the worldwide
abundance of invertebrates has declined by 45 percent in recent decades.37 What climatologist James Hansen calls the “species exterminations”
resulting from accelerating climate change and rapidly shifting climate zones are only compounding this general
process of biodiversity loss. Biologists expect that half of all species will be facing extinction by the end of the century.38

If present climate-change trends continue, the “global carbon budget” associated with a 2°C increase in average
global temperature will be broken in sixteen years (while a 1.5°C increase in global average temperature—staying beneath which is the
key to long-term stabilization of the climate—will be reached in a decade). Earth System scientists warn that the world is now
perilously close to a Hothouse Earth, in which catastrophic climate change will be locked in and irreversible.39
The ecological, social, and economic costs to humanity of continuing to increase carbon emissions by 2.0 percent a year as in
recent decades (rising in 2018 by 2.7 percent—3.4 percent in the United States), and failing to meet the minimal 3.0 percent annual reductions in
emissions currently needed to avoid a catastrophic destabilization of the earth’s energy balance, are simply
incalculable.40

Nevertheless, major energy corporations continue to lie about climate change, promoting and bankrolling
climate denialism—while admitting the truth in their internal documents. These corporations are working to accelerate the extraction and
production of fossil fuels, including the dirtiest, most greenhouse gas-generating varieties, reaping enormous profits in the process. The melting of the
Arctic ice from global warming is seen by capital as a new El Dorado, opening up massive additional oil and gas reserves to be exploited without regard to
the consequences for the earth’s climate. In response to scientific reports on climate change, Exxon Mobil declared that it intends to extract and sell all of
the fossil-fuel reserves at its disposal.41 Energy corporations continue to intervene in climate negotiations to ensure that any agreements to limit carbon
emissions are defanged. Capitalist countries across the board are putting the accumulation of wealth for a few above
combatting climate destabilization, threatening the very future of humanity.

No scenario for cyber escalation.


Erica D. Borghard 19, Assistant Professor in the Army Cyber Institute at the United States Military Academy at
West Point, and Shawn W. Lonergan, Assistant Professor of International Relations in the Department of Social
Science at USMA, “Cyber Operations as Imperfect Tools of Escalation”, Strategic Studies Quarterly, Fall 2019, p.
123-124

However, there are important empirical reasons to suspect that the risks of cyber escalation may be exaggerated.
Specifically, if cyberspace is in fact an environment that (perhaps even more so than others) generates severe escalation
risks, why has cyber escalation not yet occurred? Most interactions between cyber rivals have been
characterized by limited volleys that have not escalated beyond nuisance levels and have been largely
contained below the use-of-force threshold.5 For example, in a survey of cyber incidents and responses between 2000 and 2014,
Brandon Valeriano et al. find that “rivals tend to respond only to lower-level [cyber] incidents and the response tends to
check the intrusion as opposed to seek escalation dominance. The majority of cyber escalation episodes are at
a low severity threshold and are non-escalatory. These incidents are usually ‘tit-for- tat’ type responses within
one step of the original incident.”6 Even in the two rare examples in which states employed kinetic force in
response to adversary cyber operations—the US counter-ISIL drone campaign in 2015 and Israel’s airstrike against Hamas cyber operatives in 2019— the
use of force was circumscribed and did not escalate the overall conflict (not to mention that force was used against nonstate
adversaries with limited potential to meaningfully escalate in response to US or Israeli force).7

We posit that cyber escalation has not occurred because cyber operations are poor tools of escalation. In particular, we
argue that this stems from key characteristics of offensive cyber capabilities that limit escalation through four
mechanisms. First, retaliatory offensive cyber operations may not exist at the desired time of employment.
Second, even under conditions where they may exist, their effects are uncertain and often relatively limited.
Third, several attributes of offensive cyber operations generate important tradeoffs for decision-makers that
may make them hesitant to employ capabilities in some circumstances. Finally, the alternative of cross-domain
escalation—responding to a cyber incident with noncyber, kinetic instruments—is unlikely to be chosen except under rare
circumstances, given the limited cost-generation potential of offensive cyber operations. In this article, we define cyber
escalation and then explore the implications of the technical features and requirements for offensive cyber operations. We also consider potential
alternative or critical responses to each of these logics. Finally, we evaluate the implications for US policy making.

No cyber attacks – civilian harm, can only be used once, can be reversed to target the
attacker, retribution, resource limits, need luck, lack of assets

Maness & Valeriano, 2015, Ryan C. Maness, Northeastern University, Department of Political Science, Brandon
Valeriano, University of Glasglow, Cyber War versus Cyber Realities: Cyber Conflict in the International System,
Kindle Edition, page number at end of card

We find that the security dilemma has no place in these international interactions. The cyber world is nebulous;
an infiltration against a military facility in this realm could bleed into the public sector. Malicious cyber incidents
on infrastructure have been and will continue to be rare to nonexistent because states are restrained due to
the high probability of civilian harm, the nature of the weapons (single use), and the weak payoffs if utilized
(Gartzke 2013). These types of offensive cyber actions are just as unlikely as interstate nuclear or chemical
weapons attacks. There is a system of normative restraint in cyber operations based on the conditions of
collateral damage, plus the factors of blowback and replication. Foreign policy tactics in the cyber world can
be replicated and reproduced. Any cyber weapon used can be turned right back on its initiator. On top of this,
it is likely that severe cyber operations will bring retribution and consequences that many states are not
willing to accept. We have seen many interstate conflicts since the advent of the Internet age, but the largest
and only cyber operation thus far during a conventional military conflict, the 2008 Russo-Georgian skirmish,
consisted of almost trivial DDoS and vandalism. Since then, Russia has even avoided using cyber weapons
during the Crimean and larger Ukrainian crises of 2014. Other operations are mainly propaganda operations
or occur in the realm of espionage. That the United States did not use cyber tactics against Iraq, Afghanistan, or
Libya, at least as directed at the executive level, signifies that cyber tactics are typically restrained despite
significant constituencies in the military that want to use the weapons. Stuxnet is the outlier, as our data
demonstrate, not the norm or the harbinger of the future to come. Cyber operations are limited in that their
value is negligible, the consequences of a massive cyber incident are drastic, and the requirements to carry
one out are vast. The idea of a lone cyber hacker being able to bring states to their knees is a fantastic one.
Cyber operations like Stuxnet require an exceptional amount of funds, technical knowledge, luck, and on-the-
ground assets for successful implementation. Massive and truly dangerous cyber operations are beyond the
means of most countries. These statements are not opinions, but contentions made based on the facts at hand
and the data we have collected. We also see regionalism dominate in cyberspace. Despite the vastness and
transboundary capacity of the Internet, most operations are limited to local targets connected to traditional
causes of conflict, such as territorial disputes and leadership disagreements. Issues are important (Mansbach
and Vasquez 1981) in world politics and in cyber politics. This is why international relations scholarship is so
important in relation to the cyber question. Cyber operations are not taken devoid of their international and
historical contexts. What has happened in the past will influence how future technologies are leveraged and
where they are applied. The goal of this book will be to use this theoretical frame to explain the cyber conflict
dynamics of rival states, as well as non-state actors willing and able to launch cyber malice. Valeriano, Brandon;
Maness, Ryan C. (2015-04-27). Cyber War versus Cyber Realities: Cyber Conflict in the International System (pp.
16-17). Oxford University Press. Kindle Edition.
No Russia Cyber Attack

Multiple reasons there will be no Russian cyber attack

Aldanova, 5-30, 22, Dina Aldanova is a Master’s Candidate at Georgetown University’s Eurasian, Russian, and East
European Studies Program, Will Russia Launch a New Cyber Attack on America?, https://nationalinterest.org/blog/techland-
when-great-power-competition-meets-digital-world/will-russia-launch-new-cyber-attack?page=0%2C1

Although Putin’s intentions are far from clear, should he decide to pursue a cyberattack on the United States’ critical infrastructure
that would instantly shut down electricity or disrupt clean water supply, the offense might come unexpectedly, and soon.
Policy circles in Washington are now debating how Vladimir Putin might respond to a major contraction of the Russian
economy and clear signs that Moscow is losing the war in Ukraine. Some posit that a cornered president, furious and facing a near defeat,
might indeed respond brutally—moving the proxy confrontation of a new Cold War front to a cyber battlefield, where Russia
has a greater advantage, and launching a massive cyberattack against the United States. However, several key factors call this
thesis into question.

Similar to Iran and North Korea, Russia is known to be responsible for some of the most aggressive,
large-scale cyberattacks. However, these cyber tactics have played a rather peripheral role, either in supporting
conventional warfare or through disinformation campaigns that serve to spread chaos and panic among targeted
societies. The first known state-backed attack occurred in 2007 and lasted for twenty-two days, when the Russian
military intelligence unit, the GRU, targeted Estonian commercial, government, and Domain Name System (DNS) servers,
and online banking systems. The attacks fell under the Denial of Service (DoS) and Distributed Denial of Service (DDoS)
categories that include methods such as ping flooding, spam distribution, botnets, and phishing emails. In 2008, as a part of
hybrid warfare amid the occupation of Abkhazia and South Ossetia, Russia defaced Georgian state websites. In 2015,
following the annexation of Crimea and the occupation of eastern Ukraine, a GRU proxy group named Sandworm attacked
the Ukrainian power grid and deprived more than 200,000 people of electricity for six hours. In 2017, the NotPetya malware
attack directed at Ukraine had an unprecedented impact hitting major Western companies in Europe and the United States
such as Mondelez International and Maersk, and even striking back at Russian oil company Rosneft. It paralyzed thousands
of networks. The global cost the malware had provoked reached $10 billion—encapsulating the most consequential cyber
attack in history. In addition, just a month ago, Russia unsuccessfully attempted to attack the Ukrainian power grid with
advanced malware classified as a wiper. Overseas, a Russian group of hackers called FancyBear meddled with the United
States 2015 presidential campaigns and 2016 federal elections at the county level. To this point, while the Russian cyber
tactics are common and multifarious, they represent a secondary function in hybrid warfare that Moscow conducts
along with disinformation campaigns and conventional military operations. Nevertheless, cybersecurity experts
speculate on a range of consequences in a worst-case cyber scenario: Russia might attempt to attack U.S. critical
infrastructure, turn off the lights, target the operation of ATMs and credit card systems, attack Amazon’s cloud, disrupt the
transportation and supply of clean water, and target pharmaceuticals companies’ manufacturing facilities, power grids, and
colonial pipelines. But will such a threat manifest? Not only would a cyberattack against the United States contradict the
historically peripheral nature of Russian cyber warfare, but Russia’s cyber capacity would be insufficient for the task.
For the past several years, the West has largely overestimated Russian military capabilities in conventional warfare.
U.S. intelligence agencies predicted the 2022 war in Ukraine would be the most destructive the European continent has seen
since the end of World War II, expecting the fall of Kyiv to come within days. However, the still ongoing, drawn-out war has
revealed weaknesses in the Russian armed forces, its military arsenal, and strategic leadership. Russian officials, for their
part, underestimated the strength of the Ukrainian resistance and the united position of the international community.
Spending slightly more than 4 percent of the country’s GDP on the military, the Russian president mobilizes domestic
support for the military budget by articulating the external threat of NATO. In a relatively undigitized society like Russia,
lobbying to spend more on the cyber budget would prove less effective. Taking this into account, it seems possible the West
could be overestimating Russian cyber competence as well.

Furthermore, Russia is unlikely to wage a cyberattack on the United States due to fear of retaliation on multiple
fronts. Russian society is already experiencing the consequences that the war has wrought: an economic crisis and the
psychological pressure of being cast as a global pariah. In case of a Russian cyberattack, the consequences of U.S.
cyber retaliation would hit the public first. Given current conditions, depriving people of water and electricity could
trigger public discontent on an unprecedented scale. Decades of increasingly authoritarian leadership have
undoubtedly engendered public grievances hidden deep within society. At some point, this simmering disgruntlement
can boil over into outrage. Putin can ill afford to front further domestic unrest now.

Current U.S. cyber capabilities could also contribute to the fear of retaliation. For the past few years, the United
States has developed an impressive cyberinfrastructure, restructured its system of governance, and invested in cyber
training and education. As Richard Clarke and Robert Knake emphasize in their book, The Fifth Domain, following the
Cold War strategy of deterrence and containment, the United States has largely restrained itself from involvement in cyber
counter activities. Although for a long time America has focused on defensive cyber policy, today, the U.S. Cyber Command
prioritizes offensive measures. As such, in 2019, the United States successfully targeted the Iranian intelligence service and
missile launch system as a response to an Iranian strike against an American drone and U.S. oil tankers. Earlier in 2012, the
Stuxnet computer worm, designed in cooperation with Israel, successfully infiltrated nuclear facilities in Iran. In addition to
an offensive preference, a more consolidated system of governance and a set of regulations have advanced U.S.
cybersecurity. A clear allocation of roles and responsibilities between the Department of Homeland Security and U.S. Cyber
Command and the relevant leadership improved the system of reporting incidents and information sharing. It facilitated
communication within federal agencies and between the government, the private sector, and the public. U.S. private
enterprises now spend billions of dollars on cybersecurity, employee training, and encrypted channels. The United
States also takes a leading role in collaborating with strategic allies on sharing best practices, detecting flaws in
networks, and promoting cyber hygiene. International cooperation to this degree is not an asset that Russia benefits from.
With the support of NATO Cooperative Cyber Defense Center of Excellence’s research and development projects, expertise,
and training, U.S. retaliation to a potential Russian cyber attack could be not only detrimental but even more profound as a
multilateral response. Based on all this, the fear of retaliation could indeed prevent Putin from engaging in offensive
cyber operations against the United States. Finally, Putin has lost the upper hand in launching an attack by surprise.
For instance, Russia invaded Georgia during the Olympics Games in Beijing in 2008, and Ukraine during the Sochi
Winter Olympics in 2014. When Putin waged war on Ukraine in 2022—incidentally, immediately following the
Beijing Winter Olympic Games—the West anticipated it. Putin invaded Ukraine anyway. He is unlikely to act
recklessly in this way again, considering the failures the Russian military has experienced since the invasion.
Furthermore, knowing that the United States and European allies have shielded up, Putin has no incentive to strike.
Nevertheless, would Putin wait for a more favorable moment? Or scale back a potential attack, for instance, by meddling in
the U.S. midterm elections in November?
No Impact

Cyber weapons have no real impact

Valeriano, 5-30, 22, Brandon Valeriano is a senior fellow at the Cato Institute and a distinguished senior fellow at
the Marine Corps University., War Is Still War: Don’t Listen to the Cult of Cyber,
https://nationalinterest.org/blog/techland-when-great-power-competition-meets-digital-world/war-still-war-
don%E2%80%99t-listen-cult-cyber

Cyber cultists are still searching for the grand example of success in cyberwar. Yet, there
remains no central strategic purpose behind
cyber warfare. It’s not an effective coercive tactic or a useful form of espionage. Instead, cyberwar is a tool of disruption.
Even Russian president Vladimir Putin noted this recently when he complained that “serious attacks [against Russia] were inflicted on the official websites
of the authorities. Attempts of illegal penetration into corporate networks of leading Russian companies are also recorded much more often.” In
the
lead up to the Ukraine war, predictions of dramatic “shock and awe” in cyberspace abounded, but there is
little to show for it besides “attempts of illegal penetration.” Even demonstrating coordination with battlefield operations seems
beyond reach for cyber warriors. Microsoft’s special report on Russia’s cyber activity in Ukraine did not demonstrate
coordination between cyber actions and conventional attacks despite reports to the contrary. The evidence of
coordination is noted by pointing out that cyberattacks come after battlefield failures, which is hardly the evidence of complementary cyber activity many
had been expecting. The hope for the cultists is that cyber operations will replace traditional mortars and bombs, lancing out through the fiber cables to
strike at the enemy and bend them to the will of the attacker. It is thought that cyber capabilities provide a state with the means to achieve its objectives
without firing a shot. The reality is that attackers
hardly demonstrate an impact on weakly protected critical
infrastructure, let alone the battlefield. Cyber capabilities mostly make it easier to communicate and organize. The prime example might
be GIS Arta, Ukraine’s “Uber-like” system of allocated military fires on a selected target. Instead of cyber operations providing a direct path to victory,
algorithms rather allocate forces based on distance, readiness, and capability, much like the Uber system does when finding a ride in the middle of New
York City. While the GIS Arta system solves one problem by quickly allocating force on a contested battlefield, it also makes the user dependent on the
system, creating new vulnerabilities. The problem with many technologies is that they create new problems while solving other challenges. The internet
was created without security in mind; soon, an entire industry was created to solve the problems introduced by the internet. The dominant cultural trait of
the cyber warrior is making and solving problems with the same technology. Those asking why Ukraine did not experience the predicted cyberwar skip the
basic question of why we would assume that cyber operations could be leveraged for battlefield effect in the first place. This goes to the heart of the
question of the culture of the cyber warrior: those who believe in the ideology of cyberwar expect the battlefield to be transformed by cyber. Sadly,
those with experience in the technology and its uses by governments and militaries will feign shock when the
predictions of excited outsiders do not come to fruition. Transformative cyberwar is simply magic, and it is
often trumpeted by charlatans. The central cultural trait of the mythical cyber warrior is a belief in magic. There is little difference between
Harry Potter’s adventures and the fiction of the cyber hacker capable of bringing down countries from their basement. So, do you believe in cyber magic or
not?
Escalation Answers

Cyber risk low – limited utility, no escalation empirically denied

Lonegran, 4-15, 22, ERICA D. LONERGAN is Assistant Professor in the Army Cyber Institute at West Point and a
Research Scholar at the Saltzman Institute of War and Peace Studies at Columbia University. Previously, she
served as Senior Director on the U.S. Cyberspace Solarium Commission. The views expressed here are her own,
The Cyber-Escalation Fallacy What the War in Ukraine Reveals About State-Backed Hacking,
https://www.foreignaffairs.com/articles/russian-federation/2022-04-15/cyber-escalation-fallacy

During a Senate Intelligence Committee hearing in March, Senator Angus King, an independent from Maine, pressed General Paul Nakasone, the head of
U.S. Cyber Command and director of the National Security Agency, about the lack of significant cyber-operations in Russia’s war in Ukraine. After all,
Russia has long been known for targeting Western countries, as well as Ukraine itself, with cyberattacks. Echoing the surprise of many Western observers,
King said, “I expected to see the grid go down, communications too, and that hasn’t happened.” Indeed, although President Joe Biden and members of his
administration have also warned of potential Russian cyberattacks against the United States, there were remarkably few signs of such activity during the first
six weeks of the war. That is not to say that cyber-activity has been entirely absent. Proxy cyber-groups and hackers have mobilized on both sides, ranging
from Ukraine’s 400,000-strong “IT Army” to Russia’s Conti ransomware group. Sandworm, an outfit linked to Russian military intelligence, also has a long
record of cyberattacks against Ukraine. Yet since
the war began, such operations have mostly been limited to low-
cost, disruptive incidents rather than large-scale attacks against critical civilian and military
infrastructure. Two potential exceptions only underscore the relatively limited role of cyber-operations. There is some evidence that at the start of
the war Russian-linked actors conducted a cyberattack against Viasat, a U.S.-based Internet company that provides satellite Internet to the Ukrainian military
and to customers in Europe. But the impact was temporary and, more important, did not meaningfully affect the Ukrainian military’s ability to communicate.
Additionally, Ukrainian officials recently announced that, in early April, the Sandworm group attempted, but failed, to carry out a cyberattack against
Ukraine’s power grid. While the hackers appeared to have gained access to a company that delivers power to two million Ukrainians, they were thwarted by
effective defenses before being able to cause any damage or disruption. In fact, the negligible role of cyberattacks in the Ukraine conflict should come as no
surprise. Through war simulations, statistical analyses, and other kinds of studies, scholars have
found little evidence that cyber-operations provide effective forms of coercion or that they cause
escalation to actual military conflict. That is because for all its potential to disrupt companies, hospitals, and utility grids during
peacetime, cyberpower is much harder to use against targets of strategic significance or to achieve
outcomes with decisive impacts, either on the battlefield or during crises short of war. In failing to
recognize this, U.S. officials and policymakers are approaching the use of cyberpower in a way that may be doing more harm than good—treating cyber-
operations like any other weapon of war rather than as a nonlethal instrument of statecraft and, in the process, overlooking the considerable opportunities as
well as risks they present. THE MYTH OF CYBER-ESCALATION Much of the current understanding in Washington about the role of cyber-operations in
conflict is built on long-standing but false assumptions about cyberspace. Many scholars have asserted that cyber-operations could easily lead to military
escalation, up to and including the use of nuclear weapons. Jason Healey and Robert Jervis, for example, expressing a widely held view, have argued that an
incident that takes place in cyberspace, “might cross the threshold into armed conflict either through a sense of impunity or through miscalculation or
mistake.” Policymakers have also long believed that cyberspace poses grave perils. In 2012, Secretary of Defense Leon Panetta warned of an impending
“cyber-Pearl Harbor,” in which adversaries could take down critical U.S. infrastructure through cyberattacks. Nearly a decade later, FBI Director
Christopher Wray compared the threat from ransomware—when actors hold a target hostage by encrypting data and demanding a ransom payment in return
for decrypting it—to the 9/11 attacks. And as recently as December 2021, Secretary of Defense Lloyd Austin noted that in cyberspace, “norms of behavior
aren’t well-established and the risks of escalation and miscalculation are high.” Seemingly buttressing these claims has been a long record of cyber-
operations by hostile governments. In recent years, states ranging from Russia and China to Iran and North Korea have used cyberspace to conduct large-
scale espionage, inflict significant economic damage, and undermine democratic institutions. In January 2021, for example, attackers linked to the Chinese
government were able to breach Microsoft’s Exchange email servers, giving them access to communications and other private information from companies
and governments, and may have allowed other malicious actors to conduct ransomware attacks. That breach followed on the heels of a Russian intrusion
against the software vendor SolarWinds, in which hackers were able to access a huge quantity of sensitive government and corporate data—an espionage
treasure trove. Cyberattacks have also inflicted significant economic costs. The NotPetya attack affected critical infrastructure around the world—ranging
from logistics and energy to finance and government—causing upward of $10 billion in damage. But the assumption that cyber-operations play a central role
in either provoking or extending war is wrong. Hundreds
of cyber-incidents have occurred between rivals with long
histories of tension or even conflict, but none has ever triggered an escalation to war. North Korea,
for example, has conducted major cyberattacks against South Korea on at least four different occasions, including the “Ten
Days of Rain” denial of service attack—in which a network is flooded with an overwhelming number of requests, becoming temporarily inaccessible to
users—against South Korean government websites, financial institutions, and critical infrastructure in 2011 and the “Dark Seoul” attack in 2013, which
disrupted service across the country’s financial and media sectors. It would be reasonable to
expect that these operations might escalate the situation on the Korean Peninsula, especially because North
Korea’s war plans against South Korea reportedly involve cyber-operations. Yet that is not what happened. Instead, in each case, the South
Korean response was minimal and limited to either direct, official attribution to North Korea by government officials or more indirect public suggestions that
Pyongyang was likely behind the attacks. Similarly, although
the United States reserves the right to respond to
cyberattacks in any way it sees fit, including with military force, it has until now relied on
economic sanctions, indictments, diplomatic actions, and some reported instances of tit-for-tat
cyber-responses. For example, following Russia’s interference in the 2016 U.S. presidential election, the Obama administration expelled 35
Russian diplomats and shuttered two facilities said to be hubs for Russian espionage. The Treasury Department also levied economic sanctions against
Russian officials. Yet according to media reports, the administration ultimately rejected plans to conduct retaliatory cyber-operations against Russia. And
although the United States did use its own cyber-operations to respond to Russian attacks during
the 2018 midterm elections, it limited itself to temporarily disrupting the Internet Research
Agency, a Russian troll farm. These measured responses are not unusual. Despite decades of malicious behavior
in cyberspace—and no matter the level of destruction—cyberattacks have always been contained below the level of
armed conflict. Indeed, researchers have found that major adversarial powers across the world have routinely observed a “firebreak” between
cyberattacks and conventional military operations: a mutually understood line that distinguishes strategic interactions above and below it, similar to the
threshold that exists for the employment of nuclear weapons. But it is not just that cyber-operations do not lead to conflict. Cyberattacks can also be useful
ways to project power in situations in which armed conflict is expressly being avoided. This is why Iran, for example, might find cyberattacks against the
United States, including the 2012–13 denial of service attacks it conducted against U.S. financial institutions, appealing. Since Iran likely prefers to avoid a
direct military confrontation with the United States, cyberattacks provide a way to retaliate for perceived grievances, such as U.S. economic sanctions in
response to Iran’s nuclear program, without triggering the kind of escalation that would put the two countries on a path to war. THE ADVANTAGE OF
AMBIGUITY In addition to the ways they are used, cyber-operations
also have two general qualities that tend to
distinguish them from conventional military operations. First, they typically have limited,
transient impact—especially when compared with conventional military action. As the Hoover Institution
fellow Jacquelyn Schneider recently told The New Yorker, “If you’re already at a stage in a conflict where you’re willing to drop bombs, you’re going to
drop bombs.” Unlike traditional military hardware, cyberweapons are virtual: even at their most destructive, they rarely have effects in the physical world. In
the extraordinary instances when they do—such as the Stuxnet cyberattack, which caused the centrifuges used to enrich uranium in Natanz, Iran, to speed up
or slow down—cyber-operations do not inflict the kind of damage that can occur in even a minor precision missile strike. And when
states have
launched cyberattacks against civilian infrastructure, such as Russia’s 2015 hit on Ukraine’s
power grid, the impact has been short-lived. To date, cyberattacks have never caused direct
physical harm; the only known indirect death associated with a cyberattack occurred in 2020,
when a German patient with a life-threatening condition died as a result of a treatment
interruption caused by a ransomware attack on a hospital’s servers. In practice, governments themselves have also
recognized the contrasting impacts of cyberattacks and conventional military attacks. Consider the incident between Iran and the United States that occurred
in the summer of 2019: according to reports in the U.S. media, when Iran attacked oil tankers in the region and downed a U.S. drone, the Trump
administration chose to respond in cyberspace, allegedly by hacking Iranian computer systems to degrade their ability to conduct further attacks against oil
tankers. What stands out about this case is that there was a credible military option on the table that was subsequently revoked: President Donald Trump
called off plans to conduct military strikes against Iranian targets. At the time, Trump tweeted that he changed his mind after learning of the potential for
civilian casualties. By implication, a cyber-operation may have been seen as less risky precisely because it was unlikely to cause loss of life or even major
destruction. Second,
in contrast to most military strikes, cyber-operations tend to
be shrouded in secrecy and come with plausible deniability. Analysts have argued that uncertainty about responsibility
makes interactions in cyberspace perilous and undermines deterrence. Cloaked in anonymity, so the logic goes, malicious actors can provoke conflict while
remaining in the shadows. It is true that false-flag cyberattacks are common. For example, when a group linked to the Chinese government conducted cyber-
operations against Israel in 2019 and 2020, it masqueraded as Iranian, presumably to confuse Israeli attribution efforts. Yet secrecy need not have negative
implications: it can provide opportunities for states to maneuver in crises without the drawbacks that more conventional uses of hard power might have, such
as exacerbating domestic political tensions. It can also offer a way to explore the extent to which the other side is willing to negotiate or resolve the crisis:
ambiguity creates breathing space. For example, when the United States withdrew from the Iran nuclear deal in 2018, experts worried that Iran might
retaliate, perhaps by attacking U.S. personnel or U.S. interests in the Middle East. Instead, Iran appeared to respond with increased cyber-activity that was
ambiguous and not escalatory. Although the Iranian cyber-operations were noted within a day of the U.S. announcement, they were not the kind of massive
attack that many commentators had anticipated; they mostly appeared to be attempts to conduct reconnaissance and probe for vulnerabilities. If Iran intended
for this activity to be uncovered, it would largely serve symbolic purposes—communicating Iran’s presence to the United States. Put
simply,
cyber-operations by their very nature are designed to avoid war. They can act as a less costly
alternative to conflict because they are ambiguous, rarely break things, and don’t kill people . By
continuing to depict cyberspace as an escalatory form of warfare itself, policymakers risk overstating the role of cyber-operations in armed conflict and
missing their true importance. TOOLS NOT WEAPONS The recognition that cyber-operations are unlikely to lead to military escalation—and that they play
at most a supporting rather than decisive role in actual armed conflicts—has direct consequences for U.S. policy and strategy. For one thing, it means that
the United States may have greater room to use cyberspace to achieve objectives without precipitating new crises or exacerbating existing ones. Since 2018,
for example, the U.S. Defense Department has treated cyberspace as an arena in which the military can operate more routinely and proactively rather than
wait to respond to an adversary’s activity. According to the Pentagon, Washington needs to “defend forward to disrupt or halt malicious cyber activity at its
source.” This approach encompasses maneuvering on networks controlled by U.S. adversaries or third parties and even conducting offensive cyber-
operations. At the time that the 2018 cyber strategy was released, many experts expressed alarm that it could provoke military escalation. Adding to the
concerns, in the 2019 National Defense Authorization Act, Congress authorized the secretary of defense to conduct cyber-operations as a traditional military
activity, which meant that cyber-operations would no longer be treated as a form of covert action requiring a presidential finding to be approved. Yet in the
four years since the defend forward concept was implemented, the escalation that many feared has not materialized. This should give some assurances to
policymakers that the United States can continue to conduct offensive cyber-operations without risking a wider conflict. In 2021, for example, U.S. Cyber
Command, working with a partner government, conducted a cyber-operation to limit the ability of the Russian-linked criminal group REvil to conduct
ransomware attacks. Several months later, U.S. officials acknowledged that the military had “imposed costs” against ransomware groups. There is also some
evidence that efforts to counter Russian cyber-activity during the current Ukraine crisis may have blunted a more effective Russian cyberoffensive, with
Nakasone alluding to work done by the Ukrainians and others to hinder Moscow’s plans. But just because the Pentagon’s plan has not led to escalation does
not mean it is a tool the U.S. can use to solve all of the cyber challenges it faces. For the very same reasons that offensive cyber-operations have not led to
escalation, their constraints should cast doubt on the notion that the United States can use them to coerce adversaries into changing their behavior or punish
them by inflicting high costs. Second, the reality that cyber-operations are used by states in many
different ways means that policymakers need to develop a more nuanced approach for responding to cyberthreats. Because cyber-operations are consistently
seen as representing an existential threat to the United States, Washington has tended to deal with cyber-incidents of contrasting scope and scale with the
same policy tools. For instance, senior U.S. officials described both Russia’s 2016 election interference and 2021 SolarWinds operation as acts of war. But
the first was a cyber-enabled information operation and the second was in fact a large-scale cyber-espionage campaign—and neither resembled open war in
any conventional sense. Moreover, the policy responses in both of these cases (as in many other cyber-incidents) were similar: a combination of public
attribution, indictments, and sanctions. Instead of responding with inflammatory language and standard forms of retaliation, policymakers should consider
how to employ cybertools and non-cybertools in ways that are tailored to specific incidents, taking into account the extent and gravity of a given operation.
Responses can also be proportionate without being symmetrical. Rather than responding in kind, the United States should apply varying and more creative
approaches that reflect differences in adversaries’ centers of gravity. What is important to Beijing and therefore what may motivate its behavior is different
from what is important to Moscow, Tehran, and so on. A one-size-fits-all approach to adversary cyber-operations may raise particular problems in the
Ukraine conflict. Anticipating potential Russian cyberattacks against member states, NATO leaders have reaffirmed that Article 5, the treaty’s collective
defense clause, applies to cyberspace, but they have also expressed ambiguity about what specific operations might trigger it. A lack of clarity about how
thresholds and responses are defined risks undermining the credibility of this pledge and the effectiveness of NATO’s overall cyberstrategy.

Russia and the US will both avoid nuclear use

Dr. Matthias Schulze is the deputy head of the security division at the German Institute for International and
Security Affairs (SWP). He also runs percepticon.de blog and podcast on cybersecurity issues, 4-15, 22, Can
Russia and the West Avoid a Major Cyber Escalation?, https://nationalinterest.org/blog/techland-when-great-
power-competition-meets-digital-world/can-russia-and-west-avoid-major-cyber

The central premise of our argument is that Russia and NATO member states will want to avoid a direct
military clash at nearly all costs (Putin’s desire to protect his regime from imminent collapse might be an
exception). If anything, the tragedies of the war in Ukraine reveal the enormous economic and human costs of
conventional battles involving the Russian military behemoth. Although the risk of an accidental or unwanted
war between Russia and NATO is always present and has increased, both sides will want to reduce it. That is
how to interpret Russian President Vladimir Putin’s recent allusion to nuclear war: He rattled the atom in order
not to have to use it. Similarly, President Joe Biden’s warning about the certainty of “World War 3” if Russia
attacked NATO was a rhetorical device to reduce its chances. Both sides have signaled that they wish to avoid
an epochal war among them; they threaten it in order not to fight it. This is conventional deterrence thinking
at its finest. Familiar red lines are reinforced so that all sides can see them plainly amid the crisis.
Grid Answers

Cyber attacks won’t take down the grid


Victoria Craig 16, Analyst at Fox Business, Citing the Senior Manager of Industrial Control Systems at Mandiant,
“The U.S. Power Grid is 'Vulnerable,' But Don't Panic Just Yet”,
http://www.foxbusiness.com/features/2016/02/02/u-s-power-grid-is-vulnerable-but-dont-panic-just-yet.html

The idea of the nation's power grids becoming the next battleground for cyber warriors could make hacking into consumers’ credit card accounts and personal information seem like child’s play. While
U.S. power companies are likely targeted by foreign governments and others in increasingly sophisticated breaches, actually shutting
off the lights and causing chaos is far more complicated than many pundits make it seem. Dan Scali, senior
manager of industrial control systems at Mandiant, a cybersecurity consulting arm of FireEye ( FEYE), explained that
while cyber criminals may gain access to power and utility data systems, it doesn’t necessarily mean the result will be a
power outage and a total takedown of power grid control systems. In other words, the power grid is controlled by
more than just a panel of digital buttons. “Losing the control system is bad from the perspective that it takes you out of your normal mode of
operations of being able to control everything from one command center, but it doesn’t mean you’ve lost control or all the lights go out [in the city],” Scali

explained. While many of the systems have been modernized to include digitized control panels, if a hacker were to infiltrate the system, a utility worker

could still have the ability to manually control the machines by flipping a switch, pushing a button, or tripping a breaker. As the world saw
with the recent attack in Ukraine, which caused a blackout for 80,000 customers of the nation’s western utility, the biggest problem may be ensuring the power grid’s control systems are not vulnerable to
cyber break ins. The January attack in Ukraine was likely caused by a corrupted Microsoft Word attachment that allowed remote control over the computer, according to the U.S. Department of Homeland

Security. Scali said there was no evidence from the incident in Ukraine that the hacker’s malware was able to physically
shut down the power. “It wiped out machines, deleted all the files. Kill disk malware made it impossible to remotely control things. It caused chaos on the business network, and the area
where control system operations sat. But the attacker, we believe, would have had to actually used the control system to cause load shedding, which caused the power to go out, or trip breakers to cause the
actual problem. Malware itself didn’t turn the power out,” Scali said. He said what most likely happened in that incident was the hacker stole user credentials and logged into the system remotely. The
bottom line: Yes, a similar event could happen in the U.S. And corporate America is concerned. A recent survey released in January on the state of information security, conducted by consulting firm
Pricewaterhouse Coopers, showed cybersecurity as one of the biggest concerns among the top brass at U.S. power and utilities firms. Part of the problem, Brad Bauch, security and cyber sector leader at PwC
said, is the interconnectedness of the industry’s tools. “Utilities want to be able to get information out of [their] systems to more efficiently operate them, and also share that information with customers so
they have more real-time information into their usage,” he explained. While allowing access to their own consumption data allows the companies to give their customers more of what they want, it also

opens up a host of access points for hackers, making the systems more vulnerable than they otherwise would be. But to say that the power grid is susceptible to cyber
hackers is a bit of an oversimplification.

No grid impact---it’s overhyped.


Freedberg 14 (Sydney J, “Cyberwar: What People Keep Missing About The Threat,” Jan 6,
http://breakingdefense.com/2014/01/cyberwar-what-people-keep-missing-about-the-threat/, CMR)

Cites:

--Peter W. Singer – former director of the Center for 21st Century Security and Intelligence and a senior fellow in
the Foreign Policy program

--Allan A. Friedman – Research Scientist at the Cyber Security Policy Research Institute at George Washington
University's School of Engineering

Singer and Friedman also do a valuable service in beating back the hype about “Cyber Pearl Harbors” and “Cyber
9/11s” or the US suffering countless millions of “attacks.” Those alarmist statistics lump together everything
from a virus easily stopped by someone’s firewall to credit card theft to the loss of secret schematics for the F-35 stealth
fighter. Those “attacks” vary from trivial, to significant losses for one particular business, to actual matters of national security, but

none of them does as much damage as a good old-fashioned bomb, they argue. Even if hackers shut down the
national electrical grid for weeks on end, bad as that would be, it wouldn’t be as bad as a single nuclear explosion. “It’s a
lot like ‘Shark Week,’” Singer said about the overhyped dangers. “Squirrels have taken down the power grid more times
than the zero times hackers have.” There’s lots of talk about how the attacker always has the advantage in cyberspace, he told an audience at
Brookings this afternoon, but “a true cyber offense, an effective one, a Stuxnet style [attack] is something quite difficult.”

The grid is strong now---energy efficiency, new tech, and cycle generation.
Krysti Shallenberger 17, Utility Dive associate editor, 1-5-2017, "Predictions 2017: 8 sector insiders on what's
next for power markets and regulation," Utility Dive, http://www.utilitydive.com/news/predictions-2017-8-
sector-insiders-on-whats-next-for-power-markets-and-re/433358/
The traditional drivers of infrastructure additions were load growth and connecting distant generation sources to population centers. However, that has changed. Load growth is negligible in many areas. (At
PJM we forecast peak load growth of less than half of one percent per year.) At the same time, more efficient technology, specifically energy efficiency and
new natural gas combined cycle generation closer to load centers, has changed power flow patterns, which
reduces the need for additional large-scale transmission expansion projects. The reduction in larger scale
projects has allowed focus to be shifted to resolving aging infrastructure concerns on lower-voltage facilities.
More efficient technologies, the capacity performance construct and upgrades to the system have made the grid
increasingly robust and resilient. Last summer, for example, was the first time PJM met a peak demand of more than
150,000 megawatts without invoking emergency procedures and while net exporting power.
AT: NC3
No NC3 hacking.
Futter ’16 [Andrew; 2016; International Politics Professor at the University of Leicester; “War Games Redux?
Cyberthreats, US–Russian Strategic Stability, and New Challenges for Nuclear Security and Arms Control,”
European Security 25(2), p. 171-172]

It is of course highly unlikely that either the USA or Russia has plans – or perhaps more importantly, the desire – to
fully undermine the other’s nuclear command and control systems as a precursor to some type of disarming first strike, but the
perception that nuclear forces and associated systems could be vulnerable or compromised is persuasive. Or as Hayes (2015) puts it, “The risks of cyber
disablement entering into our nuclear forces are real”. While the
growing possibility of “cyber disablement” should not be
overstated (notions of a “cyber-Pearl Harbor” (Panetta 2012) or “cyber 9–11” (Charles 2013) have done little to help
understand the nature of the challenge), cyberthreats are nevertheless an increasingly important component of the contemporary US–Russia strategic context. This is
particularly the case when they are combined with other emerging military-technical developments and programmes. The net result, especially given the current downturn in US–Russian strategic relations,
and the way cyber is exacerbating the impact of other problematic strategic dynamics, is that it seems highly unlikely that either the USA or Russia will make the requisite moves to de-alert nuclear forces
that the new cyber challenges appear to necessitate, or for that matter to (re)embrace the “deep nuclear cuts” agenda any time soon.

Assessing the options for arms control and enhancing mutual security

Given the new challenges presented by cyber to both US and Russian nuclear forces and to US–Russia strategic stability, it is important to consider what might be done to help mitigate and guard against
these threats, and thereby help minimise the risks of unintentional launches, miscalculation, and accidents, and perhaps create the conditions for greater stability, de-alerting, and further nuclear cuts. While
there is unlikely to be a panacea or “magic bullet” that will reduce the risk of cyberattacks on US and Russian nuclear forces to zero – be they designed to launch nuclear weapons or compromise the systems
that support them – there are a number of options that might be considered and pursued in order to address these different types of threats and vulnerabilities. None, of these however, will be easy.

The most obvious and immediate priority for both the USA and Russia is working (potentially together) to harden and better protect nuclear systems against possible cyberattack, intrusion, or cyber-induced
accidents. In fact, in October 2013 it was announced that Russian nuclear command and control networks would be protected against cyber incursion and attacks by “special units” of the Strategic Missile
Forces (Russia Today 2014). Other measures will include better network defences and firewalls, more sophisticated cryptographic codes, upgraded and better protected communications systems (including
cables), extra redundancy, and better training and screening for the practitioners that operate these systems (see Ullman 2015). However, and while comprehensive reviews are underway to assess the
vulnerabilities of current US and Russian nuclear systems to cyberattacks, it may well be that US and Russian C2 infrastructure becomes more vulnerable to cyber as it is modernised and old analogue

systems are replaced with increasingly hi-tech digital platforms. As
a result, and while nuclear weapons and command and control infrastructure

are likely to be the best protected of all computer systems, and “air gapped”14 from the wider Internet – this does
not mean they are invulnerable or will continue to be secure in the future, particularly as systems are modernised or become more complex (Fritz 2009).
Or as Peggy Morse, ICBM systems director at Boeing, put it, “while its old it’s very secure” (quoted in Reed 2012).

It's false – totally disconnected from the internet.


Caylor ’16 [Matt; 2-1-16; Command and Staff College; “The Cyber Threat to Nuclear Deterrence,” War on the
Rocks, http://warontherocks.com/2016/02/the-cyber-threat-to-nuclear-deterrence/]

The perception that cyber threats will ultimately undermine the relevance or effectiveness of nuclear deterrence is
flawed in at least three keys areas. First among these is the perception that nuclear weapons or their command and control systems
are similar to a heavily defended corporate network. The critical error in this analogy is that there is an expectation
of IP-based availability that simply does not exist in the case of American nuclear weapons — they are not
online. Even with physical access, the proprietary nature of their control system design and redundancy of the National
Command and Control System (NCCS) makes the possibility of successfully implementing an exploit against either a
weapon or communications system incredibly remote. Also, whereas the cyber domain is characterized by
significant levels of risk due to a combination of bias toward automated safeguards and the liability of single human
failures, nuclear weapon safety and surety are predicated on balanced elements of stringent human interaction
and control. From two-person integrity in physical inspections and loading, to the rigorous mechanisms and
authority required for weapons release, human beings serve as a multi-factor safeguard while retaining the
ultimate role to protect the integrity of nuclear deterrence against cyber threats.
To a large degree, the potential vulnerabilities caused by wireless communications and physical intrusions into areas holding nuclear material are already
mitigated via secure communications that are not linked to the outside and multiple layers of physical security systems. While there has been a great deal
of publicity surrounding the Y-12 break-in of 2012, the truth is that the three people involved never got near any nuclear material or technology.
Without state-level resourcing in the billions of dollars, the
technical sophistication required to pursue a Stuxnet-like attack
against nuclear weapons is most likely beyond the capability of even the most gifted group of hackers. For all intents,
this excludes terrorist organizations and cyber criminals from the field of threats and restricts it to those nations that
already possess nuclear weapons. Nuclear-weapon states, however, have the full-spectrum cyber threat capability referenced in
the Defense Science Board report and would most likely be influenced by an understanding of the elements of classic
nuclear deterrence strategy. In the case of first strike, no cyber weapon could be expected to perform at a rate higher
than any conventional anti-nuclear capability (i.e., not 100 percent effective). Therefore, an adversary’s nuclear threat
would be perceived to endure, thereby negating and dissuading the effort to use and employ a cyber weapon
against an adversary’s nuclear force. Additionally, just as missile defense systems have been historically controversial due to perceived
destabilizing effects, it is reasonable to conclude that these nuclear-weapon states would view the attempt to deploy a cyber capability against their
nuclear stockpiles from a similar perspective.

Finally, the very existence of nuclear weapons is often enough to alter the risk analysis of an adversary. With virtually
no chance of remote or unauthorized detonation (which would be the desired results of a sabotage event), the most probable cyber threat to
any nuclear stockpile is that of espionage. Attempted cyber intrusions at the U.S. National Nuclear Security Agency (NNSA) and
its efforts to bolster cybersecurity initiatives provide clear evidence that this is already underway. However, theft of design
information or even more robust intelligence on the location of stored nuclear weapons cannot eliminate the
potential destruction that even a handful of nuclear weapons can bring to an adversary. Knowledge alone,
particularly the imperfect knowledge that cyber espionage is likely to offer, is incapable of drastically altering an
adversary’s risk calculus. In fact, quite the opposite is true. An adversary with greater understanding of the
nuclear capabilities of a rival is forced to consider courses of action to prevent escalation, potentially increasing the
credibility of a state’s nuclear deterrence.
Despite the growing sophistication in cyber capabilities and the willingness to use them for espionage or in concert with kinetic attack, the strategic value
of nuclear weapons has not been diminished. The insulated architecture combined with a robust and redundant command-
and-control system makes the existence of any viable cyber threat of exploitation extremely low. With the list
of capable adversaries limited by both funding and motivation, it is highly unlikely that any nation will possess,
or even attempt to develop, a cyber weapon sufficient to undermine the credibility of nuclear weapons. In both
psychological and physical terms, the threat of the megabyte will never possess the ability to overshadow the destructive
force of the megaton. Although the employment of cyberspace for military effect has brought new challenges to the international community, the
role of nuclear weapons and their associated deterrence against open and unconstrained global aggression are as relevant now as they were in the Cold
War.
DebateUS!
NATO Cyber Coop Aff
---C2 Hacking
No nuclear hacking---US C2 is rigorously firewalled and isolated from other
channels, making access impossible---two-person integrity and multi-layered
safeguards prevent miscalc---that’s Caylor
Hacking impossible. Nukes aren’t online.
Fung 16, MSc, international relations. Reporter focusing on telecommunications, media, and competition.
Citing Maj. General Jack Weinstein. (Brian, 5-26-2016, "The real reason America controls its nukes with
ancient floppy disks", Washington Post,
https://www.washingtonpost.com/news/the-switch/wp/2016/05/26/the-real-reason-america-controls-
its-nukes-with-ancient-floppy-disks/)
As it happens, a similar logic underpins the U.S. military’s continued use of floppy disks. The fact that
America’s nuclear forces are disconnected from digital networks actually acts as a buffer against
hackers. As Maj. General Jack Weinstein told CBS’s “60 Minutes” in 2014: Jack Weinstein: I'll tell you, those
older systems provide us some -- I will say huge safety when it comes to some cyber issues that we
currently have in the world. Lesley Stahl: Now, explain that. Weinstein: A few years ago we did a complete
analysis of our entire network. Cyber engineers found out that the system is extremely safe and
extremely secure on the way it's developed. Stahl: Meaning that you're not up on the Internet kind of
thing? Weinstein: We're not up on the Internet. Stahl: So did the cyber people recommend you keep it the
way it is? Weinstein: For right now, yes. In other words, the rise of hackers and cyberwarfare is exactly why
even technologically obsolete systems can still serve a valuable purpose.

Deterrence checks.
Lewis 18, PhD, a senior vice president at the Center for Strategic and International Studies (CSIS). (James
Andrew, 1-1-2018, “Rethinking Cybersecurity: Strategy, Mass Effect, and States”, pg. 28-29,
https://www.jstor.org/stable/resrep22408.8?seq=1#metadata_info_tab_contents) *language edited---
brackets
If it was possible to use a cyber attack to simultaneously [devastate] cripple strategic forces and launch a
massive attack on critical infrastructure, an opponent might be tempted, but this would require a high
degree of certainty that all strategic delivery systems could be taken offline by a cyber attack. This is
unlikely, and it is more probable that a cyber attack will not be 100 percent effective. Some targeted
weapons or systems will still operate. Saying that the United States can only shoot 50 missiles at your
capital instead of 100 is not much of a comfort. In a larger armed conflict, this kind of reduction in enemy
tactical capabilities can be valuable, but if the goal is to attack without fear of retaliation, it is insufficient.

Their cyber-threat construction causes over-reaction and turns case


Valeriano 15, Senior Lecturer at the University of Glasgow in Politics and Global Security. (Brandon, Cyber War
Versus Cyber Realities: Cyber Conflict in the International System p. 2-4
Currently, the cyberspace arena is the main area of international conflict where we see the development of a fear-based
process of threat construction becoming dominant. The fear associated with terrorism after September 11, 2001, has dissipated, and in
many ways has been replaced with the fear of cyber conflict, cyber power, and even cyber war.' With the emergence of an Internet society and rising
interconnectedness in an ever more globalized world, many argue that we must also fear the vulnerability that these connections bring about. Advances
and new connections such as drones, satellites, and cyber operational controls can create conditions that interact to produce weaknesses in the security
dynamics that are critical to state survival. Dipert (2010: 402) makes the analogy that surfing in cyberspace is like swimming in a dirty pool. The
developments associated with Internet life also come with dangers that are frightening to many. In order to provide an alternative to the fear-based
discourse, we present empirical evidence about the dynamics of cyber conflict. Often realities will impose a cost on exaggerations and hyperbole. We view
this process through the construction of cyber threats. The contention is that the cyber world is dangerous, and a domain where traditional security
considerations will continue to play out. A recent Pew Survey indicates that 70 percent of Americans see cyber incidents from other countries as a major
security threat to the United States, with this threat being second only to that from Islamic extremist groups.2 This fear is further deepened by hyperbolic
statements from the American elite. US President Barack Obama has declared that the "cyber threat is one of the most serious economic and national
security challenges we face as a nation."3 Former US Defense Secretary Leon Panetta has gone further, stating, "So, yes, we are living in that world. I
believe that it is very possible the next Pearl Harbor could be a cyber attack ... [that] would have one hell of an impact on the United States of America.
That is something we have to worry about and protect against."4 United States elites are not alone in constructing the cyber threat. Russian President
Vladimir Putin, in response to the creation of a new battalion of cyber troops to defend Russian cyberspace, noted, "We need to be prepared to effectively
combat threats in cyberspace to increase the level of protection in the appropriate infrastructure, particularly the information systems of strategic and
critically important facilities."* The social construction of the cyber threat is therefore real; the aim of this book is to find out if these elite and public
constructions are backed with facts and evidence. First, we should define some of our terms to prepare for further engagement of our topic. This book is
focused on international cyber interactions. The prefix cyber simply means computer or digital interactions, which are directly related to cyberspace, a
concept we define as the networked system of microprocessors, mainframes, and basic computers that interact at the digital level. Our focus in this
volume is on what we call cyber conflict, the use of computational technologies for malevolent and destructive purposes in order to impact, change, or
modify diplomatic and military interactions among states. Cyber war would be an escalation of cyber conflict to include physical destruction and death.
Our focus, therefore, is on cyber conflict and the manifestation of digital animosity short of and including frames of war. These terms will be unpacked in
greater detail in the chapters that follow. The idea that conflict is the foundation for cyber interactions at the interstate level is troubling. Obviously many
things are dangerous, but we find that the
danger inherent in the cyber system could be countered by the general
restraint that might limit the worst abuses in the human condition. By countering what we assert to be an unwarranted
construction of fear with reality, data, and evidence, we hope to move beyond the simple pessimistic construction of how digital interactions take place,
and go further to describe the true security context of inter- national cyber politics. In this project we examine interactions among interstate rivals, the
most contentious pairs of states in the international system. The animosity between rivals often builds for centuries, to the point where a rival state is
willing to harm itself in order to harm its rival even more (Valeriano 2013). If the cyber world is truly dangerous, we would see evidence of these
disruptions among rival states with devastating effect. Rivals fight the majority of wars, conflicts, and disputes (Diehl and Goertz 2000), yet the evidence
presented here demonstrates that the cyber threat is restrained at this point.6 Overstating
the threat is dangerous because the
response could then end up being the actual cause of more conflict. Reactions to threats must be proportional
to the nature of the threat in the first place. Otherwise the threat takes on a life of its own and becomes a self-
fulfilling prophecy of all-out cyber warfare. Furthermore, there is a danger in equivocating the threat that comes from non-state cyber
individuals and the threats that come from state-affiliated cyber actors not directly employed by governments. If the discourse is correct, non-state
entities such as terrorist organizations or political activist groups should be actively using these malicious tactics in cyberspace in order to pro- mote their
goals of fear and awareness of their plight. If the goal is to spread fear and instability among the perceived enemies of this group, and cyber tactics are the
most effective way to do this, we should see these tactics perpetrated—and perpetrated often—by these entities. This book examines how state-affiliated
non-state actors use cyber power and finds that their actual capabilities to do physical harm via cyberspace are quite limited. This then leaves rogue actors
as the dangerous foes in the cyber arena. While these individuals can be destructive, their power in no way compares to the resources, abilities, and
capabilities of cyber power connected to traditional states. The future is open, and thus the
cyber world could become dangerous, yet
the norms we see developing so far seem to limit the amount of harm in the system. If these norms hold, institutions will
develop to manage the worst abuses in cyberspace, and states will focus on cyber resilience and basic defense rather
than offensive technologies and digital walls. Cyberspace would therefore become a fruitful place for developments for our globalized society. This arena
could be the place of digital collaboration, education, and exchanges, communicated at speeds that were never before possible. If states fall into
the trap of buying into the fear-based cyber hype by developing offensive weapons under the mistaken belief that these
actions will deter future incidents, cyberspace is doomed. We will then have a restricted technology that prevents the developments that are
inherent in mankind's progressive nature.
North Korea Turn

Unrestrained cyber-attacks key to North Korean revenue.


Mathews 19
Lee Mathews; Writer for Forbes, citing a U.N. Security Council report. (3-11-2019; "North Korean Hackers Have Raked in $670 Million Via Cyberattacks;"
Forbes; https://www.forbes.com/sites/leemathews/2019/03/11/north-korean-hackers-have-raked-in-670-million-via-cyberattacks

Some of the most infamous cyberattacks in the past 5 years have been linked to North Korea's state-sponsored
hackers. They're a highly-skilled group and their operations have proven to be extremely lucrative. A recent
report commissioned by the U.N. Security Council has put an approximate figure on their ill-gotten gains. The expert
panel assembled by the United Nations asserts that Pyongyang's hackers have hauled in around $670 million in foreign
currency and cryptocurrency. The 2015 attack on the Central Bank of Bangladesh was one of the most sensational attacks linked to North Korean hackers, who made off with $81 million. In 2018 India's Cosmos Bank was hacked to the tune of $13.5 million. Earlier this year those same hackers
infiltrated the Bank of Chile's ATM network and siphoned off $10 million. North Korea's hackers have successfully attacked
numerous cryptocurrency exchanges, too. Cybersecurity experts at Group-IB estimated last year that they were responsible for around 65%
of all crypto exchange hacks. Between January 2017 and September 2018 it's believed that those attacks resulted in more than $570 million in losses.

Clearly hacking provides Kim Jong-un's regime with a vital stream of revenue. Tough international sanctions
make it difficult for North Korea to bring in legitimate funds from outside its borders. That's one key reason that the
widespread adoption of cryptocurrencies has been such a boon for Pyongyang. Sanctions aren't an effective blocking tool because
cryptocurrency transactions aren't processed by regulated financial institutions (at least in most countries). Another is that
crypto transactions can be incredibly difficult to trace. It's not an impossible task, but the process can be very complex and time consuming. That helps
rogue nations and cybercriminals alike keep their financial moves hidden from law enforcement agencies. Shadowy
operations are nothing new in North Korea. Criminal activity has been part of the government's playbook for several decades.

Drying up North Korean cash flows causes them to sell nukes to terrorists.
Park & Miller 16 — June Park; Postdoctoral Fellow at the Centre on Asia & Globalisation of the Lee Kuan Yew
School of Public Policy at the National University of Singapore, Lecturer of Global Affairs and Government at
George Mason University Korea (GMUK) via the Global Affairs Program and the Schar School of Policy and
Government at George Mason University. Berkshire Miller; Senior visiting fellow with the Japan Institute of
International Affairs, Distinguished Fellow with the Asia-Pacific Foundation of Canada, and Senior Fellow on East
Asia for the Tokyo-based Asian Forum Japan and the New York-based EastWest Institute. (“The Scariest Thing
North Korea Could Ever Do: Sell a Nuclear Weapon”, https://nationalinterest.org/blog/the-buzz/the-scariest-
thing-north-korea-could-ever-do-sell-nuclear-18313 //AP)

As North Korea’s economic position worsens, the risk that it sells its nuclear weapons technology grows. Pyongyang
conducted its fifth nuclear test on 9 September, accompanied by claims it has developed a warhead that can be mounted onto rockets. This test is
estimated to have been at a yield of 25–30 kilotons — significantly larger than previous tests. While the magnitude of the test alarmed some US
policymakers, Washington’s foreign policy remains focused on the Middle East. Similarly, North Korea’s subsequent missile tests that ended in failure on
15 and 20 October gained little attention. There appears to be a de facto acceptance by some in the Obama administration that North
Korea will
not agree to denuclearize — regardless of the concessions. Earlier this month, Obama’s top intelligence chief, James Clapper,
remarked at an event hosted by the Council on Foreign Relations that “the notion of getting the North Koreans to denuclearize is probably a lost cause.”
Despite Clapper’s remarks, the Obama administration as a whole continues to insist that a nuclear North Korea is not an option regardless of their
unwillingness to disarm. Meanwhile, concerns
remain about the possible transfer of North Korea’s nuclear technology and
knowledge to non-state actors. Hillary Clinton considers their “quest for a nuclear weapon” a grave threat because “the greatest threat
of all would be terrorists getting their hands on loose nuclear material." So how likely is North Korea to engage in
a nuclear arms sale with a terrorist group? Up until this point, proliferation of North Korea’s weapons of mass destruction seemed to be
restricted to sovereign states. But this has not stopped apprehension from some in the intelligence community — spurred by Pyongyang’s connections to
Libya’s Gaddafi regime and ties to Syria’s failed nuclear weapons program. Over the years North Korea has created a web of foreign
connections to peddle its missiles and components. As talks on denuclearization remain non-existent and foreign sanctions against the
regime tighten, there are startling concerns that a cash-strapped Pyongyang may resort to dealing with its finances
through the black-market with terrorist groups or organized crime syndicates. While the threat may seem fanciful — even
for a state as repugnant to international rules as North Korea — the risks are real. The official and unofficial transfer of nuclear
technology has always been a method of global outreach for North Korea. Nuclear proliferation to non-state actors is a viable option for this regime when it feels threatened,
economically cornered and politically unstable. Pyongyang is strapped for funds despite China’s less than ideal compliance of UN
sanctions — which has kept the little trade they have alive and enabled the state to continue to obtain materials and funds for missile tests. As tougher
sanctions are imposed, North Korea will be pressured into securing funds via alternative channels. When
the state’s cash flows and resources
dry up, selling nuclear technology to the highest bidder may become a tantalizing option for the Kim regime.
Solvency Answers – Article 5 Answers

Too many disagreements for Article 5 invocation

Jeane Kirkpatrick Visiting Research Fellow, American Enterprise Institute, 4-25, 22, The Russian cyber threat is
here to stay and NATO needs to understand it, https://www.aei.org/op-eds/the-russian-cyber-threat-is-here-to-
stay-and-nato-needs-to-understand-it/

Even if allies wanted to trigger Article 5 over cyber operations, disagreements about the definitions
of threats, origins of attacks, and pain thresholds in cyberspace can derail the process. Collective
retaliation requires a unanimous vote across NATO; building unity across these points is nearly
impossible for most cyber activity. Unlike missile attacks or tanks in the streets, few “red lines” exist
to distinguish cybercrime, cyber espionage, and cyber disruption from digital acts of war. Beyond the
bureaucratic and logistical limitations of elevating cyber to a casus belli, focusing on cyber-attacks as
acts of war distracts from the more likely Russian digital assaults below the level of armed conflict.
These include ransomware attacks and supply chain infiltrations that look like criminal activity or
espionage. The Kremlin is particularly adept at the latter. In the SolarWinds compromise, Russia
hacked one company’s software product to access networks of Fortune 500 companies and U.S.
government agencies. Spillover from operations in Ukraine poses an additional risk. The Russians have
already deployed several digital tools to destroy computer data, resulting in corrupted computers for
Ukrainian companies with government support roles. The same malicious software has also affected
several Latvian and Lithuanian businesses. The danger is another situation like NotPetya in 2017,
where malware self-replicated, spread past Ukrainian targets to cripple networks in over 150 countries,
and created $10 billion in damages. Each of these scenarios are much more likely than a “cyber
doomsday” that would justify an Article 5 response from NATO members. To be fair, policymakers’
fears of cyber war have led to some positive developments for the alliance. For instance, over the last
several years, NATO has developed its own framework for combining cyber and conventional military
capabilities in warfighting. But allies remain unprepared to deal with “death by 1000 cuts” in
cyberspace. Concentrating only on acts of war comes at the expense of addressing the cumulative
costs of low-level cyber threats over time. It leads to an overreliance on cyber deterrence or
defensive whack-a-mole strategies, neither of which are sustainable. Threats of retaliation simply
don’t deter most cyber-attacks, and it is unrealistic for defensive measures to stop every hacker.

Securitization K Links
Their cyber-attack rhetoric justifies endless militarism that guarantees violence and
serial policy failure
Lawson 11 - (Sean, assistant professor in the Department of Communication at the University of Utah, "Beyond Cyber Doom," Jan 25, 2011,
http://mercatus.org/publication/beyond-cyber-doom)//a-berg

Recently, news
media and policy makers in the United States have turned their attention to prospective threats to and
through an ethereal and ubiquitous technological domain referred to as “cyberspace.” “Cybersecurity”
threats include attacks on critical infrastructures like power, water, transportation, and communication systems launched via cyberspace, but also terrorist use
of the Internet, mobile phones, and other information and communication technologies (ICTs) for fundraising, recruiting, organizing, and carrying out attacks (Deibert &
Rohozinski, 2010). Frustration over a perceived lack of public and policy maker attention to these threats, combined with a belief that “exaggeration” and “appeals to emotions like fear can be more compelling than a rational discussion of strategy” (Lewis, 2010: 4), has led a number of cybersecurity proponents to deploy what one scholar has called “cyber-doom
scenarios” (Cavelty, 2007: 2). These involve hypothetical tales of cyberattacks resulting
in the mass collapse of critical infrastructures, which in turn leads to serious economic losses, or even total economic, social, or civilizational
collapse. These tactics seem to have paid off in recent years, with cybersecurity finding its way onto the agendas of top civilian and military policy makers alike. The
results have included the creation of a White House “cybersecurity czar,” the creation of the military’s U.S. Cyber Command, the drafting of a plan for Trusted Identities
in Cyberspace, and the consideration of several cybersecurity-related pieces of legislation by the U.S. Congress (Gates, 2009; Olympia J. Snowe Press Releases 2009b;
Rotella, 2009; Lieberman et al., 2010; Rockefeller & Snowe, 2010; Schmidt, 2010; Hathaway, 2010). This paper examines the cyber-doom scenarios upon which so much
of contemporary, U.S. cybersecurity discourse has relied. It seeks to: 1) place cyber-doom scenarios into a larger historical context; 2) assess how realistic cyber-doom
scenarios are; and 3) draw out the policy implications of relying upon such tales, as well as alternative principles for the formulation of cybersecurity policy. To address
these issues, this paper draws from research in the history of technology, military history, and disaster sociology. This paper argues that 1) cyber-doom
scenarios are the latest manifestation of long-standing fears about “technology-out-of-control” in Western
societies; 2) tales of infrastructural collapse leading to social and/or civilizational collapse are not supported
by the current body of empirical research on the subject; and 3) the constant drumbeat of cyber-doom
scenarios encourages the adoption of counter-productive policies focused on control, militarization,
and centralization. This paper argues that cybersecurity policy should be 1) based on more realistic understandings of what is possible that are informed by
empirical research rather than hypothetical scenarios and 2) guided by principles of resilience, decentralization, and self-organization. Cybersecurity Concerns: Past and
Present Over the last three decades, cybersecurity
proponents have presented a shifting and sometimes ambiguous
case for what exactly is being threatened, and by whom, in and through cyberspace. During the 1980s, the main
cybersecurity concern was foreign espionage via the exploitation of the United States’s increasing dependence on computers and networks. Then, in the 1990s, experts
began writing about the supposed threat to civilian critical infrastructures by way of cyberterrorism conducted by non-state actors. In the opening days of the Bush
administration, cybersecurity proponents replaced non-state actors with state actors as the dominant threat subject.1 But in the immediate aftermath of 9/11, the threat
perception shifted back to non-state terrorists using cyberspace to attack critical infrastructure. Then, in the run-up to the Iraq war in 2003, state actors once more
became the supposed threat subjects, with Saddam Hussein’s Iraq making the list of states with a cyberwar capability (Weimann, 2005: 133–134; Bendrath, 2001;
Bendrath, 2003; Cavelty, 2007). Recent
policy documents identify a combination of state actors working directly or
indirectly via non-state proxies—e.g. “patriotic hackers” or organized crime—to target information in the form of private
intellectual property and government secrets (The White House, 2009a; White House Press Office, 2009; Langevin et al., 2009). In the last three years, several high-
profile “cyberattack” incidents have served to focus attention on cybersecurity even more sharply than before. These have included two large-scale cyberattacks
attributed to Russia: one against the Baltic nation of Estonia in the spring of 2007 (Blank, 2008; Evron, 2008) and one against the nation of Georgia in July and August of
2008 that coincided with a Russian invasion of that country (The Frontrunner, 2008; Bumgarner & Borg, 2009; Korns & Kastenberg, 2008; Nichol, 2008). In January
2010, Google’s accusations of Chinese cyberattacks against it garnered a great deal of press attention and were featured prominently in Secretary of State Clinton’s
speech on “Internet Freedom” (Clinton, 2010).2 Most recently, some have speculated that a computer worm called Stuxnet may have been a cyberattack by Israel on Iranian nuclear facilities (Mills, 2010). Cyber-Doom Scenarios Despite persistent ambiguity in cyber-threat perceptions, cyber-doom scenarios have remained an important tactic used by cybersecurity proponents. Cyber-doom
scenarios are hypothetical stories about prospective impacts of a cyberattack and are meant to serve as
cautionary tales that focus the attention of policy makers, media, and the public on the issue of
cybersecurity. These stories typically follow a set pattern involving a cyberattack disrupting or destroying critical infrastructure. Examples include
attacks against the electrical grid leading to mass blackouts, attacks against the financial system leading to economic losses or complete
economic collapse, attacks against the transportation system leading to planes and trains crashing, attacks against dams leading floodgates to open, or
attacks against nuclear power plants leading to meltdowns (Cavelty, 2007: 2). Recognizing that modern infrastructures are closely
interlinked and interdependent, such scenarios often involve a combination of multiple critical infrastructure systems failing simultaneously, what is sometimes referred
to as a “cascading failure.” This was the case in the “Cyber Shockwave” war game televised by CNN in February 2010, in which a computer worm spreading among cell
phones eventually led to serious disruptions of critical infrastructures (Gaylord, 2010). Even more ominously, in their recent book, Richard Clarke and Robert Knake
(2010: 64–68) present a scenario in which a cyberattack variously destroys or seriously disrupts all U.S. infrastructure in only fifteen minutes, killing thousands and
wreaking unprecedented destruction on U.S. cities. Surprisingly, some argue that we have already had attacks at this level, but that we just have not recognized that they
were occurring. For example, Amit Yoran, former head of the Department of Homeland Security’s National Cyber Security Division, claims that a “cyber-9/11” has
already occurred, “but it’s happened slowly so we don’t see it.” As evidence, he points to the 2007 cyberattacks on Estonia, as well as other incidents in which the
computer systems of government agencies or contractors have been infiltrated and sensitive information stolen (Singel, 2009). Yoran is not alone in seeing the 2007
Estonia attacks as an example of the cyber- doom that awaits if we do not take cyber threats seriously. The speaker of the Estonian parliament, Ene Ergma, has said that
“When I look at a nuclear explosion, and the explosion that happened in our country in May, I see the same thing” (Poulsen, 2007). Cyber-doom scenarios are not new. As
far back as 1994, futurist and best-selling author Alvin Toffler claimed that cyberattacks on the World Trade Center could be used to collapse the entire U.S. economy. He
predicted that “They [terrorists or rogue states] won’t need to blow up the World Trade Center. Instead, they’ll feed signals into computers from Libya or Tehran or
Pyongyang and shut down the whole banking system if they want to. We know a former senior intelligence official who says, ‘Give me $1 million and 20 people and I will
shut down America. I could close down all the automated teller machines, the Federal Reserve, Wall Street, and most hospital and business computer systems’” (Elias,
1994). But we have not seen anything close to the kinds of scenarios outlined by Yoran, Ergma, Toffler, and others. Terrorists did not use cyberattack against the World
Trade Center; they used hijacked aircraft. And the attack of 9/11 did not lead to the long-term collapse of the U.S. economy; we would have to wait for the impacts of
years of bad mortgages for a financial meltdown. Nor did the cyberattacks on Estonia approximate what happened on 9/11 as Yoran has claimed, and certainly not
nuclear warfare as Ergma has claimed. In fact, a scientist at the NATO Co-operative Cyber Defence Centre of Excellence, which was established in Tallinn, Estonia in
response to the 2007 cyberattacks, has written that the immediate impacts of those attacks were “minimal” or “nonexistent,” and that the “no critical services were
permanently affected” (Ottis, 2010: 72). Nonetheless, many cybersecurity
proponents continue to offer up cyber-doom scenarios that
not only make analogies to weapons of mass destruction (WMDs) and the terrorist attacks of 9/11, but also hold out
economic, social, and even civilizational collapse as possible impacts of cyberattacks. A report from the Hoover
Institution has warned of so-called “eWMDs” (Kelly & Almann, 2008); the FBI has warned that a cyberattack could have the same impact as a “well-placed bomb”
(FOXNews.com, 2010b); and official DoD documents refer to “weapons of mass disruption,” implying that cyberattacks might have impacts comparable to the use of
WMD (Chairman of the Joint Chiefs of Staff 2004, 2006). John Arquilla, one of the first to theorize cyberwar in the 1990s (Arquilla & Ronfeldt, 1997), has spoken of “a
grave and growing capacity for crippling our tech-dependent society” and has said that a “cyber 9/11” is a matter of if, not when (Arquilla, 2009). Mike McConnell, who
has claimed that we are already in an ongoing cyberwar (McConnell, 2010), has even predicted that a cyberattack could surpass the impacts of 9/11 “by an order of
magnitude” (The Atlantic, 2010). Finally, some have even compared the impacts of prospective cyberattacks to the 2004 Indian Ocean tsunami that killed roughly a
quarter million people and caused widespread physical destruction in five countries (Meyer, 2010); suggested that cyberattack could pose an “existential threat” to the
United States (FOXNews.com 2010b); and offered the possibility that cyberattack threatens not only the continued existence of the United States, but all of “global
civilization” (Adhikari, 2009). In response, critics have noted that not only has the story about who threatens what, how,
and with what potential impact shifted over time, but it has done so with very little evidence provided to
support the claims being made (Bendrath, 2001, 2003; Walt, 2010). Others have noted that the cyber-doom scenarios offered for
years by cybersecurity proponents have yet to come to pass and question whether they are possible at all
(Stohl, 2007). Some have also questioned the motives of cybersecurity proponents. Various think tanks, security firms, defense contractors,
and business leaders who trumpet the problem of cyberattacks are portrayed as self-interested
ideologues who promote unrealistic portrayals of cyber-threats (Greenwald, 2010). While I am sympathetic to these arguments, in
this essay I would like for a moment to assume that mass disruption or destruction of critical infrastructure systems are possible entirely through the use of cyberattack.
Thus, the goal in this paper will be 1) to understand the origins of such fears, 2) to assess whether the supposed second-order effects (i.e. economic, social, or
civilizational collapse) of cyberattack are realistic, and 3) to assess the policy implications of relying upon such scenarios. Cyber-Doom and Technological Pessimism
Several scholars have asked why there is such a divergence between cyber-doom scenarios and the few incidents of actual cyberattack that we have thus far witnessed
(Stohl, 2007; Weimann, 2008: 42). They have resolved the paradox, in part, by pointing to the fact that fears
of cyberterrorism and cyberwar
combine a number of long-standing human fears, including fear of terrorism (especially since 9/11), fear of the
unknown, and fear of new technologies (Stohl, 2007; Weimann, 2008: 42; Embar-Seddon, 2002: 1034). Here I will focus on the third of these, the
fear of “technology out of control” as an increasingly prominent fear held by citizens of Western, industrial
societies over the last century. Concerns about cybersecurity are but the latest manifestation of this fear. Historians of
technology have written extensively about the rise of the belief in “autonomous technology” or “technological determinism” in Western societies, as well as the
increasingly prominent feelings of pessimism and fear that have come along with these beliefs. While many in the nineteenth century believed that technological
innovation was the key to human progress (Hughes, 2004), throughout the course of the twentieth century, many began to question both humanity’s ability to control its
creations, as well as the impacts of those creations. Thus, we have seen the emergence of “the belief that technology is the primary force shaping the post-modern world”
(Marx, 1997: 984) but also “that somehow technology has gotten out of control and follows its own course, independent of human direction” (Winner, 1977: 13). As a
result, we
have also seen the emergence of an increasing sense of “technological pessimism” (Marx, 1994: 238), a
sense of ambivalence towards technology in which we at once marvel at the innovations that have made
modern life possible, but also “a gathering sense . . . of political impotence” and “the feeling that our collective life in society is
uncontrollable” as a result of our increasing dependence upon technology (Marx, 1997: 984). Technological determinism, both optimistic and pessimistic, is found in a
number of recent and influential scholarly and popular works that address the role of technological change in society. These include Manuel Castells’ mostly optimistic
work, which identifies information and knowledge working on themselves in a feedback loop as being the core of the new economy (Castells, 2000), and Kevin Kelly’s
more recent and more pessimistic work that posits an emergent, self-reinforcing, technology-dependent society he calls the “technium” (Kelly, 2010). The
character of the technologies that are most prominent in our lives has indeed changed over the last century, from individual mechanical devices created by individual
inventors to large socio-technical systems created and managed by large, geographically dispersed organizations (Marx, 1994: 241; Marx, 1997: 972–974). In the
twentieth century, we came to realize that “Man now lives in and through technical creations” (Winner, 1977: 34) and to “entertain the vision of a postmodern society
dominated by immense, overlapping, quasi-autonomous technological systems,” in which society itself becomes “a meta-system of systems upon whose continuing
ability to function our lives depend.” It is no wonder that the “inevitably diminished sense of human agency” that attends this vision should lead to pessimism and fear
directed at technology (Marx, 1994: 257). That these fears are manifest in contemporary concerns about cybersecurity should not come as a surprise. Scholars have
noted that our reactions to new technologies are often “mediated by older attitudes” (Marx, 1994: 239) which often include a familiar “pattern of responses to new
technologies that allure [and] threaten” (Simon, 2004: 23). Many
of the concerns found in contemporary cybersecurity discourse
are not unique, but rather, have strong corollaries in early 20th-century concerns about society’s increasing
reliance upon interdependent and seemingly fragile infrastructure systems of various types, including
electronic communication networks. Early forms of electronic communication, including the radio, telegraph, and telephone, sparked fear and
anxiety by government officials and the public alike that are similar to contemporary concerns about cybersecurity. The U.S. Navy was initially reluctant to adopt the
radio, in part because of concern over what today would be called “information assurance” (Douglas, 1985). The early twentieth century saw an explosion in the number
of amateur radio users in the United States who could not only “listen in” on military radio traffic, but who could also broadcast on the same frequencies used by the
military. Amateur broadcasts could clog the airwaves, preventing legitimate military communications, but could also be used to feed false information to ships at sea. In
response, the Navy worked to have amateurs banned from the airwaves. They succeeded only in 1912 after it was reported that interference by amateur radio operators
may have hampered efforts to rescue survivors of the Titanic disaster. After 1912, amateurs were limited to the shortwave area of the electromagnetic spectrum and
during World War I, the U.S. government banned amateur radio broadcast entirely (Douglas, 2007: 214–215). Contemporary cybersecurity concerns also echo the fears
and anxieties that telephone and telegraph systems caused in the early 20th century. Along with transcontinental railroad networks, these “new networks of long-distance communication,” which could not be “wholly experienced or truly seen,” were the first of the kind of large, complex, nation-spanning, socio-technical systems
that were at the heart of the last century’s increasing technological pessimism (MacDougall, 2006: 720). The new communication networks were often portrayed in
popular media as constituting a new space, a separate world dominated by crime, daring, and intrigue (MacDougall, 2006: 720–721). While the new communication
network “gave new powers to its users, [it] also compounded the ability of distant people and events to affect those users’ lives” (MacDougall, 2006: 718). In short, it
introduced the power and danger of “action at a distance— the ability to act in one place and affect the lives of people in another” (MacDougall, 2006: 721). Many
worried that the combination of action at a distance and the relative anonymity offered by the new communication networks would allow people to more readily engage
in immoral activities like gambling, that the networks would become tools of organized crime, and even that nefarious “wire devils” could use the telegraph to crash the
entire U.S. economy (MacDougall, 2006: 724–726). Even if particular nefarious actors could not be identified, the mere fact of a “complex interdependence of technology,
agriculture, and national finance” that was difficult if not impossible to apprehend was itself enough to cause anxiety (MacDougall, 2006: 724). As in cybersecurity
discourse, these fears were reflective of a more generalized anxiety about the supposed interdependence and fragility of modern, industrial societies. This anxiety
shaped the thinking of military planners on both sides of the Atlantic. Early airpower theorists in the United States and the United Kingdom had these beliefs at the heart
of their plans for the use of strategic bombardment. For example, in his influential 1925 book, Paris, or the Future of War, B.H. Liddell Hart (1925: 41) argued that “A
modern state is such a complex and interdependent fabric that it offers a target highly sensitive to a sudden and overwhelming blow from the air.” He continued, “a
nation’s nerve-system, no longer covered by the flesh of its troops, is now laid bare to attack, and, like the human nerves, the progress of civilization has rendered it far
more sensitive than in earlier and more primitive times” (Hart, 1925: 37). In the United States, Major William C. Sherman, who co-authored the 1922 Air Tactics text
used to train American pilots, believed industrialization to be both a blessing and a curse and his “industrial fabric” theory of aerial bombardment started from the
assumption that the “very quality of modern industry renders it vulnerable” to aerial attack (Sherman, 1926: 217–218). Like cyberwar theorists today, airpower
theorists argued that the unique vulnerabilities resulting from society’s new-found dependence on interlocking webs of production, transportation, and communication
systems could be exploited to cause almost instantaneous chaos, panic, and paralysis in a society (Konvitz, 1990; Biddle, 2002). But just as neither telegraph “wire
devils” nor nefarious Internet hackers were the cause of the economic troubles of 1929 or 2008, so too did the predictions of quick victory from the air miss their mark.
In the next section, we will see that modern societies and the systems upon which they rely have proved far more resilient than many have assumed. History & Sociology
of Infrastructure Failure Even today, planning for disasters and future military conflicts alike, including planning for future conflicts in cyberspace, often relies upon
hypothetical scenarios that begin with the same assumptions about infrastructural and societal fragility found in early 20th-century theories of strategic bombardment.
Some have criticized what they see as a reliance in many cases upon hypothetical scenarios over empirical data (Glenn, 2005; Dynes, 2006; Graham & Thrift, 2007: 9–10;
Ranum, 2009; Stiennon, 2009). But, there exists a body of historical and sociological data upon which we can draw, which casts serious doubt upon the assumptions
underlying cyber- doom scenarios. Work by scholars in various fields of research, including the history of technology, military history, and disaster sociology has shown
that both infrastructures and societies are more resilient than often assumed by policy makers. WWII Strategic Bombing Interwar assumptions about the fragility of
interdependent industrial societies and their vulnerability to aerial attack proved to be inaccurate. Both the technological infrastructures and social systems of modern
cities proved to be more resilient than military planners had assumed. Historian Joseph Konvitz (1990) has noted that “More cities were destroyed during World War II
than in any other conflict in history. Yet the cities didn’t die.” Some critical infrastructure systems like power grids even seem to have improved during the war. Historian
David Nye (2010: 48) reports that the United Kingdom, Germany, and Italy all “increased electricity generation.” In fact, most wartime blackouts were self-inflicted and
in most cases did not fool the enemy or prevent the dropping of bombs (Nye, 2010: 65). Similarly, social systems proved more resilient than predicted. The postwar U.S.
Strategic Bombing Survey, as well as U.K. studies of the reaction of British citizens to German bombing, all concluded that though aerial bombardment led to almost
unspeakable levels of pain and destruction, “antisocial and looting behaviors . . . [were] not a serious problem in and after massive air bombings” (Quarantelli, 2008:
882) and that “little chaos occurred” (Clarke, 2002: 22). Even in extreme cases, such as the atomic bombing of Hiroshima, social systems proved remarkably resilient.
A pioneering researcher in the field of disaster sociology describes that within minutes [of the Hiroshima blast] survivors engaged in search and rescue, helped one
another in whatever ways they could, and withdrew in controlled flight from burning areas. Within a day, apart from the planning undertaken by the government and
military organizations that partly survived, other groups partially restored electric power to some areas, a steel company with 20 percent of workers attending began
operations again, employees of the 12 banks in Hiroshima assembled in the Hiroshima branch in the city and began making payments, and trolley lines leading into the
city were completely cleared with partial traffic restored the following day (Quarantelli, 2008: 899). Even in the most extreme cases of aerial attack, people neither
panicked, nor were they paralyzed. Strategic bombardment alone was not able to exploit infrastructure vulnerability and fragility to destroy the will to resist of those
that were targeted from the air (Freedman, 2005: 168; Nye, 2010: 43; Clodfelter, 2010). In the aftermath of the war, it became clear that theories about the possible
effects of aerial attack had suffered from a number of flaws, including a technological determinist mindset, a lack of empirical evidence, and even willfully ignoring
evidence that should have called into question assumptions about the interdependence and fragility of both technological and social systems. In the first case, Konvitz
(1990) has argued that “The strategists’ fundamental error all along had been [giving] technology too much credit, and responsibility, for making cities work—and
[giving] people too little.” In his study of U.S. bombardment of Germany, Clodfelter (2010) concluded that the will of a nation is determined by multiple factors, both
social and technical, and that it therefore takes more than targeting any one technological system or social group to break an enemy’s will to resist. Similarly, Konvitz
(1990) concluded that, “Immense levels of physical destruction simply did not lead to proportional or greater levels of social and economic disorganization.” Next,
theories of strategic bombardment either suffered from a lack of supporting evidence or even ignored contradictory evidence. Lawrence Freedman (2005: 168) has
lamented that interwar theories of strategic bombardment were implemented despite the fact that they lacked specifics about how results would be achieved or
empirical evidence about whether those results were achievable at all. Military planners were not able to point to real-world examples of the kind of social or
technological collapse that they claimed would result from aerial attack. But they were not deterred by this lack of empirical evidence. Instead, they maintained that “The
fact that infrastructure systems had not failed . . . is no proof that they are not susceptible to failure” and instead “emphasized how air raids could exploit the same kind
of collapse that might come in peace” (Emphasis added. Konvitz, 1990). Airpower theorists were not even deterred by seemingly contradictory evidence. Instead, such
evidence was either ignored or explained away. For example, during the 1930s, New York City suffered a series of blackouts that demonstrated that the social disruption
caused by the sudden lack of power was not severe. In response, airpower theorists argued that the results would have been different had the blackouts been the result
of intentional attack (Konvitz, 1990). But the airpower theorists missed the mark in that prediction too. Instead of leading to panic or paralysis, intentional aerial
bombardment of civilians “angered them and increased their resolution” (Nye, 2010: 43; Freedman, 2005: 170). The social reaction to strategic bombardment is just one
example of how efforts both to carry out, but also to defend against, such attacks often led to results that were the opposite of what was predicted or intended. One study
of the mental-health effects among victims of strategic bombing found that excessive precautionary measures taken in an attempt to prevent the panic and paralysis
predicted by theorists did more to “weaken society’s natural bonds and, in turn, create anxious and avoidant [sic] behavior” than did the actual bombing (Jones et al.,
2006: 57). Similarly, in cases of intentional, self-inflicted blackouts, fear of what might happen to society were the power grid to fail led to a self-inflicted lack of power
that not only did not have the desired military effect but may also have been an example of excessive, counter-productive precaution (Nye, 2010: 65). The flawed
assumptions and predictions of the airpower theorists had political and military impacts as well. By creating fear of a massive, German reprisal from the air, the promise
of mass destruction from the air that military planners had offered civilian policy makers factored heavily into the British decision not to enter the war sooner to stop
Hitler’s aggression (Biddle, 2002: 2). Once the war began, the failure of the theorists’ vision did not lead them to give up on the dream of strategic bombardment, but
only to “heavier, less discriminate bombing.” As historian Tami Davis Biddle (2002: 9) has argued, “The result was nothing less than a form of aerial Armageddon played
out over the skies of Germany and Japan.” Disaster Myths Even though the vision of the airpower theorists had been proven false, assumptions about the fragility of
modern societies did not disappear when the war ended. The first use of atomic weapons at the close of the war combined with the beginning of the Cold War nuclear
stand-off with the Soviet Union kept the old assumptions alive. Surely, U.S. military planners believed, atomic weapons could achieve what strategic bombardment with
conventional weapons had not. Thus, fearing “that the American civilian population might collapse in the face of atomic attack,” the U.S. military began to support
empirical research into the ways that people respond in disaster situations (Quarantelli, 2008: 896). Ironically, the results of that research have consistently called into
question the military assumptions that were the original motivation for funding the study of disasters. Disaster
researchers have worked to
define more clearly the concepts at the heart of dominant assumptions about how people respond to
disaster. Official planning documents, news, and entertainment media alike often assume that in crisis
situations people will either be paralyzed or panicked. On the one hand, paralysis can involve “passivity
and inaction” in the face of an overwhelming situation (Quarantelli, 2008: 887). This reaction is dangerous
because individuals, groups, and entire societies are not able to help themselves and others if they are
paralyzed by fear. On the opposite extreme, psychologists and sociologists have defined panic as a heightened level of fear and emotion by an individual or
group leading to a degradation of rational thinking and decision-making, a breakdown of social cohesion, and ultimately to injudicious and counterproductive actions
that bring more harm or threat of harm (Clarke, 2002: 21; Clarke & Chess, 2009: 998–999; Jones et al., 2006: 58). In short, both paralysis and panic are maladaptive
responses to fear, one an under-reaction, the other an overreaction. Perhaps surprisingly, empirical research has shown repeatedly that “contrary to . . . popular
portrayals” by media and officials, “group panic is relatively rare” (Clarke, 2002: 21). Even specific antisocial behaviors such as looting, which is often believed to be a
widespread problem in the wake of most disasters, has proven to be “unusual in the typical natural and technological disasters that afflict modern, Western-type
societies” (Quarantelli, 2008: 883). Instead of panic or paralysis, “decades of disaster research shows that people behave rationally in the face of danger” (Dynes, 2006).
Empirical research has shown that “survivors usually quickly moved to do what could be done in the situation,” that their “behavior is adaptive” rather than maladaptive,
and that such behavior usually includes “widespread altruism that leads to free and massive giving and sharing of goods and services.” The survivors themselves “are
truly the first responders in disasters” (Quarantelli, 2008: 885–888). Instead of panic or paralysis leading to social collapse, existing social bonds and norms of behavior
are the key assets to effective response, in part because they serve to constrain tendencies towards paralysis, panic, antisocial, or other types of maladaptive behavior
(Johnson, 1987: 180). Blackouts These results have been confirmed by studying various disasters both large and small, intentional and accidental, technological and
natural, including large-scale blackouts, hurricanes, and terrorist attacks. For example, attacks upon the electrical grid are often featured prominently in cyber-doom
scenarios. But historically, just what has happened when the power has gone out? As mentioned above, a series of blackouts in New York City in the 1930s indicated that
people did not panic and society did not collapse at the loss of electrical power (Konvitz, 1990). That pattern continued through the remainder of the last century, where
“terror, panic, death, and destruction were not the result” of power outages. Instead, as Nye (2010: 182–183) has shown, “people came together [and] helped one
another,” just as they do in most disaster situations. In August 2003, many initially worried that the two-day blackout that affected 50 million people in the United States
and Canada was the result of a terrorist attack. Even after it was determined that it was not, some wondered what might happen if such a blackout were to be the result
of intentional attack. One commentator hypothesized that an intentional “outage would surely thwart emergency responders and health-care providers. It’s a scenario
with disastrous implications” (McCafferty, 2004). But the actual evidence from the actual blackout does not indicate that there was panic, chaos, or “disastrous
implications.” While the economic costs of the blackout were estimated between four and ten billion dollars (Minkel, 2008; Council, 2004), the human and social
consequences were quite minor. Few if any deaths are attributed to the blackout.3 A sociologist who conducted impromptu field research of New York City residents’
responses to the incident reported that there was no panic or paralysis, no spike in crime or antisocial behavior, but instead, a sense of solidarity, a concern to help
others and keep things running as normally as possible, and even a sense of excitement and playfulness at times (Yuill, 2004). For example, though the sudden loss of
traffic lights did lead to congestion, he notes that the situation was mitigated by “people spontaneously taking on traffic control responsibilities. Within minutes, most
crossing points and junctions were staffed by local citizens directing and controlling traffic . . . All of this happened without the assistance of the normal control culture;
the police were notably absent for long periods of the blackout” (Yuill, 2004). James Lewis (2006) of the Center for Strategic and International Studies has observed that
“The widespread blackout did not degrade U.S. military capabilities, did not damage the economy, and caused neither casualties nor terror.” Despite the fact that
historical and sociological evidence has shown that “People are irked but not terrified at the prospect” of power loss (Nye, 2010: 191), and, therefore, that intentional
attacks on the power grid are “not likely to cause the same type of immediate fear and emotion” as a conventional attack (Stohl, 2007), scenarios in which the loss of
power leads to panic, chaos, and social collapse persist because of the persistence of a technological determinist mindset among officials, the media, and the general
public. Nye has observed that most reports that are written about blackouts after the fact focus on technical reasons for failures and technical or bureaucratic changes to
avoid such failures in the future (Nye, 2010: 4). Not surprisingly, most of the policy response to the 2003 blackout has fit this pattern (Minkel, 2008). What gets
overlooked in these accounts and the types of policy responses they encourage is the human capacity for “adaptation and improvisation in the face of crisis” (Nye, 2010:
195). 9/11 & Katrina As mentioned above, some have argued that a so-called “cyber-9/11” could approximate or even exceed the impacts of the terrorist attacks of
September 11, 2001. Others, including the sponsors of cybersecurity legislation, as well as a former White House cybersecurity czar, have spoken of a possible “cyber-Katrina” (Epstein, 2009; Olympia J. Snowe Press Releases 2009b). But, in both of those cases, people generally responded in the ways that they have in other disasters,
without panic, paralysis, or social collapse. Disaster sociologist Lee Clarke has noted that on 9/11, “people did not become hysterical but instead created a successful
evacuation” (Clarke, 2002: 23). That evacuation of Lower Manhattan, which involved nearly half a million people, “was a self-organized volunteer process that could
probably never have been planned on a government official’s clipboard” (Glenn, 2005). At the economic level, the Congressional Research Service concluded that “The
loss of lives and property on 9/11 was not large enough to have had a measurable effect on the productive capacity of the United States” (Makinen, 2002). A more recent
report by the Center for Risk and Economic Analysis of Terrorism Events showed that the overall economic impacts of the 9/11 attacks were even lower than initially
estimated, indicating that the U.S. economy is more resilient in the face of disaster and intentional attack than commonly assumed (2010a). At the geopolitical level, if the
goal of the terrorists was to drive the United States from the Middle East, then the 9/11 attacks backfired. Just as World War Two aerial bombardment often served to
strengthen rather than weaken the will to resist among targeted populations, Freedman (2005: 169) has observed that “The response [to 9/11] was not to encourage the
United States to abandon any involvement with the conflicts of the Muslim world but to draw them further in.” Finally, analysis of Hurricane Katrina by disaster
sociologists has shown that while there was some looting and antisocial behavior in the immediate aftermath of the disaster, people generally did not panic and Katrina
did not result in the kind of social chaos and collapse often implied in media coverage of the event. Quarantelli (2008: 888–889) reports that “pro-social and very
functional behavior dwarfed on a very large scale the antisocial behavior that also emerged. . . . [This] prevented the New Orleans area from a collapse into total social
disorganization.”4 Like the attacks of 9/11, though the economic impacts of Katrina were severe, especially for those areas in the Golf Coast that were immediately
affected, Katrina did not have the effect of collapsing the entire U.S. economy. And while some suggested that U.S. military operations in Iraq slowed the National Guard’s
response to Katrina (Gonzales, 2005), there was no indication that military response to Katrina had a negative effect upon U.S. military operations overseas or overall
military readiness. The
empirical evidence provided to us from historians and sociologists about the impacts of
infrastructure disruption, both intentional and accidental, as well as peoples’ collective response to
disasters of various types, calls into question the kinds of projections one finds in the cyber-doom
scenarios. If the mass destruction of entire cities from the air via conventional and atomic weapons
generally failed to deliver the panic, paralysis, technological and social collapse, and loss of will that was
intended, it seems unlikely that cyberattack would be able to achieve these results. It also seems
unlikely that a “cyber-9/11” or a “cyber-Katrina” would result in the loss of life and physical destruction
seen in the real 9/11 and Katrina. And if the real 9/11 and Katrina did not result in social or economic
collapse, nor to a degradation of military readiness or national will, then it seems unlikely that their “cyber”
analogues would achieve these results. Policy Implications None of the discussion above should suggest, however, that we should not take
cybersecurity seriously, that we should not take measures to secure our critical infrastructures, or that we should not prepare to mitigate against the effects of a large-scale cyberattack should it occur. Rather, it should suggest that taking these issues seriously requires that we re-evaluate the
these issues seriously requires that we re-evaluate the
assumptions upon which policymaking proceeds, that we can only make effective policy if we begin with a
realistic assessment of what is possible based on empirical research. Thus, in the remainder of this essay, I identify potential
negative policy implications of cyber-doom scenarios and offer a set of principles that can be used to guide the formulation and evaluation of cybersecurity policy.
Negative Impacts of Flawed Assumptions The language that we use to frame problems opens up some avenues for
response while closing off others. In cyber-doom scenarios, cybersecurity is framed primarily in terms of
“war” and, with the use of terms like “cyber-9/11” and “cyber-Katrina,” in terms of large-scale “disaster.”
This war/disaster framing can lead to a militarist, command and control mindset that is ultimately
counter-productive. A war framing implies the need for military solutions to cybersecurity challenges,
even though most of what gets lumped under the term “cyberwar” are really acts of crime, espionage, or
political protest, and even though it is not at all clear that a military response is either appropriate or
effective (Lewis, 2010). Nonetheless, the establishment of the military’s U.S. Cyber Command (USCYBERCOM) has been the most significant U.S. response yet to
perceived cyber-threats. Such a response is fraught with danger. First, the very existence of USCYBERCOM, which has both an offensive and defensive mission, could
undermine the U.S. policy of promoting a free and open Internet worldwide by encouraging greater Internet censorship and filtering, as well as more rapid militarization
of cyberspace (Cavelty, 2007: 143). For example, some have already called for USCYBERCOM to launch strikes on WikiLeaks, which leaked hundreds of thousands of
classified U.S. documents about the wars in Iraq and Afghanistan (McCullagh, 2010b; Whitton, 2010; Thiessen, 2010). Such a response would only serve to create a “say-do gap” (Mullen, 2009) that potential adversaries could use to justify their own development and use of offensive cyber weapons and efforts to thwart whatever
possibility there is for international cooperation on cybersecurity. Second, there is the danger of “blow back.” In a highly interconnected world, there is no guarantee that
an offensive cyberattack launched by the United States against another country would not result in serious collateral damage to noncombatants or even end up causing
harm to the United States (Cavelty, 2007: 143). Such “blow back” may have occurred in a recent case where the United States military took down a Jihadist discussion
forum, causing collateral damage to noncombatant computers and websites, as well as undermining an ongoing U.S. intelligence gathering operation (Nakashima, 2010).
Third, there is the risk of conflict escalation from cyberattack to physical attack. If the United States launched a cyberattack against a state or non-state actor lacking the
capability to respond in kind, that actor might choose to respond with physical attacks (Clarke, 2009). There have even been calls for the United States to respond with
conventional military force to cyberattacks that amounted to little more than vandalism (Zetter, 2009; Dunn, 2010). Finally, a 2009 review of U.S. military strategy
documents, combined with statements from officials, further adds to the confusion and potential for escalation by indicating that nuclear response remains on the table
as a possible U.S. response to cyberattack (Markoff & Shanker, 2009; Owens et al., 2009). Next, a disaster
framing portends cybersecurity
planning dominated by the same “command and control [C2] model” rooted in flawed assumptions of
inevitable “panic” and “social collapse” that has increasingly dominated official U.S. disaster planning
(Quarantelli, 2008: 897). The result has been ever more centralized, hierarchical, and bureaucratic disaster
responses that increasingly rely upon the military to restore order and official control first and foremost
(Quarantelli, 2008: 895–896; Alexander, 2006; Lakoff, 2006). The result can be a form of “government paternalism” in which officials panic
about the possibility of panic and then take actions that exacerbate the situation by not only failing to provide victims with the help they need, but also preventing them
from effectively helping themselves (Dynes, 2006; Clarke & Chess, 2009: 999–1001). This phenomenon was on display in the official response to Hurricane Katrina
(Clarke & Chess, 2009: 1003–1004). In the realm of cybersecurity, there are already provisions for the military’s USCYBERCOM to provide assistance to the Department
of Homeland Security in the event of a domestic cyber emergency (Ackerman, 2010). Reminiscent of self-imposed blackouts during WWII, Senator Joseph Lieberman’s
proposal for a so-called “Internet kill switch,” which would give the president the authority to cut U.S. Internet connections to the rest of the world in the event of a large-
scale cyberattack, is the ultimate expression of the desire to regain control by developing the means to destroy that which we fear to lose. The
war/disaster
framing at the heart of cyber-doom scenarios and much of contemporary U.S. cybersecurity discourse risks
focusing policy on the narrowest and least likely portion of the overall cybersecurity challenge—i.e.
acts of “cyberwar” leading to economic, social, or civilizational collapse—while potentially diverting attention
and resources away from making preparations to prevent or mitigate the effects of more realistic but perhaps less dramatic
scenarios. But, there are a number of principles that can guide the formulation and evaluation of cybersecurity policy that can help us to avoid these pitfalls.

Securitization K Link/Cap K Link/Solvency Turn


Cyber Security solvency turn. Securitization K link, Cap K link

Ivan Arreguin-Toft, 6-17, 22, Ivan Arreguin-Toft, Ph.D. currently teaches war and cybersecurity strategy and
policy at Brown University’s Watson Institute of International and Public Affairs; where he also serves as Director
of the Security Track for the undergraduate International and Public Affairs concentration. He is formerly a
founding member of the Global Cyber Security Capacity Centre at Oxford University’s Martin School; where he
served as Associate Director of Dimension 1 (cybersecurity policy and strategy) from 2012–2015, Achieving True
Cybersecurity Is Impossible, https://nationalinterest.org/blog/techland-when-great-power-competition-meets-digital-world/achieving-true-cybersecurity?page=0%2C1

Cybersecurity the way we like to think of it is actually impossible to achieve. That’s not to say we shouldn’t try
hard to achieve it. Nor is it the same thing as saying that our costly efforts to date have been wasted. Instead, if
our aim is to make our interactions in cyberspace more secure, we need to recognize two things.

First, part of our troubles has to do with a culture that defines things like success, victory, and security as
dichotomous rather than continuous variables. Think of a switch that’s either on or off. Second, speed is hurting
us, and calls to replace humans with much faster and “objective” machines will continue to gain momentum,
putting us at extreme risk without increasing either our security or prosperity. Let me explain.

[Cyber]security is Not a Switch

In my time in Norway a few years ago, I had the great fortune to be hosted by the Norwegian Institute for
Defense. As I toiled to recover the history of Norway’s experience under occupation by the Third Reich, I was
able most days to join my Norwegian colleagues for a communal lunch. My colleagues did me the great courtesy
of carrying on most conversations in flawless English. As an American academic accustomed to research abroad,
I anticipated that sooner or later I’d encounter a classic opening sentence of the form, “You know, the trouble
with you Americans is…” And after a month or so my unfailingly polite and generous colleagues obliged. But
what ended that sentence has stuck with me since then; and underlines a core value of study abroad at the
same time: “You know, the trouble with you Americans is, you think every policy problem has a solution;
whereas we Europeans understand that some problems you just have to learn to live with.”

The idea that part of our mission was research intended to support policies that solved problems was never
something I’d thought of as varying by culture. But as I reflected more and more on the idea, I realized that
insecurity—and by extension cyber-insecurity—would be something we Americans would have to learn to live
with.

This “switch” problem is mainly due to the relentless infiltration of market capitalist logic into problem
framing and solving. For example, corporations hire cybersecurity consultants to ensure that corporate profit-
making operations are secure from hacking, theft, disruption, and so on. When corporations pay money to
someone to solve a problem, they expect a “deliverable”: some empirical evidence that corporate operations
are now “secure.” It should go without saying that this same corporate logic infiltration—the largely North
American idea that governance would be more “effective” if run via corporate profit-making logic—has
seriously degraded effective governance as well.

Cybersecurity is not a switch. It isn’t something that’s either “on” or “off,” but something that we can approach
if we have a sound strategy. And progress toward our shared ideal itself is what we should be counting as
success.

Even if we could agree to moderate our cultural insistence on measuring success or failure in terms of decisively
“solving” policy problems, we’d be left with another set of problems caused mainly by the assertion that humans
are too slow and emotional as compared to computers, which are imagined as fast (absolutely) and objective
(absolutely not). We need to challenge these ideas, because together they make up a kind of binary weapon
which leads us into very dangerous territory while at the same time doing little to advance us toward our ideal
of “cybersecurity in our time.”

So, a first critical question is, under what conditions is speed a necessary advantage? That’s where computers
come in. Few Americans will be aware, for example, that the first-ever presidential directive on cybersecurity—
NSDD-145 (1984)—was issued by President Ronald Reagan in reaction to his viewing of John Badham’s
WarGames (1983). After viewing the film, which imagines a nascent artificial intelligence called the WOPR
hijacking U.S. nuclear missile defense and threatening to start a global thermonuclear war, Reagan asked his
national security team whether the events in the film could happen in real life. When his question was later
answered in the affirmative, the Reagan administration issued the NSDD. Here’s a key bit of dialogue from
Badham’s film, which starts after a simulated nuclear attack resulted in 22 percent of Air Force officers refusing
to launch their missiles when commanded to do so:

Mr. McKittrick: I think we ought to take the men out of the loop.

GEN Berringer: Mr. McKittrick, you’re out of line sir!

McKittrick: Why am I out of line?

Cabot: Wait. Excuse me. What are you talking about? I'm sorry. What do you mean, take them ‘out of the loop’?

GEN Berringer: Gentlemen, we've had men in these silos since before any of you were watching Howdy Doody.
For myself, I sleep pretty well at night knowing those boys are down there.

McKittrick: General, we all know they're fine men. But in a nuclear war, we can't afford to have our missiles lying
dormant in those silos because those men refuse to turn the keys when the computers tell ‘em to!

Watson: You mean, when the president orders them to.

McKittrick: The president will probably follow the computer war plan. Now that’s a fact!

Watson: Well, I imagine the joint chiefs will have some input?

GEN Berringer: You’re damned tootin’!

Cabot: Well hell, if the Soviets launch a surprise attack there's no time…

Healy: Twenty-three minutes from warning to impact. Six minutes, if it’s sub-launched.
McKittrick: Six minutes! Six minutes. That's barely enough time for the president to make a decision. Now once
he makes that decision, the computers should take over.

This discussion brackets two critical components of any discussion of contemporary cybersecurity. The first is
the “humans are too slow” theme, and by slow, we mean slow as compared to computers. Second, humans
have consciousness and morals and computers don’t. Computers have some version of whatever their
programmers give them. Recent advances in deep learning and artificial neural networks—in particular
foundation models—have created the impression that machine consciousness and independent creativity are
here or very near, but they are not; and moreover—and this is key—whatever these machines come up with will
always be tethered to their programmers which, to be blunt, remain mostly young, upper-middle-class males
from the northern hemisphere.

This impossibility of algorithmic objectivity is the second half of the “binary weapon” I referenced earlier: along
with speed, algorithms, code, and the like promise to be objective—but they cannot be. So as mathematician
Cathy O’Neil or computer scientist (and activist) Joy Buolamwini remind us, when you get turned down for a
loan or a job and an algorithm is involved, it remains almost certain that far from being “objective” in the sense
we usually mean, some unintended but profitable bias has very likely been introduced. Ask an AI to show you a
photo of an “attractive person,” for example, and what the “objective” algorithm is likely to supply is the image
of a thin blond woman with large breasts. Imagine deploying biased algorithms in military or cybersecurity
applications, but then also imagine that your reservations about bias cause you to hesitate. You’d be terrified
that an economic or security rival less principled or cautious than you would deploy and gain a terminal
advantage over you. It’s straight out of philosopher Carl von Clausewitz’s discussion of the limitations of
restraint in war:

As the use of physical power to the utmost extent by no means excludes the cooperation of the intelligence, it
follows that he who uses force unsparingly, without reference to the bloodshed involved, must obtain a
superiority if his adversary uses less vigor in its application. The former then dictates the law to the latter, and
both proceed to extremities to which the only limitations are those imposed by the amount of counteracting
force on each side.

Having weaponized cyberspace, and corporate strategy having appropriated military metaphors, the message
is clear: restraint makes you a sucker. In terms of policy, this leaves us with a classic dilemma: we get gored
either way. If we don’t deploy AI and our competitors do, we may lose everything. But if we do deploy AI, not
fully understanding how it arrives at its conclusions but having unreasonable faith that “it must be right,
because it’s math; it’s objective,” we may lose everything as well.

So, the WarGames scene should also remind us that the particular domain in which speed is being claimed as a
necessary virtue is armed conflict. Note that Watson’s objection to the idea that “computers are in charge”
implies there should be checks and balances between decision and action; an anagram of democracy itself, with
its emphasis on deliberation and consensus. But Cabot counters by asserting, reasonably, that in war, checks and
balances are a liability: “there’s no time” (contemporary hypersonic missile technology compresses time still
further).

In the United States, the idea that we’re always at war—and its destructive impact on democracy worldwide
—emerged from the impact of a shattering moment in U.S. history: 9/11. Since then, we’ve never had the
feeling we can “go back” to making washing machines and babies as we did after World War II. We are
permanently mobilized, always on alert, always at war. And in war, speed above all else seems to make sense
(think Blitzkrieg). Being “always at war” also biases politics in favor of the world’s political Right, with its claim
that checks and balances, deliberation, and popular sovereignty put citizens of democracies at too much risk.
What’s needed, then, is an unfettered executive who can act fast.

Of course, we are not at war and, as a result, speed at any cost is just as likely to lead to disaster as it did in the
summer of 1914, when all the major combatants believed whoever struck first was assured of victory, while
waiting to be attacked made you a sucker.

Cybersecurity suffers from all the same associations. Once we insist that cyberspace is an arena of conflict,
speed at any cost seems a necessity. Automating computer network defense is no longer thought of as a policy
choice but is reduced to a question of how and when. I should add that if computer network defense can be
automated, so can computer network offense, including espionage, crime, and systems compromise.

In sum, the cybersecurity we aspire to is impossible. Cybersecurity is not a switch, and automating our
defenses—computer network defense, national defense—is as likely to destroy us as save us. We live in a
world now, by choice, in which “sticks and stones can break our bones, but words can also hurt us.” As the
mass shooting at Uvalde, Texas, and so many others have highlighted, in this new world there is no armor, no
fortress walls, no police or army that can protect us. Along with increasingly extreme weather events, we will
have to learn to live with cyber intrusions ranging from election interference, disinformation, cybercrime, and
threats to our critical infrastructure. We will have to learn to be more self-sufficient and resilient.

Threat Construction
Cyber hype is threat construction

Maness & Valeriano, 2015, Ryan C. Maness, Northeastern University, Department of Political Science, Brandon
Valeriano, University of Glasgow, Cyber War versus Cyber Realities: Cyber Conflict in the International System,
Kindle Edition, page number at end of card

Our concern is that fear dominates the international system. The contention is that harm is a constant factor in
international life (Machiavelli 2003; Hobbes 2009); everything is a danger to all, and all are a danger to most. It is
through this prism that the international affairs community approaches each technological development and
each step forward, and it does so with trepidation and weariness. Because of the hype surrounding the
development of cyber weaponry, the step toward what might be called cyber international interactions is no
different. With the advent of the digital age of cyber communications, this process of fear construction
continues to shape dialogues in international relations as cyberspace becomes a new area of contestation in
international interactions. Old paradigms focused on power politics, displays of force, and deterrence are
applied to emergent tactics and technologies with little consideration of how the new tactic might result in
different means and ends. We argue that these constructed reactions to threats have little purchase when
examined through the prism of evidence or when judged compared to the normative implications of action.
There is an advantage to bringing empirical analysis and careful theory to the cyber security debate. Valeriano,
Brandon; Maness, Ryan C. (2015-04-27). Cyber War versus Cyber Realities: Cyber Conflict in the International
System (pp. 1-2). Oxford University Press. Kindle Edition.

Foreign cyber threat rhetoric discourages a focus on domestic cyber threats

Maness & Valeriano, 2015, Ryan C. Maness, Northeastern University, Department of Political Science, Brandon
Valeriano, University of Glasgow, Cyber War versus Cyber Realities: Cyber Conflict in the International System,
Kindle Edition, page number at end of card

With a focus on offensive cyber operations and the inflated nature of mythical cyber threats, there seems to be
a misdirected application of the technology in the policy sphere (Dunn-Cavelty 2008). Instead of a revolution in
military affairs, cyber tactics just seem to have refocused the state on external threats that then escalate
through the typical process of the security dilemma. In some ways, fears of cyber conflict become self-fulfilling
prophecies. Dunn-Cavelty’s (2008) work is instructive here, as it dissects this growing cyber threat perception in
the United States and the driving engine behind it— cyber defense contracts. By focusing on the external
threats, rather than the internal criminal threat that comes from cyber enterprises, we may have missed many
opportunities at collaboration and institution building. There obviously needs to be a global accounting for cyber
actions and plans, even those that inflate cyber fears, as Clarke and Knake (2010) agree. Valeriano, Brandon;
Maness, Ryan C. (2015-04-27). Cyber War versus Cyber Realities: Cyber Conflict in the International System (p.
42). Oxford University Press. Kindle Edition.

Cyber threat discourse is socially constructed

Maness & Valeriano, 2015, Ryan C. Maness, Northeastern University, Department of Political Science, Brandon
Valeriano, University of Glasgow, Cyber War versus Cyber Realities: Cyber Conflict in the International System,
Kindle Edition, page number at end of card

Our theory is social constructivist in nature (Berger and Luckmann 1967; Onuf 1989). As others, such as Dunn-
Cavelty (2008), Eriksson and Giacomello (2009), and Hansen (2011), have suggested, cyber threats are socially
constructed. The danger that cyber incidents can portray between rival factions can construct a very real threat
that will then lead to escalated tensions between these entities (Hansen 2011). Furthermore, the public as well
as corporate framing of cyber incidents as a threat, real or imagined, can lead to a change in a state’s perception
of the threat, which in turn would demand action, either diplomatically or militarily (Nissenbaum 2005; Eriksson
and Giacomello 2009). The state would find the need to securitize itself from these cyber threats, which could
spill over into more conventional responses, such as airstrikes or economic sanctions (Hansen and Nissenbaum
2009). We follow these points and agree that the nature of and response to cyber threats are socially
constructed by many diverse factors, such as government messages, media talking points, and popular culture.
This orientation makes us question the nature of the cyber discourse and focus on empirical observations rather
than the message of such attacks. Valeriano, Brandon; Maness, Ryan C. (2015-04-27). Cyber War versus Cyber
Realities: Cyber Conflict in the International System (p. 51). Oxford University Press. Kindle Edition.

Potential cyber war initiators are deterred

Maness & Valeriano, 2015, Ryan C. Maness, Northeastern University, Department of Political Science, Brandon
Valeriano, University of Glasgow, Cyber War versus Cyber Realities: Cyber Conflict in the International System,
Kindle Edition, page number at end of card

To this point, the discourse on cyber conflict, weapons, policy, and security clearly lacks an engagement of
theory and evidence in relation to the international system. There are many questions that scholars and
policymakers raise; however, there are few real deductive or inductive explorations of cyber processes by these
people. Cyber strategies and analysis at this point are entirely anti-theoretical. Many misapply basic
international relations concepts and ideas as they see fit. There is a sizable gap between a constructive analysis
of a critical international process and the actual evaluation of cyber interactions. New tactics sometimes require
new modes of thought to deal with their implications. Instead, cyber theorists seem to be focused on either
predicting a constant use of cyber tactics or misapplying deterrence logics to the study of cyber interactions. The
main flaw of the entire cyber security enterprise is a complete lack of theoretical engagement beyond a few
atypical examples— one of the few being Choucri’s (2012) examination of cyber power and lateral pressure. We
hope to rectify this problem by laying out a theory of cyber political interactions based on the principle of
restraint in cyberspace and the issue-based perspective of international politics. We argue that cyber options are
usually removed from the toolkit of responses available to a state because massive cyber operations would
escalate a conflict beyond control, would lead to unacceptable collateral damage, and would leave the initiating
side open to economic and computational retaliation. When cyber operations are used, they typically are low-
scale events akin more to propaganda and espionage than warfare. This leads to cyber restraint, a form of
operations derived from deterrence theory but not dependent on it. We also argue that there will be a large
amount of regional interactions in cyberspace because these conflicts are tied to traditional reasons that states
disagree, namely territorial conflicts. Understanding these perspectives will be critical in analyzing emerging
cyber security threats Valeriano, Brandon; Maness, Ryan C. (2015-04-27). Cyber War versus Cyber Realities:
Cyber Conflict in the International System (p. 46). Oxford University Press. Kindle Edition.


Offensive Cyber Operations Generally Cause War


General Warfare

Offensive cyber operations increase escalation risks for no gain

Brandon Valeriano and Benjamin Jensen, Cato Institute, January 15, 2019, The Myth of the Cyber Offense: The
Case for Restraint, https://www.cato.org/publications/policy-analysis/myth-cyber-offense-case-restraint,
Brandon Valeriano is the Donald Bren Chair of Armed Politics at Marine Corps University. Benjamin Jensen is an
associate professor at the Marine Corps University and a scholar-in-residence at American University's School of
International Service.

In the context of recent shifts in cybersecurity policy in the United States, this paper examines the character of
cyber conflict through time. Data on cyber actions from 2000 to 2016 demonstrate evidence of a restrained
domain with few aggressive attacks that seek a dramatic, decisive impact. Attacks do not beget attacks, nor do
they deter them. But if few operations are effective in compelling the enemy and fewer still lead to responses
in the domain, why would a policy of offensive operations to deter rival states be useful in cyberspace? We
demonstrate that, while cyber operations to date have not been escalatory or particularly effective in achieving
decisive outcomes, recent policy changes and strategy pronouncements by the Trump administration increase
the risk of escalation while doing nothing to make cyber operations more effective. These changes revolve
around a dangerous myth: offense is an effective and easy way to stop rival states from hacking America. An offensively postured cyber policy is dangerous, counterproductive, and undermines norms in cyberspace. New policies for authorizing preemptive offensive cyber strategies risk crossing a threshold and changing the rules of the game. Cyberspace, to date, has been a domain of political warfare and coercive diplomacy, a world of
spies developing long-term access and infrastructure for covert action, not soldiers planning limited-objective
raids. Recent policy shifts appear to favor the soldier over the spy, thus creating a new risk of offensive cyber
events triggering inadvertent escalation between great powers.

Offensive warfare creates misperceptions

Brandon Valeriano and Benjamin Jensen, Cato Institute, January 15, 2019, The Myth of the Cyber Offense: The
Case for Restraint, https://www.cato.org/publications/policy-analysis/myth-cyber-offense-case-restraint,
Brandon Valeriano is the Donald Bren Chair of Armed Politics at Marine Corps University. Benjamin Jensen is an
associate professor at the Marine Corps University and a scholar-in-residence at American University's School of
International Service.

The new policy framework for offensive cyber operations risks compounding common pathologies associated
with strategic assessments and planning. 62 Removing interagency checks increases the risks that an
operation will backfire on the attacker or compromise ongoing operations. Misperception is pervasive in
insulated decisionmaking processes for several reasons.63 First, small groups unchecked by bureaucracy tend
to produce narrow plans prone to escalation during crises.64 Second, leaders often give guidance to planners
during crises that reflects their political bias or personality traits rather than a rational assessment of threats
and options.65 Third, offensive bias in planning may have little to do with the actual threat and more to do
with a cult of the offensive and the desire of officers to ensure their autonomy and resources.66 Removing
interagency checks therefore risks compounding fundamental attribution errors and other implicit biases.
Cyber operations are too important to be left to the generals at Cyber Command alone.

Offensive cyber risks escalation

Brandon Valeriano and Benjamin Jensen, Cato Institute, January 15, 2019, The Myth of the Cyber Offense: The
Case for Restraint, https://www.cato.org/publications/policy-analysis/myth-cyber-offense-case-restraint,
Brandon Valeriano is the Donald Bren Chair of Armed Politics at Marine Corps University. Benjamin Jensen is an
associate professor at the Marine Corps University and a scholar-in-residence at American University's School of
International Service.

Contrary to observed patterns of limited disruption and espionage, Cyber Command sees cyberspace as a
domain fraught with increasing risk, where great powers such as China and Russia will undermine American
power. The only solution, from this perspective, is to go on the offense. Yet, the benefits of an offensive
posture, especially in cyberspace, are mostly illusory to date. Instead, the cyber domain tends to be optimized
for defense and deception, not decisive offensive blows. Not only is offense likely the weaker form of
competition in cyberspace, it also risks inadvertent escalation. The fear, suspicion, and misperception that
characterize interstate rivalries exacerbate the risk of offensive action in cyberspace.

Offensive cyber risks fear and overreaction, risk China-Russia retaliation

Brandon Valeriano and Benjamin Jensen, Cato Institute, January 15, 2019, The Myth of the Cyber Offense: The
Case for Restraint, https://www.cato.org/publications/policy-analysis/myth-cyber-offense-case-restraint,
Brandon Valeriano is the Donald Bren Chair of Armed Politics at Marine Corps University. Benjamin Jensen is an
associate professor at the Marine Corps University and a scholar-in-residence at American University's School of
International Service.

New policy options proposed by Cyber Command and the Trump administration risk exacerbating fear in other
countries and creating a self-reinforcing spiral of tit-for-tat escalations that risk war even though each actor feels
he is acting defensively—or, as it is called in the scholarly literature, a security dilemma.52 As shown above,
most cyber operations to date have not resulted in escalation. The cyber domain has been a world of spies
collecting valuable information and engaging in limited disruptions that substitute for, as well as complement,
more conventional options. Shifting to a policy of preemptive offensive cyber warfare risks provoking fear and
overreaction in other states and possibly producing conflict spirals. Even limited-objective cyber offensive
action defined as “defending forward” can be misinterpreted and lead to inadvertent escalation.53 As the
historian Cathal Nolan puts it, “intrusions into a state’s strategically important networks pose serious risks
and are therefore inherently threatening.” More worryingly, with a more offensive posture, it will be
increasingly difficult for states to differentiate between cyber espionage and more damaging degradation
operations.55 What the United States calls defending forward, China and Russia will call preemptive strikes.
Worse still, this posture will likely lead great powers to assume all network intrusions, including espionage,
are preparing the environment for follow-on offensive strikes. According to cybersecurity scholar Ben
Buchanan, “in the [aggressor] state’s own view, such moves are clearly defensive, merely ensuring that its
military will have the strength and flexibility to meet whatever comes its way. Yet potential adversaries are
unlikely to share this perspective.”56 The new strategy risks producing a “forever cyber war” prone to
inadvertent escalation because it implies all cyber operations should be interpreted as escalatory by
adversaries.57
Offensive warfare strategies make war more likely

Gartzke & Lindsay, 2015, Erik Gartzke is professor of political science at the University of California, San Diego.
Jon R. Lindsay is assistant professor of digital media and global affairs at the Munk School of Global Affairs,
University of Toronto, Weaving Tangled Webs: Offense, Defense, and Deception in Cyberspace,
http://deterrence.ucsd.edu/_files/Weaving%20Tangled%20Webs_%20Offense%20Defense%20and
%20Deception%20in%20Cyberspace.pdf

General offensive thinking leads to preparation and war One type of cyber threat inflation, therefore, is the
attempt to represent cyberspace as categorically offense dominant when there may in fact be relatively
affordable defenses. Doomsday scenarios such as a “cyber Pearl Harbor” are useful in the pursuit of
bureaucratic resources and autonomy. The potential for deception in cyberspace thus fosters a more politically
motivated form of deception. Deception-prone environments increase the risk of threat inflation. A state that
believes it is in an offense-dominant world may invest more in military and intelligence resources than is
necessary or pursue capabilities of the wrong or suboptimal type. Yet if offense dominance does not apply to
the most important targets—since they are protected by complexity and deception—then over-arming and
sowing fear are wasteful and destabilizing. Resources that could be allocated elsewhere will instead be
expended for unnecessary security measures. Such efforts might even interfere with economically productive
aspects of the Internet. There is also the potential for tragedy if officials hastily resort to aggression in the
mistaken belief that relations are fundamentally unstable. The disaster of 1914, when great powers rushed
headlong into costly deadlock, reflected, in part, the impact of a mistaken “ideology of the offensive” applied
inappropriately to what clearly turned out to be a defense-dominant reality.

Cyber Warfare can start with commanders alone

Fred Kaplan, June 17, 2019, We’ve Entered a New Age of Cyberwar,
https://slate.com/news-and-politics/2019/06/trump-cyber-russia-power-hacking.html,

There’s another disturbing development in cyberwar: The whole enterprise has slipped out of the oversight
and control of our political leaders. Last summer, President Donald Trump signed a classified directive giving
U.S. Cyber Command leeway to mount cyberoffensive operations at its own initiative. Before then, such
operations—even tactical operations on the battlefield—had to be personally approved by the president. The
premise of the old policy—during the Bush II and Obama administrations—was that cyberweapons were
something new: Their effects were somewhat unpredictable and could spiral out of control. Now, with the new
directive, these concerns seem to have vanished—though it’s not clear why. One consequence is that Cyber
Command now feels less constrained about going on the offensive. And indeed, the Times reports—and my
own sources confirm—the command has stepped up cyberoffensive operations, in frequency and scale. The
Times reports that Donald Trump wasn’t even fully briefed on the hacking of Russia’s power grid, in part
because officials feared that he might “countermand” the order—suggesting the hack was in place before they
told Trump anything about it—and that he might tell foreign officials about it, carelessly or otherwise. Whatever
the reason, Trump wasn’t fully briefed because he didn’t have to be.
Offensive operations increase instability

Rebecca Slayton, February 2017, https://www.belfercenter.org/publication/why-cyber-operations-do-not-always-favor-offense, Why Cyber Operations Do Not Always Favor the Offense, Rebecca Slayton is Assistant Professor at
Cornell University with a joint appointment in the Science and Technology Studies Department and the Judith
Reppy Institute for Peace and Conflict Studies.

Creating unnecessary vulnerabilities. Making offensive cyber operations a national priority can increase
instabilities in international relations and worsen national vulnerabilities to attack. But because the skills
needed for offense and defense are similar, military offensive readiness can be maintained by focusing on
defensive operations that make the world safer, rather than on offensive operations.

Managing complexity. The ease of both offense and defense increases as organizational skills and capability in
managing complex technology improve; it declines as the complexity of cyber operations rises. What appears to
be offensive advantage is primarily a result of the offense's relatively simple goals and the defense's poor
management.…Prioritizing offensive operations can increase adversaries’ fears, suspicions, and readiness to take
offensive action. Cyber offenses include cyber exploitation (intelligence gathering) and cyberattack (disrupting,
destroying, or subverting an adversary’s computer systems). An adversary can easily mistake defensive cyber
exploitation for offensive operations because the distinction is a matter of intent, not technical operation. The
difficulty of distinguishing between offensive and defensive tactics makes mistrustful adversaries more
reactive, and repeatedly conducting offensive cyber operations only increases distrust. A focus on offensive
operations can also increase vulnerabilities; for example, secretly stockpiling information about vulnerabilities
in computers for later exploitation, rather than publicizing and helping civil society to mitigate those
vulnerabilities, leaves critical infrastructure vulnerable to attack.

Offensive operations against nuclear systems increase war risks

Syed Sadam Hussain Shah is a research assistant at the Center for International Strategic Studies in Islamabad.
He has published in leading journals such as the Bulletin of Atomic Scientists (“Estimating India’s nuclear weapon
producing capacity”) and CISS Insight. He recently presented and published a paper (“Indian Strategic Thought”)
at Karadeniz Teknik University, Turkey. His forthcoming paper is “Artificial Intelligence, Hacking, and Nuclear
Weapons.” He is an ethical hacker and a Python programmer who has developed port scanners (a cybersecurity
tool written in the Python programming language), March 2019, https://csis-prod.s3.amazonaws.com/s3fs-
public/190313_Shah_OffensiveCyber_pageproofs2.pdf, Offensive Cyber Operations and Nuclear Weapons

The potential use of offensive cyber operations against nuclear systems will increase the possibility of war in
the future and pose an urgent risk due to the vulnerabilities that exist in nuclear infrastructure. From network
attacks, man-in-the-middle attacks, packet sniffing, distributed denial-of-service (DDoS) attacks, Wi-Fi attacks, cyber-
spoofing, supply chain attacks, radio attacks, crypto attacks, rubber ducky attacks, air-gapped network attacks,
spyware attacks and more, malicious actors have a range of tools that can jeopardize the integrity of nuclear
command, control, and communication (C3) systems. This study will explore the offensive cyber threats that
threaten nuclear command and control systems.
Cyber attacks cause nuclear war – accidents
Gady 15 (Franz Stefan, Associate Editor of The Diplomat, Senior Fellow with the EastWest Institute. Article quotes: James Cartwright, retired
US Marine Corps General and eighth Vice Chairman of the Joint Chiefs of Staff, Greg Austin of the EastWest Institute in New York, and Pavel Sharikov
of the Russian Academy of Sciences, “Could Cyber Attacks Lead to Nuclear War?”, http://thediplomat.com/2015/05/could-cyber-attacks-lead-to-
nuclear-war/)

Short fuses on U.S. and Russian strategic forces have particularly increased the risk of accidental nuclear
war, according to Cartwright, while ”the sophistication of the cyberthreat [to nuclear weapons] has
increased exponentially.” “One-half of their [U.S. and Russian] strategic arsenals are continuously maintained
on high alert. Hundreds of missiles carrying nearly 1,800 warheads are ready to fly at a moment’s notice,” a
policy report compiled by a study group chaired by the retired U.S. general summarized. “At the brink of conflict, nuclear
command and warning networks around the world may be besieged by electronic intruders whose
onslaught degrades the coherence and rationality of nuclear decision-making,” the report further points out.
The War Games-like scenario could unfold in one of the following three ways: First, sophisticated attackers
from cyberspace could spoof U.S. or Russian early warning networks into reporting that nuclear missiles
have been launched, which would demand immediate retaliatory strikes according to both nations’ nuclear warfare
doctrines. Second, online hackers could manipulate communication systems into issuing unauthorized
launch orders to missile crews. Third and last, attackers could directly hack into missile command and
control systems launching the weapon or dismantling it on site (a highly unlikely scenario). To reduce the likelihood of such a
scenario ever occurring, Cartwright proposes that Moscow and Washington should adjust their nuclear war contingency plan timetables from
calling for missiles to be launched within 3 to 5 minutes to 24 to 72 hours. Reducing the lead time to prepare nuclear missiles for launch would not
diminish the deterrent value of the weapons, Cartwright, who headed Strategic Command from 2004 to 2007 and was vice chairman of the Joint
Chiefs of Staff before retiring in 2011, emphasized. However, the
Obama White House has so far rejected the idea,
particularly due to the recent deterioration of U.S.-Russia relations. Also, Robert Scher, Assistant Secretary of Defense for
Strategy, Plans, and Capabilities, testified in Congress this month arguing “it did not make any great sense to de-alert
forces” because nuclear missiles “needed to be ready and effective and able to prosecute the mission at any
point in time.” Cartwright’s credibility may have also suffered among Washington policy circles ever since he has been under investigation for
leaking information about the top secret Stuxnet virus – a sophisticated cyber weapon allegedly jointly developed by Israel and the United States –
to the New York Times. Nevertheless, a co-authored paper, seen in draft by The Diplomat, argues that “cyber
weapons and
strategies have brought us to a situation of aggravated nuclear instability that needs to be more explicitly and
more openly addressed in the diplomacy of leading powers, both in private and in public.” The authors, Greg Austin of the EastWest Institute in
New York (and a regular contributor to The Diplomat) and Pavel Sharikov of the Russian Academy of Sciences, have concluded that “Russia now
sees U.S. plans to disrupt the command and control of its nuclear weapons as the only actual (current) threat at the strategic level of warfare.” Laura
Saalman of the Asia Pacific Research Centre in Hawaii has also warned of the need to look at the impact of U.S. strategies and nuclear force posture
on China in a 2014 paper titled “Prompt Global Strike: China and the Spear”.

And lashout
Tilford 12 Robert, Graduate US Army Airborne School, Ft. Benning, Georgia, "Cyber attackers could shut
down the electric grid for the entire east coast" 2012, http://www.examiner.com/article/cyber-attackers-
could-easily-shut-down-the-electric-grid-for-the-entire-east-coa
To make matters worse a cyber attack that can take out a civilian power grid, for example could also cripple the U.S.
military. The senator notes that the same power grids that supply cities and towns, stores and gas stations, cell towers and
heart monitors also power "every military base in our country." "Although bases would be prepared to weather a
short power outage with backup diesel generators, within hours, not days, fuel supplies would run out", he said. Which means
military command and control centers could go dark. Radar systems that detect air threats to our country would
shut down completely. "Communication between commanders and their troops would also go silent. And many weapons
systems would be left without either fuel or electric power", said Senator Grassley. "So in a few short hours or days, the mightiest
military in the world would be left scrambling to maintain base functions", he said. We contacted the Pentagon
and officials
confirmed the threat of a cyber attack is something very real. Top national security officials— including the Chairman of the
Joint Chiefs, the Director of the National Security Agency, the Secretary of Defense, and the CIA Director— have said, "preventing a
cyber attack and improving the nation's electric grids is among the most urgent priorities of our country" (source: Congressional Record). So
how serious is the Pentagon taking all this? Enough to start, or end a war over it, for sure (see video: Pentagon declares
war on cyber attacks http://www.youtube.com/watch?v=_kVQrp_D0kY%26feature=relmfu ). A cyber attack today against the US could
very well be seen as an "Act of War" and could be met with a "full scale" US military response. That could include the use of
"nuclear weapons", if authorized by the President.

Cyberwarfare threats increase war risks

Greenberg, 2019, Andy Greenberg is a senior writer for WIRED, covering security, privacy, information
freedom, and hacker culture. He’s the author of the forthcoming book Sandworm: A New Era of Cyberwar and
the Hunt for the Kremlin's Most Dangerous Hackers, out November 5. Greenberg's reporting on Ukraine's
cyberwar has won a Gerald Loeb Award, June 28, Iranian Hackers Launch a New US-Targeted Campaign as Tensions Mount,
https://www.wired.com/story/iran-hackers-us-phishing-tensions/

In the short span of years in which the threat of cyberwar has loomed, no one has quite figured out how to
prevent one. As state-sponsored hackers find new ways to inflict disruption and paralysis on one another, that
arms race has proven far easier to accelerate than to slow down. But security wonks tend to agree, at least, that
there's one way not to prevent a cyberwar: launching a preemptive or disproportionate cyberattack on an
opponent's civilian infrastructure. As the Trump administration increasingly beats its cyberwar drum, some
former national security officials and analysts warn that even threatening that sort of attack could do far more
to escalate a coming cyberwar than to deter it.
Permanent War
Offensive cyber operations lock the US into permanent conflict

Ellers, 10-23, 19, How America's Cyber Strategy Could Create an International Crisis,
https://nationalinterest.org/blog/skeptics/how-americas-cyber-strategy-could-create-international-crisis-90526,
María Ellers is a US-Russia Relations Intern at the Center for the National Interest.

The United States has adopted a new cyber warfare strategy focused on “persistent engagement” and
“forward defense” in an attempt to thwart Chinese, Russian and other state-sponsored cyber attacks. While
this unprecedented “defend forward” approach gives America many significant advantages in navigating cyber
warfare, it also entails high-risks that could unintentionally escalate conflict. As a result, America must consider
whether its traditional understanding of concepts like offense, defense and deterrence are applicable to the
strategy of cyber warfare and whether they should continue to inform Washington’s cyber strategies. This was
the theme of a panel discussion held by the Center for the National Interest on September 10, 2019. The
discussion featured prominent experts on cyber warfare: Jason Healey, a senior research Scholar at Columbia
University’s School for International and Public Affairs and the editor of the first history of conflict in cyberspace,
A Fierce Domain: Cyber Conflict, 1986 to 2012; and Ben Buchanan, assistant professor at Georgetown University
and author of the book The Cyber Security Dilemma, which examines the intersection between cybersecurity
and statecraft. The discussion focused on unpacking Washington’s new cyber strategy while raising questions on
its effectiveness and subsequent implications on national security. Healey explained that the new strategy of
persistent engagement and forward defense is not just designed to deter cyber adversaries, but to force
adversaries to “play defense” and “raise the costs of offensive operations.” Persistent engagement refers to
the Defense Department’s initiative to counter foreign cyber threats as they emerge. Forward defense, similarly,
aims to gain the upper hand against an adversary by using direct actions to track, intercept and disrupt attacks in
foreign cyberspace before they occur. America’s two strategies work together to ensure that there is no
operational pause in American cybersecurity operations and that America has the capacity to disrupt attacks so
effectively that an adversary’s “costs of employing an attack” against the United States are “higher than its
benefits.” When deployed correctly, they put America’s enemies on the defensive and ensure that any states
attempting to launch offensive cyber operations against America would be forced to rebuild its software and
focus on its own defensive tactics instead of attacking. Healey explains that, from Washington’s perspective,
such strategies would allow the United States to dominate the cyber domain, whilst establishing a set of norms
of conduct in the cybersphere in a way that diplomatic negotiations would be unable to achieve via cyber
redlines. In this way, these strategies would act as its own deterrent mechanism by setting the “guardrails” of
permissibility in cyber warfare through the use of standard “tacit bargaining,” where states will moderate their
behavior towards the United States over the long-term, thereby creating a more stable cyber environment and
lasting U.S. superiority. Unfortunately, these strategies do have drawbacks. Although the Defense Department
contends that persistent engagement and forward defense are inherently nonaggressive, move-countering
strategies, it continues to promote them and use axioms like, “the best defense is a good offense,”—a phrase
Healey finds extremely problematic. To Healey, this illustrates a lack of understanding in Washington as to
what offense and defense actually mean in the context of cyber warfare, which could cause states to find
themselves in a position of “not just persistent, but permanent conflict.”
Escalation
US offensive cyber operations are likely to escalate, and US will not remain dominant

Ellers, 10-23, 19, How America's Cyber Strategy Could Create an International Crisis,
https://nationalinterest.org/blog/skeptics/how-americas-cyber-strategy-could-create-international-crisis-90526,
María Ellers is a US-Russia Relations Intern at the Center for the National Interest.

Buchanan argues that Washington’s poor understanding of the indistinguishability between offense and defense
is the pitfall in current American cyber strategy and that the utilization of traditional militaristic concepts in the
cyber domain prevents the United States from identifying how intelligence collection can create unintended
escalation. Buchanan remains skeptical that states will be encouraged to self-regulate their behavior in
cyberspace. He worries that America’s cyber strategy may actually incentivize conflict escalation. Countries
that perceive America’s defensive strategy to be offensive in nature would be encouraged to attack the
United States in order to retaliate or acquire intelligence of their own to ensure their defense in the future.
Healey describes this as a tit-for-tat response. Should the United States continue to utilize these strategies,
then states will find themselves in a position of “not just persistent, but permanent conflict,” according to
Healey. Though a defensive strategy of retaliatory countermeasures may be intended to avoid escalation,
friction may instead lead to increasing instability in the cyber realm which could quickly spiral out of control.
America’s new cyber strategy runs the risk of creating a security dilemma in cyber warfare, an arena in which
traditional theories of deterrence are largely inapplicable. According to Healey, there exists a perceived “lack of
restraint” in cyber warfare that gives the attacker a dangerous inherent advantage. In the cyber world,
“defensive success” does not discourage attackers—advantage comes from the use of capabilities, “not their
possession.” Thus, in a domain where cyber capabilities are likely to be used as first-strike weapons, “surprising
your adversary” is much more important, further decreasing the likelihood that signaling will take place. Further
insecurity is created due to rapidly regenerating capabilities in cyberspace, causing any relative superiority
gained by the United States to be inherently fleeting and thus deterring an adversary from responding to
traditional deterrence strategies. In other words, even if the United States were to gain superiority in the cyber
field, it would not last long and would likely encourage other actors to attack the United States using newly
developed cyber technology. For Healey, this is the most destructive factor to any strategy that attempts to
deter escalating conflict.

Cyber war with China will escalate

Danny Vinik, 2015, America’s Secret Arsenal, Politico, December 9,
https://www.politico.com/agenda/story/2015/12/defense-department-cyber-offense-strategy-000331

It seems superfluous to mention, perhaps, but cyberwar with China is war with China. And a war that starts out
in the cyber realm can quickly migrate to other realms.

“I consider the current state of affairs to be extremely volatile and unstable because one could escalate a
cyberwar pretty quickly,” said Sami Saydjari, the founder of the Cyber Defense Agency consulting firm, who has
been working on cyber issues for more than three decades. “You can imagine a scenario where a country
instigates a cyberwarfare-like event but does it in such a way to blame another country, which causes an
escalation between those countries, which accidentally causes a kinetic escalation, which accidentally reaches
the nuclear level. This is not an implausible scenario.”

No laws or norms increase escalation risks

Richard van Hooijdonk, January 20, 2019, https://richardvanhooijdonk.com/blog/en/the-future-of-war-will-be-digital/, The future of war will be digital

Increased connectivity may have made our lives easier and more convenient, but it’s also made us more
vulnerable to cyber-attacks. There practically isn’t a single aspect of modern society that doesn’t rely on the
internet to a certain extent. Some countries have recognised this as an opportunity to inflict damage to their
political opponents without formally declaring war, as evidenced by the increase in the number of attacks
against the world’s digital infrastructure over the years. Does that mean that we’re on the verge of an actual,
full-scale cyberwar? That’s difficult to say at this point. There’s still a great deal of uncertainty about the whole
matter, which is even further complicated by a lack of a legal framework. No one is really sure where the line
that separates a regular cyber-attack from an act of cyberwar is drawn. Some countries are taking advantage
of this to see how far they can push before someone retaliates. And that’s where the danger lies. Until we
have laws that will clarify what cyberwar really is and eliminate grey areas, just about any incident could
theoretically spiral out of control and result in a real-world conflict.

Israel-Hamas escalation

Kate Fazzini, May 6, 2019, https://www.cnbc.com/2019/05/06/israel-conflict-live-response-to-a-cyberattack-will-lead-to-a-shift.html, Israel says it bombed Hamas compound that committed cyberattacks

The Israel Defense Forces said Sunday it responded to a cyberattack from a Hamas-controlled compound in Gaza
with an airstrike, a rare mix of physical and cyber conflict on the world stage.

The cyberattacks emanating from the Gaza facility were aimed at harming Israeli civilians and were thwarted
online before the strike, the IDF said, though they did not immediately release further details about the
cyberattack. Executing the prime minister’s directive, the IDF said Sunday it has conducted attacks on more
than 260 military targets in Gaza, including an assault on what the Israeli military described as a “building
where Hamas cyber operatives work” and a targeted attack against a Palestinian militant commander it says
funneled money to “terror organizations operating within the Gaza Strip.”
Escalation – Answers to: Incentives for Damage Limitation

No incentives to limit damage in a cyber attack

Tarah Wheeler, 2018, ForeignPolicy.com, https://foreignpolicy.com/2018/09/12/in-cyberwar-there-are-no-rules-cybersecurity-war-defense/, In Cyberwar, There are No Rules

There is also a serious risk of collateral damage in cyberoperations. Most militaries understand that they are
responsible not only for targeting strikes so that they hit valid targets but also for civilian casualties caused by
their actions. Though significant collateral damage assessment occurs prior to the United States authorizing
cyberoperations, there is no international agreement requiring other powers to take the same care.

A major cyberattack against the United States in 2014 was a clear example of how civilians can bear the brunt of
such operations. Almost all cybersecurity experts and the FBI believe that the Sony Pictures hack that year
originated in North Korea. A hostile country hit a U.S. civilian target with the intention of destabilizing a major
corporation, and it succeeded. Sony’s estimated cleanup costs were more than $100 million. The conventional
warfare equivalent might look like the physical destruction of a Texas oil field or an Appalachian coal mine. If
such a valuable civilian resource had been intentionally destroyed by a foreign adversary, it would be
considered an act of war.
Offensive Operations Spill Over, Create a Bad Precedent
OCOs set a precedent for other countries
Hakmeh and Moynihan 18 (Joyce Hakmeh (Cyber Research Fellow, International Security Department, and Co-
Editor of the Journal of Cyber Policy) and Harriet Moynihan (Associate Fellow, International Law Programme).
“Offensive Cyberattacks Would Need to Balance Lawful Deterrence and the Risks of Escalation.” Chatham House,
March 23, 2018. https://www.chathamhouse.org/expert/comment/offensive-cyberattacks-would-need-balance-
lawful-deterrence-and-risks-escalation)

Still, the UK is likely to be cautious about launching a cyber offensive as a retaliatory measure. When the UK
announced its plan to develop offensive cyber capacities in 2013, as part of its deterrence strategy, it was the first country to
publicly declare this. The announcement raised eyebrows in some quarters, primarily on the basis that it will make it difficult to
argue against the use of offensive cyber capabilities by other states, such as China and Russia. Moreover, using
offensive cyber in retaliation for an alleged breach of international law could set a precedent in how states react
to similar situations in the future.

OCOs run the risk of escalation


Hakmeh and Moynihan 18 (Joyce Hakmeh, Cyber Research Fellow, International Security Department, and Co-
Editor of the Journal of Cyber Policy, and Harriet Moynihan, Associate Fellow, International Law Programme,
“Offensive Cyberattacks Would Need to Balance Lawful Deterrence and the Risks of Escalation.” Chatham House,
March 23, 2018. https://www.chathamhouse.org/expert/comment/offensive-cyberattacks-would-need-balance-
lawful-deterrence-and-risks-escalation)

Could the destruction of data, the hacking of websites or the periodic interruption of online services constitute a
breach of the prohibition on the use of force? The threshold for what constitutes a ‘use of force’ in terms of
cyber operations is much less clear than in relation to traditional, kinetic weaponry. This is another area where the UN
group have failed to reach agreement, with rejection of the proposed text by a few states (including Cuba, Russia and China) leaving the
process in deadlock. A report from Microsoft has urged states to exercise self-restraint in the conduct of offensive operations,
pointing out that the
ultimate aim of rules guiding offensive action should be to reduce conflict between states.
International law applies to cyber operations as it does to other state activities. But further international
agreement on the way the law applies to these operations would be highly desirable. Meanwhile, the UK will be
mindful of the fact that any use of offensive cyberattacks runs the risk of setting a precedent and escalating what
is already likely to be a politically fragile situation.
Offensive Cyber Operations Violate Norms
OCOs on critical infrastructure are irresponsible and escalatory
Schmitt 19 (Michael Schmitt (Chair of Public International Law at the University of Exeter Law School in the
United Kingdom, Howard S. Levie Professor at the U.S. Naval War College’s Stockton Center for the Study of
International Law, Francis Lieber Distinguished Scholar at the U.S. Military Academy at West Point, Director of
Legal Affairs for Cyber Law International. Member of the editorial board of Just Security). “U.S. Cyber Command,
Russia and Critical Infrastructure: What Norms and Laws Apply?” Just Security, June 18, 2019,
https://www.justsecurity.org/64614/u-s-cyber-command-russia-and-critical-infrastructure-what-norms-and-
laws-apply/)

More to the point, do cyber operations into critical infrastructure abroad violate the rules of the game for
cyberspace? To begin with, they are inconsistent with accepted “norms of responsible State behavior.” For instance, the
DoD Cyber Strategy summary notes that “[t]he United States has endorsed the work done by the UN Group of Governmental Experts on Developments in
the Field of Information and Telecommunications in the Context of International Security (UNGGE) to develop a framework of responsible State behavior
in cyberspace. The
principles developed by the UNGGE include prohibitions against damaging civilian critical
infrastructure during peacetime.” Earlier, in its 2014 submission to the GGE, the United States similarly took the position that “[a] State should
not conduct or knowingly support online activity that intentionally damages critical infrastructure or otherwise impairs the use of critical infrastructure to
provide services to the public.” This position has been echoed repeatedly by other States. The
GGE, including representatives of all five Security Council
permanent members, observed in its 2015 report (which was endorsed by the General Assembly) that “[t]he most
harmful attacks using ICTs [information and communications technologies] include those targeted against the
critical infrastructure and associated information systems of a State. The risk of harmful ICT attacks against
critical infrastructure is both real and serious.” It went on to contend that “[a] State should not conduct or knowingly support ICT activity
contrary to its obligations under international law that intentionally damages critical infrastructure or otherwise impairs the use and operation of critical
infrastructure to provide services to the public” and that States should assist other States that are the target of such operations. These points were
repeated in the 2017 G7 Declaration on Responsible States Behaviour in Cyberspace. The following year, Australia, Canada,
Chile, Estonia, Japan, the Netherlands, New Zealand, the Republic of Korea, and the United Kingdom emphasized in a Joint Statement on Information and
Telecommunications in the Context of International Security that, Despite
the international legal framework governing State
behaviour in cyberspace, many States, either directly or through proxies and non-State actors, undertake
malicious cyber activity directed at the essential systems, infrastructure and democratic processes of other
States. Such behaviour threatens international peace and security, undermines the rules-based international
order on which we all rely for our security, and imperils the benefits that arise from the development of
cyberspace. …. States undertaking these acts do so with flagrant disdain for their obligations, for norms of
appropriate behaviour and with reckless disregard for the consequences. Of course, whether intruding into another State’s
critical infrastructure in an effort to deter the target State’s malicious activities, or to prepare for future conflict with that State, violates this norm of
responsible State behavior is an open question over which reasonable people may disagree. It would seem clear that the
answer is best crafted
on a case-by-case basis, with a difficult-to-rebut presumption that cyber operations involving critical
infrastructure are off the table due to their escalatory potential.
Nuclear War

Offensive cyber usage undermines crisis stability, lowering the nuclear threshold

Cimbala, 2016, Dr. Cimbala (BA, Penn State; MA, PhD, University of Wisconsin–Madison) is Distinguished
Professor of Political Science at Penn State–Brandywine. An award-winning Penn State teacher, he is the author
of numerous works in the fields of nuclear arms control, deterrence, national security policy, and other topics.
He recently authored The New Nuclear Disorder: Challenges to Deterrence and Strategy (Ashgate, 2015). Dr.
Cimbala has served on the editorial boards of academic journals, has consulted for various US government
agencies and contractors, and has contributed to US and foreign media discussions of US national security issues,
Nuclear Deterrence in Cyber-ia: Challenges and Controversies,
https://pdfs.semanticscholar.org/9b47/a0b32cec73fbc10b8fc862e10a068416073f.pdf

Nuclear weapons, whether held back for deterrence or fired in anger, must be incorporated into systems for
C4ISR. The weapons and their C4ISR systems must be protected from attacks both kinetic and digital in nature.
In addition, decision makers who have to manage nuclear forces during a crisis should ideally have the best
possible information about the status of their own nuclear and cyber forces and command systems, about the
forces and C4ISR of possible attackers, and about the probable intentions and risk acceptance of possible
opponents. In short, the task of managing a nuclear crisis demands clear thinking and good information. But
the employment of cyber weapons in the early stages of a crisis could impede clear assessment by creating
confusion in networks and the action channels that depend on those networks.4 The temptation for early
cyber preemption might “succeed” to the point at which nuclear crisis management becomes weaker instead
of stronger. As Andrew Futter has noted, With US and Russian forces ready to be used within minutes and
even seconds of receiving the order, the possibility that weapons might be used by accident (such as the belief
that an attack was underway due to spoofed early warning or false launch commands), by miscalculation (by
compromised communications, or through unintended escalation), or by people without proper authorization
(such as a terrorist group, third party or a rogue commander) is growing. Consequently, in this new nuclear
environment, it is becoming progressively important to secure nuclear forces and associated computer systems
against cyber attack, guard against nefarious outside influence and “hacking,” and perhaps most crucially, to
increase the time it takes and the conditions that must be met before nuclear weapons can be launched.
Ironically, the downsizing of US and post-Soviet Russian strategic nuclear arsenals since the end of the Cold
War, although a positive development from the perspectives of nuclear arms control and nonproliferation,
makes the concurrence of cyber and nuclear attack capabilities more alarming. The enormous and redundant
deployments by the Cold War Americans and Soviets had at least one virtue. Those arsenals provided so much
redundancy against first-strike vulnerability that relatively linear systems for nuclear attack warning,
command and control (C2), and responsive launch under—or after—attack sufficed. At the same time, Cold
War tools for military cyber mischief were primitive compared to those available now. In addition, countries
and their armed forces were less dependent on the fidelity of their information systems for national security.
Thus, the reduction of US, Russian, and possibly other forces to the size of “minimum deterrents” might
compromise nuclear flexibility and resilience in the face of kinetic attacks preceded or accompanied by cyber
war.6 For example, Bruce Blair, nuclear policy expert and author of a number of studies on nuclear C2, has
observed that the communications and computer networks used to control nuclear forces are supposed to be
firewalled against the two dozen nations (including Russia, China and North Korea) with dedicated
computer attack programs and from the thousands of hostile intrusion attempts made every day against U.S.
military computers. But investigations into these firewalls have revealed glaring weaknesses.

Since nuclear weapons are deployed primarily for the purpose of avoiding war by means of deterrence, the
relationship between evolving forms of cyber or information warfare and nuclear crisis management becomes
an important agenda item for analysts and military planners. Either information or cyber warfare has the
potential to attack or to disrupt successful crisis management on each of four important attributes.18 First,
information warfare can muddy the signals being sent from one side to the other in a crisis. This deception can
be done deliberately or inadvertently. Suppose one side plants a virus or worm in the other’s communications
networks.19 The virus or worm becomes activated during the crisis and destroys or alters information. The
missing or altered information may make it more difficult for the cyber victim to arrange a military attack. But
destroyed or altered information may mislead either side into thinking that its signal has been correctly
interpreted when in fact it has not. Thus, side A may intend to signal “resolve” instead of “yield” to its opponent
on a particular issue. Side B, misperceiving a “yield” message, may decide to continue its aggression, meeting
unexpected resistance and causing a much more dangerous situation to develop. Information warfare can also
destroy or disrupt communication channels necessary for successful crisis management. It can do so by
disrupting communication links between policy makers and military commanders during a period of high threat
and severe time pressure. Two kinds of unanticipated problems, from the standpoint of civil-military relations,
are possible under these conditions. First, political leaders may have predelegated limited authority for nuclear
release or launch under restrictive conditions: only when these few conditions obtain, according to the protocols
of predelegation, would military commanders be authorized to employ nuclear weapons distributed within their
command. Clogged, destroyed, or disrupted communications could prevent top leaders from knowing that
military commanders perceived a situation to be far more desperate—and thus permissive of nuclear initiative—
than it really was. For example, during the Cold War, disrupted communications between the US president and
secretary of defense and ballistic missile submarines, once the latter came under attack, could have resulted in a
joint decision by submarine officers and crew to launch in the absence of contrary instructions. Second,
information warfare during a crisis will almost certainly increase the time pressure under which political leaders
operate. It may do so literally, or it may affect the perceived time lines within which the policy-making process
can make its decisions. Once either side sees parts of its command, control, and communications system being
subverted by phony information or extraneous cyber noise, its sense of panic at the possible loss of military
options will be enormous. In the case of US Cold War nuclear war plans, for example, disruption of even portions
of the strategic command, control, and communications system could have prevented competent execution of
parts of the Single Integrated Operational Plan (the strategic nuclear war plan). The plan depended upon finely
orchestrated time-on-target estimates and precise damage expectancies against various classes of targets.
Partially misinformed or disinformed networks and communications centers would have led to redundant
attacks against the same target sets and, quite possibly, unplanned attacks on friendly military or civilian
installations. A third potentially disruptive effect of information warfare on nuclear crisis management is that
such warfare may reduce the search for available alternatives to the few and desperate. Policy makers searching
for escapes from crisis denouements need flexible options and creative problem solving. Victims of information
warfare may have a diminished ability to solve problems routinely, let alone creatively, once information
networks are filled with flotsam and jetsam. Questions to operators will be poorly posed, and responses (if
available at all) will be driven toward the least common denominator of previously programmed standard
operating procedures. Retaliatory systems that depend on launch-on-warning instead of survival after riding out
an attack are especially vulnerable to reduced time cycles and restricted alternatives.
The propensity to search for the first available alternative that meets minimum satisfactory conditions of goal
attainment is strong enough under normal conditions in nonmilitary bureaucratic organizations.20 In civil-
military C2 systems under the stress of nuclear crisis decision making, the first available alternative may quite
literally be the last—or so policy makers and their military advisers may persuade themselves. Accordingly, the
bias toward prompt and adequate solutions is strong. During the Cuban missile crisis, for example, a number of
members of the presidential advisory group continued to propound an air strike and invasion of Cuba during the
entire 13 days of crisis deliberation. Had less time been available for debate and had President Kennedy not
deliberately structured the discussion in a way that forced alternatives to the surface, the air strike and invasion
might well have been the chosen alternative. Fourth—and finally on the issue of crisis management—
information warfare can cause flawed images of each side’s intentions and capabilities to be conveyed to the
other, with potentially disastrous results. Another example from the Cuban missile crisis demonstrates the
possible side effects of simple misunderstanding and noncommunication on US crisis management. At the
tensest period of the crisis, a U-2 reconnaissance aircraft got off course and strayed into Soviet airspace. US
and Soviet fighters scrambled, and a possible Arctic confrontation of air forces loomed. Khrushchev later told
Kennedy that Soviet air defenses might have interpreted the U-2 flight as a prestrike reconnaissance mission
or as a bomber, calling for a compensatory response by Moscow.21 Fortunately, the Soviet leadership chose
to give the United States the benefit of the doubt in this instance and to permit US fighters to escort the
wayward U-2 back to Alaska. Why this scheduled U-2 mission was not scrubbed once the crisis began has
never been fully revealed; the answer may be as simple as bureaucratic inertia compounded by
noncommunication down the chain of command by policy makers who failed to appreciate the risk of
“normal” reconnaissance under these extraordinary conditions.
Increasing Offensive Warfare and War Increases Cyber War

The root cause of cyber conflicts is war

Maness & Valeriano, 2015, Ryan C. Maness, Northeastern University, Department of Political Science, Brandon
Valeriano, University of Glasgow, Cyber War versus Cyber Realities: Cyber Conflict in the International System,
Kindle Edition, page number at end of card

This leads to another issue that we raise throughout this volume: the need for the settlement of the root
causes of conflict. Cyber conflicts are not disconnected from the normal international relations policy sphere.
International cyber operations are directly connected to the long history of interactions between states.
Traditional security rivals extend to cyberspace. Ignoring this process misses the root causes of cyber conflicts
and instead commits the error of focusing on the tactic rather than the fundamental issues of disagreement
between states. Valeriano, Brandon; Maness, Ryan C. (2015-04-27). Cyber War versus Cyber Realities: Cyber
Conflict in the International System (p. 210). Oxford University Press. Kindle Edition.
Implants Bad

Implants risk preemptive attacks

Fred Kaplan, June 17, 2019, We’ve Entered a New Age of Cyberwar,
https://slate.com/news-and-politics/2019/06/trump-cyber-russia-power-hacking.html

It’s this instantaneity that creates a danger. If a lot of countries are inside one another’s networks, if they’re all
able to shift from just-looking-around to unleashing-an-attack in no time, and if these countries are capable of
launching an attack and are susceptible to receiving an attack, then this creates a hair trigger. In a crisis, one or
more of these countries might launch a cyberattack, if just to preempt one of the other countries from doing it
first. The very existence of the implants makes a preemptive attack more likely. According to the Times, the U.S.
and Russia have those implants—those computer codes—in place; they’re ready to turn on.
General Cyber Deterrence Answers

Cyber Deterrence isn’t credible

Austin Long, 2018, Austin Long is a Senior Political Scientist at the Rand Corporation. His research interests
include low-intensity conflict, intelligence, military operations, nuclear forces, military innovation, and the
political economy of national security. Long previously was an Associate Professor at Columbia University’s
School of International and Public Affairs. He also was an analyst and adviser to the U.S. military in Iraq (2007–
08) and Afghanistan (2011 and 2013). He was a Council on Foreign Relations International Affairs Fellow in
Nuclear Security, serving in the Joint Staff J5. Bytes, Bombs, and Spies (p. 423). Brookings Institution Press.
Kindle Edition.

Reflecting this uncertainty, the variance in the damage inflicted by a cyberattack is likely to be greater than by a
kinetic attack.24 In other words, the distribution of damage that would be inflicted by many types of
cyberattacks is likely less tightly clustered around the hoped-for or planned damage than the damage from many
types of kinetic attacks. Uncertainty about the damage a cyberattack would inflict could make kinetic threats
more effective deterrents than cyber threats. The effectiveness of deterrent threats depends on a state’s ability
to carry out the threat: deterrence by denial is less likely to succeed if one’s adversary believes a threatened
response will not achieve its military objective; and deterrence by punishment is likely to fail if the adversary
doubts the state’s attack will inflict the promised damage. Moreover, except in an all-out war, a state will want
to be confident that its attack will not inflict more damage than intended, because doing so could raise the
probability that the adversary would escalate still further. For both of these reasons—credibility and escalation
control—cyberattacks appear to be less effective than kinetic attacks as a deterrent.

Cyber deterrence risks escalation

Jason Healey, 2018, Jason Healey is Senior Research Scholar at Columbia University’s School of International and
Public Affairs, specializing in cyber conflict and risk. He started his career as a U.S. Air Force intelligence officer
before moving to cyber response and policy jobs at the White House and Goldman Sachs. He was Founding
Director for Cyber Issues at the Atlantic Council, where he remains a Senior Fellow and is the editor of the first
history of conflict in cyberspace, A Fierce Domain: Cyber Conflict, 1986 to 2012. He served as Vice Chair of the
Financial Services Informational Sharing and Analysis Center (FS-ISAC) and helped create the world’s first cyber
command in 1998. He is on the DEF CON review board and the Defense Science Board task force on cyber
deterrence. Bytes, Bombs, and Spies (pp. 419-420). Brookings Institution Press. Kindle Edition.

Together these points highlight a critical but overlooked element of escalatory dynamics: deterrence works very
differently if your adversary is certain it is striking back, not first. This situation is not new to cyberspace, as
noted by Richard Betts: “Deterrence is less reliable when both sides in a conflict see each other as the
aggressor.… The side that we want to deter may see itself as trying to deter us. These situations are ripe for
miscalculation.”42 This idea is similar to Libicki’s reference to an “effect opposite to the one intended,” where
brandishing capabilities leads not to deterrence but to escalation. Perhaps the most common form of such
miscalculation is a classic security dilemma.43 Adversaries see each other building and using cyber capabilities
and instead enter a spiral of escalation. Still, this escalation is quite rational. Each adversary perceives (perhaps
correctly) the buildup to be directed at it and invests ever more in building capabilities that will help it catch
up. Every new headline—about China’s cyber espionage, U.S. cyber organizations’ capability and surveillance,
Russia’s use of cyber means to bully its neighbors—just escalates the spiral higher and higher. This spiral of
escalation is fed by both emotion and the unique dynamics of cyberspace. What if seeing cyber capabilities,
especially if you are their target, leads not to fear but to anger? After all, anger often leads to optimistic
judgments such as those about the value of retaliating.44 This optimism bias feeds the natural tendency of
national security hawks, whose “preference for military action over diplomacy is often built upon the
assumption that victory will come swiftly and easily.”45 Getting “just enough” fear is a hard effect to
calibrate in the best of times, especially in a new and poorly understood area like cyber conflict. Brandishing
cyber capabilities to deliberately induce fear in the leaders of another state may overshoot that target and
cause anger and a “damaging sense of paranoia” instead, feeding the adversary’s own hawks.46 Moreover,
“conflict often hardens attitudes and drives people to extreme positions.”47 Bytes, Bombs, and Spies (p. 187).
Brookings Institution Press. Kindle Edition.

Brandishing a cyber weapon makes it more likely that adversaries will develop capabilities to
combat it

Jason Healey, 2018, Jason Healey is Senior Research Scholar at Columbia University’s School of International and
Public Affairs, specializing in cyber conflict and risk. He started his career as a U.S. Air Force intelligence officer
before moving to cyber response and policy jobs at the White House and Goldman Sachs. He was Founding
Director for Cyber Issues at the Atlantic Council, where he remains a Senior Fellow and is the editor of the first
history of conflict in cyberspace, A Fierce Domain: Cyber Conflict, 1986 to 2012. He served as Vice Chair of the
Financial Services Informational Sharing and Analysis Center (FS-ISAC) and helped create the world’s first cyber
command in 1998. He is on the DEF CON review board and the Defense Science Board task force on cyber
deterrence. Bytes, Bombs, and Spies (pp. 419-420). Brookings Institution Press. Kindle Edition.

A loud shout in cyberspace has been considered difficult, since revealing a cyber capability provides the target
with suggestions for how to defeat it. Martin Libicki and others have highlighted the many difficulties involved
in basing deterrence (or coercion) on particularly threatening capabilities, not least that “brandishing a
cyberwar capability, particularly if specific, makes it harder to use such a capability because brandishing is
likely to persuade the target to redouble its efforts to find or route around the exploited flaw.”11 There are,
he points out, no May Day parades to flaunt capability. Nor, in another frequent analogy, are there mushroom
clouds over Bikini Atoll. Bytes, Bombs, and Spies (p. 177). Brookings Institution Press. Kindle Edition.

Demonstrating capabilities against Iran caused escalating weapons development and cyber
operations

Jason Healey, 2018, Jason Healey is Senior Research Scholar at Columbia University’s School of International and
Public Affairs, specializing in cyber conflict and risk. He started his career as a U.S. Air Force intelligence officer
before moving to cyber response and policy jobs at the White House and Goldman Sachs. He was Founding
Director for Cyber Issues at the Atlantic Council, where he remains a Senior Fellow and is the editor of the first
history of conflict in cyberspace, A Fierce Domain: Cyber Conflict, 1986 to 2012. He served as Vice Chair of the
Financial Services Informational Sharing and Analysis Center (FS-ISAC) and helped create the world’s first cyber
command in 1998. He is on the DEF CON review board and the Defense Science Board task force on cyber
deterrence. Bytes, Bombs, and Spies (pp. 419-420). Brookings Institution Press. Kindle Edition.

But despite being on the end of such a fear-inducing cyber weapon, the Iranians did not back down; instead
they accelerated development of their own capabilities. According to the four-star general then overseeing U.S.
Air Force cyber operations, the Iranian response to Stuxnet meant, “They are going to be a force to be reckoned
with.”18 Likewise, Forbes reported that “U.S. researchers have repeatedly claimed the Middle Eastern nation
has expanded its cyber divisions at a startling pace since the uncloaking of Stuxnet.” Worse for the
conjecture, Iran did not just create new cyber capabilities, but used them to counterattack the U.S. financial
sector, “most likely in retaliation for economic sanctions and online attacks by the United States.”20 These back-
and-forth attacks by both sides continued for years. The next significant cycle started in April 2012, when a
damaging “Wiper” worm forced Iran to take some oil wells offline and wiped the hard drives of computers in its
energy sector.21 The attack was most likely the work of Israel, which was keen to disrupt the Iranian economy.
The Iranian response was symmetrical, with a nearly identical wiper attack, called Shamoon, which disrupted
30,000 computers at Saudi Aramco and more a few days later at RasGas.22 Instead of recognizing that the
Iranian attacks were a response to an earlier attack against it, the U.S. defense secretary, Leon Panetta, instead
called them “a significant escalation of the cyber threat [that] have renewed concerns about still more
destructive scenarios that could unfold.”23 Later attacks by Iran targeted U.S. financial companies and a major
casino.24 These attacks were then used in speeches and testimony to push for increasing U.S. capabilities.
Bytes, Bombs, and Spies (p. 180). Brookings Institution Press. Kindle Edition.

Cyber swagger doesn’t deter because capabilities are easy to counter

Jason Healey, 2018, Jason Healey is Senior Research Scholar at Columbia University’s School of International and
Public Affairs, specializing in cyber conflict and risk. He started his career as a U.S. Air Force intelligence officer
before moving to cyber response and policy jobs at the White House and Goldman Sachs. He was Founding
Director for Cyber Issues at the Atlantic Council, where he remains a Senior Fellow and is the editor of the first
history of conflict in cyberspace, A Fierce Domain: Cyber Conflict, 1986 to 2012. He served as Vice Chair of the
Financial Services Informational Sharing and Analysis Center (FS-ISAC) and helped create the world’s first cyber
command in 1998. He is on the DEF CON review board and the Defense Science Board task force on cyber
deterrence. Bytes, Bombs, and Spies (pp. 419-420). Brookings Institution Press. Kindle Edition.

Below the threshold of death and destruction, there is ample evidence that the Iranians, Chinese, and Russians
saw U.S. cyber organizations, capabilities, and operations as a challenge to be risen to, not one from which to
back away. Indeed, the positive feedback of tit-for-tat may be the dominant dynamic of gray-zone cyber conflict.
At best, fear of U.S. cyber capabilities may have led to short-term tamping down of adversary operations until
they were able to compete on more equal terms. Given the low cost of developing cyber capabilities, in
comparison with the cost of developing more traditional military options, it did not take long for other
countries to join the fray. In particular, both Iran and North Korea increased their relative capability far more
quickly than many analysts expected. This speed of convergence in capabilities may mean that any equivalent
of the “missile gap” or “bomber gap” can be promptly closed, making U.S. military goals of “cyberspace
superiority” futile.35 When it is easy to close gaps, swaggering is counterproductive. Bytes, Bombs, and Spies
(pp. 183-184). Brookings Institution Press. Kindle Edition.

No deterrence, tit-for-tat escalation, and with nuclear deterrence, we just got lucky

Jason Healey, 2018, Jason Healey is Senior Research Scholar at Columbia University’s School of International and
Public Affairs, specializing in cyber conflict and risk. He started his career as a U.S. Air Force intelligence officer
before moving to cyber response and policy jobs at the White House and Goldman Sachs. He was Founding
Director for Cyber Issues at the Atlantic Council, where he remains a Senior Fellow and is the editor of the first
history of conflict in cyberspace, A Fierce Domain: Cyber Conflict, 1986 to 2012. He served as Vice Chair of the
Financial Services Informational Sharing and Analysis Center (FS-ISAC) and helped create the world’s first cyber
command in 1998. He is on the DEF CON review board and the Defense Science Board task force on cyber
deterrence. Bytes, Bombs, and Spies (pp. 419-420). Brookings Institution Press. Kindle Edition.

Not Deterrence but Tit-for-Tat
The United States has spent billions of dollars to develop cyber organizations and
capabilities, in part hoping these would be fearsome enough to intimidate adversaries. Why has more traditional
cyber deterrence seemingly caused more reaction than restraint? Certainly the famous claim that it is difficult to
attribute cyberattacks falls flat: neither Iran nor the United States had much doubt about who its adversary
was. It is also likely that the U.S. cyber community is misreading the stabilizing impact of organizational
capability. As quoted earlier, Admiral Rogers argued that building warfighting “nuclear forces and the policy
and support structures” made deterrence “predictable,” decreasing tensions and making crises less likely, and
that cyber operations could do that as well. Perhaps, but the nuclear standoff was not as stable as Admiral
Rogers proposes, if one considers the existential scares caused by the Cuban Missile Crisis in 1962 and the
Able Archer wargame and Soviet false alarm that warned of incoming U.S. missiles in 1983.40 Bytes, Bombs,
and Spies (pp. 185-186). Brookings Institution Press. Kindle Edition.

Multiple reasons cyber ops don’t deter

Schneider, October 1, 2019, Jacquelyn Schneider is a Hoover Fellow with the Hoover Institution at Stanford
University and a nonresident fellow at the Naval War College’s Cyber and Innovation Policy Institute., Are cyber-
operations a U.S. retaliatory option for the Saudi oil field strikes? Would such action deter Iran?

As other scholars have noted, the best deterrence signals are ones that are costly, visible and credible. Here’s
why cyber-operations often fail this test: They may be hard to detect, hard to attribute to their source and hard
to turn into a credible threat, because they may rely on vulnerabilities that are easy to plug if the target knows
about them.

Turn – Viruses escape and our enemies gain access to them. They can also be reverse
engineered by enemies

Richard van Hooijdonk, January 20, 2019, The future of war will be digital,
https://richardvanhooijdonk.com/blog/en/the-future-of-war-will-be-digital/

To breach Iran’s network, the attackers first infected several computers that were outside of the network but
believed to be connected to it, hoping they’d spread the infection further. While ultimately successful, this
approach had one unintended consequence – the infection spread far beyond the original target and affected
computers all over the world. And that’s one of the main problems associated with cyber weapons. Their
creators can easily lose control of them and cause far more damage than initially intended. Furthermore,
cyber weapons leave traces and can later be analysed, reverse engineered, and used against the country that
developed them. The US learned this the hard way when the hacking collective called the Shadow Brokers
somehow obtained and then leaked highly classified information about cyber weapons stockpiled by the
National Security Agency (NSA). These were later used by various hacker groups to attack a wide variety of
targets within the United States and all over the world.

Turn – destroying adversaries’ networks means we can’t spy on them

Brandon Valeriano and Benjamin Jensen, CATO institute, January 15, 2019, The Myth of the Cyber Offense: The
Case for Restraint, https://www.cato.org/publications/policy-analysis/myth-cyber-offense-case-restraint,
Brandon Valeriano is the Donald Bren Chair of Armed Politics at Marine Corps University. Benjamin Jensen is an
associate professor at the Marine Corps University and a scholar-in-residence at American University's School of
International Service.

Cyber operations rarely work in isolation, and when they do, they tend to involve very sophisticated capabilities
that impose costs and risks on the attacker.36 Because such attacks can degrade or even destroy the target’s
networks and operations in the short term, they can also undermine espionage operations that rely on
gathering information over the long term. Degradation attacks therefore make up the minority (14.76 percent)
of documented operations between rival states. The majority of cyber operations were limited disruptions and
espionage.

Offensive cyber operations do not create deterrence

Brandon Valeriano and Benjamin Jensen, CATO institute, January 15, 2019, The Myth of the Cyber Offense: The
Case for Restraint, https://www.cato.org/publications/policy-analysis/myth-cyber-offense-case-restraint,
Brandon Valeriano is the Donald Bren Chair of Armed Politics at Marine Corps University. Benjamin Jensen is an
associate professor at the Marine Corps University and a scholar-in-residence at American University's School of
International Service.

To date, cyber operations do not appear to produce concessions by themselves. Offense, whether disruption,
espionage, or degradation, does not produce lasting results sufficient to change the behavior of a target
state.30 Only 11 operations (4 percent) appear to have produced even a temporary political concession, with
the majority associated with sustained, multiyear counterespionage operations by U.S. operatives usually
targeting China or Russia.31 Furthermore, each of these operations involved not just cyber actions, but other
instruments of national power, such as diplomatic negotiations, economic sanctions, and military threats.32

Other ways to deter cyber attacks

Greenberg, 2019, Andy Greenberg is a senior writer for WIRED, covering security, privacy, information freedom,
and hacker culture. He’s the author of the forthcoming book Sandworm: A New Era of Cyberwar and the Hunt
for the Kremlin's Most Dangerous Hackers, out November 5, 2019, June 18,
https://www.wired.com/story/russia-cyberwar-escalation-power-grid/, How Not To Prevent a Cyberwar With
Russia

Bossert didn't confirm or deny the facts of the Times' grid-hacking report, but criticized current Trump officials
for not doing enough to deter cyberattacks from adversaries like Russia with other, more traditional means, such
as diplomacy or economic incentives and punishments. While the Trump administration imposed new sanctions
on Russia for grid-hacking and its unprecedented NotPetya cyberattack during Bossert's term, it's not clear what
if any similar measures the White House or State Department has pursued since. "I do not think they’re
sufficiently thinking through our other levers of national power, to explain what’s unacceptable and then to
start threatening or imposing consequences or inducements—carrots or sticks—to change [Russia's]
behavior," says Bossert, who has since taken a position at an as yet unnamed cybersecurity startup. "I don’t
mind escalatory bravado to some degree. But I’d be furious if that’s all we did."

Offensive deterrence approaches risk warfare

Greenberg, 2019, Andy Greenberg is a senior writer for WIRED, covering security, privacy, information freedom,
and hacker culture. He’s the author of the forthcoming book Sandworm: A New Era of Cyberwar and the Hunt
for the Kremlin's Most Dangerous Hackers, out November 5, 2019, June 18,
https://www.wired.com/story/russia-cyberwar-escalation-power-grid/, How Not To Prevent a Cyberwar With
Russia

Obama administration cybersecurity coordinator J. Michael Daniel echoed that warning, arguing that if Trump
administration and Cyber Command are indeed taking a more offensive approach to penetrating Russia's grid,
they're doing so without truly knowing the potential consequences. "This is uncharted territory in many ways.
Are we setting ourselves up for a pre-World War I situation, where activities that are designed to deter instead
prompt a response," says Daniel, now the president of the nonprofit Cyber Threat Alliance. "Are these activities
so threatening to countries that they have to take action against them? I think this is still very much an
undecided." "I think the possibility for accidents and miscalculation is high here." Even if Cyber Command
restrains itself to merely gaining access to Russian networks and placing malware "implants" that could cause
disruption without ever pulling the trigger, the threat alone would no doubt convince the Kremlin it had to
maintain the same access to American utilities' networks. After all, Russia's hackers have already
demonstrated perhaps the world's most aggressive targeting of foreign electric utility networks, triggering
blackouts in Ukraine in 2015 and 2016, and gaining deep access to American utilities' industrial control
systems in 2017.

Risks accidents and miscalculation

Greenberg, 2019, Andy Greenberg is a senior writer for WIRED, covering security, privacy, information freedom,
and hacker culture. He’s the author of the forthcoming book Sandworm: A New Era of Cyberwar and the Hunt
for the Kremlin's Most Dangerous Hackers, out November 5, 2019, June 18,
https://www.wired.com/story/russia-cyberwar-escalation-power-grid/, How Not To Prevent a Cyberwar With
Russia
"The idea that we’re going to put implants in the Russian grid and they won't do the same to us is silly," Daniel
says, while emphasizing that, like Bossert, he has no independent knowledge of such activities beyond the
Times' story. Even the notion of trying to deter Russia by hacking their grid to the same degree that they've
hacked ours introduces serious potential for unintended consequences. "If the argument is that we’re going to
hold each other’s grids at risk, and that’s inherently more stabilizing, I’m not sure the theory holds entirely. I
think the possibility for accidents and miscalculation is high here." One very plausible miscalculation would
be if US Cyber Command were to penetrate Russian grid networks only to "prepare the battlefield," building
the capability to cause a blackout in Russia with no immediate intention to do so, but Russians misinterpreted
the intrusion as an immediate threat. Georgetown University professor Ben Buchanan calls this dangerous
ambiguity "the cybersecurity dilemma" in his book by the same name. "When you’re on the receiving end of a
hack, it’s very hard to determine the intention of the intruders," he says. "Genuinely attacking and building the
option to attack later on, which is probably what’s happening here, are very hard to disentangle."

Cyber attacks from other countries are better than warfare

Brandon Valeriano and Benjamin Jensen, CATO institute, January 15, 2019, The Myth of the Cyber Offense: The
Case for Restraint, https://www.cato.org/publications/policy-analysis/myth-cyber-offense-case-restraint,
Brandon Valeriano is the Donald Bren Chair of Armed Politics at Marine Corps University. Benjamin Jensen is an
associate professor at the Marine Corps University and a scholar-in-residence at American University's School of
International Service.

Cyber operations also offer a means of signaling future escalation risk as well as a cross-domain release valve for
crises. Rival states use cyber operations as a substitute for riskier military operations. Consider the standoff
between Russia and Turkey in 2015. After a Turkish F-16 shot down a Russian Su-24 Fencer, a wave of DDoS
attacks hit Turkish state-owned banks and government websites.41 Similarly, China is responding to U.S. tariffs
and increased freedom of navigation operations—provocatively sailing U.S. warships in waters that China
claims—with increased cyber activity targeting military networks.42 Russia is using a broad-front cyber
campaign in response to Western sanctions, infiltrating targets ranging from the anti-doping agencies and
sports federations to Westinghouse, which builds nuclear power plants, and the Hague-based Organization for
the Prohibition of Chemical Weapons.43 Rather than escalate with conventional military operations, cyber
operations offer rivals a way to respond to provocations without significantly increasing tensions in a crisis.
Better to have a Russian DDoS attack temporarily shut down Turkish networks than for Russian long-range
missiles to target Turkish military bases.

Targeting civilian infrastructure is immoral and provides a blueprint for adversaries’ attacks

Greenberg, 2019, Andy Greenberg is a senior writer for WIRED, covering security, privacy, information freedom,
and hacker culture. He’s the author of the forthcoming book Sandworm: A New Era of Cyberwar and the Hunt
for the Kremlin's Most Dangerous Hackers, out November 5, 2019, June 18,
https://www.wired.com/story/russia-cyberwar-escalation-power-grid/, How Not To Prevent a Cyberwar With
Russia

He points out that any grid-hacking techniques the US might use against Russia could potentially be turned
back on the US or its allies, providing a blueprint for sophisticated sabotage of the West's far more digitized
economy. But even beyond that concern, he argues that callously treating civilians as the collateral damage of a
cyberattack that could black out homes, schools, and hospitals is an unnecessary and immoral step for American
hackers. "It will blow back. But I don’t oppose it because it will blow back. I oppose it because it’s not ethical,"
Lee says. "I don't think it's in keeping with the kind of country we want to be."

Deterrence can easily break down, triggering preemptive wars

Fred Kaplan, June 17, 2019, We’ve Entered a New Age of Cyberwar,
https://slate.com/news-and-politics/2019/06/trump-cyber-russia-power-hacking.html

Richard Clarke, the former cybersecurity chief in President Bill Clinton’s White House and co-author of a
forthcoming book on cyberwar called The Fifth Domain, said in an email, “The Trump administration may be
trying to create a situation of Mutually Assured Destruction, similar to the 1960s strategic nuclear doctrine.”
However, Clarke added, “Cyber is different in many ways.” First is the issue of what strategists call “crisis
instability”—the hair-trigger situation, in which one side might launch an attack, in order to preempt the
other side launching an attack. There is also the uncertainty of “attribution”—the country attacked might not
know for certain who planted the malicious code and might mistakenly strike back at an innocent party, thus
triggering an inadvertent war.

Cyber collapses nuclear deterrence


Cimbala, PhD, is Distinguished Professor of Political Science at Penn State University–Brandywine, ‘11
(Stephen J., “Nuclear Crisis Management and “Cyberwar”: Phishing for Trouble?” Strategic Studies Quarterly,
Spring)

Notwithstanding the preceding disclaimers, information warfare has the potential to attack or disrupt successful
crisis management on each of four dimensions. First, it can muddy the signals being sent from one side to the
other in a crisis. This can be done deliberately or inadvertently. Suppose one side plants a virus or worm in the
other’s communications networks.19 The virus or worm becomes activated during the crisis and destroys or
alters information. The missing or altered information may make it more difficult for the cyber victim to arrange
a military attack. But destroyed or altered information may mislead either side into thinking that its signal has
been correctly interpreted when it has not. Thus, side A may intend to signal “resolve” instead of “yield” to its
opponent on a particular issue. Side B, misperceiving a “yield” message, may decide to continue its aggression,
meeting unexpected resistance and causing a much more dangerous situation to develop. Infowar can also
destroy or disrupt communication channels necessary for successful crisis management. One way it can do this
is to disrupt communication links between policymakers and military commanders during a period of high threat
and severe time pressure. Two kinds of unanticipated problems, from the standpoint of civil-military relations,
are possible under these conditions. First, political leaders may have predelegated limited authority for nuclear
release or launch under restrictive conditions; only when these few conditions obtain, according to the
policymaking process can make its decisions. Once either side sees parts of its command, control, and
communications (C3) system being subverted by phony information or extraneous cyber noise, its sense of panic
at the possible loss of military options will be enormous. In the case of US Cold War nuclear war plans, for
example, disruption of even portions of the strategic C3 system could have prevented competent execution of
parts of the SIOP (the strategic nuclear war plan). The SIOP depended upon finely orchestrated time-on-target
estimates and precise damage expectancies against various classes of targets. Partially misinformed or
disinformed networks and communications centers would have led to redundant attacks against the same target
sets and, quite possibly, unplanned attacks on friendly military or civilian installations. A third potentially
disruptive effect of infowar on nuclear crisis management is that it may reduce the search for available
alternatives to the few and desperate. Policymakers searching for escapes from crisis denouements need
flexible options and creative problem solving. Victims of information warfare may have a diminished ability to
solve problems routinely, let alone creatively, once information networks are filled with flotsam and jetsam.
Questions to operators will be poorly posed, and responses (if available at all) will be driven toward the least
common denominator of previously programmed standard operating procedures. Retaliatory systems that
depend on launch-on-warning instead of survival after riding out an attack are especially vulnerable to reduced
time cycles and restricted alternatives: A well-designed warning system cannot save commanders from
misjudging the situation under the constraints of time and information imposed by a posture of launch on
warning. Such a posture truncates the decision process too early for iterative estimates to converge on reality.
Rapid reaction is inherently unstable because it cuts short the learning time needed to match perception with
reality.20 The propensity to search for the first available alternative that meets minimum satisfactory conditions
of goal attainment is strong enough under normal conditions in nonmilitary bureaucratic organizations.21 In
civil-military command and control systems under the stress of nuclear crisis decision making, the first available
alternative may quite literally be the last; or so policymakers and their military advisors may persuade
themselves. Accordingly, the bias toward prompt and adequate solutions is strong. During the Cuban missile
crisis, a number of members of the presidential advisory group continued to propound an air strike and invasion
of Cuba during the entire 13 days of crisis deliberation. Had less time been available for debate and had
President Kennedy not deliberately structured the discussion in a way that forced alternatives to the surface, the
air strike and invasion might well have been the chosen alternative.22 Fourth and finally on the issue of crisis
management, infowar can cause flawed images of each side’s intentions and capabilities to be conveyed to the
other, with potentially disastrous results. Another example from the Cuban crisis demonstrates the possible side effects of simple
misunderstanding and noncommunication on US crisis management. At the most tense period of the crisis, a U-2 reconnaissance aircraft got off course
and strayed into Soviet airspace. US and Soviet fighters scrambled, and a possible Arctic confrontation of air forces loomed. Khrushchev later told Kennedy
that Soviet air defenses might have interpreted the U-2 flight as a prestrike reconnaissance mission or as a bomber, calling for a compensatory response by
Moscow.23 Fortunately Moscow chose to give the United States the benefit of the doubt in this instance and to permit US fighters to escort the wayward
U-2 back to Alaska. Why this scheduled U-2 mission was not scrubbed once the crisis began has never been fully revealed; the answer may be as simple as
bureaucratic inertia compounded by noncommunication down the chain of command by policymakers who failed to appreciate the risk of “normal”
reconnaissance under these extraordinary conditions. Further Issues and Implications: The outcome of a nuclear crisis
management scenario influenced by information operations may not be a favorable one. Despite the best
efforts of crisis participants, the dispute may degenerate into a nuclear first use or first strike by one side and
retaliation by the other. In that situation, information operations by either, or both, sides might make it more
difficult to limit the war and bring it to a conclusion before catastrophic destruction and loss of life had taken
place. Although there are no such things as “small” nuclear wars, compared to conventional wars, there can be
different kinds of “nuclear” wars in terms of their proximate causes and consequences.24 Possibilities include a
nuclear attack from an unknown source; an ambiguous case of possible, but not proved, nuclear first use; a
nuclear “test” detonation intended to intimidate but with no immediate destruction; and a conventional strike
mistaken, at least initially, for a nuclear one. As George Quester has noted: The United States and other powers
have developed some very large and powerful conventional warheads, intended for destroying the hardened
underground bunkers that may house an enemy command post or a hard-sheltered weapons system. Such
“bunker-buster” bombs radiate a sound signal when they are used and an underground seismic signal that could
be mistaken from a distance for the signature of a small nuclear warhead.25 The dominant scenario of a general nuclear war
between the United States and the Soviet Union preoccupied Cold War policymakers, and under that assumption concerns about escalation control and
war termination were swamped by apocalyptic visions of the end of days. The second nuclear age, roughly coinciding with the end of the Cold War and the
demise of the Soviet Union, offers a more complicated menu of nuclear possibilities and responses.26 Interest in the threat or use of nuclear weapons by
rogue states, by aspiring regional hegemons, or by terrorists abetted by the possible spread of nuclear weapons among currently nonnuclear weapons
states stretches the ingenuity of military planners and fiction writers. In
addition to the world’s worst characters engaged in nuclear
threat of first use, there is also the possibility of backsliding in political conditions, as between the United States
and Russia, or Russia and China, or China and India (among current nuclear weapons states). Politically
unthinkable conflicts of one decade have a way of evolving into the politically unavoidable wars of another—
World War I is instructive in this regard. The war between Russia and Georgia in August 2008 was a reminder
that local conflicts on regional fault lines between blocs or major powers have the potential to expand into
worse.

Offense focus increases tensions, heightens vulnerabilities and is more expensive than
defense
Slayton 17 (Rebecca Slayton is Assistant Professor at Cornell University with a joint appointment in the Science
and Technology Studies Department and the Judith Reppy Institute for Peace and Conflict Studies, “Why Cyber
Operations Do Not Always Favor the Offense,” POLICY BRIEF - Quarterly Journal: International Security,
https://www.belfercenter.org/publication/why-cyber-operations-do-not-always-favor-offense)

The assumption that cyberspace favors the offense is widespread among policymakers and analysts, many of whom use
this assumption as an argument for prioritizing offensive cyber operations. Faith in offense dominance is understandable: breaches
of information systems are common, ranging from everyday identity theft to well-publicized hacks on the Democratic National Committee. A focus on
offense, however, increases international tensions and states’ readiness to launch a counter-offensive after a
cyberattack, and it often heightens cyber vulnerabilities. Meanwhile, belief in cyber offense dominance is not based on a clear
conception or empirical measurement of the offense-defense balance. One useful conception of the cyber offense-defense balance is based on cost-
benefit analysis: What is the benefit of offense less the cost of offense, relative to the benefit of defense less the cost
of defense? The technological complexity of cyberspace does tend to increase the costs of defense, but the costs of
offense and defense are ultimately shaped by the complexity of the goals of offense and defense and
organizations’ capabilities in managing this complexity. Organizational skill can shift the costliness of cyber operations toward the
defense. Further, whereas breaching information systems is easy and can be done at relatively low cost, achieving physical effects is far more
difficult and costly. Meanwhile, the benefits of cyber operations are highly situational and subjective. Thus, claims that
all of cyberspace is offense dominant obscure crucial differences between distinctive kinds of operations and the
ways they are valued; such claims should be avoided. It only makes sense to discuss the offense-defense balance of specific cyber
operations with specific goals, between specific adversaries with distinctive capabilities.
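Slayton’s cost-benefit framing in the card above can be written out as a simple inequality. The symbols below are illustrative shorthand for her verbal formulation, not notation drawn from the policy brief itself:

```latex
% Offense holds the advantage for a given operation only if its net
% payoff exceeds the net payoff of the corresponding defense:
%   B_O, C_O = benefit and cost of the offensive operation
%   B_D, C_D = benefit and cost of the defense against it
\[
\underbrace{(B_O - C_O)}_{\text{net payoff of offense}}
\;>\;
\underbrace{(B_D - C_D)}_{\text{net payoff of defense}}
\]
% Slayton's point: all four terms are situational (shaped by the goals'
% complexity and each organization's skill), so the sign of this
% inequality varies operation by operation; it is not a fixed property
% of cyberspace as a whole.
```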

Prioritizing offensive operations creates vulnerabilities and wrecks trust


Slayton 17 (Rebecca Slayton is Assistant Professor at Cornell University with a joint appointment in the Science
and Technology Studies Department and the Judith Reppy Institute for Peace and Conflict Studies, “Why Cyber
Operations Do Not Always Favor the Offense,” POLICY BRIEF - Quarterly Journal: International Security,
https://www.belfercenter.org/publication/why-cyber-operations-do-not-always-favor-offense)

Prioritizing offensive operations can increase adversaries’ fears, suspicions, and readiness to take offensive
action. Cyber offenses include cyber exploitation (intelligence gathering) and cyberattack (disrupting, destroying, or subverting an adversary’s computer
systems). An adversary can easily mistake defensive cyber exploitation for offensive operations because the
distinction is a matter of intent, not technical operation. The difficulty of distinguishing between offensive and defensive
tactics makes mistrustful adversaries more reactive, and repeatedly conducting offensive cyber operations only
increases distrust. A focus on offensive operations can also increase vulnerabilities; for example, secretly stockpiling
information about vulnerabilities in computers for later exploitation, rather than publicizing and helping civil
society to mitigate those vulnerabilities, leaves critical infrastructure vulnerable to attack. The skills and organizational
capabilities for offense and defense are very similar. Defense requires understanding how to compromise computer systems;
one of the best ways to protect computer systems is to engage in penetration testing (i.e., controlled offensive operations on one’s own systems). The
similarity between offensive and defensive skills makes it unnecessary to conduct offensive operations against
adversaries to maintain offensive capability. Thus, rather than stockpiling technologies in the hope of gaining offensive advantage,
states should develop the skills and organizational capabilities required to innovate and maintain information
and communications technologies.

Defensive posture solves the benefits of offense without instability


Slayton 17 (Rebecca Slayton is Assistant Professor at Cornell University with a joint appointment in the Science
and Technology Studies Department and the Judith Reppy Institute for Peace and Conflict Studies, “Why Cyber
Operations Do Not Always Favor the Offense,” POLICY BRIEF - Quarterly Journal: International Security,
https://www.belfercenter.org/publication/why-cyber-operations-do-not-always-favor-offense)

The common assumption that the offense dominates cyberspace is dangerous and deeply misguided. The offense-defense
balance can be
assessed only for specific operations, not for all of cyberspace, as it is shaped by the capabilities of adversaries
and the complexity of their goals in any conflict. When it comes to exerting precise physical effects, cyberspace does not offer
overwhelming advantages to the offense. Because the capabilities of offense and defense are similar, improving
defensive operations allows preparation for cyber offense without risking geopolitical instability or increasing
vulnerability to attack.

Deterrence fails in the cyber realm

Maness & Valeriano, 2015, Ryan C. Maness, Northeastern University, Department of Political Science, Brandon
Valeriano, University of Glasgow, Cyber War versus Cyber Realities: Cyber Conflict in the International System,
Kindle Edition, page number at end of card

Few have offered measured and rational responses to the fear that actions in cyberspace and cyberpower
provoke. The stakes are fairly clear; the notion is that we are vulnerable in our new digital societies. McGraw
(2013) sees cyber conflict as inevitable, but the most productive response would be to build secure systems
and software. Others take a more extreme response by creating systems of cyber deterrence and offensive
capabilities. States may protect themselves by making available and demonstrating the capabilities of offensive
cyber weapons, as the fear of retaliation and increased costs of cyber operations will deter would-be hackers
once they see these weapons in operation. The danger here is with cyber escalation; by demonstrating resolve
and capability, states often provoke escalatory responses from rivals and trigger the security dilemma.
Furthermore, the application of deterrence in cyberspace is inherently flawed in that it takes a system
developed in one domain (nuclear weapons) and applies it to a non-equivalent domain (cyber), an issue that
we will dissect further in this volume. Valeriano, Brandon; Maness, Ryan C. (2015-04-27). Cyber War versus
Cyber Realities: Cyber Conflict in the International System (p. 13). Oxford University Press. Kindle Edition.

Deterrence logic does not apply in cyber space


Maness & Valeriano, 2015, Ryan C. Maness, Northeastern University, Department of Political Science, Brandon
Valeriano, University of Glasgow, Cyber War versus Cyber Realities: Cyber Conflict in the International System,
Kindle Edition, page number at end of card

Deterrence logic in the cyber security field is problematic because often the target is responsible for the
infiltration in the first place, due to its own vulnerabilities and weaknesses. This makes the process inoperable,
since the first step toward a solid system of deterrence is a strong system of protection, but countries seem to
be jumping first toward systems of offense rather than defense. It must be remembered that in nuclear
deterrence, the target must survive the first strike to have any credible system of retaliatory capability. How is
this possible when countries do not take defenses seriously, nor do they focus on any viable system of
resilience? Deterrence also fails since the norms of non-action in relation to cyber activities dominate the
system, making retaliation in cyberspace or conventional military space unrealistic. Threatening cyber actions
are discouraged; as evidence demonstrates, non-action becomes the new norm. How then can credibility in
cyberspace ever be established? For credibility to be in operation, a key characteristic of deterrence theory,
capabilities must be made known and public. This demonstration effect is nearly impossible in cyber tactics
because in making your capabilities known, you also make them controlled and exposed. Finally, deterrence is
not in operation in the cyber realm because counter-threats are made. These occur not in the form of massive
retaliation generally invoked in conventional deterrence logics, but in the form of marginal low-level actions that
only serve to escalate the conflict further. For an action to be prevented under deterrence, the defensive threat
has to be greater than the offensive threat. Despite the possibility that cyber tactics must be persuasively
catastrophic, the norm in the cyber community is for cyber actions to be either based on espionage or
deception, not typically the sort of actions associated with persuasive consequences preventing an action in the
first place (Lindsay 2013; Gartzke 2013). Deterrence is the art of making known what you want done or not
done, and enforcing this course of options through threats. In terms of cyber deterrence, the concept is utterly
unworkable. If deterrence is not at work for cyber conflict, then compellence may fit the dynamics of cyber
interactions. Cioffi-Revilla (2009) notes the difference between deterrence and compellence in the context of
cyber conflict, writing that “compellence is therefore about inducing behavior that has not yet manifested,
whereas deterrence is about preventing some undesirable future behavior. Accordingly, compellence works
when desirable behavior does occur as a result of a threat or inducement (carrots or sticks, respectively)” (126).
We see neither compellence nor deterrence working in cyber conflict, as states self-restrain themselves from
the overt use of the tactic. Therefore neither is prevented or induced into non-use by threats. Valeriano,
Brandon; Maness, Ryan C. (2015-04-27). Cyber War versus Cyber Realities: Cyber Conflict in the International
System (p. 48). Oxford University Press. Kindle Edition.

Stuxnet proves offensive cyberwarfare fails

Bommakanti, November 1, 2018, Kartik Bommakanti is Associate Fellow with the Strategic Studies Programme.
He is currently working on a project centered on India’s Space Military Strategy vis-à-vis China. Bommakanti
broadly specialises in space military issues, and more specifically the relationship between the space medium and
terrestrial warfare. Space military issues as the focus of his research is primarily on the Asia-Pacific. Kartik also
works on nuclear, conventional and sub-conventional coercion, particularly in the context of the Indian
subcontinent and the role of great powers in the Subcontinent’s strategic dynamics. He has published in peer
reviewed journals. The Impact of cyber warfare on nuclear deterrence: A conceptual and empirical overview,
https://www.orfonline.org/research/the-impact-of-cyber-warfare-on-nuclear-deterrence-a-conceptual-and-empirical-overview-45305/

Iran’s response on the other hand, while inflicting damage, only struck American banks and Washington’s ally in
the region, while they remained unscathed from Iran’s response. Although the latter is a critical target, it pales in
comparison to the sophistication and planning involved in targeting Iran’s centrifuge programme. As Colin Gray
wrote, “War is politics, and politics is about relative power.”[52] This statement is instructive, because the joint
US-Israeli cyber-attack against Iran’s nuclear centrifuges was a highly sophisticated attack involving dedicated
teamwork and a joint effort against a high-value target: Tehran’s nuclear sector. One reality that should not be
underestimated is that cyber-attacks are not easy against highly secure targets such as the nuclear and space
programmes of states in the advanced industrialised world. Stuxnet-like malware take time to plan and
engineer for effective use against an adversary. Based on all the evidence available in the public domain,
cyber warfare planners are compelled to collect intelligence on the “mechanical and physical” characteristics
of their targets. Stuxnet was precisely such a cyber-weapon that required considerable engineering skill and
long preparation, and even when it did successfully attack Iran’s centrifuges, it only set it back by at best a
year.[53]

Offensive cyber weapons don’t work well and trigger retaliation

Bommakanti, November 1, 2018, Kartik Bommakanti is Associate Fellow with the Strategic Studies Programme.
He is currently working on a project centered on India’s Space Military Strategy vis-à-vis China. Bommakanti
broadly specialises in space military issues, and more specifically the relationship between the space medium and
terrestrial warfare. Space military issues as the focus of his research is primarily on the Asia-Pacific. Kartik also
works on nuclear, conventional and sub-conventional coercion, particularly in the context of the Indian
subcontinent and the role of great powers in the Subcontinent’s strategic dynamics. He has published in peer
reviewed journals. The Impact of cyber warfare on nuclear deterrence: A conceptual and empirical overview,
https://www.orfonline.org/research/the-impact-of-cyber-warfare-on-nuclear-deterrence-a-conceptual-and-empirical-overview-45305/

To be sure, the weaker power, which is the target of a cyber-attack, may respond, but its counter cyber-attacks
may be ineffective either due to the lack of sufficient cyber strength or due to the robust defences prepared by
the defender.[54] On the other hand, should a weaker power initiate attack, Thomas Mahnken observes, “The
weaker power might be able to cause a stronger power some annoyance through cyber-attack, but seeking to
compel an adversary through cyber war, it would run the very real risk of devastating retaliation.”[55]
Although some might qualify this by noting that the strong do not have an outright advantage in the cyber
domain, they do wield a relative advantage over the weak and defence is stronger than presumed by
advocates of the “Cyber Revolution” thesis.[56] Yet offence dominance is hard to attain in the cyber realm
because cyber weapons are not easy to master, objectives are difficult to attain especially against strategic
targets, and the potential for retaliation is real, if the adversary too fields potent cyber warfare capabilities.

Cyberwarfare isn’t a weapon of the weak that the strong need to fear
Bommakanti, 2018, November 1, Kartik Bommakanti is Associate Fellow with the
Strategic Studies Programme. He is currently working on a project centered on India’s Space Military Strategy
vis-à-vis China. Bommakanti broadly specialises in space military issues, and more specifically the relationship
between the space medium and terrestrial warfare. His research on space military issues focuses primarily
on the Asia-Pacific. Kartik also works on nuclear, conventional and sub-conventional coercion, particularly in the
context of the Indian subcontinent and the role of great powers in the Subcontinent’s strategic dynamics. He has
published in peer reviewed journals. The Impact of cyber warfare on nuclear deterrence: A conceptual and
empirical overview, https://www.orfonline.org/research/the-impact-of-cyber-warfare-on-nuclear-deterrence-a-conceptual-and-empirical-overview-45305/

Further, the advantages of the strong are in preparing defences against attack and making post-attack recovery
more rapid.[61] The technical demands of cyber warfare are such that the weak have limited capacities, which
include technical and financial resources. These attributes mean that the weak are unlikely to wield the
advantages of the strong. In the cyber domain, there are no substantial asymmetric advantages that the weak
wield against the strong. At best, they may be able to sustain cyber-attacks against low-value or soft targets with
low-end capabilities. As Lindsay observed, “Cyber warfare is not a weapon of the weak”.[62] On balance, this
statement is empirically valid. There are literally no cases where the weak have inflicted considerable pain
against the strong. Stuxnet was a cyber-weapon developed by the strong against the weak. Comparable
responses and attacks similar to Stuxnet by the weak against the strong are absent. In a single day, hundreds of
cyber-attacks, at a minimum, occur. Two or three of them, at most, may involve serious breaches of security,
such as data theft and financial embezzlement. Yet serious strategic cyber-attacks targeting C3 nuclear
capabilities are still rare and ultimately a capacity only the strongest cyber powers will possess, at least for the
near future. Even more, the weak have not yet demonstrated a comparable capacity for imposing losses against
the strong’s strategic facilities and critical infrastructure.

Offensive cyber retaliation is not credible

HERBERT LIN and AMY ZEGART, 2016, Bytes, Bombs, and Spies (p. 1). Brookings Institution Press. Kindle Edition,
Senior Researcher for cybersecurity policy and security at the Center for International Security and
Cooperation and Hank J. Holland Fellow in Cyber Policy and Security at the Hoover Institution, both at Stanford
University. His research interests relate broadly to the policy dimensions of cybersecurity and cyberspace, with
particular focus on the use of offensive operations in cyberspace as instruments of national policy. He is also
Chief Scientist, Emeritus, for the Computer Science and Telecommunications Board, National Research Council
of the National Academies, where he served from 1990 through 2014 as study director of major projects on
public policy and information technology, and Adjunct Senior Research Scholar and Senior Fellow in
Cybersecurity (nonresident) at the Saltzman Institute for War and Peace Studies of the School for International
and Public Affairs at Columbia University., Amy Zegart is the Davies Family Senior Fellow at the Hoover
Institution, a Senior Fellow at the Center for International Security and Cooperation, and Professor of Political
Science, by courtesy, at Stanford University. She is also a contributing editor to The Atlantic. Her research
examines U.S. intelligence challenges, cybersecurity, drone warfare, and American foreign policy. Her
publications include Spying Blind: The CIA, the FBI, and the Origins of 9/11 (Princeton University Press, 2007)
and, with Condoleezza Rice, Political Risk: How Businesses and Organizations Can Anticipate Global Insecurity
(Twelve, 2018). Before coming to Stanford in 2011 she was Professor of Public Policy at UCLA’s Luskin School of
Public Affairs and spent several years as a McKinsey & Company management consultant

A key potential shortcoming of kinetic retaliation must therefore lie in the adversary’s assessment of U.S.
credibility—that is, the adversary’s assessment of the United States’ ability and willingness to inflict retaliatory
damage by kinetic attack.14 These shortcomings need to be compared with the credibility challenges inherent in
cyber retaliation, which are likely substantial. To lay the groundwork for this comparison, we first consider the
barriers to making cyber retaliatory threats credible. Generally speaking, the threat of cyber retaliation is less
credible than the threat of kinetic retaliation because a state will have greater difficulty demonstrating its
cyberattack capabilities before a conflict begins. States can reveal their conventional and nuclear capabilities by
developing, testing, and deploying forces, demonstrating their effectiveness against relevant types of targets,
and engaging in training and exercises, all of which are observable (to varying degrees) by their adversaries. In
contrast, an adversary will have far less evidence of the extent and effectiveness of U.S. offensive cyber
capabilities. Not only are they entirely invisible, but they may be untested against adversary systems, leaving the
adversary with some doubt about their effectiveness, and in turn about the credibility of U.S. threats.15 Testing
cyber weapons against the adversary’s systems, especially ones that it views as especially valuable and
important, would be risky because, if detected, the adversary would likely view the test as highly provocative. In
addition, testing a cyber weapon could reduce its future effectiveness by alerting the adversary to the
vulnerability that the attacker plans to exploit. Doubts about the attacker’s offensive cyber capabilities could be
further increased by the limitations of relying on one-shot or target-customized weapons, which could well be
useless after the first attack.16 Thus conventional responses will often be easier for an adversary to assess.
Bytes, Bombs, and Spies (pp. 50-51). Brookings Institution Press. Kindle Edition.
Answers to: Best Defense is a Good Offense
No, defense is sufficient
Valeriano and Jensen 19 (Brandon Valeriano is the Donald Bren Chair of Armed Politics at Marine Corps
University. Benjamin Jensen is an associate professor at the Marine Corps University and a scholar-in-residence
at American University's School of International Service, “The Myth of the Cyber Offense: The Case for
Restraint,” Jan 15, 2019, https://www.cato.org/publications/policy-analysis/myth-cyber-offense-case-restraint)

The rationale behind persistent action—that the best defense is a good offense—is deeply flawed. In fact, most military
and strategic theory holds that the defense is the superior posture. 49 For example, Sun Tzu describes controlling an adversary
to make their actions more predictable, and hence easy to undermine, by baiting them to attack strong points. 50
The stronger form of war is a deception-driven defense: confusing an attacker so that they waste resources
attacking strong points that appear weak. This parallels cybersecurity scholars Erik Gartzke and Jon Lindsay’s claim that cyberspace is
not offense dominant, but deception dominant. 51 Rather than persistent action and preemptive strikes on
adversary networks, the United States needs persistent deception and defensive counterstrikes optimized to
undermine adversary planning and capabilities.

Cyberdefense outweighs any offensive capabilities --- deliberately weakening the internet
guarantees successful attacks
Masnick 13 [Mike, founder and CEO of Floor64 and editor of the Techdirt blog, Oct 7th 2013, “National
Insecurity: How The NSA Has Put The Internet And Our Security At Risk,” Techdirt,
https://www.techdirt.com/articles/20131005/02231624762/national-insecurity-how-nsa-has-put-internet-our-
security-risk.shtml]

But, really, the issue is that the NSA's actions aren't actually helping national security, but they're doing the exact
opposite. They're making us significantly less safe. Bruce Schneier made this point succinctly in a recent interview: The NSA’s actions are
making us all less safe. They’re not just spying on the bad guys, they’re deliberately weakening Internet security for everyone—
including the good guys. It’s sheer folly to believe that only the NSA can exploit the vulnerabilities they create. Additionally,
by eavesdropping on all Americans, they’re building the technical infrastructure for a police state. The folks over at EFF have dug into this point in much
greater detail as well. Undermining internet security is a really bad idea. While it may make it slightly easier for the NSA to spy on
people -- it also makes it much easier for others to attack us. For all this talk of national security, it's making us a lot less
secure. In trying to defend this situation, former NSA boss Michael Hayden recently argued that the NSA, when it comes across security
vulnerabilities, makes a judgment call on whether or not it's worth fixing or exploiting itself. He discussed how the NSA
thinks about whether or not it's a "NOBUS" (nobody but us) situation, where only the US could exploit the hole: You
look at a vulnerability through a different lens if even with the vulnerability it requires substantial computational
power or substantial other attributes and you have to make the judgment who else can do this? If there's a
vulnerability here that weakens encryption but you still need four acres of Cray computers in the basement in order to work it you kind of think "NOBUS"
and that's a vulnerability we are not ethically or legally compelled to try to patch -- it's one that ethically and legally we could try to exploit in order to keep
Americans safe from others. Of course, that ignores just how sophisticated and powerful certain other groups and governments are these days. As that
article notes, the
NSA is known as a major buyer of exploits sold on the market -- but that also means that every single one of
those exploits is known by non-NSA employees, and the idea that only the NSA is exploiting those is laughable. If
the NSA were truly interested in "national security" it would be helping to close those vulnerabilities, not using
them to their own advantage. This leads to two more troubling issues -- the fact that the "US Cyber Command" is under the control of the NSA
is inherently problematic. Basically, the NSA has too much overlap between its offensive and defensive mandates in terms of computer security. Given
what we've seen now, it's pretty damn clear that the
NSA highly prioritizes offensive efforts to break into computers, rather
than defensive efforts to protect Americans' computers. The second issue is CISPA. The NSA and its defenders pushed
CISPA heavily, claiming that it was necessary for "national security" in protecting against attacks. But a key part of
CISPA was that it was designed to grant immunity to tech companies from sharing information with... the NSA, which
was effectively put in control over "cybersecurity" under CISPA. It seems clear, at this point, that the worst fears about CISPA are almost certainly
true. It was never about improving defensive cybersecurity, but a cover story to enable greater offensive efforts
by the NSA which, in turn, makes us all a lot less secure.

Lethal Autonomous Weapons Bad Disadvantage


Links

Lethal autonomous weapons are a critical part of offensive cyber operations

Philip Chertoff, October 2018, Perils of Lethal Autonomous Weapons Systems Proliferation: Preventing Non-State Acquisition,
https://dam.gcsp.ch/files/2y10RR5E5mmEpZE4rnkLPZwUleGsxaWXTH3aoibziMaV0JJrWCxFyxXGS
Within an export control, such a definition could be made more inclusive in the interest of carefully monitoring
both lethal autonomous and lethal near-autonomous weapons systems. One caveat would be that such a
definition would be exclusive of digital LAWS (the possible creation of non-deterministic, worm-like cyber
threats that lethally and non-lethally engage targets).

Gregory Allen, February 21, 2018, Artificial intelligence and national security,
https://thebulletin.org/2018/02/artificial-intelligence-and-national-security/, Gregory C. Allen is an adjunct
fellow at the Center for a New American Security, where his research focuses on the intersection of artificial
intelligence, cybersecurity, robotics, and national security. Taniel Chan is currently with Bain & Company in London.
Previously, he was the associate director of Strategy and Analytics for the New York City Department of
Education and a financial ...

However, the same logic suggests AI advances will enable improvements in cyber offense. For cybersecurity,
advances in AI pose an important challenge in that attack approaches today that are labor-and-talent
constrained may – in a future with highly-capable AI – be merely capital-constrained. The most challenging type
of cyberattack, for most organizations and individuals to deal with, is the Advanced Persistent Threat (APT). With
an APT, the attacker is actively hunting for weaknesses in the defender’s security and patiently waiting for the
defender to make a mistake. This is a labor-intensive activity and generally requires highly-skilled labor. With the
growing capabilities in machine learning and AI, this “hunting for weaknesses” activity will be automated to a
degree that is not currently possible and perhaps occur faster than human-controlled defenses could
effectively operate. This would mean that future APTs will be capital-constrained rather than labor-and-talent
constrained. In other words, any actor with the financial resources to buy an AI APT system could gain access
to tremendous offensive cyber capability, even if that actor is very ignorant of internet security technology.
Given that the cost of replicating software can be nearly zero, that may hardly present any constraint at all.

Autonomous responses key to respond to cyber attacks

United Nations Institute for Disarmament Research, 2018, The Weaponization of Increasingly Autonomous
Technologies: Autonomous Weapon Systems and Cyber Operations,
https://unidir.org/files/publications/pdfs/autonomous-weapon-systems-and-cyber-operations-en-690.pdf
Increasing autonomy in both offensive and defensive cyber operations is attractive for many of the same
reasons as for conventional operations: harnessing ever-greater speed of response, predictive abilities,
decision support, and the identification and exploitation of adversaries’ vulnerabilities. One example of
increasing autonomy in cyber operations is the technique known as Automatic Exploit Generation (AEG), the
purpose of which is to “automatically find bugs and generate working exploits”.1 The 2016 Defense Advanced
Research Projects Agency (DARPA) Grand Cyber Challenge was explicitly dedicated to increasing autonomy in
cyber operations: “The need for automated, scalable, machine-speed vulnerability detection and patching is
large and growing fast as more and more systems—from household appliances to major military platforms
—get connected to and become dependent upon the internet. … Machines were challenged to find and patch
within seconds—not the usual months—flawed code that was vulnerable to being hacked, and find their
opponents’ weaknesses before the defending system did.”2 In essence, the challenge was to automate the process
of identifying vulnerabilities, simultaneously patching one’s own while exploiting the vulnerabilities of other
systems. This well-publicized competition builds upon considerable speculation in the media about government
interest in development of autonomous cyber operations.3 Edward Snowden, for example, alleged that the United
States National Security Agency’s MonsterMind programme was on its way to becoming an autonomous cyber
system: But there were indications it could also include an automated strike-back capability, allowing it to
instantly initiate a counterstrike at a piece of malware’s source. An error in such an autonomous system, Snowden
pointed out, could lead to an accidental war. “What happens when the algorithms get it wrong? ... We’re opening
the doors to people launching missiles and dropping bombs by taking the human out of the decision chain.”4 A
standard component of network security includes monitoring threats, recognizing patterns of attack and even
anticipating them. Increasingly sophisticated algorithms both detect and respond to potential network threats in
the ICT (information and communication technology) environment.5 Already, due to the processing power and
their speed, these “automated” responses are outside real-time human observation or control. At what point would
an “automated” response to a cyber threat become an autonomous response?
War
Lethal autonomous weapons change the strategic calculus required to go to war by
eliminating the threat of troop casualties. This makes war and intervention more likely –
the only solution is to ban lethal autonomous weapons in their entirety

HRW 12 [Human Rights Watch. November 2012. “Losing Humanity -- The Case Against Killer Robots.”
International Human Rights Clinic. https://www3.nd.edu/~dhoward1/Losing%20Humanity-The%20Case
%20against%20Killer%20Robots-Human%20Rights%20Watch.pdf]

Making War Easier and Shifting the Burden to Civilians Advances in technology have enabled militaries to reduce significantly direct
human involvement in fighting wars. The invention of the drone in particular has allowed the United States to
conduct military operations in Afghanistan, Pakistan, Yemen, Libya, and elsewhere without fear of casualties to
its own personnel. As Singer notes, “[M]ost of the focus on military robotics is to use robots as a replacement for
human losses.”157 Despite this advantage, the development brings complications. The UK Ministry of Defence highlighted the urgency of
more vigorous debate on the policy implications of the use of unmanned weapons to “ensure that we do not risk
losing our controlling humanity and make war more likely.”158 Indeed, the gradual replacement of humans with
fully autonomous weapons could make decisions to go to war easier and shift the burden of armed conflict
from soldiers to civilians in battle zones. While technological advances promising to reduce military casualties are
laudable, removing humans from combat entirely could be a step too far. Warfare will inevitably result in human
casualties, whether combatant or civilian. Evaluating the human cost of warfare should therefore be a calculation political leaders always make before
resorting to the use of military force. Leaders might be less reluctant to go to war, however, if the threat to their own
troops were decreased or eliminated. In that case, “states with roboticized forces might behave more
aggressively…. [R]obotic weapons alter the political calculation for war.”159 The potential threat to the lives of
enemy civilians might be devalued or even ignored in decisions about the use of force.160 The effect of drone
warfare offers a hint of what weapons with even greater autonomy could lead to. Singer and other military experts contend that
drones have already lowered the threshold for war, making it easier for political leaders to choose to use force. 161
Furthermore, the proliferation of unmanned systems, which according to Singer has a “profound effect on ‘the
impersonalization of battle,’”162 may remove some of the instinctual objections to killing. Unmanned systems
create both physical and emotional distance from the battlefield, which a number of scholars argue makes killing
easier.163 Indeed, some drone operators compare drone strikes to a video game because they feel emotionally detached from the act of killing.164 As D. Keith Shurtleff, Army
chaplain and ethics instructor for the Soldier Support Institute at Fort Jackson, pointed out, “[A]s war becomes safer and easier, as soldiers are
removed from the horrors of war and see the enemy not as humans but as blips on a screen, there is a very real
danger of losing the deterrent that such horrors provide.”165 Fully autonomous weapons raise the same concerns. The prospect of
fighting wars without military fatalities would remove one of the greatest deterrents to combat. 166 It would also
shift the burden of armed conflict onto civilians in conflict zones because their lives could become more at
risk than those of soldiers. Such a shift would be counter to the international community’s growing concern for the protection of civilians.167 While some
advances in military technology can be credited with preventing war or saving lives, the development of fully autonomous weapons could
make war more likely and lead to disproportionate civilian suffering. As a result, they should never be
made available for use in the arsenals of armed forces.

War is more likely with autonomous weapons systems because they decrease the political cost of
going to war
Danckwardt 15 [Danckwardt, Petter. Increasing De-Personalization in Warfare: Levinasian Words on Lethal
Autonomous Weapons Systems. Södertörns högskola, Institutionen för kultur och lärande, 2015.
http://sh.diva-portal.org/smash/get/diva2:895468/FULLTEXT01.pdf] /// CP

Furthermore, international lawyers argue that drone technology has changed how we look at the laws of war. Many of the concerns
regarding drone warfare are of equal concern regarding LAWS. I will not go into detail regarding the US drone strategy in the
Middle East and parts of Africa, 17 however, Chamayou argues that the American drone warfare fundamentally change the traditional conception of war. To
summarize, he argues it has happened in three ways: (1) every place becomes a potential site of drone violence, 18 (2) as ”precise weapons”, drones also
render geographical contours irrelevant since the ostensible precision of these weapons justifies the killing of suspected terrorists in their homes–a “legal
strike zone” is then equated with anywhere the drone strikes,19 and (3) drones change our conception of war because it becomes a priori impossible to die as
one kills (Chamayou 2015).20 In light of this, Chamayou argues that the changing
conception of warfare is having various effects on
the drone state itself. Politics change or are able to change due to the fact that soldiers no longer are risking
their lives, changing citizen’s awareness and attitudes to war. Robots become a technological solution for
states not able to mobilize support for war; social legitimacy becomes irrelevant to the political decision-
making process relating to war. This reduces the threshold for resorting to violence to a degree that violence
appears increasingly as a default option for foreign policy, increasing the risk of brutalization (ibid, 185-94).

Lethal autonomous weapons will be used as tools by repressive regimes and increase
the likelihood of war by eliminating the political opportunity cost of losing troops in an
unpopular foreign conflict

HRW 12 [Human Rights Watch. November 2012. “Losing Humanity -- The Case Against Killer Robots.”
International Human Rights Clinic. https://www3.nd.edu/~dhoward1/Losing%20Humanity-The%20Case
%20against%20Killer%20Robots-Human%20Rights%20Watch.pdf]

By eliminating human involvement in the decision to use lethal force in armed conflict, fully autonomous weapons would
undermine other, non-legal protections for civilians. First, robots would not be restrained by human emotions
and the capacity for compassion, which can provide an important check on the killing of civilians. Emotionless
robots could, therefore, serve as tools of repressive dictators seeking to crack down on their own people
without fear their troops would turn on them. While proponents argue robots would be less apt to harm civilians as a result of fear or anger, emotions do not always
lead to irrational killing. In fact, a person who identifies and empathizes with another human being, something a robot cannot do, will be more reluctant to harm that individual. Second, although relying on
machines to fight war would reduce military casualties—a laudable goal—it would also make it easier for political leaders to resort to force
since their own troops would not face death or injury. The likelihood of armed conflict could thus increase,
while the burden of war would shift from combatants to civilians caught in the crossfire. Finally, the use of fully
autonomous weapons raises serious questions of accountability, which would erode another established tool for
civilian protection. Given that such a robot could identify a target and launch an attack on its own power, it is unclear who
should be held responsible for any unlawful actions it commits. Options include the military commander that deployed it, the programmer, the manufacturer,
and the robot itself, but all are unsatisfactory. It would be difficult and arguably unfair to hold the first three actors liable, and the actor that actually committed the crime—the robot—would not be punishable.

As a result, these options for accountability would fail to deter violations of international humanitarian law and to provide
victims meaningful retributive justice. Based on the threats fully autonomous weapons would pose to civilians, Human
Rights Watch and IHRC make the following recommendations, which are expanded on at the end of this report: To All States • Prohibit the
development, production, and use of fully autonomous weapons through an international legally binding
instrument. • Adopt national laws and policies to prohibit the development, production, and use of fully
autonomous weapons. • Commence reviews of technologies and components that could lead to fully autonomous weapons. These reviews should take place at the
very beginning of the development process and continue throughout the development and testing phases. To Roboticists
and Others Involved in the Development of Robotic Weapons • Establish a professional code of conduct governing the research and development of autonomous robotic weapons, especially those capable of
becoming fully autonomous, in order to ensure that legal and ethical concerns about their use in armed conflict are adequately considered at all stages of technological development.
Autonomous weapons increase the risk of war and will evolve into weapons of mass
destruction – arms race, hacking, and conflict escalation. Only an outright ban solves

Asaro 15 [Asaro, Peter. 08-07-2015. “Ban Killer Robots before They Become Weapons of Mass Destruction.”
Scientific American. https://www.scientificamerican.com/article/ban-killer-robots-before-they-become-weapons-of-mass-destruction/]

The development of autonomous weapons could very quickly and easily lead to arms races between rivals.
Autonomous weapons would reduce the risks to combatants, and could thus reduce the political risks of
going to war, resulting in more armed conflicts. Autonomous weapons could be hacked, spoofed and
hijacked, and directed against their owners, civilians or a third party. Autonomous weapons could also initiate
or escalate armed conflicts automatically, without human decision-making. In a future where autonomous
weapons fight autonomous weapons the results would be intrinsically unpredictable, and much more likely lead
to the mass destruction of civilians and the environment than to the bloodless wars that some envision. Creating
highly efficient automated violence is likely to lead to more violence, not less. There is also a profound moral question at stake. What is
the value of human life if we delegate the responsibility for deciding who lives and who dies to machines? What kind of world do we want to live in and leave for our children? A world in which AI programs and robots have the means and authority to use violent force and kill people? If we have the opportunity to create a world in which autonomous weapons are banned, and those who might use them are stigmatized and held accountable, do we not have a moral
obligation to work toward such a world? We can prevent the development of autonomous weapons before they
lead to arms races and threaten global security and before they become weapons of mass destruction. But our
window of opportunity for doing so is rapidly closing. For the past two years, the Campaign to Stop Killer Robots has been urging the United Nations to ban autonomous weapons. The

U.N.’s Convention on Certain Conventional Weapons (CCW) has already held two expert meetings on the issue,
and our coalition of 54 nongovernmental organizations* from 25 countries is encouraging the CCW to advance
these discussions toward a treaty negotiation. We very much welcome the support from this letter but we must continue to encourage the states represented at the CCW to
move forward on this issue. The essential nature of an arms race involves states acting to improve their own short-term interests at the expense of their own and global long-term benefits. As the letter from Einstein and Russell makes clear: “We have to learn to think in a new way. We have to learn to ask ourselves not what steps can be taken to give military victory to whatever group we prefer, for there no longer are such steps; the question we
have to ask ourselves is: What steps can be taken to prevent a military contest of which the issue must be
disastrous to all parties?” We must continue to demand that our leaders and policy makers work together with other
nations to preempt the threats posed by autonomous weapons by banning their development and use, before we
witness the mass destruction they threaten to bring.
Arms Race
Autonomous weapons systems risk fueling a global arms race

Morgan et. al 20 [Morgan, Forrest E., Benjamin Boudreaux, Andrew J. Lohn, Mark Ashby, Christian Curriden,
Kelly Klima, and Derek Grossman, Military Applications of Artificial Intelligence: Ethical Concerns in an
Uncertain World. Santa Monica, CA: RAND Corporation, 2020.
https://www.rand.org/pubs/research_reports/RR3139-1.html.]

Much of the R&D of AI is conducted by the private sector and by nongovernment researchers. These efforts have significantly advanced this technology, identified new types of applications, and decreased the costs. Much foundational research is publicly available, and researchers have also created open-source tools to promote wide use.63 Although such research and tools are not focused on military applications, many of these capabilities are dual- or multiuse across civilian and military contexts. For example, in a futuristic depiction of a malicious use of open-source tools, the “Slaughterbots” video depicts the combination of three technologies: small UAVs, facial recognition technology, and a munition. Less mature versions of all of these technologies are already commercially available. As these types of capabilities improve and costs decrease, they will be more readily accessible for actors outside of traditional global powers. The 2015 “Open Letter from AI & Robotics Researchers” notes that “autonomous weapons will become the Kalashnikovs of tomorrow” and that “it will only be a matter of time until they appear on the black market and in the hands of terrorists.”64 This is a dire warning of proliferation from some of the experts closest to the technology. The spread of cyber capabilities demonstrates these risks. It is not only the traditional military powers that have sophisticated cyber tools. Now more than 30 countries have military cyber programs, and North Korea and Iran have already leveraged their cyber programs for a variety of malicious purposes.65 In addition, criminals have also taken advantage of readily available cyber tools to steal identities, money, and information. This proliferation is especially troubling, since these systems enable small-scale malicious actors to inflict significant damage on even well-defended targets.

Russia and non-state actors are actively investing in autonomous weapons systems

Morgan et al. 20 [Morgan, Forrest E., Benjamin Boudreaux, Andrew J. Lohn, Mark Ashby, Christian Curriden,
Kelly Klima, and Derek Grossman, Military Applications of Artificial Intelligence: Ethical Concerns in an
Uncertain World. Santa Monica, CA: RAND Corporation, 2020.
https://www.rand.org/pubs/research_reports/RR3139-1.html.]

Chapter 2 provided an overview of the potential military benefits of AI, including faster and better decisionmaking, improved ISR and precision targeting, mitigation of manpower issues, and improvements in cyber defense. Hoping to harness these benefits to regain military advantages eroded by the post–Cold War proliferation of advanced military capabilities, DoD is investing heavily in these technologies. However, private companies are leading the research on the most innovative applications, and government institutions will need to leverage these technology firms for their own acquisition and development. Moreover, the United States is not the only nation whose military establishment is seeking to benefit from AI. Unlike in some past technological developments, the United States will not have a monopoly, or even a first-mover advantage, in this competition. China is aggressively pursuing militarized AI technologies, and its developers may have certain advantages over their U.S. counterparts. Potential Chinese advantages include a top-down strategic approach that emphasizes civil-military fusion, access to vast amounts of data for algorithm training, and a potential willingness to press into risky and arguably unethical technology applications. However, China also faces limitations in its AI advancement, including a dearth of AI experts and technicians. Russia is also seeking to rapidly develop military AI. Political and structural factors in Russia both help and hurt its ability to integrate AI into military applications. Although Russia’s private sector is less diversified and developed than that of the United States, like China’s, it is potentially more responsive to military requirements. Suffering from Western sanctions and hampered by an inflexible, centrally controlled defense-industrial base, Russia is in a position of relative economic weakness, but this position encourages it to develop AI applications aggressively and employ
them more recklessly. Indeed, Russia is already using AI technologies in support of its military efforts. They provide important capabilities in its hybrid, gray-zone, and information warfare campaigns. The United States also faces significant risks that military AI could proliferate to other state and nonstate actors. Some tools, such as offensive cyber capabilities, feature relatively low development costs and easy reuse once they are loose in the environment. Actors such as North Korea and criminal groups have already been able to harness some of these capabilities for their malicious purposes. As the cost of AI capabilities decreases, applications could proliferate to a broad range of actors with potentially lethal consequences.

Absent international norms, a global arms race is coming, with dire consequences for global stability

Morgan et al. 20 [Morgan, Forrest E., Benjamin Boudreaux, Andrew J. Lohn, Mark Ashby, Christian Curriden,
Kelly Klima, and Derek Grossman, Military Applications of Artificial Intelligence: Ethical Concerns in an
Uncertain World. Santa Monica, CA: RAND Corporation, 2020.
https://www.rand.org/pubs/research_reports/RR3139-1.html.]

Since DoD’s 2012 release of Directive 3000.09, the United States has had the most advanced publicly available policy on the regulation of autonomous weapons. The directive articulates a standard of “appropriate human judgment” in the development and use of autonomous weapon systems, and requires training and other policy guidelines to ensure that autonomous weapons can be used reliably and safely. Other countries, however, have not been as transparent about military AI policy. Russia and China have not publicly articulated restraints; nor have they explained how their legal review processes ensure compliance with LOAC. It is also not clear that the United States is aligned with NATO members or other allies regarding policies that apply to the development and use of autonomous weapon systems. International competition in the development of military AI could escalate into a full-blown arms race. The lack of international consensus on norms of responsible development and use creates risks that states will have an incentive to rapidly acquire and integrate military AI without putting appropriate policies in place. Such an environment could generate ever-increasing pressure to quickly identify and develop new military AI applications without sufficient precaution to ensure they are safe and reliable. This situation could result in a “race to the bottom,” ultimately threatening the ability of humans to exercise agency over military AI systems. Such an outcome would have serious ramifications for the entire international community.

Proliferation of autonomous weapons systems is inevitable --- presence in commercial markets makes proliferation fundamentally different from other weapons systems

Leys 18 [Leys, Nathan. “Autonomous Weapon Systems and International Crises.” Strategic Studies Quarterly,
vol. 12, no. 1, 2018, pp. 48–73. JSTOR, www.jstor.org/stable/26333877]

Finally, the proliferation of cutting-edge weapons is not a new problem for strategists. However, compared to nuclear weapons or GPS-targeted precision munitions, the technologies enabling AWS are much more easily available in the commercial market. Many of the sensors used in AWS, for example, are increasingly vital to civilian autonomous technologies. Consider self-driving cars: Lidar (light radar), for instance, is favored by many developers of self-driving cars because of its ability to “pinpoint the location of objects up to 120 meters away with centimeter accuracy.”27 Other prototype vehicles use passive systems like high-resolution cameras and microphones to understand the world around them.28 Many of the challenges faced by military AWS, including operating in low-visibility conditions, differentiating human bodies from inanimate objects, and developing redundant systems to prevent the failure of one sensor rendering a robot blind or deaf, are the same problems that civilian engineers are attempting to solve. Indeed, the sensors that will allow a self-driving car to avoid hitting a pedestrian may soon be the same as those used by an AWS to kill an enemy combatant. The ubiquity of these technologies in the civilian world matters, because if AWS substantially increase the capabilities of an adopting military, the question of proliferation becomes inextricable from the question of how difficult and expensive it is to build AWS. Some analysts expect AWS will proliferate easily.29 A now-famous open letter signed by luminaries including Elon Musk, Stephen Hawking, and Steve Wozniak warns, “Autonomous weapons will become the Kalashnikovs of tomorrow.”30 But this comparison appears inaccurate. The Kalashnikov came to define modern low-level warfare because it is simple, cheap, easy to master, and practically unbreakable.31 It may soon be possible to rig a cheap drone to dive-bomb anything that moves, but the highly capable AWS likely to be deployed by the United States and its near-peer rivals are the opposite of simple, and as they develop, they will become more complex, not less. Andrea Gilli and Mauro Gilli note that similar constraints may make the proliferation of military UAVs much more difficult than is commonly assumed.32 Given these technical limits, for the near- and medium-term, only the most technologically advanced militaries are likely to develop AWS effective enough to make a difference against the United States or a similarly capable military.

Militarization of AI is dangerous and unsafe

Scharre 20 [Scharre, Paul, senior fellow and director of the Technology and National Security Program at the
Center for a New American Security, 06-02-2020. “Policy Roundtable: Artificial Intelligence and International
Security.” Texas National Security Review. https://tnsr.org/roundtable/policy-roundtable-artificial-intelligence-
and-international-security/#essay4]

The third lens through which to view the effects of AI on warfare is with regard to the specific nature of AI technology today and what makes it different from other emerging or existing dual-use technologies. It is striking that thousands of AI researchers have spoken out about the dangers of the militarization of AI or an “AI arms race.”83 Militaries routinely adopt new technologies to improve their capabilities, yet the militarization of AI in particular has proven controversial in a way that the military adoption of computers or networking, for example, has not. While there are no doubt many factors behind these concerns, it is worth asking: Do AI scientists know something about AI technology that policymakers don’t? The answer is quite clearly “yes.” Current AI and machine-learning methods are powerful but brittle.84 AI systems can achieve superhuman performance in some settings, yet fail catastrophically in others. AI systems are also vulnerable to bias, adversarial attacks, data poisoning, reward hacking, and other types of failure.85 In addition, machine-learning systems often exhibit surprising emergent behaviors, in ways both good and bad.86 While military AI technical experts understand these flaws, they have yet to percolate to the minds of senior leaders, who often have only heard of the potential benefits of AI technology. Some caution is warranted. Deep-learning technologies are powerful, but are also insecure and unreliable. One AI scientist recently compared machine learning to “alchemy.”87 Perhaps policymakers would be more cautious if AI were presented to them as a kind of “militarizing alchemy.” The risk is not that AI systems don’t work, in which case they would be unlikely to be deployed at all, but that they work perfectly in training environments but fail spectacularly in wartime. Their brittle nature means that subtle changes in environment or the data they use could dramatically change their performance.88 Wars are rare, which is a good thing, but the downside is that militaries may not have access to realistic datasets on which to train AI systems. Many military tasks can be accurately rehearsed in peacetime, such as aircraft takeoff and landing, aerial refueling, or driving vehicles. In these circumstances, AI systems can likely be designed, tested, and verified over time to achieve adequate levels of performance, in some cases better than humans. But adapting to enemy tactics is another matter entirely. For battlefield decision-making, not only at the operational level but also at the tactical level, humans will be required. The human mind remains the most advanced cognitive processing system on the planet. AI systems, for all their prowess, perform poorly at adapting to novel or unexpected situations, which abound in warfare. If militaries deploy AI systems before they are fully tested and verified in an attempt to stay ahead of competitors, they risk sparking a “race to the bottom” when it comes to AI safety.89 As militaries continue to pursue artificial intelligence, leaders should be aware of the significant risks that could come with its adoption. Many military applications of AI will be inconsequential, but others could be concerning, such as the use of AI in nuclear operations or lethal decision-making. The widespread adoption of AI could accelerate the pace of military operations, pushing warfare beyond human control, while the pursuit of AI capabilities risks a “race to the bottom” in AI safety. Militaries are exploring the benefits of AI, which are likely to be significant, but they should also study the potential risks that may emerge from military applications of AI, as well as how to mitigate those risks.

LAWs prolif structurally inevitable and undermines global security—low cost, military effectiveness, expendability, and precision

Umbrello et al. 20 [Umbrello, Steven; Torres, Phil; Bellis, Angelo D. 2020. “The future of war: could lethal autonomous weapons make conflict more ethical?,” AI & Society, Institute for Ethics and Emerging Technology, 35, 273–282.]

To begin, one of the most compelling reasons for opposing nuclear non-proliferation efforts is that the destructive potential of nuclear weapons increases the threshold of use (Jürgen 2008; Wilson 2012). Thus, only in extreme circumstances would rational actors deem their use to be either morally or strategically acceptable. This strongly contrasts with the case of LAWs, whose cost would be small compared to the cost of paying military personnel. Consequently, states could maintain stockpiles of LAWs that are far larger than any standing army. The low cost of LAWs would also make them more expendable than human soldiers (Jacoby and Chang 2008; Singer 2009a, b; Jenks 2010), and they could strike the enemy with greater precision than human soldiers can currently achieve (Thurnher 2012; Ekelhof and Struyk 2014). These four properties—low cost, military effectiveness, expendability, and precision—could drive proliferation while lowering the threshold for use and, therefore, undermine geopolitical security. Incidentally, similar claims could be made about anticipated future nanotech weaponry (see Whitman 2011). The attractiveness of LAWs is apparent in the US’s use of “unmanned aerial vehicles” (UAVs, also known as “drones”) in Iraq and Syria. These semi-autonomous systems offer a cheap, effective, and relatively precise means for conducting surveillance and targeting enemy combatants [despite unsatisfied infrastructural needs to sustain the drone program] (McLean 2014). As a result, the US drone program has grown and the frequency of drone use against terrorist organizations like the (now-defunct) Islamic State has steadily increased in the past decade (Higgins 2017). Yet the proliferation of LAWs discussed in this paper is different in important respects from the proliferation of current UAV devices. LAWs are theoretically capable of becoming moral actors capable of making life and death decisions without human intervention.

Potential for future robot arms race outweighs any short-term benefit of LAWs—only
ethical solution is a ban

Wallach 17 [Wallach, Wendell. 2017. Toward a ban on lethal autonomous weapons: surmounting the obstacles.
Commun. ACM 60, 5 (May 2017), 28–34]

The short-term benefits of LAWS could be far outweighed by long-term consequences. For example, a robot arms race would not only lower the barrier to accidentally or intentionally start new wars, but could also result in a pace of combat that exceeds human response time and the reflective decision-making capabilities of commanders. Small, low-cost drone swarms could turn battlefields into zones unfit for humans. The pace of warfare could escalate beyond meaningful human control. Military leaders and soldiers alike are rightfully concerned that military service will be expunged of any virtue. In concert with the compelling legal and ethical considerations LAWS pose for IHL, unpredictability and risk concerns suggest the need for a broad prohibition. To be sure, even with a ban, bad actors will find LAWS relatively easy to assemble, camouflage, and deploy. The Great Powers, if they so desire, will find it easy to mask whether a weapon system has the capability of functioning autonomously. The difficulties in effectively enforcing a ban are perhaps the greatest barrier to be overcome in persuading states that LAWS are unacceptable. People and states under threat perceive advanced weaponry as essential for their immediate
survival. The stakes are high. No one wants to be at a disadvantage in combating a foe that violates a ban. And yet, violations of the ban against the use of biological and chemical weapons by regimes in Iraq and in Syria have not caused other states to adopt these weapons. The power of a ban goes beyond whether it can be absolutely enforced. The development and use of biological and chemical weapons by Saddam Hussein helped justify the condemnation of the regime and the eventual invasion of Iraq. Chemical weapons use by Bashar al-Assad has been widely condemned, even if the geopolitics of the Syrian conflict have undermined effective follow-through in support of that condemnation. A ban on LAWS is likely to be violated even more than that on biological and chemical weapons. Nevertheless, a ban makes it clear that such weapons are unacceptable and those using them are deserving of condemnation. Whenever possible, that condemnation should be accompanied by political, economic, and even military measures that punish the offenders. More importantly, a ban will help slow, if not stop, an autonomous weapons arms race. But most importantly, banning LAWS will function as a moral signal that international humanitarian law (IHL) retains its normative force within the international community. Technological possibilities will not and should not succeed in pressuring the international community to sacrifice, or even compromise, the standards set by IHL. A ban will serve to inhibit the unrestrained commercial development and sale of LAWS technology. But a preemptive ban on LAWS will not stop nor necessarily slow the roboticization of warfare. Arms manufacturers will still be able to integrate ever-advancing features into the robotic weaponry they develop. At best, it will require that a human in the loop provide real-time authorization before a weapon system kills or destroys a target that may harbor soldiers and noncombatants alike. Even a modest ban signals a moral victory, and will help ensure that the development of AI is pursued in a truly beneficial, robust, safe, and controllable manner. Requiring meaningful human control in the form of real-time human authorization to kill will help slow the pace of combat, but will not stop the desire for increasingly sophisticated weaponry that could potentially be used autonomously. In spite of recent analyses suggesting that humanity has become less violent over several millennia,9 warfare itself is an evil humanity has been unsuccessful at quelling. However, if we are to survive and evolve as a species, some limits must be set on the ever more destructive and escalating weaponry technology affords. The nuclear arms race has already made clear the dangers inherent in surrendering to the inevitability of technological possibility. Arms control will never be a simple matter. Nevertheless, we must slowly, effectively, and deliberately put a cap on inhumane weaponry and methods as we struggle to transcend the scourge of warfare.
Conflict Escalation General
Lethal Autonomous Weapons Systems make conflict more likely—generating escalation pressure, exacerbating security dilemmas, and creating uncertainty over programming

Horowitz 19 [Horowitz, Michael C. (2019) “When speed kills: Lethal autonomous weapon systems, deterrence
and stability,” Journal of Strategic Studies, 42:6, 764-788, DOI: 10.1080/01402390.2019.1621174]

This article draws on research in strategic studies and examples from military history to assess the issues above, as well as the potential for arms control.1 It focuses on these questions through the lens of key characteristics of LAWS, especially the potential for increased operational speed and, simultaneously, less human control over some battlefield choices. One of the primary attractions of autonomous systems, even compared to remotely piloted systems, is the potential to operate at machine speed. Another potential benefit is the possibility of machine-like accuracy in following programming, but that comes with a potential downside: the loss of control and the accompanying risk of accidents, adversarial spoofing and miscalculation. Even if LAWS malfunction at the same rate as humans in a given scenario, the ability of operators to control the impact of those malfunctions may be lower; this could make LAWS less predictable on the battlefield. The article then examines how these issues interact with the large uncertainty parameter associated with AI-based military capabilities at present, both in terms of the range of the possible and the opacity of their programming. The results highlight several critical issues surrounding the development and deployment of LAWS.2 First, the desire to fight at machine speed with autonomous systems, while making a military more effective in a conflict, could increase crisis instability. As countries fear losing conflicts faster, it could generate escalation pressure, including an increased incentive for first strikes. Second, the fear of accidents and losing control of autonomous systems could limit the willingness of militaries to deploy them at times due to a lack of trust in their effectiveness. Third, the dual-use, or even general-purpose, character of the basic science underlying many autonomous systems will make the technology relatively hard to control, though whether this is described as diffusion, proliferation, or an arms race will depend on political dynamics as much as anything. Finally, multiple uncertainty parameters concerning LAWS could exacerbate security dilemmas. Uncertainty over the range of the possible concerning the programming of LAWS will increase fear of those systems in the near term, making restraint less likely for competitive reasons. Moreover, the inherent differences between remotely piloted systems and LAWS at the platform level come from software, not hardware, complicating efforts.

Lethal Autonomous Weapons disrupt human strategic calculus, escalating conflict and causing civilian casualties

Sauer 16 [Sauer, Frank. "Stopping 'Killer Robots': Why Now is the Time to Ban Autonomous Weapons
Systems." Arms Control Today, vol. 46, no. 8, 2016, pp. 8-13. ProQuest]

In light of these anticipated benefits, one might expect militaries to unequivocally welcome the introduction of autonomous weapons systems. Yet, their reputation remains mixed at best. For instance, there are multiple operational risks. The potential for high-tempo fratricide, much greater than at human intervention speeds, incentivizes militaries to retain humans in the chain of decision-making as a fail-safe mechanism.6 Above and beyond such tactical concerns, these systems threaten to introduce a destabilizing factor at the strategic level. For one, autonomous weapons systems generate new possibilities for disarming surprise attacks. Small, stealthy, or extremely low-flying systems, or swarms, are difficult to detect and defend against. When nuclear weapons or strategic command-and-control systems are or are perceived to be put at greater risk, autonomous conventional capabilities end up causing instability at the strategic level. Further, trading algorithms at the stock market already provide cautionary tales of unforeseeable and costly algorithm interactions. Introducing autonomous systems into conflict runs the risk of generating similarly unexpected outcomes. The sequence of events developing at rapid speed
from the interaction of autonomous systems or swarms of two adversaries could never be trained, tested, nor truly foreseen. An uncontrolled escalation from crisis to war is entirely within the realm of possibilities.7 Human decision-making in armed conflict requires complex assessments to ensure a discriminate and proportionate application of military force in accordance with international humanitarian law. Not only are combatants and noncombatants often not clearly distinguishable, but weighing a potential risk to civilians or damage to civilian objects against the anticipated military advantage in the fog of war poses a challenge to even the most experienced of commanders. In the foreseeable future, it is doubtful that these processes can be replicated in software code; but if these systems cannot be designed to abide by international humanitarian law, the previously mentioned hope that they will render war more humane is misguided.8

AI neutralizes a country’s second-strike capability, fueling global instability and increasing the likelihood of a first strike

Morgan et al. 20 [Morgan, Forrest E., Benjamin Boudreaux, Andrew J. Lohn, Mark Ashby, Christian Curriden,
Kelly Klima, and Derek Grossman, Military Applications of Artificial Intelligence: Ethical Concerns in an
Uncertain World. Santa Monica, CA: RAND Corporation, 2020.
https://www.rand.org/pubs/research_reports/RR3139-1.html.]

A final strategic risk is that as AI-enabled tools develop, the basic principles that have ensured relative stability among the global powers since World War II are weakened. In particular, AI-enabled systems might become advanced to the point that they undermine “second-strike” capabilities that are essential to deterrence of nuclear war through the principle of mutual assured destruction. A recent RAND report has considered the possibility that, as AI improves, it might be used to locate all of an adversary’s nuclear launchers.66 With this capability, an aggressor could attack without fear of nuclear retaliation. Even the perception that nuclear launchers would be vulnerable in this way might encourage a state to undertake a first strike to preempt the possibility that it might lose the ability to use nuclear weapons later in a conflict. Such a scenario could be highly destabilizing and put the entire world at risk of nuclear catastrophe.67
U.S.-China Conflict
Autonomous systems undermine traditional theories of deterrence and would increase risk
of war

Wong et al. 20 [Wong, Yuna Huh, John Yurchak, Robert W. Button, Aaron Frank, Burgess Laird, Osonde A.
Osoba, Randall Steeb, Benjamin N. Harris, and Sebastian Joon Bae, Deterrence in the Age of Thinking Machines.
Santa Monica, CA: RAND Corporation, 2020. https://www.rand.org/pubs/research_reports/RR2797.html. Also
available in print form.]

Widespread AI and autonomous systems could also make escalation and crisis instability more likely by creating dynamics conducive to rapid and unintended escalation of crises and conflicts. This is because of how quickly decisions may be made and actions taken if more is being done at machine, rather than human, speeds. Inadvertent escalation could be a real concern. In protracted crises and conflicts between major states, such as the United States and China, there may be strong incentives for each side to use such autonomous capabilities early and extensively, both to gain coercive and military advantage and to attempt to prevent the other side from gaining advantage.3 This would raise the possibility of first-strike instability. AI and autonomous systems may also reduce strategic stability. Since 2014, the strategic relationships between the United States and Russia and between the United States and China have each grown far more strained. Countries are attempting to leverage AI and develop autonomous systems against this strategic context of strained relations. By lowering the costs or risks of using lethal force, autonomous systems could make the use of force easier and more likely and armed conflict more frequent.4 A case may be made that AI and autonomous systems are destabilizing because they are both transformative and disruptive. We can already see that systems such as UAVs, smart munitions, and loitering weapons have the potential to alter the speed, reach, endurance, cost, tactics, and burdens of fielded units. Additionally, AI and autonomous systems could lead to arms race instability. An arms race in autonomous systems between the United States and China appears imminent and will likely bring with it the instability associated with arms races. Finally, in a textbook case of the security dilemma, the proliferation of autonomous systems could ignite a serious search for countermeasures that exacerbate uncertainties and concerns that leave countries feeling less secure.
Solvency – China has called for a ban on LAWs; United States commitment is key to de-escalation and cooperation

Morgan et al. 20 [Morgan, Forrest E., Benjamin Boudreaux, Andrew J. Lohn, Mark Ashby, Christian Curriden,
Kelly Klima, and Derek Grossman, Military Applications of Artificial Intelligence: Ethical Concerns in an
Uncertain World. Santa Monica, CA: RAND Corporation, 2020.
https://www.rand.org/pubs/research_reports/RR3139-1.html.]

While China’s compliance with arms control treaties is a recurring issue, Beijing
does act with greater restraint in areas where it has
signed treaties (lasers and bioweapons) than in areas where it has not (landmines and cluster munitions). Beijing’s
tendency to blur the line on laser weapons is troubling, especially considering that without technical information on the weapons it uses, determining
whether it is compliant is difficult. Furthermore, a compliant weapon at long range might cause permanent blindness if used at closer range. A LAWS ban
would have similar problems. As previously discussed, the difference between compliant and noncompliant systems, in terms of levels of autonomy, might
be as little as a software change or even the flip of a switch, and it would be difficult or impossible for one state to know another’s weapons settings. That
being said, while it is impossible
to tell what would have happened if China had never signed the treaty, it is likely that
Chinese blinding lasers, chemical weapons, and bioweapons would be more dangerous and widespread had
Beijing not shown some restraint in response to its treaty obligations. Ultimately, it may be in the United States'
interest to support China’s proposal for a ban on LAWS. As noted above, the definition of LAWS that China has
proposed would set the bar so high that it would be unlikely to affect the development of current or planned U.S.
systems. Therefore, it is unlikely to be a disguised attempt to stunt the United States’ AI development. U.S. standards are already stricter than those in the
proposed ban. Nevertheless, in negotiating an international agreement, the United States may want to narrow the
definition of LAWS somewhat to more closely align with DoD directives. For example, U.S. negotiators may want to
remove the criterion that only systems that learn from their environment and autonomously expand their functions
and capabilities are LAWS. Furthermore, while China’s suggestions that national legal reviews are insufficient to ensure that a ban is maintained
may have been veiled attacks on U.S. defense policy, they could actually be made to serve the United States’ interests. At present, China has no legal review
program to speak of, and the United States’ legal review process constitutes an asymmetric hurdle U.S. weapons development programs must clear. 93 Any
extent to which a similar and similarly transparent process can be imposed on the PLA by Beijing’s diplomatic commitments would likely be a boon for
DoD. Even if China’s proposed ban would set the ethical bar lower for China than for the United States, it would at
least give China (and hopefully other nations as well) some clear ethical obligations. The United States could then
pressure China to “raise the bar,” propose measures to improve transparency of China’s legal review
process, and impose political costs if the PLA ever violated its own commitments.
U.S.-Russia Conflict
Autonomous weapons systems undermine crisis stability and fuel conflict escalation
between the United States and Russia – they create incentives for first strike, the speed of autonomous
systems makes de-escalation impossible, and they change Russia’s strategic threat calculation,
making miscalculation more likely

Laird 20 [Laird, Burgess. Burgess Laird is a senior international defense researcher at the nonprofit, nonpartisan
RAND Corporation. 06-30-2020. “The Risks of Autonomous Weapons Systems for Crisis Stability and Conflict
Escalation in Future U.S.-Russia Confrontations.” RAND Corporation. https://www.rand.org/blog/2020/06/the-
risks-of-autonomous-weapons-systems-for-crisis.html]

While holding out the promise of significant operational advantages, AWS simultaneously could increase the potential for undermining crisis
stability and fueling conflict escalation in contests between the United States and Russia. Defined as “the degree to which mutual
deterrence between dangerous adversaries can hold in a confrontation,” as my RAND colleague Forrest Morgan explains, crisis stability and the ways to achieve it are not about warfighting, but about “building and posturing forces in ways that allow a state, if confronted, to avoid
war without backing down” on important political or military interests. Thus, the military capabilities developed by nuclear-armed states like the
United States and Russia and how they posture them are key determinants of whether crises between them will remain stable or devolve into conventional armed
conflict, as well as the extent to which such conflict might escalate in intensity and scope, including to the level of nuclear use. AWS could foster crisis instability and conflict escalation in contests between the United States and Russia in a number of ways; in this short essay I will highlight only four. First, a state facing an adversary with AWS capable of making
decisions at machine speeds is likely to fear the threat of sudden and potent attack, a threat that would
compress the amount of time for strategic decision making. The posturing of AWS during a crisis would likely
create fears that one's forces could suffer significant, if not decisive, strikes. These fears in turn could translate into
pressures to strike first—to preempt—for fear of having to strike second from a greatly weakened position. Similarly,
within conflict, the fear of losing at machine speeds would be likely to cause a state to escalate the intensity of the conflict, possibly even to the level of nuclear use. Second, as the speed of military action in a conflict involving the use of AWS as well as hypersonic weapons and other advanced
military capabilities begins to surpass the speed of political decision making, leaders could lose the ability to
manage the crisis and with it the ability to control escalation. With tactical and operational action taking place
at speeds driven by machines, the time for exchanging signals and communications and for assessing
diplomatic options and offramps will be significantly foreclosed. However, the advantages of operating inside the OODA loop of a state adversary like Iraq or Serbia are one thing, while operating inside the OODA loop of a nuclear-armed adversary is another. As the renowned scholar Alexander George emphasized (PDF), especially in contests between nuclear armed competitors, there is a fundamental tension between the operational effectiveness sought
by military commanders and the requirements for political leaders to retain control of events before major
escalation takes place. Third, and perhaps of greatest concern to policymakers, should be the likelihood that, from the vantage point of Russia's leaders, in U.S. hands the
operational advantages of AWS are likely to be understood as an increased U.S. capability for what Georgetown
professor Caitlin Talmadge refers to as “conventional counterforce” operations. In brief, in crises and conflicts, Moscow is
likely to see the United States as confronting it with an array of advanced conventional capabilities
backstopped by an interconnected shield of theater and homeland missile defenses. Russia will perceive such
capabilities as posing both a conventional war-winning threat and a conventional counterforce threat (PDF)
poised to degrade the use of its strategic nuclear forces. The likelihood that Russia will see them this way is reinforced by the fact that it currently sees U.S.
conventional precision capabilities precisely in this manner. As a qualitatively new capability that promises new operational advantages, the addition of AWS to U.S. conventional capabilities could further cement Moscow's view and in doing so increase the potential for crisis
instability and escalation in confrontations with U.S. forces. In other words, the fielding of U.S. AWS could augment what
Moscow already sees as a formidable U.S. ability to threaten a range of important targets including its command
and control networks, air defenses, and early warning radars, all of which are unquestionably critical components
of Russian conventional forces. In many cases, however, they also serve as critical components of Russia's nuclear force operations. As Talmadge argues, attacks on such
targets, even if intended solely to weaken Russian conventional capabilities, will likely raise Russian fears that the
U.S. conventional campaign is in fact a counterforce campaign aimed at neutering Russia's nuclear capabilities. Take
for example, a hypothetical scenario set in the Baltics in the 2030 timeframe which finds NATO forces employing swarming AWS to suppress Russian air defense networks and key command and control nodes
in Kaliningrad as part of a larger strategy of expelling a Russian invasion force. What to NATO is a logical part of a conventional campaign could well appear to Moscow as initial moves of a larger plan
designed to degrade the integrated air defense and command and control networks upon which Russia's strategic nuclear arsenal relies. In turn, such fears could feed pressures for Moscow to escalate to nuclear use while it still has the ability to do so. Finally, even if the employment of AWS does not drive an increase in the speed and momentum
of action that forecloses the time for exchanging signals, a future conflict in which AWS are ubiquitous will likely
prove to be a poor venue both for signaling and interpreting signals. In such a conflict, instead of interpreting a downward
modulation in an adversary's operations as a possible signal of restraint or perhaps as signaling a willingness to
pause in an effort to open up space for diplomatic negotiations, AWS programmed to exploit every tactical opportunity might read the modulation as an
opportunity to escalate offensive operations and thus gain tactical advantage. Such AWS could also misunderstand adversary attempts to signal resolve solely as adversary preparations for imminent attack. Of course, correctly interpreting signals sent in crisis and conflict is vexing enough when humans are making all the
decisions, but in future confrontations in which decisionmaking has willingly or unwillingly been ceded to
machines, the problem is likely only to be magnified.
Concluding Thoughts
Much attention has been paid to the operational advantages to be gained from the
development of AWS. By contrast, much less attention has been paid to the risks AWS potentially raise. There are times in which the fundamental tensions between the search for military effectiveness and the requirements of ensuring that crises between major nuclear weapons states remain stable and escalation does not ensue are pronounced and too consequential to ignore. The development of
AWS may well be increasing the likelihood that one day the United States and Russia could find themselves
in just such a time. Now, while AWS are still in their early development stages, it is worth the time of policymakers to carefully consider whether the putative operational advantages from
AWS are worth the potential risks of instability and escalation they may raise.
