
1NC

1
Interpretation: The aff must specify the uses of lethal autonomous weapons
that are banned in the delineated text of the 1AC.

There are three ways to ban lethal autonomous weapons.


Cernea 18 Mihail-Valentin Cernea, University of Iași. "The Ethical Troubles of Future Warfare.
On the Prohibition of Autonomous Weapon Systems." Annals of the University of Bucharest.
2018. https://www.semanticscholar.org/paper/The-Ethical-Troubles-of-Future-Warfare.-On-the-
of-Cernea/bb636cc4a1d2be31b66502910459a37519d8ed45?p2df. [Premier]

4.1. What to Ban?

First of all, we need to distinguish the three ways in which one could put in effect an AWS ban.
One could ban the future development of AWS – for this ban to be justified, one needs to show
that there are no possible conditions under which the mere existence of such advanced robotic
weaponry would be morally justified. To further complicate matters, one could ban any research
that could lead to AWS, which is unrealistic, given that portions of the technology require[d] for
autonomous weapons systems are already a part of civilian life, or one could ban only the
research required to build the actual ‘killer’ robot. For such a ban to make sense, either
deontological arguments need to be given which show why such weaponized artificial beings
would violate certain generally accepted ethical principles or a more consequentialist approach
that warns against the negative effects of such research. Another option available to adversaries
of AWS is to ban the production of said machines. In this case, my opinion is that arguments
need to strike at the consequences of nation states and non-state actors [possessing] AWS. Like in the very
unfortunate case of chemical weapons, the research may be out there, but firm red lines
regarding the possession [of] AWS may be enforced. A similar case could be built for a ban on the
deployment of AWS on the battlefield. Here too one could see a nuance based on Tamburrini’s
discussion of the variable accuracy of AWS depending on the environment in which they are
deployed: rather than just banning the use of AWS outright, international treaties could just
prohibit autonomous killing machines in the environments where their effectiveness has not
been demonstrated beyond reasonable doubt.

1 – Ground – I lose circumvention, the research DA, a multitude of PICs, NCs,
and more – independently, it chills me from reading those args since I’m not
sure if they will end up linking.
2 – Policy ed – a-spec affects everything – proves our ground arg.
Heminway 5 (Joan MacLeod, Associate Professor – University of Tennessee College of Law,
“Rock, Paper, Scissors: Choosing the Right Vehicle for Federal Corporate Governance Initiatives”,
Fordham Journal of Corporate & Financial Law, 10 Fordham J. Corp. & Fin. L. 225)
Many legal scholars – and others – are quick to propose the enactment or adoption of specific
legal rules without articulating, on any [*226] reasoned basis, the appropriate institutional
vehicle by which the rule should be enacted or adopted. Some rulemaking proposals ignore
issues of institutional choice altogether. Others seemingly suggest a path for enactment or
adoption of the proposed rule based on an overly simplistic or, in some cases, nonexistent
analysis. Despite this relative lack of attention to issues of institutional choice in proposals for
legal change, the institutional vehicle chosen for enactment or adoption of a legal rule may be
important. Among other things, institutional choice may impact the probability of enactment,
as well as the form, content, efficacy, or cost of the rule. With these and other related issues in
mind, this article focuses on decision making in federal jurisprudence; more specifically, it
focuses on identifying and analyzing important considerations involved in determining whether
a desired federal rule of corporate governance optimally should be legislated by the U.S.
Congress, adopted by the U.S. Securities and Exchange Commission ("SEC"), or instituted by the
federal judiciary. The schoolyard game of RPS, a game that is designed to be used by the players
to make decisions, is an analog to this [*227] determinative process in a number of respects.
Reflecting on the brief quote at the beginning of this article, for example, one might observe
that the process of choosing and implementing the right vehicle for federal corporate
governance initiatives, like RPS, often requires wit, speed, dexterity, and strategy and is
characterized, in many cases, by honorable players who are willing to commit to the outcome of
that [*228] process. However, unlike RPS, this article offers a rigorous, reasoned model for
decision making, based on foundational principles drawn from (among other disciplines)
constitutional law, administrative law, legislative process, political science, and economics. The
primary objective of this article is the encouragement of an analytical, comparative approach to
institutional choice in the establishment of federal rules of corporate governance. Toward that
end, the article proposes a construct for making institutional choice decisions – a group [*229]
of four elements that a rule proponent should consider in deciding whether a particular rule of
corporate governance at the federal level should be enacted by Congress, promulgated by the
SEC, or ordered by a federal court. Because this article is foundational in its approach, most of
the article is spent identifying, defining, and otherwise explaining these component elements of
the suggested decisionmaking model. The article also, however, instructs the reader in how to
employ the proposed model in determining institutional choice in federal corporate governance
rulemaking.

3 – CX doesn’t check –
A] Guts pre-round prep which is best for research
B] It’s not binding and judges don’t pay attention
C] Avoids textual competition for the counterplan – the 1AR could still read a perm
Vote on fairness – abuse skews your evaluation of substance – education is a
voter – it’s why schools fund debate. Drop the debater on spec – I can’t respond
to a new aff in the 2NR since I don’t have a 3NR to defend my offense. Spec link
turns 1AR theory – proves the aff forced me to be abusive. No RVIs: (A) The 1AR
would just sit on T with frontlines so I’ll always lose to the unchecked 2AR
collapse – also means evaluate after the 2NR. (B) The aff has the burden of
being topical so I have an unconditional right to read T. Use competing interps
– either there’s a bright line, which collapses, or there isn’t, which causes
intervention.
2
CP – States ought to ban anti-personnel autonomous weapons.

It competes – anti-materiel weapons are included by default.


Gubrud 18 Mark A. Gubrud, physicist and adjunct professor in the Peace, War and Defense
curriculum at the University of North Carolina. "The Ottawa Definition of Landmines as a Start to
Defining LAWS." Convention on Conventional Weapons Group of Governmental Experts Meeting
on lethal autonomous weapons systems, United Nations Geneva. 9-13 April 2018.
https://reachingcriticalwill.org/images/documents/Disarmament-
fora/ccw/2018/gge/documents/Landmines-and-LAWS.pdf. [Premier]

Finally, it may be necessary to define the terms “lethal” and “lethality” for this context, since it is
not acceptable to allow autonomous weapons to attack materiel targets, at least with
destructive or damaging, kinetic force, which will inevitably also endanger humans. The CCW
may still want to avoid stirring “cyber” warfare into this pot. But it cannot free robots to fight
other robots with live weapons, as if robotic warfare would not become general conflict.
Therefore the definition of “lethal” must here be something like:

Lethal weapons systems apply physical force against objects or persons, with effects
which may include impediment, harm, damage, destruction or death.

It is most important to remember that this is only an attempt at a working definition of LAWS
which can be the basis for an agreement to ban or regulate some or all of them. The approach
does not seek to ban everything that falls under the definition of LAWS. But to avoid a narrow
ban that could be circumvented just by changing one detail of a future weapon system so that it
fell outside the definition, it bans all LAWS by presumption and enumerates exceptions for
weapons systems, such as close-in defenses against uninhabited munitions, that may be
considered desirable, or for existing systems that we agree, by negotiation, to grandfather in.

Solves the aff but avoids our turns – these weapons do not kill people but do
enable states to defend themselves, and the CP keeps most existing lethal
autonomous weapons.

Solves vagueness and circumvention.


Scharre 20 Scharre, Paul. "Autonomous Weapons and Stability." King's College London Thesis.
March 2020.
https://kclpure.kcl.ac.uk/portal/files/129451536/2020_Scharre_Paul_1575997_ethesis.pdf.
[Premier]

A ban on autonomous weapons that targeted people may be another matter. The ban is clearer,
the horribleness of the weapon greater, and the military utility lower. Transparency is still a
problem, but slightly easier in some ways. These factors may make restraint more feasible for
anti-personnel autonomous weapons.

It would be easier for states to distinguish between anti-personnel autonomous weapons and
existing systems. There are no anti-personnel equivalents of homing missiles or automated
defensive systems in use around the world. This could allow states to sidestep the tricky
business of carving out exceptions for existing uses.

The balance between military utility and the weapon’s perceived horribleness is also very
different for anti-personnel autonomous weapons. Targeting people is much more problematic
than targeting objects for a variety of reasons. Complying with IHL is more difficult. Interpreting
human behavior and theory-of-mind would likely be required to comply with IHL principles like
distinction and hors de combat in many settings. While these challenges might be surmountable
someday, current technology does not seem adequate. Anti-personnel autonomous weapons
are also significantly more hazardous than anti-materiel autonomous weapons. If an anti-
personnel weapon malfunctions, humans cannot simply climb out of a tank to escape being
targeted. A person can’t stop being human. The potential harm a malfunctioning anti-personnel
autonomous weapon could cause would be far greater. Anti-personnel autonomous weapons
also pose a greater risk of abuse by those deliberately wanting to attack civilians. They could be
used by despots to attack their own population or by countries who don’t respect the law to
attack enemy civilians.

Finally, the public may see machines that target and kill people on their own as genuinely
horrific. Weapons that autonomously targeted people would tap into an age-old fear of
machines rising up against their makers. Like the millennia-old aversion to poison, it may not be
logical, but public revulsion could be a decisive factor in achieving political support for a ban.

The military utility of anti-personnel autonomous weapons is also far lower than anti-materiel or
anti-vehicle autonomous weapons. The reasons for moving to supervised autonomy (speed) or
full autonomy (no communications) don’t generally apply when targeting people. Defensive
systems like Aegis need a supervised-autonomous mode to defend against salvos of high-speed
missiles, but overwhelming defensive positions through waves of human attackers has not been
an effective tactic since the invention of the machine gun. The additional half-second it would
take to keep a human in the loop for a weapon like the South Korean sentry gun is marginal.
Anti-personnel autonomous weapons in communications-denied environments are also not likely
to be of high value for militaries. At the early stages of a war when communications are
contested, militaries will be targeting objects such as radars, missile launchers, bases, airplanes,
and ships, not people. Militaries would want the ability to use small, discriminating anti-
personnel weapons to target specific individuals, such as terrorist leaders, but those would be
semi-autonomous weapons; a human would be choosing the target.

Transparency would still be challenging. As is the case for weapons like the South Korean sentry
gun, others would have to essentially trust countries when they say they have a human in the
loop. Many nations are already fielding armed robotic ground vehicles, and they are likely to
become a common feature of future militaries. It would be impossible to verify that these
robotic weapons do not have a mode or software patch waiting on the shelf that would enable
them to autonomously target people. Given the ubiquity of autonomous technology, it would
also be impossible to prevent terrorists from creating homemade autonomous weapons. Large-
scale industrial production of anti-personnel weapons would be hard to hide, however. If the
military utility of these weapons were low enough, it isn’t clear that the risk of small-scale uses
would compel other nations to violate a prohibition. The combination of low military utility and
high potential harm may make restraint possible for anti-personnel autonomous weapons.

Circumvention is certain because of vagueness, lack of verification, and dual use
– it prevents testing, which increases danger.
Gibbs 19 Noah Gibbs, Writer and Teacher on Software Development. "Why Banning
Autonomous Weapons Is More Dangerous Than Developing Them Responsibly." Polemics. 25
April 2019. www.polemics-magazine.com/opinion/banning-autonomous-weapons-more-
dangerous-developing-responsibly. [Premier]

While the objective of the Campaign to Stop Killer Robots is noble in that it seeks to prevent
human suffering, a ban on autonomous weapons is the wrong way to deal with the challenges
related to the militarization of artificial intelligence. One problem with a ban is that it is nearly
impossible to define autonomous weapons in a way that is universally accepted. This is because
weapons can have varying degrees of autonomy. For example, a heat-seeking air-to-air missile
will fly towards a heat source within its field of view. Once the missile is fired, the human
operator has little control over which heat source it might target. The missile ‘chooses’ what it
will destroy. Despite this, few people would call a heat-seeking missile intelligent, even though it
exhibits a degree of autonomous decision-making. Robots equipped with sophisticated AI are a
different story. With the right AI, a robot may be capable of identifying a target as specific as a
person’s face and deciding whether to engage all by itself. Weapons that display such behavior
are said to be fully autonomous.

Unfortunately, there is no clear dividing line between fully autonomous weapons and semi-
autonomous weapons. Military drones provide an excellent example of this problem. Today,
most military drones are flown remotely by a pilot sitting somewhere on the ground. Decisions
to launch an attack from a drone are always made by the human pilot. AI could change this
dynamic by allowing a commander to order a drone to attack anything that qualifies as a target
within a given area. The drone could then loiter over an area searching for targets that it can
then decide to attack by itself. Hardware-wise, both types of drones are the same. The only thing
that separates a drone flown by a human from a drone that can make decisions itself is
software.

Software is, by nature, impossible to observe unless you have access to the computer used to
program the autonomous weapon. As such, verifying an autonomous weapons ban is
exceedingly difficult because it requires states to give unprecedented access to their military
facilities and software. It is unlikely that any military would be willing to provide such access due
to the risk of cyber espionage. An adversary that gains access to an autonomous weapon’s
software could potentially train it so that the weapon would not recognize enemy targets. Even
worse, they could trick the weapon into attacking friendly forces or civilians.

The inability to observe a weapon’s software would make any autonomous weapons ban a risky
proposition for the states entering into it. States could easily cheat the treaty regime by
developing autonomous software for weapon systems that are normally manned. Again, drones
exemplify this problem. A state could claim that all its drones are flown by humans while
secretly developing an AI that could also fly the drone. It would be impossible for observers to
tell who or what was flying the drone.

Given the ease of cheating, an autonomous weapons ban would inevitably give rise to highly
secretive autonomous weapons programmes. These programmes would be significantly more
dangerous than current autonomous weapons programmes because their secrecy would
inevitably result in the weapons being tested less than an unclassified weapons programme. In
turn, it is more likely that a secret autonomous weapon might behave unexpectedly when
introduced to an actual battlefield. Rather perversely, an autonomous weapons ban may make
the risk of a catastrophic loss of control of an autonomous weapon higher.
3
PIC: I advocate the entirety of the aff except for the use of the acronym
“LAWs”.
The net benefit is extinction – other acronyms are key to prevent
extinction from AI.
Fung 15 Fung, Brian. "We Are All Going To Die In The Robot Uprising Because Of This Acronym." The Washington Post, 2015. https://www.washingtonpost.com/news/the-switch/wp/2015/11/10/we-are-all-going-to-die-in-the-robot-uprising-because-of-this-acronym/. Accessed 30 Nov 2020. //Scopa

We have to ban all LAWS. LAWS should never be developed or used. LAWS should never be used in anger. Those are just a few of the responses researchers at the
University of British Columbia got when they asked the Internet-using public about the military use of lethal, autonomous weapons systems — Terminator-style
killer robots, in layman's terms. The survey itself sheds some interesting light on people's attitudes — 11 percent of respondents say they'd rather be under attack
from robots that could control themselves rather than robots that were being operated by humans — but it's impossible to read the study without getting the
nagging feeling that LAWS is just, well, not the acronym you want to be using if you're trying to
warn people about the dangers of deadly artificial intelligence. Laws, which are of
course distinct from LAWS, are necessary. Any well-functioning society requires them. So to say that LAWS
pose "a matter of democratic and humanitarian concern" is not wrong per se, but it could be
misleading. For one thing, the military already uses the LAW acronym — to refer to
the light anti-tank weapon. So, in the minor interest of our species' survival, I'm launching an
effort to rebrand killer robots to better reflect their true nature. How about instead of LAWS, we
call them:
KILLAS — Kinetic Independent Lethal Locomotive Autonomous Soldiers
BOOMERs — Big Obnoxious Outwardly Mobile Eradication Recruits
THWACKs — Tough as Heck While Actively Computing Killzones
VADERs — Vain but Able Death-Emitter Robots
MUSKs — Machines Using Software to Kill
TURINGs — Totally Unrelenting Resource for Infiltration, Neutralization and Getaways
KBOTs — Killer Bloodthirsty Operational Toadies
TERMINATORs — Trained Electronic Representative for Military-Industrial Networks
and Autonomous Tactical Operations Resource
For the sake of humanity, I'm open to other suggestions.
The pre-fiat net benefit is resolvability – debates become irresolvable
when the judge has to listen to tags and cards that use both the term
LAWs for lethal autonomous weapons and laws as in the legal system.
Clarity is key to minimizing judge intervention and maximizing the correct
decision, which comes first because otherwise debate is no longer a test
of skill. Doc capitalization doesn’t solve because judges don’t flow off the doc.
No Perms –
1. Ground – They’re severance since the aff now defends something that is
different from the explicit text in the 1AC, which kills all neg ground since they
can shift to whatever the 1N defends, making it impossible to debate.
2. Logic – They flow neg since a perm concedes the CP is a better idea than the aff.
3. Advocacy skills – Perms allow shifting advocacies that never force the aff to
defend their position against scrutiny, which kills any education from having
substance debates.
4. Textually and functionally competitive – the CP explicitly defends something
that isn’t the aff and requires an extra step that has net benefits to it.
4
Presumption and permissibility negate—
[1] Semantics – Ought is defined as expressing obligation (Merriam-Webster,
https://www.merriam-webster.com/dictionary/ought), which means absent a
proactive obligation you vote neg since there’s a trichotomy between
prohibition, obligation, and permissibility, and proving one disproves the other
two. Semantics o/w – a) it’s key to predictability since we prep based on the
wording of the res and b) it’s constitutive to the rules of debate since the judge
is obligated to vote on the resolutional text.
[2] Safety – It’s ethically safer to presume the squo since we know what the
squo is but we can’t know whether the aff will be good or not if ethics are
incoherent.
[3] Logic – Propositions require positive justification before being accepted;
otherwise one would be forced to accept the validity of logically contradictory
propositions regarding subjects one knows nothing about, i.e., if one knew
nothing about P, one would have to presume that both “P” and “~P” are true.
To affirm the resolution, the aff must prove that the state can absolutely ban
lethal autonomous weapons. To clarify, when the state bans X, it cannot
structurally falter from that obligation. If not, you negate. Prefer –

1 – Text – “Ban” means a state of exception.


Rocha 19 Eduardo Rocha, Universidade Federal de Pelotas (UFPel), Faculdade de Arquitetura e
Urbanismo (FAUrb), Programa de Pós-graduação em Arquitetura e Urbanismo (PROGRAU),
Pelotas, RS, Brasil. "Para-formal commerce: a cartography of public space in the Brazil-Uruguay
border." urbe, Rev. Bras. Gest. Urbana vol.11 Curitiba 2019 https://www.scielo.br/scielo.php?
script=sci_arttext&pid=S2175-33692019000100218

Any person or circumstance that disturbs the State order can suffer with the suspension of their
right to live, creating a state of exception. Homo sacer represents the one that is included
through exclusion and excluded in an inclusive way, or so to say, the State that has the duty of
protection is suspending rights. A ban, bandit, abandonment life. For Agamben, the word ban
means both exclusion and freedom, in the sense that one does not belong anywhere, thus is
loose, free to make any decision. In a denser analysis, one finds that the “ban” represents a
force of attraction and repulsion that ties together two (coexisting) opposite poles of bare life
and sovereign power, “[…] we must learn to recognize this structure in the political relations and
public spaces in which we still live” (Agamben, 2002, p. 117). If something or someone was
abandoned, it is because at first, they did belong somewhere. Nonetheless, even when they are
excluded, they are still somehow connected with their origins, although they do not belong to
any specific place.

Text is a side constraint to other interps of the res since it establishes what the
res means, which controls fairness and education. It also determines what the
content of the resolutional statement requires the aff to prove, so it comes first
under truth testing.
2 – Conceptual necessity – If the state cannot ban LAWs, then the principle itself
is incoherent – it presupposes some binding force. Means (a) you’d still negate
even if the burden is false since countries would just proliferate, and (b) it’s a
prereq to debating the res since my burden evaluates what it means to affirm
or negate.
Now negate—
1 – The constitutive feature of the law is that the sovereign creates it, but the
sovereign lives outside of the law and has complete control over it. The
sovereign is the only authority over the law, creating a state of exception; the
state cannot undermine the sovereign in the state of exception. Thus, any
principle that mandates the state to act is impossible.

Agamben 04 [Agamben, Giorgio. “Homo Sacer: Sovereign Power and Bare Life.”
Translated by Daniel Heller-Roazen. Published 2004. Bracketed for gendered
language] AA
The paradox of sovereignty consists in the fact the sovereign is, at the same time, outside and inside the juridical order. If the sovereign is truly the one to whom the juridical order grants the power of proclaiming a state of exception and, therefore, of suspending the order's own validity, then "the sovereign stands outside the juridical order and, nevertheless, belongs to it, since it is up to him [them] to decide if the constitution is to be suspended in toto" (Schmitt, Politische Theologie, p. 13). The specification that the sovereign is "at the same time outside and inside the juridical order" (emphasis added) is not insignificant: the sovereign, having the legal power to suspend the validity of the law, legally places himself outside the law. This means that the paradox can also be formulated this way: "the law is outside itself," or: "I, the sovereign, who am outside the law, declare that there is nothing outside the law [che non c'è un fuori legge]."

The topology implicit in the paradox is worth reflecting upon, since the degree to which sovereignty marks the limit (in the double sense of end and principle) of the juridical order will become clear only once the structure of the paradox is grasped. Schmitt presents this structure as the structure of the exception (Ausnahme):

The exception is that which cannot be subsumed; it defies general codification, but it simultaneously reveals a specifically juridical formal element: the decision in absolute purity. The exception appears in its absolute form when it is a question of creating a situation in which juridical rules can be valid. Every general rule demands a regular, everyday frame of life to which it can be factually applied and which is submitted to its regulations. The rule requires a homogeneous medium. This factual regularity is not merely an "external presupposition" that the jurist can ignore; it belongs, rather, to the rule's immanent validity. There is no rule that is applicable to chaos. Order must be established for juridical order to make sense. A regular situation must be created, and sovereign is he who definitely decides if this situation is actually effective. All law is "situational law." The sovereign creates and guarantees the situation as a whole in its totality. He has the monopoly over the final decision. Therein consists the essence of State sovereignty, which must therefore be properly juridically defined not as the monopoly to sanction or to rule but as the monopoly to decide, where the word "monopoly" is used in a general sense that is still to be developed. The decision reveals the essence of State authority most clearly. Here the decision must be distinguished from the juridical regulation, and (to formulate it paradoxically) authority proves itself not to need law to create law. . . . The exception is more interesting than the regular case. The latter proves nothing; the exception proves everything. The exception does not only confirm the rule; the rule as such lives off the exception alone. A Protestant theologian who demonstrated the vital intensity of which theological reflection was still capable in the nineteenth century said: "The exception explains the general and itself. And when one really wants to study the general, one need only look around for a real exception. It brings everything to light more clearly than the general itself. After a while, one becomes disgusted with the endless talk about the general – there are exceptions. If they cannot be explained, then neither can the general be explained. Usually the difficulty is not noticed, since the general is thought about not with passion but only with comfortable superficiality. The exception, on the other hand, thinks the general with intense passion." (Politische Theologie, pp. 19-22)

It is not by chance that in defining the exception Schmitt refers to the work of a theologian (who is none other than Søren Kierkegaard). Giambattista Vico had, to be sure, affirmed the superiority of the exception, which he called "the ultimate configuration of facts," over positive law in a way which was not so dissimilar: "An esteemed jurist is, therefore, not someone who, with the help of a good memory, masters positive law [or the general complex of laws], but rather someone who, with sharp judgment, knows how to look into cases and see the ultimate circumstances of facts that merit equitable consideration and exceptions from general rules" (De antiquissima, chap. 2). Yet nowhere in the realm of the juridical sciences can one find a theory that grants such a high position to the exception. For what is at issue in the sovereign exception is, according to Schmitt, the very condition of possibility of juridical rule and, along with it, the very meaning of State authority. Through the state of exception, the sovereign "creates and guarantees the situation" that the law needs for its own validity. But what is this "situation," what is its structure, such that it consists in nothing other than the suspension of the rule?

The Vichian opposition between positive law (ius theticum) and exception well expresses the particular status of the exception. The exception is an element in law that transcends positive law in the form of its suspension. The exception is to positive law what negative theology is to positive theology. While the latter affirms and predicates determinate qualities of God, negative (or mystical) theology, with its "neither . . . nor . . . ," negates and suspends the attribution to God of any predicate whatsoever. Yet negative theology is not outside theology and can actually be shown to function as the principle grounding the possibility in general of anything like a theology. Only because it has been negatively presupposed as what subsists outside any possible predicate can divinity become the subject of a predication. Analogously, only because its validity is suspended in the state of exception can positive law define the normal case as the realm of its own validity.

1.2. The exception is a kind of exclusion. What is excluded from the general rule is an individual case. But the most proper characteristic of the exception is that what is excluded in it is not, on account of being excluded, absolutely without relation to the rule. On the contrary, what is excluded in the exception maintains itself in relation to the rule in the form of the rule's suspension. The rule applies to the exception in no longer applying, in withdrawing from it. The state of exception is thus not the chaos that precedes order but rather the situation that results from its suspension. In this sense, the exception is truly, according to its etymological root, taken outside (ex-capere), and not simply excluded.

It has often been observed that the juridico-political order has the structure of an inclusion of what is simultaneously pushed outside. Gilles Deleuze and Félix Guattari were thus able to write, "Sovereignty only rules over what it is capable of interiorizing" (Deleuze and Guattari, Mille plateaux, p. 5); and, concerning the "great confinement" described by Foucault in his Madness and Civilization, Maurice Blanchot spoke of society's attempt to "confine the outside" (enfermer le dehors), that is, to constitute it in an "interiority of expectation or of exception." Confronted with an excess, the system interiorizes what exceeds it through an interdiction and in this way "designates itself as exterior to itself" (L'entretien infini, p. 292). The exception that defines the structure of sovereignty is, however, even more complex. Here what is outside is included not simply by means of an interdiction or an internment, but rather by means of the suspension of the juridical order's validity – by letting the juridical order, that is, withdraw from the exception and abandon it. The exception does not subtract itself from the rule; rather, the rule, suspending itself, gives rise to the exception and, maintaining itself in relation to the exception, first constitutes itself as a rule. The particular "force" of law consists in this capacity of law to maintain itself in relation to an exteriority. We shall give the name relation of exception to the extreme form of relation by which something is included solely through its exclusion.

The situation created in the exception has the peculiar characteristic that it cannot be defined either as a situation of fact or as a situation of right, but instead institutes a paradoxical threshold of indistinction between the two. It is not a fact, since it is only created through the suspension of the rule. But for the same reason, it is not even a juridical case in point, even if it opens the possibility of the force of law. This is the ultimate meaning of the paradox that Schmitt formulates when he writes that the sovereign[s] decision "proves itself not to need law to create law." What is at issue in the sovereign exception is not so much the control or neutralization of an excess as the creation and definition of the very space in which the juridico-political order can have validity. In this sense, the sovereign exception is the fundamental localization (Ortung), which does not limit itself to distinguishing what is inside from what is outside but instead traces a threshold (the state of exception) between the two, on the basis of which outside and inside, the normal situation and chaos, enter into those complex topological relations that make the validity of the juridical order possible.

The "ordering of space" that is, according to Schmitt, constitutive of the sovereign nomos is therefore not only a "taking of land" (Landesnahme) – the determination of a juridical and a territorial ordering (of an Ordnung and an Ortung) – but above all a "taking of the outside," an exception (Ausnahme).
Case
Advantage 1
Ban doesn’t solve – 5 factors mean states still arms race.
Scharre 18 Paul Scharre, senior fellow and director of the technology and national security
program at the Center for a New American Security. "A Million Mistakes a Second." Foreign
Policy. 12 September 2018. https://foreignpolicy.com/2018/09/12/a-million-mistakes-a-second-
future-of-war/. [Premier]

Attempts at arms control go back to antiquity, from the Bible’s prohibition on wanton
environmental destruction in Deuteronomy to the Indian Laws of Manu that forbade barbed,
poisoned, or concealed weapons. In the intervening centuries, some efforts to ban or regulate
certain weapons have succeeded, such as chemical or biological weapons, blinding lasers, land
mines, cluster munitions, using the environment as a weapon, placing weapons in space, or
certain delivery mechanisms or deployment postures of nuclear weapons. Many other attempts
at arms control have failed, from the papal decrees denouncing the use of the crossbow in the
Middle Ages to 20th-century attempts to ban aerial attacks on cities, regulate submarine
warfare, or eliminate nuclear weapons. The United Nations began a series of meetings in 2014
to discuss the perils of autonomous weapons. But so far the progress has been far slower than
the pace of technological advances.

Despite that lack of success, a growing number of voices have begun calling for a ban on
autonomous weapons. Since 2013, 76 nongovernmental organizations across 32 countries have
joined a global Campaign to Stop Killer Robots. To date, nearly 4,000 artificial intelligence and
robotics researchers have signed an open letter calling for a ban. More than 25 national
governments have said they endorse a ban, although none of them are major military powers or
robotics developers. But such measures only tend to succeed when the weapons in question are
of marginal value, are widely seen as especially horrific or destabilizing, are possessed by only
a few actors, are clearly distinguished from other weapons, and can be easily inspected to
verify disarmament. None of these conditions applies to autonomous weapons.

Even if all countries agreed on the need to restrain this class of arms, the fear of what others
might be doing and the inability to verify disarmament could still spark an arms race. Less
ambitious regulations could fare better, such as a narrow ban on anti-personnel autonomous
weapons, a set of rules for interactions between autonomous weapons, or a broad principle of
human involvement in lethal force. While such modest efforts might mitigate some risks,
however, they would leave countries free to develop many types of autonomous weapons that
could still lead to widespread harm.
Advantage 2
LAWs save lives, de-escalate conflict, and prevent war.
Rabkin and Yoo 17 Jeremy Rabkin, professor of law at Antonin Scalia Law School at George
Mason University, and John Yoo, Emanuel S. Heller Professor of Law, Co-Faculty Director of the
Korea Law Center, and Director of the UC Berkeley Public Law & Policy Program. "‘Killer Robots’ Can Make War
Less Awful." The Wall Street Journal. 1 September 2017. https://www.wsj.com/articles/killer-
robots-can-make-war-less-awful-1504284282. [Premier]

Mr. Musk has established himself in recent years as the world’s most visible and outspoken critic
of developments in artificial intelligence, so his views on so-called “killer robots” are no surprise.
But he and his allies are too quick to paint dire scenarios, and they fail to acknowledge the
enormous potential of these weapons to defend the U.S. while saving lives and making war
both less destructive and less likely.

In a 2014 directive, the U.S. Defense Department defined an autonomous weapons system as
one that, “once activated, can select and engage targets without further intervention by a
human operator.” Examples in current use by the U.S. include small, ultralight air and ground
robots for conducting reconnaissance and surveillance on the battlefield and behind the lines,
antimissile and counter-battery artillery, and advanced cruise missiles that select targets and
evade defenses in real-time. The Pentagon is developing autonomous aerial drones that can
defeat enemy fighters and bomb targets; warships and submarines that can operate at sea for
months without any crew; and small, fast robot tanks that can swarm a target on the ground.

Critics of these technologies suggest that they are as revolutionary—and terrifying—as nuclear
weapons. But robotics and the computing revolution will have the opposite effect of nuclear
weapons. Rather than applying monstrous, indiscriminate force, they will bring more precision
and less destruction to the battlefield. The new generation of weapons will share many of the
same qualities that have made the remote-controlled Predator and Reaper drones so powerful
in finding and destroying specific targets.

The weapons are cost-effective too. Not only can the U.S. Air Force buy 20 Predators for roughly
the cost of a single F-35 fighter, it can also operate them at a far lower cost and keep them on
station for much longer. More important, robotic warriors—whether remote-controlled or
autonomous—can replace humans in many combat situations in the years ahead, not just in the
air but on the land and sea as well. Fewer American military personnel will have to put their
lives on the line in risky missions.

Critics are concerned about taking human beings out of the loop of decision-making in combat.
But direct human involvement doesn’t necessarily make warfare safer, more humane or less
incendiary. Human soldiers grow fatigued and become emotionally involved in conflict, which
can result in errors of judgment and the excessive use of force.

Deploying robot forces might even restrain countries from going to war. Historically, the U.S.
has deployed small contingents of military personnel to global hot spots to serve as
“tripwires”—initial sacrifices that, in the event of a sudden attack, would lead to
reinforcements and full military engagement. If machines were on the front lines for these initial
encounters, however, they could provide space—politically and emotionally—for calmer
deliberation and the negotiated settlement of disputes.

Critics also fear that autonomous weapons will lower moral and political accountability in
warfare. They imagine a world in which killer robots somehow fire themselves while presidents
and generals avoid responsibility. But even autonomous weapons have to be given targets,
missions and operating procedures, and these instructions will come from people. Human
beings will still be responsible.

Could things go terribly wrong? Could we end up in the world of “RoboCop” or “The
Terminator,” with deadly devices on a rampage?

It is impossible to rule out such dystopian scenarios, but we should have more confidence in our
ability to develop autonomous weapons within the traditional legal and political safeguards. We
regularly hold manufacturers, sellers and operators liable for automobiles
