
INDIAN YEARBOOK OF INTERNATIONAL LAW AND POLICY

2010

POST-HUMAN HUMANITARIAN LAW:


THE LAW OF WAR IN THE AGE OF ROBOTIC WEAPONS

Vik Kanwar*

REVIEW ESSAY

P.W. SINGER, WIRED FOR WAR: THE ROBOTICS REVOLUTION AND CONFLICT IN THE 21ST CENTURY,
PENGUIN PRESS (2009)

RONALD ARKIN, GOVERNING LETHAL BEHAVIOR IN AUTONOMOUS ROBOTS, CHAPMAN & HALL (2009)

WILLIAM H. BOOTHBY, WEAPONS AND THE LAW OF ARMED CONFLICT, OXFORD UNIVERSITY PRESS (2009)

ARMIN KRISHNAN, KILLER ROBOTS: THE LEGALITY AND ETHICALITY OF AUTONOMOUS WEAPONS,
ASHGATE PRESS (2009)

I. INTRODUCTION

The literature on robotic warfare has grown substantially over the past year as the use of
"warbots" (also called robotic weapons, drones, unmanned combat vehicles [UCVs], or
unmanned aerial vehicles [UAVs]) has increased and become visible in combat zones including
Iraq, Pakistan, and Afghanistan.1 The regulation of these weapons—once a topic for obscure
academic theses2—has become a subject of mainstream media attention.3 Among four titles
*
Assistant Professor, Jindal Global Law School (JGLS). O.P. Jindal Global University, Sonipat, Haryana, National
Capital Region of Delhi, India. Assistant Director, Centre on Public Law and Jurisprudence (CPLJ). LL.M. New
York University School of Law (2001); J.D., Northeastern University School of Law (2000). The author is a
member of the New York Bar, and has served as an expert-consultant to the Control Arms campaign and to the
Program on Humanitarian Law and Policy Research at Harvard University. The author would like to thank his
colleagues Professors Priya S. Gupta and Prabhakar Singh (JGLS) and research assistants Gaurav Mukherjee
(NALSAR) and Deepaloke Chatterjee (NUJS) for valuable comments.
1
The United States has been the foremost user of these weapons. For the U.S. government's defense of the use of
these weapons under IHL and the law of self-defense, see Harold Hongju Koh, Legal Adviser, U.S. Department of
State, Remarks: The Obama Administration and International Law, Annual Meeting of the American Society of
International Law Washington, DC (March 25, 2010). Available at:
http://www.state.gov/s/l/releases/remarks/139119.htm. (Last checked April 15, 2010).
2
Earlier works on this topic have been largely confined to specialty journals and unpublished theses: See John J.
Klein, The Problematic Nexus: Where Unmanned Combat Air Vehicles and the Law of Armed Conflict Meet, Air
and Space Power Journal, July 2003. Erin A. McDaniel, Robot Wars: Legal and Ethical Dilemmas of Using
Unmanned Robotic Systems in 21st Century Warfare and Beyond, M.M.A.S. Thesis (Faculty of the U.S. Army
Command and General Staff College, Fort Leavenworth, Kansas), 2008.
3
See, e.g., John Barry and Evan Thomas, Military: The UAV Revolution -- Up in the Sky, An Unblinking Eye,
NEWSWEEK, June 9, 2008. BBC News, US warned on deadly drone attacks, October 28, 2009. Peter Bergen and
Katherine Tiedemann, Revenge of the Drones: An Analysis of Drone Strikes in Pakistan, (New America Foundation)

tracking these developments over the past year, the most popular and influential has been P.W.
Singer's book Wired for War: The Robotics Revolution and Conflict in the 21st Century.4
Singer's book limits explicit discussion of International Humanitarian Law (IHL) issues to a
single short chapter, but this gap can be filled by recent academic works on weapons law. The
application of IHL to robotic weapons is discussed generally in William H. Boothby's Weapons
and the Law of Armed Conflict5 and in more detail in Armin Krishnan's Killer Robots: Legality
and Ethicality of Autonomous Weapons (2009) and Ronald Arkin's Governing Lethal Behavior
in Autonomous Robots (2009).

Taken together, the four books highlight the tensions between autonomy and accountability in
robotic warfare and suggest an original account of fundamental changes taking place in the field
of IHL. This Review Essay argues that from the point of view of IHL the concern is not the
introduction of robots into the battlefield, but the gradual removal of humans. In this way the
issue of weapon autonomy marks a paradigmatic shift from the so-called "humanization" of
IHL.6 "Humanization" already conflates two senses of "humanity" noted by Henri Meyrowitz:
(1) humanity understood as the "defining characteristic of the human race" (Menschheit), and
(2) humanity understood as "a feeling of compassion towards other human beings"
(Menschlichkeit), expressed through human rights and humane treatment. In humanitarian law,
humanity as Menschheit is safeguarded through humanity as Menschlichkeit.7 If the role of the human
combatant recedes, is the era of the "humanization" of IHL
effectively over? Which, if any, of these senses of humanity will persist when robots take over combat functions?
What will IHL accomplish in a post-human context?

II. WIRED FOR WAR

In Wired for War, P.W. Singer describes how robotic weapons have been developed, anticipated
or reflected in the realms of popular culture, ethics, science fiction, technology, futurism,
military strategy, economics, politics, and law. Unlike his earlier book Corporate Warriors, an
academic study of private military companies, Wired for War is interview-based and written in a
style accessible to a non-academic audience. But in keeping with his previous work, Singer gives
a balanced treatment of controversial developments. He manages to avoid raising alarm at the
mere fact of professional soldiers being supplanted by proxies (robots or contractors), though he
acknowledges popular anxieties of "losing control." While owing his enthusiasm for the topic to

October 19, 2009. EJIL: Talk! (Blog), The United States' Use of Drones in Pakistan, September 2009. Jane Mayer,
The Predator War: What are the risks of the C.I.A.'s covert drone program? THE NEW YORKER, October 26, 2009.
4
P.W. SINGER, WIRED FOR WAR: THE ROBOTICS REVOLUTION AND CONFLICT IN THE 21ST CENTURY, Penguin Press
(2009)
5
WILLIAM H. BOOTHBY, WEAPONS AND THE LAW OF ARMED CONFLICT (2009)
6
Theodor Meron, The Humanization of Humanitarian Law, 94 AM. J. INT'L L. 239 (2000).
7
RENE PROVOST, INTERNATIONAL HUMAN RIGHTS AND HUMANITARIAN LAW, Cambridge University Press (2002) at
5-6; quoting Henri Meyrowitz, 'Réflexions sur le fondement du droit de la guerre', in CHRISTOPHE SWINARSKI ED.,
STUDIES AND ESSAYS ON INTERNATIONAL HUMANITARIAN LAW AND RED CROSS PRINCIPLES IN HONOUR OF JEAN
PICTET (GENEVA/THE HAGUE: ICRC/NIJHOFF, 1984) at 419, 426-31.

fictional precursors, Singer is measured and careful to describe the terrain of robotics as it
actually exists, drawing his informants from industry, the military, academia, and politics. In the
journey from science fiction space operas to the facts on the ground, the reader is reminded, for
example, that the very real presence of warbots on the battlefield does not promise the
excitement of Hollywood versions (e.g., Star Wars, the Terminator, or Iron Man). Instead, they
are more often relegated to work that is dull and dirty, and not only dangerous. Seen up close,
most robots perform repetitive tasks, earning the name "drone"; indeed "drudgery" was
the original meaning of "robot." Yet the most recent phase of robotic warfare has added the
dimension of lethality, placing robotic technology squarely in the category of weapons (means
and methods of warfare) and raising the possibility that IHL provides an appropriate framework
to regulate it.

In the first half of the book Singer walks us through the science, production, economics, and
politics of this technology, and we become aware of the state of its development, its current
capabilities, and potential uses. Throughout these encounters, Singer and his informants freely
speculate on advantages and drawbacks of the technology. Can they work? Can they be
controlled? If they improve the efficiency of killing, is it a good thing? What if they make killing
enemy targets more reliable, while saving civilian lives? Would such advantage to
technologically superior forces be considered a violation of honorable combat in the sense that
predates IHL? Will the technological benefits extend to other kinds of humanitarian activity,
such as increasing the protection of civilians by robots sweeping an area for landmines, or
performing defensive functions? In the second half of the book the panoply of voices give way to
Singer's own and the questions begin to gather around a steady theme: the increasing
"autonomy" of robots, or what Singer calls "taking humans out of the loop." Automation of
warfare does not yet mean complete "autonomy" of robotics (we will discuss this distinction in
detail below), but the concern is that life-and-death decisions will be made on the battlefield
without direct input from humans. What is referred to as "autonomy" is not artificial intelligence
capable of supplanting human responsibility; instead it is an increase in time and distance
between human actions and their results. In particular, there is an anthropomorphic fallacy,
familiar from thought experiments and speculative fiction, in assuming that agency or responsibility
should be distributed as though robots were combatants rather than weapons within the meaning of
IHL. For now, robots must be treated as means and methods of warfare, governed by principles
roughly equivalent to those being articulated in instruments regulating landmines. Singer sometimes
confuses matters by taking an anthropomorphic view of autonomy, treating warbots as the most irregular
of "combatants." But rather than muddling this distinction, or assimilating warbots to artificial
persons (states, corporations, and other actors), we can see that they belong more clearly to the category of
weapons, whose use is considered an extension of human action, as "means and methods" of
combat. Just as a knife extends the reach and lethality of a hand, sophisticated weapons like
warbots can be considered extensions of human action, and the primary difference is the increase
in time and distance intervening between the action and result.8 Technology has already put

8
MARSHALL MCLUHAN, UNDERSTANDING MEDIA: THE EXTENSIONS OF MAN. New York: McGraw-Hill, 1964, at
152. (Cultural theorist McLuhan argues that technology in general is a prosthetic extension of the human body: "The
tool extends the fist, the nails, the teeth, the arm...") This is true of weapons in particular, though agency is obscured
with the loss of proximity.

soldiers at increasing spatial and temporal distances from the killing they cause, increasing safety
of one side but also increasing the asymmetry between belligerents.

Singer is right to focus on the controversies over the autonomy of robots, though he does not pursue all
of the ways increasing autonomy might trouble the application of IHL to weapons systems. On
one hand, the protection of one's own soldiers is always a goal for any military, and the
perception or even the reality of "safe distance" from fighting can make war more palatable to
the public. Yet from the point of view of warriors' codes, the removal of one set of combatants
from the battlefield will seem less than honorable. From the point of view of the international
lawyer, the concern is not the asymmetry of protection but that one side might be shielded
from legal consequences. For a series of partially coherent reasons, the "human element" is seen
as "indispensable": for providing judgment and restraint, and ultimately for taking responsibility for
decisions. The first two of these suggest that there is a risk of violating IHL. Singer rightly
identifies the principles of discrimination and proportionality (connected to the prohibition of
unnecessary suffering). More to the point, to increase autonomy and remove the human element
implies two risks: (1) with the loss of judgment and restraint, a risk of indiscriminate attacks
endangering lives; and (2) a risk of attenuating legal accountability in combat, endangering
the normative framework of IHL. Even if robots prove highly accurate in their targeting, there remains
the meta-legal policy concern that IHL will become inapplicable or unenforceable.

Singer's conclusion is that technological development is insensitive to legal norms, and weapons
manufacturers are not thinking about legal regulation. Yet research and development in these
technologies will likely continue to grow not only despite, but because of, the demand for
increased conformity with proportionality and discrimination norms. Many of the controversies
seem to require legal clarification, and these are becoming relevant as the reality of "Predator
drones" makes headlines daily. The question of precision, targeting the right kill and avoiding
the wrong kill, is at the center of the controversy over these weapons. If UAVs are successful,
it is because a Predator, unlike a manned aircraft, can hover above a target for hours before and after
a strike, and the argument is made that more precise targeting means effective
killing of combatants and lower collateral damage. In any case, lawyers should be anticipating
the effect of these technologies on IHL. Thus, we must turn away from Singer's otherwise
satisfying book and towards frameworks that consider the legal and technological implications of
the next generation of (increasingly) "autonomous" warbots.

III. AUTONOMY AND ACCOUNTABILITY

Autonomy must be described both in technological and legal terms. A machine needs at least
some minimal autonomy to be called a robot. In his book Killer Robots, Armin Krishnan defines
a robot as a machine that is programmable, that can sense its environment and that can
manipulate its environment, and autonomy as the relative capability of such a machine for
unsupervised operation. The less human supervision is needed, the greater the autonomy of the
machine. Krishnan reassures us that there are at present no robotic weapons that deserve the
designation "autonomous." He notes that "Offensive robotic weapons such as
Predator or cruise missiles… are currently tele-operated or programmed to attack a certain
area/target, but… have the potential of becoming completely autonomous relatively soon."9
9
ARMIN KRISHNAN, KILLER ROBOTS: LEGALITY AND ETHICALITY OF AUTONOMOUS WEAPONS, Ashgate Press

There remains a crucial difference between remote platform systems (which exist) and
autonomous robots (which do not). Krishnan offers a continuum between (1) robotic, (2)
unmanned, and (3) autonomous weapons. A robotic weapon is defined as: "A computerized
weapon equipped with sensors, which may be tele-operated or autonomous. For example, smart
munitions and cruise missiles are 'robotic', as they have sensors that guide them to their
target…" An unmanned system is defined as: "a robotic sensor or weapons platform, which is
reusable and thus not destroyed through its use. An unmanned aerial vehicle (UAV) would count
as an unmanned system, but a cruise missile would not." An autonomous weapon can be defined
as "a computerized weapon that does not require any human input for carrying out its core
mission," possessing "the capability of a weapon to independently identify targets and to trigger itself."
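Krishnan's three-way continuum can be restated as a simple decision procedure. The following Python sketch is this reviewer's illustration only; the attribute names and predicates are shorthand for Krishnan's definitions, not his own terminology, and real classification would of course turn on contested factual judgments.

```python
# Illustrative sketch of Krishnan's robotic / unmanned / autonomous
# continuum as a decision procedure. Attribute names are hypothetical
# shorthand for the definitions quoted above.

from dataclasses import dataclass

@dataclass
class WeaponSystem:
    name: str
    has_sensors: bool        # computerized and sensor-equipped ("robotic")
    reusable: bool           # survives its own use ("unmanned system")
    needs_human_input: bool  # requires a human for its core mission

def classify(w: WeaponSystem) -> list[str]:
    """Return every label on Krishnan's continuum that applies."""
    labels = []
    if w.has_sensors:
        labels.append("robotic")
    if w.has_sensors and w.reusable:
        labels.append("unmanned system")
    if w.has_sensors and not w.needs_human_input:
        labels.append("autonomous")
    return labels

# A cruise missile is robotic but not an unmanned system (not reusable);
# a tele-operated Predator is robotic and unmanned but not autonomous.
cruise = WeaponSystem("cruise missile", True, False, True)
predator = WeaponSystem("Predator UAV", True, True, True)
```

On this sketch, the "autonomous" label attaches only when the human-input predicate fails, which matches Krishnan's point that no currently fielded system earns the designation.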

In legal terms, autonomy feeds into the applicability of the core concepts of weapons law. While
a full-length doctrinal treatment of warbots has yet to appear, William Boothby's Weapons
and the Law of Armed Conflict is a handy general guide to the relevant principles of IHL. Boothby defines a
"weapon" as an "offensive capability," "means" as the kinds of weapons themselves, and "methods" as the manner
in which they are deployed.10 He notes as a historical matter that the introduction of novel
weapons has always been greeted with suspicion.

At each escalation of weapon capabilities, from fists and stones to nuclear and robotic
weapons, the warriors accustomed to conducting hostilities with existing arms must have regarded
the first appearance of more advanced technologies as violating the laws of war.11

Warbots fit this pattern, but as with many weapons in the past, novelty cannot be equated with
illegality. Increasingly it is no longer the weapon itself that is focused upon, but the method of its
deployment. While the chivalrous ideal of "the right of Belligerent Parties to choose methods or
means of warfare," like duelists agreeing upon "pistols at dawn," is no longer unlimited, neither
is IHL any longer focused on the banning of weapons. Instead, limitations are placed on the
manner of a weapon's use.

The use of any weapon is subject to the general rules and principles of the customary and treaty
law of international armed conflict (in particular the principle of distinction and the prohibition
of unnecessary suffering), as well as to any other treaty law applicable to Contracting Parties.
Boothby addresses Unmanned Combat Vehicles in the space of only three pages,12 but these are
sufficient to grasp the application of weapons law to existing unmanned systems, and can be
extended to newer versions of robotic systems. The key issues concerning the deployment of
unmanned systems are the following:

(i) distinction (between combatants and non-combatants and between military objectives
and civilian objects);

(2009) at 1-2. Krishnan notes that "Killer robots in the sense of lethal autonomous military robots do not exist.
The military robots that do exist are largely remote-controlled machines, which in rare cases carry weapons."
10
BOOTHBY, supra note 5, at 4.
11
Id. at 1.
12
Id. at 230-233.

(ii) the prohibition on causing unnecessary suffering to combatants;


(iii) proportionality.

These principles apply to any weapon, and were reaffirmed by the International Court of Justice in its 1996
Advisory Opinion on the Legality of the Threat or Use of Nuclear Weapons.13

It is worth noting that the structure of modern IHL is anthropocentric: it places human action at
the center, and therefore in weapons law an enquiry regarding conduct precedes an enquiry into the nature of the
weapon. Instead of prohibiting weapons "of a nature," it prohibits the conduct of employing
weapons "of a nature." The basic principle of distinction between civilians and combatants and
between civilian objects and military objectives prohibits (1) the conduct of employing weapons that
(a) cannot be directed at a specific lawful target and (b) are found as a result to be "of a nature"
to strike lawful targets and civilians or civilian objects without distinction; or (2) the effects of
which cannot be (a) limited as required by the law of international armed conflict and which (b)
therefore are of a nature to strike lawful targets and civilians or civilian objects without
distinction.14 The same is true of the prohibition of unnecessary suffering or superfluous injury: what
are prohibited are operations which employ weapons that are (a) calculated to cause
unnecessary suffering or superfluous injury to combatants and (b) therefore of a nature to do
so. Though it might seem logical to begin with the nature of a weapon and find various uses of it
prohibited, this is not what is suggested in the current phase of IHL. Instead, findings of
conduct violating the principles of discrimination, proportionality, and the prohibition on
unnecessary suffering or superfluous injury will lead in some instances to conclusions about the
nature of the weapons. There is no automatic prohibition on robotic weapons. One must first find
instances where these weapons cannot be properly targeted or cause excessive injury in order to
see how intrinsic these characteristics are to the technology itself.

IV. RULES AND TOOLS FOR A "POST-HUMAN" ERA

Given the speed of technological change ("Offensive robotic weapons such as Predator or cruise
missiles that are currently tele-operated or programmed to attack a certain area/target, but that
have the potential of becoming completely autonomous relatively soon"), anticipating the advent
of autonomous weapons might not be a bad idea, and interventions might be sought in the
engineering of norms as well as of technology. The greatest obstacles to automated weapons on the
battlefield are likely to be legal and ethical concerns. Forward-thinking scholars have taken up
the challenge of bringing law and technology into engagement.

13
International Court of Justice, Advisory Opinion on the Legality of the Threat or Use of Nuclear
Weapons (1996), paras. 78 & 88.
14
The St. Petersburg Declaration formulated, both explicitly and implicitly, the principles of distinction, military
necessity, and the prevention of unnecessary suffering, understood as violence in excess of what is needed to
disable an enemy: once a combatant is disabled, captured, or has left the fight, it is not necessary to inflict further
violence upon him. The Additional Protocols of 1977 reaffirmed and elaborated on these principles, in particular
that of distinction: "(...) the Parties to the conflict shall at all times distinguish between the civilian population and
combatants and between civilian objects and military objectives and accordingly shall direct their operations only
against military objectives." (Art. 48, Protocol I; see also Art. 13, Protocol II).

Rules and weapons can each be re-tooled to accommodate the other. Two authors have taken up this challenge from opposite
vantage points: Armin Krishnan explores the creation of a legal regime to respond to the
technology; Ronald Arkin discusses the creation of technology that incorporates the legal rules.15

Krishnan turns away from the current state of IHL, which gives weak guidance and insufficient
constraint on these weapons, to outline possible parameters for future arms control agreements
for limiting the use of autonomous weapons. If treaties were to be developed, the most pressing
tasks would be to define autonomous weapons under international law and to agree on permissible roles
and functions for these weapons. First, the scope of the treaty would rely upon an agreement on a
definition of a robotic or "autonomous" weapon. After arriving at a definition precise
enough to exclude irrelevant technologies yet capacious enough to include future
developments, the real challenge will be the content of regulation. Under which circumstances
should the armed forces be allowed to use them? For example, states might bind themselves to
use autonomous weapons only for defensive functions, such as guarding military facilities and
no-fly zones. Another approach to arms control agreements has been to limit the number of
weapons, a step that states might take to prevent letting loose too many uncontrolled or poorly
monitored weapons in the world. An analogy could be made to the continued development of
nuclear weapons even as there has been effective international opinion and pressure keeping use
in check since the end of the Second World War. A third approach, not mentioned by Krishnan,
would be to use autonomous weapons only for difficult humanitarian missions not involving the
use of force, such as the clearing of landmines. Any of these agreements would work only if
states could agree upon definitions and classifications, and if those definitions kept pace
with changes in technology. At the same time, for any kind of international arms control treaty to
work, the feasibility of monitoring compliance has to be assessed in advance. Principles
constraining the use of such weapons (as with nuclear, biological, and chemical
weapons), restricting them to defensive roles such as guard duty in closed-off areas and clearly limiting their numbers,
would likely be more successful than an outright ban (as in the case of landmines).

Perhaps the most audacious as well as the most optimistic contribution to this literature is Ronald
Arkin's Governing Lethal Behavior in Autonomous Robots, which actually celebrates the
possibility of "post-human" warfare guided by programmed ethics. Rather than arguing for
non-proliferation, Arkin is optimistic about the compatibility of autonomous weapons with
international law and the conventions of war, proposing robots that would not only conform to
international law but "outperform human soldiers in their capacity to use lethal force." His design
produces an "artificial conscience" based in part on the rules of IHL.16 Arkin first examines why
soldiers routinely fail to make ethical decisions in battle and thereby often fail to follow IHL. He puts forth
a positive agenda for designing warbots: autonomous systems programmed with the capacity for
the ethical use of lethal force, incorporating relevant IHL and weapons law. Arkin's utopianism
remains a step ahead of the futurism of the other authors reviewed above. Questions remain. In
the event of failure, should IHL follow the designer or the unit that deployed the weapons?
Arkin avoids the implication that, whatever ethical imperatives are programmed, a weapon

15
KRISHNAN, supra note 9.
16
RONALD ARKIN, GOVERNING LETHAL BEHAVIOR IN AUTONOMOUS ROBOTS, Chapman & Hall (2009). Arkin
provides examples that illustrate autonomous systems' ethical use of force (including relevant laws of
war), and also examines why soldiers fail to make ethical decisions in battle.

whose use or misuse cannot be attributed to any human agent is dangerous from the outset. This
brings us back to Singer's intuition that putting humans at a distance from the battlefield
endangers both compliance with the rule system and the applicability of the rule system itself.
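Arkin's "artificial conscience" can be caricatured, very roughly, as a constraint-checking layer interposed between target identification and weapon release. The following Python sketch is this reviewer's illustration of that idea under stated assumptions; every name and predicate in it is hypothetical, and Arkin's actual architecture is far more elaborate than a single gate function.

```python
# A deliberately simplified illustration of an "ethical governor":
# a gate that suppresses a lethal action unless IHL-derived
# constraints are satisfied. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Engagement:
    target_is_military_objective: bool   # principle of distinction
    expected_civilian_harm: float        # proportionality input
    expected_military_advantage: float   # proportionality input
    weapon_causes_superfluous_injury: bool

def ethical_governor(e: Engagement) -> bool:
    """Permit weapon release only if the encoded IHL constraints hold."""
    if not e.target_is_military_objective:
        return False  # indiscriminate attack forbidden
    if e.weapon_causes_superfluous_injury:
        return False  # prohibition of unnecessary suffering
    # Crude proportionality test: expected civilian harm must not be
    # excessive relative to the anticipated military advantage.
    return e.expected_civilian_harm <= e.expected_military_advantage

# A strike on a civilian object is suppressed regardless of advantage.
veto = ethical_governor(Engagement(False, 0.0, 10.0, False))
```

The sketch also makes Singer's worry concrete: if the governor fails, responsibility can only be traced back through its programming and deployment, which is precisely the attenuation of human agency at issue.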

V. CONCLUSION: A POST-SCRIPT TO THE HUMANIZATION OF IHL?

At the broadest level, as each of these books suggests, the introduction of sophisticated
robotic weapons into combat brings changes in the conduct of armed conflict profound enough to
pose a challenge to existing IHL. There are at least two ways to view the paradigmatic
challenges pulling IHL away from humanization and toward the post-human. The first response
is the predictable one: the expansion of the law to incorporate anything arguably outside it. The
second is a more unsettling de-centering of the discourse of "humanity," which has had so many
meanings. The first suggestion would be to look beyond the weapon to find the human agent
responsible. Here the question of the applicability of IHL must be revisited in each instance. Where
human agency becomes so attenuated as to seem to disappear from view, attribution must be
established under complex conditions in which deployment must be traced and programming stands in
for command. Already, uniformed soldiers program and troubleshoot missile guidance and military
communication systems, working side by side with civilian computer technicians thousands of miles
from the battlefield; this extension of the law is already at work, and may come to reach those
civilians as well. And with modern technologies such as long-range missiles and unmanned
missile-bearing planes, the focus on a well-articulated weapons law is useful to IHL, which has
always struggled to keep pace with technological innovations in the means and methods of
combat. Optimistically, technological
advance can support humanitarian standards. Just as IHL must hold an equipoise between humanitarian
ethos and effective killing, the weapons created during the humanization of IHL will
inevitably be described both in terms of their life-saving and their life-ending capacities.
One can imagine a sharpening of humanity as an object of protection rather than as the initiator
of armed conflict. A second, more deconstructive view is that robots reveal the inhumanity
already present in the law of war. IHL clings to humanity in the first of Meyrowitz's senses in order
to inculcate the second. According to one of Singer's informants, "To me, the robot is our answer to the suicide bomber."
The analogy between a robot and a suicide bomber is a chilling portent of post-human warfare.
What does the robot have in common with the suicide bomber? Both are at the extremities of
war: present in combat, lethal, and neither entitled to the protections of IHL. In short, they are
objects of war not contemplated by humanitarian law, and they place the discourse of "humanity" in
question. They are post-humanitarian concerns and perhaps, in a range of ways, "post-human."
It is possible that in the near future, in light of robotic weapons making decisions
on the battlefield and suicide bombers converting bodies into means and methods, the notion
of "humanity" itself will require rethinking in its connection to IHL.
