Law and Ethics for Autonomous Weapon Systems

WHY A BAN WON'T WORK AND HOW THE LAWS OF WAR CAN. Public debate is heating up over the future development of autonomous weapon systems and the merit and risks associated with their use in war. Grounded in a realistic assessment of technology, this essay outlines a practical alternative with which to evaluate the use of autonomous weaponry that incorporates codes of conduct based on traditional legal and ethical principles governing weapons and warfare.
Published by: Hoover Institution on Apr 08, 2013
Anderson and Waxman 
Law and Ethics for Autonomous Weapon Systems 
Hoover Institution
 
 
Stanford University
A National Security and Law Essay

Law and Ethics for Autonomous Weapon Systems
Why a Ban Won't Work and How the Laws of War Can

By Kenneth Anderson and Matthew Waxman
Jean Perkins Task Force on National Security and Law

www.hoover.org/taskforces/national-security
Introduction
Public debate is heating up over the future development of autonomous weapon systems.[1] Some concerned critics portray that future, often invoking science-fiction imagery, as a plain choice between a world in which those systems are banned outright and a world of legal void and ethical collapse on the battlefield.[2] Yet an outright ban on autonomous weapon systems, even if it could be made effective, trades whatever risks autonomous weapon systems might pose in war for the real, if less visible, risk of failing to develop forms of automation that might make the use of force more precise and less harmful for civilians caught near it. Grounded in a more realistic assessment of technology (acknowledging what is known and what is yet unknown), as well as the interests of the many international and domestic actors involved, this paper outlines a practical alternative: the gradual evolution of codes of conduct based on traditional legal and ethical principles governing weapons and warfare.

A November 2012 U.S. Department of Defense policy directive on the topic defines an "autonomous weapon system" as one "that, once activated, can select and engage targets without further intervention by a human operator."[3] Some such systems already exist, in limited defensive contexts and for which human operators activate the system and can override its operation, such as the U.S. Patriot and Phalanx anti-missile systems and Israel's Iron Dome anti-missile system.[4] Others are reportedly close at hand, such as a lethal sentry robot designed in South Korea that might be used against hostile intruders near its border.[5] And many more lie ahead in a future that is less and less distant.[6]
 
Autonomous weapon systems are entering the battlefields of the future, but they are doing so one small automated step at a time. The steady march of automation (in different operational functions on battlefields that themselves vary greatly) is frankly inevitable, in part because it is not merely a feature of weapons technology, but of technology generally: anything from self-driving cars to high-frequency trading programs dealing in the financial markets in nanosecond intervals too swift for human intervention. Automation in weapons technology is also inevitable as a response to the increasing tempo of military operations and political pressures to protect not just one's own personnel but also civilian persons and property.

Just as increased automation in many fields is inevitable, automation in weapons will occur, and is occurring, incrementally. Autonomy in weapon systems might positively promote the aims of the laws of war in some technological configurations and operational circumstances, but not in others. While autonomy for weapon systems for its own sake is not a strategic goal of the U.S. military in weapons design, many factors will push automation in some circumstances into genuine weapon autonomy. In some operational circumstances, as both the decision-making power of machines and the tempo of operations potentially increase, the human role is likely to diminish slowly.[7] Though automation will be a general feature across battlefield environments and weapon systems, genuine autonomy in weapons will probably remain rare for the foreseeable future and driven by special factors such as reaction speeds and the tempo of particular kinds of operations.

The combination of inevitable and incremental development of automated systems to the point, in some cases, of genuine weapon autonomy raises not only complex strategic and operational questions but also profound legal and ethical ones. Advances in automation toward autonomy raise possibilities of tradeoffs between substantial gains in the ability to make war less destructive and harmful, primarily through the benefits of greater automated weapon precision, on the one hand, and significant dangers that military force will be used more destructively and with less ethical virtue on the other. Advancing automation raises cautious hopes among many, but also profound fears among some, about the future of war.

Highlighting that the incremental automation of some weapon systems in some forms is inevitable is not a veiled threat in the guise of a prediction. Nor is it meant to suggest that the path of these technologies is beyond rationally ethical human control. On the contrary, we believe that sensible regulation of these systems as they emerge is both possible and desirable. But regulation has to emerge along with the technologies themselves, and against the backdrop of a world that will likely come to adopt, over coming decades, technologies of autonomy for self-driving vehicles, for advanced nursing or elder-care robots,
 
or any number o other technologies that evolve rom being increasinglyautomated to perorming some unctions with genuine machine autonomy.With many o these technologies, however, the machine will take actions withpotentially lethal consequences
and it will happen largely because peopleconclude over successive decades that machines can sometimes do certaintasks better than humans can.Recognizing the incremental evolution o these technologies is key to addressingthe legal and ethical dilemmas associated with their inevitability. This isparticularly so or the United States, because it is a leader both in developingand deploying new weapon technologies in ways visible to the world, and inormulating and conducting the legal, policy, and ethical processes o ormalweapons reviews, as required by the laws o armed confict. The certainyet gradual development and deployment o these systems, as well as thehumanitarian advantages that may be created by the precision o some systems,make some proposed responses unworkable as well as normatively dubious,indeed wrong.
8
A sweeping international ban treaty proposed by some advocacygroups alls into that category.The United States and its partners have grave interests
legal, moral, andstrategic
in developing, simultaneously with new automated and autonomousweapons, a broadly shared normative ramework and expectations or how thesesystems must perorm to be lawul. They have an interest in discussions andexchanges o views and practices with others in the world who are developingand deploying these weapons. Those interests make it imperative, though, thatthe United States and its partners understand that shared norms only comeabout with some shared inormation about each party’s own view o principles,policies, and practices regarding these weapons. The United States particularlymust thereore resist its own natural impulses toward secrecy and reticencewith respect to military technologies, at least where easible. U.S. interestsin technological and military secrecy must be balanced here against interests inshaping the normative terrain
the contours o how international law should beunderstood, interpreted, and applied to these new weapons, as well as inormalinternational expectations about appropriate technological design
on which itand others will operate militarily as automation evolves.Just as development o autonomous weapon systems will be incremental, so toowill development o norms about acceptable systems and uses. The United Statesand its partners must act, however, beore international expectations aboutthese technologies harden around either o two extreme alternatives: imposingunrealistic, ineective or dangerous bans based on sci- scenarios o killerrobots rather than realistic understandings o the new technologies and theiruses, or proceeding with ew or no constraints at all, which might well result in
