Special edition on unmanned vehicles and the law

JOURNAL OF LAW, INFORMATION AND SCIENCE
Vol 21(2) 2011/2012
The Laws of Man Over Vehicles Unmanned
This volume may be cited as (2011/2012) 21(2) JLIS
Published in 2011
ISSN 0729-1485

Articles, books for review, subscriptions outside North America and all enquiries should be addressed to: The Editor, Journal of Law, Information and Science, c/- The Law School, University of Tasmania, Private Bag 89, Hobart, Tasmania 7001, Australia.

Subscriptions within North America should be addressed to: Wm W Gaunt & Sons Inc, Gaunt Building, 3011 Gulf Drive, Holmes Beach, Florida 34217-2199, USA.

Copyright © 2012 University of Tasmania. All rights reserved. Subject to the law of copyright, no part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means (electronic, mechanical, photocopying, recording or otherwise) without the permission of the owner of the copyright. All enquiries seeking permission to reproduce any part of this publication should be addressed in the first instance to The Editor, Journal of Law, Information and Science, c/- Law School, University of Tasmania, Private Bag 89, Hobart, Tasmania 7001, Australia.

Published by: Law School, University of Tasmania, Private Bag 89, Hobart, Tasmania 7001, Australia.
Printed by: UniPrint, University of Tasmania, Private Bag 15, Hobart, Tasmania 7001, Australia.

Editorial Board
Chairman: The Hon M D Kirby AC CMG
Editor: Dr Brendan Gogarty, Law School, University of Tasmania
Associate Editor: Professor Dianne Nicol, Law School, University of Tasmania
Managing Editor: Bruce Newey, LLB (Hons) LLM

Members
Professor E Akindemowo, Professor of Law, Thomas Jefferson School of Law
Professor J Mattick, Institute for Molecular Bioscience, University of Queensland
Professor A Campbell, Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore
Dr J Kaye, Centre for Health, Law and Emerging Technologies, University of Oxford
Associate Professor J Forder, Faculty of Law, Bond University
Professor A Christie, Davies Collison Cave Professor of Intellectual Property; Director, Intellectual Property Research Institute of Australia, University of Melbourne
Professor J Bing, Norwegian Research Centre for Computers & Law, University of Oslo
Dr M Rimmer, Faculty of Law, Australian National University
Professor D Vaver, St Peter's College, University of Oxford
Hon Magistrate Dr Roger Alasdair Brown, Local Court of New South Wales
Professor G Cho, Geographic Information Systems & the Law, University of Canberra
The Hon Justice P Heerey, Judge, Federal Court of Australia, Melbourne
Professor G Laurie, School of Law, University of Edinburgh
Dr S Gibbons, The Ethox Centre, University of Oxford
Dr Melissa DeZwart, University of South Australia
Professor L Edwards, University of Sheffield
Professor M Purvis, Professor of Information Science, University of Otago
Professor K Bowrey, Faculty of Law, University of New South Wales
Associate Professor D Mendelson, School of Law, Deakin University
Professor W van Caenegem, Faculty of Law, Bond University
Dr Chris Dent, Intellectual Property Research Institute of Australia, University of Melbourne
Professor E Clark, Griffith Business School, Griffith University
Professor G Greenleaf, Faculty of Law, University of NSW
Associate Professor Yee Fen Lim, Nanyang Business School, Nanyang Technological University
Associate Professor A Kenyon, Faculty of Law, University of Melbourne
Professor D Hunter, New York Law School
Associate Professor Benjamin Goold, Faculty of Law, University of British Columbia

Inviting Contributors: We welcome scholarly and research articles of any length (preferably 5000–10,000 words) on topics related to law, information and science, and papers giving information on major new research projects, new technologies and new applications in this area. Articles and papers should be accompanied by a 200–500 word abstract and brief biographical note. Contributions may be submitted in the form of email attachments to: submissions@jlisjournal.org, preferably in Microsoft Word. All articles and papers must be submitted in English and should be accompanied by an author agreement, which can be found on the Journal of Law, Information and Science website: <http://www.jlisjournal.org/contributors.html>.

All papers published as scholarly or research articles will be refereed by two referees pursuant to a 'double-blind' refereeing policy, but papers describing new research projects, technologies and applications will not be refereed except to the extent necessary to ensure that they provide an accurate description of the service or resource which they are describing. The editors retain the right to decide whether any contribution will be considered for publication as a scholarly or research article or as a descriptive paper.

The Journal is published in conformity with the Australian Guide to Legal Citation. For further information please see the Journal's website: <http://www.jlisjournal.org/contributors.html>. It would greatly facilitate the publishing process if papers are submitted in this reference style.

Suggestions Welcome: The Editorial Board welcomes any suggestions readers may have of ways the Journal may improve. Simply address your comments to the Editor, Journal of Law, Information and Science, Law School, University of Tasmania, Private Bag 89, Hobart, Tasmania 7001, Australia or email: editor@jlisjournal.org.
Information for Subscribers: Cost of the Journal is $44.00 (including GST) per issue for subscribers resident in Australia and $50.00 (Australian) for subscribers resident overseas (delivered economy airmail for overseas subscribers); (two issues per year). Readers can also purchase a 'one-off' copy of a particular issue. You may either send a cheque with your subscription notice, or we will invoice you when you receive your first issue. Intending subscribers should write, or send a fax or email message to: Publications Assistant, Faculty of Law, University of Tasmania, Private Bag 89, Hobart, Tasmania, Australia, 7001. Phone: (03) 6226 7552; Fax: (03) 6226 7623 Email: Law.Publications@utas.edu.au Journal Homepage: http://www.jlisjournal.org/

JOURNAL OF LAW, INFORMATION AND SCIENCE
Published by the Faculty of Law, University of Tasmania

Vol 21(2) 2011/2012

CONTENTS

Editorial
BRENDAN GOGARTY ... i

Précis
Unmanned Vehicles: A (Rebooted) History, Background and Current State of the Art
BRENDAN GOGARTY AND ISABEL ROBINSON ... 1

Expert Commentaries

Military
Lethal Robotic Technologies: The Implications for Human Rights and International Humanitarian Law
PHILIP ALSTON ... 35
UVs, Network-centric Operations, and the Challenge for Arms Control
ARMIN KRISHNAN ... 61
Regulating the Use of Unmanned Combat Vehicles: Are General Principles of International Humanitarian Law Sufficient?
MEREDITH HAGGER AND TIM MCCORMACK ... 74
Unmanned Naval Vehicles at Sea: USVs, UUVs, and the Adequacy of the Law
ROB MCLAUGHLIN ... 100
Seductive Drones: Learning from a Decade of Lethal Operations
MARY ELLEN O'CONNELL ... 116
Automating Warfare: Lessons Learned from the Drones
NOEL SHARKEY ... 140
Taking Humans Out of the Loop: Implications for International Humanitarian Law
MARKUS WAGNER ... 155

Civilian
The Laws of Man Over Vehicles Unmanned
JIM DAVIS ... 166
Unmanned Vehicles, Surveillance Saturation and Prisons of the Mind
BRENDAN GOGARTY ... 180
Unmanned Vehicles: Subordination to Criminal Law under the Modern Concept of Criminal Liability
GABRIEL HALLEVY ... 200
The Regulation of Unmanned Air System Use in Common Airspace
ANNA MASUTTI ... 212
Guns, Ships, and Chauffeurs: The Civilian Use of UV Technology and its Impact on Legal Systems
UGO PAGALLO ... 224
Unmanned Vehicles and US Product Liability Law
STEPHEN S WU ... 234

Editorial
Since its first publication in 1981, the Editorial Board has identified key emergent areas which it believes warrant special attention by leading academics and professionals. These special editions are dedicated to informing debate about important science and technology in advance of its commercialisation or, if that is not possible, as soon as legal problems are identified with its introduction.

This special edition is concerned with the legal implications of unmanned vehicles (UVs); that is, vehicles that operate without an onboard human controller. Whilst the use of unmanned military vehicles, especially unmanned aerial vehicles or 'drones', has been the subject of increasing publicity, scrutiny and debate, a much quieter robotic revolution has been taking place across myriad transport-based sectors and industries. Indeed, there are few forms of vehicle, be they airborne, seaborne or land based, that have not been, or are not in the process of becoming, 'unmanned'.

Given their relative novelty, many of the uses to which these new technologies are being put have not been subject to social, legal or ethical consideration or review. This is, of course, a common feature of novel technology; regulatory review and debate more often than not occur after a technology has been introduced, prompted by moral panic, the discovery of unexpected risks, the revelation of a legal loophole, or the influence of interest groups who capture and direct the debate. In such situations, informed and considered legal responses are challenging, if not impossible, to achieve. The aim of this special edition, like the journal more generally, is to help bridge that regulatory gap between technological advance and law reform. As already noted, there has been widespread uptake of unmanned technologies, so some gap between technological and legal implementation already exists.
However, as was noted in the précis article to this edition:

    the horse has already bolted, so to speak, and any discussion of regulatory review will be limited by the pragmatic reality that UVs are now firmly entrenched in [a range of areas] ... That is not to say that the march of UVs should continue unabated. There is still a chance to at least shape the way UVs are used and how far they proliferate within militaries and beyond.

That précis provided a background to the use, implementation and growth of unmanned technology. It is included in an abridged and updated version in this special edition. The remainder of the articles are commentaries by a range of international academics and lawyers on what they view as the most compelling or concerning aspects of the technology, and on whether, and how, we should try to regulate or control it.

The expert commentaries in this special edition of the JLIS are divided into two streams: the first focuses on aspects of the military use of UVs, while the second explores issues related to the civilian use of the technology.

In the lead article, Professor Philip Alston examines the human rights dimensions of new robotic technologies. Professor Alston's analysis is predicated on three principal assumptions: first, that unmanned vehicles which carry lethal weapons will soon operate on an autonomous basis; second, that these technologies have implications for human rights; and third, that 'there is no inherent reason why human rights and humanitarian law considerations cannot be proactively factored into the design and operationalisation of the new technologies.' However, Professor Alston warns that this will not occur unless the human rights community presses both public and private sector actors.

Similarly, Meredith Hagger and Professor Tim McCormack consider the impact of developments in military robotic technology from an international humanitarian law (IHL) perspective. The authors analyse the 'criticism of the increasing incidence' of attacks involving Unmanned Combat Vehicles (UCVs). They argue that the primary concern lies not with the use of UCVs as such, but with their deployment in covert operations, which often places them beyond the purview of the law. The authors also question whether in some circumstances IHL is indeed the appropriate legal framework to regulate the use of UCVs.

Professor Mary Ellen O'Connell's work also considers the scrutiny and compliance issues arising from remote assassination by UCVs. She argues that the very existence of UCV technology may well be lowering inhibitions to kill, citing a growing complacency with respect to drone attacks. Professor O'Connell's contribution seeks to, among other things, raise awareness of this fact.
Professor O'Connell questions whether existing laws that regulate 'manned systems' are sufficient to regulate the use of UCVs.

The tendency towards moral disengagement is further explored by Professor Noel Sharkey in his contribution. He cautions that current 'man-in-the-loop' systems have already had this effect, and that taking man out of the loop will only serve to compound moral disengagement. Professor Sharkey questions whether we really comprehend the limits of this technology and the questions about human responsibility it raises. In addition to moral disengagement, Professor Sharkey notes three other areas that offer valuable lessons in this context: targeted killings in covert operations; expansion of the battlespace; and the illusion of accuracy.

The transition from 'man-in-the-loop' weapons systems to fully automated platforms is a theme also addressed by Professor Markus Wagner. He notes that rapid technological development in this area has led to a vigorous public debate, largely focused on the legality of the use of UAVs in targeted killings. However, Professor Wagner considers that some of the most profound questions about UVs will arise if, or when, new generations of technology permit fully autonomous systems to engage in warfare.


According to Assistant Professor Armin Krishnan, the tendency for UV technology to evade existing accountability and transparency mechanisms will also pose challenges to the development of an effective arms control treaty relating to their use. In his discussion, Professor Krishnan argues that any UV arms control treaty will only be as effective as the monitoring and verification mechanism which supports it. He further notes that the miniaturisation of UVs, and their integration into 'network-centric' operations, will create further difficulties for arms control in this area.

Associate Professor Rob McLaughlin, who considers the regulation of naval UVs, takes a somewhat different view of UV regulation, arguing that existing regimes of governance are capable of evolving to respond adequately to the challenges posed by UV technology. In developing this argument, Professor McLaughlin focuses on two of the more general, contextual issues raised by the use of UVs in the context of maritime operations law: 'status' and 'the poise and positioning of maritime forces'.

Switching to the use of UVs in a civilian context, a question that is sure to challenge both legislatures and courts in the near future is who should be held legally responsible for offences committed by unmanned vehicles? This question is assuming greater importance as the technology rapidly transitions from military to civilian use. In his contribution, Professor Gabriel Hallevy discusses liability for criminal offences committed by UVs. He concludes that current models of criminal liability are not only relevant to unmanned vehicles, but are also available. Of course, the potential legal consequences of the advent of self-driving cars are not restricted to the realm of criminal law.
Emeritus Professor Jim Davis discusses the appropriateness of negligence, the relevance of no-fault compensation schemes and the possible application of strict liability to the operation of unmanned vehicles on public roads, as well as the law of privacy. Professor Davis posits that all these legal systems contain sufficient flexibility to accommodate any increase in covert surveillance that may result from the use of UVs.

Approaching the issue of privacy and UVs from a different perspective, Dr Brendan Gogarty considers the implications for civil society of the increased surveillance reach provided by UV technology. He argues that UVs provide those controlling them with an unprecedented ability to engage in 'global, persistent, surveillance' of both external and domestic populations, cautioning that a failure to engage in debate about the appropriate legal (and social) responses may ultimately lead to an erosion not only of privacy but also of other civil liberties.

Professor Anna Masutti's contribution, on the other hand, considers the question of how unmanned aerial systems (UASs) can be used safely in civilian spaces. She notes that while UASs are already widely used under specific conditions and in segregated airspace in the military context, their use in the civilian sphere is still largely in the early stages of development.

Professor Masutti points out that until appropriate measures and regulations are developed, the full potential of civilian uses of UASs will not be realised.

Professor Ugo Pagallo considers questions of traffic safety arising from civilian UVs, in particular their use in border security and policing, in water-related emergency and hazard management, and as 'chauffeurs' or unmanned ground vehicles. According to Professor Pagallo, these uses all affect (or will in the future) various aspects of civilian traffic safety, international humanitarian law, contractual obligations and strict liability. Professor Pagallo's contribution explores the myriad challenges posed by UVs to these legal frameworks.

Professor Stephen Wu provides an overview and summary of the kinds of cases seen to date concerning liability arising from the use of unmanned vehicles and robots generally. In doing so, Professor Wu reveals helpful insights about the future of unmanned vehicle liability.

We are extremely honoured to have secured the support and contributions of such an eminent group of scholars and academics. We are very proud of the wide-ranging expert commentary they have produced. We hope that you find this edition interesting and insightful, and that it will contribute to the process of regulatory reform in all the areas discussed.

Brendan Gogarty
Editor

Unmanned Vehicles: A (Rebooted) History, Background and Current State of the Art
BRENDAN GOGARTY AND ISABEL ROBINSON Abstract
This introductory overview is an update to the original précis paper in JLIS vol 19(1) by Gogarty & Hagger. It provides a contemporary summary of unmanned technology in use at the time of publication. Legal analysis and commentary from the original précis has been removed. Comments and responses by authors in this edition should be taken to refer to the original précis paper.

1 Introduction

In this paper we will examine the current state of unmanned vehicle (UV) technology. We will begin by defining the key terms of art relating to UV technology. We will then set out a brief history of UVs prior to the turn of the century, before considering why their use has exploded since.

1.1 Definition and terms
There is, as yet, little consistency in the nomenclature and taxonomy of unmanned vehicles. As with the précis, we will utilise the following acronyms, recognising that they are not universally accepted. Expert commentators in this edition have also adopted the following terms.

1.1.1 Common acronyms, synonyms and key terms

• UVs: Any vehicle which operates without a human in direct physical contact with that vehicle.
• UV variants: The four acronyms used to describe UVs operating in different environments are UAVs (unmanned aerial vehicles), UGVs (unmanned ground vehicles), USVs (unmanned [water] surface vehicles), and UUVs (unmanned underwater vehicles).
• UCV variants: Refers to weaponised UVs. UVs designed specifically for this purpose usually include the term 'combat' within the acronym; hence a UCAV is an unmanned combat aerial vehicle.

• Drones: The term 'drone' is arguably the most common and widespread synonym for UVs. In particular it is used to refer to unmanned aerial vehicles (UAVs).1
• Remote vehicles:2 These generally refer to vehicles over which a human has direct, albeit remote, control. For instance, a human operator receives visual images from cameras or sensors on board a UV and steers it by cable (tethered control) or wireless signal (remote control). This form of human/machine interface is referred to as 'teleoperated' control.
• Robotics: The more autonomous forms of UVs are often referred to as robots or robotic systems. The Oxford English Dictionary (OED) describes a robot as 'a machine … designed to function in place of a living agent, esp. one which carries out a variety of tasks automatically or with a minimum of external impulse'.

1.1.2 Autonomy

UVs vary in their form and complexity, but perhaps the most important distinguishing feature, especially for the purposes of this article, is the degree to which a UV can operate without human control and direction. Modern UVs are all 'controlled' to one degree or another; however, modern technology platforms and 'artificial intelligence' (AI) give drones the capacity to function without direct human intervention. UAVs in current use can, for instance, be set general patrol coordinates and then left to pilot themselves, while surveillance UGVs can independently patrol long stretches of border, only alerting a human controller when suspicious activity is detected. Due to this increasing level of independence, UVs are often referred to as 'autonomous vehicles'. However, it is clear that, at present, no drone in active military or commercial use is actually 'autonomous', in the sense that they are completely independent or self-governing. In this edition we will continue to maintain a distinction between 'semi-autonomous' and 'fully autonomous' drones.

Semi-autonomous drones are given broad operating instructions by operators, but are left to carry out routine functions within those parameters, such as navigation or monitoring operations. Critical decisions, such as whether to fire weapons or follow a suspect target off routine patrol paths, are currently left to a human operator to veto or directly control. In this respect military officials sometimes describe this form of artificial intelligence as 'supervised autonomy'.3

Fully autonomous drones would not require such a human veto. Rather, they would be given general instructions and then left to fulfil their directives according to their programming and artificial intelligence. In this way a fully autonomous drone would be akin to a soldier who is given a general directive (for instance, 'secure that hill') but, apart from observing general rules of engagement, would be left to fulfil the mission according to programming.4

________
1 Indeed, the Oxford English Dictionary describes a drone as 'a pilotless aircraft or missile directed by remote control.'
2 Other common terms used to describe UVs include Remotely Piloted Vehicles and Remotely Operated Vehicles.

2 The Historical Use of Unmanned Vehicles

As we stated above, unmanned vehicles are by no means a novel technology. Ancient civilisations are known to have built a variety of unmanned craft, even flying ones.5 Although some of these may have simply been for science or spectacle, more often than not ancient UVs were used to provide advantage on the battlefield. In that arena, unmanned vehicles were seen as advantageous because they could maximise influence over the zone of conflict while minimising the exposure of personnel to its risks.6 This trend continued into the mechanisation of war following the industrial revolution; indeed, some of the first machines to enter onto the modern battlefield were UVs.7 Yet, despite being involved in most major armed conflicts from that period to the turn of the millennium,8 the impact of UVs on the conflict zone (with some notable exceptions by the Israelis9) was rather minimal.10

________
3 John Keller, 'The time has come for military ground robots' (2010) 20(6) Military & Aerospace Electronics <http://www.militaryaerospace.com/index/display/articledisplay/363893/articles/military-aerospace-electronics/volume-20/issue6/features/special-report/the-time-has-come-for-military-ground-robots.html> (accessed 10 March 2012).
4 According to the UN Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Philip Alston, '[a] number of countries are already reportedly deploying or developing systems with the capacity to take humans out of the lethal decision-making loop.' One such autonomous robotic system is an unmanned watchtower deployed by Israel on the Gaza border, armed with machine guns, that locates targets and 'transmits information to an operations command centre where a soldier can locate and track the target and shoot to kill.' Future plans include a watchtower that will remove human intervention from the identify/target/shoot process. A similar system is being used by South Korea in the demilitarised zone and has reportedly been 'equipped with the capacity to fire on its own.' See Philip Alston, Interim Report of the Special Rapporteur of the Human Rights Council on Extrajudicial, Summary or Arbitrary Executions, UN Doc A/65/321 (23 August 2010) 15.
5 The ancient Greek engineer Archytas is said to have invented the first UAV, a mechanical pigeon, in the 4th century BC. It was recorded as having flown some 200 metres. Kimon P Valavanis, Advances in Unmanned Aerial Vehicles: State of the Art and the Road to Autonomy (2007).
6 Hence, the vast majority of early R&D in unmanned vehicles was directed towards gathering surveillance from, or delivering payloads to, high-risk territory. The Greeks and Chinese, for instance, set unmanned ships on fire and steered them into their enemies' fleets to cause panic and destruction or break their formation. Chinese generals also made use of kites for military reconnaissance. In 200 BC, the Chinese General Han Hsin of the Han Dynasty was said to have flown a kite over the walls of a city he was attacking to measure how far his army would have to tunnel to reach past the defences. See Michael John Haddrick Taylor and David Mondey, Milestones of Flight (Jane's, 1983); Kenneth S Smith Jr, The Intelligence Link – Unmanned Aerial Vehicles and the Battlefield Commander (1990) GlobalSecurity.org <http://www.globalsecurity.org/intell/library/reports/1990/index.html> (accessed 2 March 2012).
7 Including unmanned surveillance balloons that dropped explosives on enemies (patented in 1863), remotely controlled torpedoes (1866) and aerial kites equipped with cameras, remotely controlled by a long string, to take surveillance photos of enemy positions and fortifications (1898).
8 See Office of the Secretary of Defense (US), Unmanned Aircraft Systems Roadmap 2005–2030 (2005) k-1 ('US OSD Roadmap').
9 During the 1980s, the Israeli air force successfully used UAVs to detect, and draw fire from, Syrian anti-aircraft batteries, allowing manned jets to then remove the threat. Following this success, Israel expanded its drone program, placing extensive resources into the novel technology and how it could be integrated into combat systems and strategy. By the turn of the century Israel was using a range of UVs to provide Intelligence, Surveillance and Reconnaissance (ISR) data from, or adjacent to, dangerous enemy territory that could be provided via up-to-the-minute feeds to commanders, air support, battle units and strike teams. See Adam Stulberg, 'Managing the Unmanned Revolution in the U.S. Air Force' (2007) 51(2) Orbis 253.
10 Although the German V-1 bombs that terrorised London during the late part of WWII are often cited as the first successful UAV attack, we would not consider them either true UAVs in the modern sense, nor truly 'successful'. Whilst the technology behind V-1s was, at the time, groundbreaking, it was not capable of providing a significant advantage over traditional, manned vehicles. In part this was because the systems were too costly to operate, both in terms of real costs and in terms of payload efficiency: only about one quarter of V-1s hit their targets, with the remainder failing. V-1s were simply single-use, single-target 'terror weapons' which 'lacked precision guidance'. The guidance problems that plagued V-1s would also be a problem for post-war UAVs. These problems included short duration aloft and communications limitations, which required a line-of-sight to the UV or at the least close proximity to it. Whilst this was acceptable in non-conflict arenas, for instance where the drones were used as test targets, the limitation undermined one of the main advantages of UV technology, that is, removing humans from the area of risk. See Bill Yenne, Attack of the Drones: A History of Unmanned Aerial Combat (Zenith Press, 2004) 19; see also Daren Sorenson, Preparing for the Long War: Transformation of UAVs in Force Structure Planning for Joint Close Air Support Operations (2006) Joint Forces Staff College (US) 14–15 <http://en.scientificcommons.org/35201347> (accessed 12 March 2012).

A number of factors might account for the sidelining of UVs from mainstream combat roles during the twentieth century. One is a lack of support from some operations planners and military commanders, due to the unproven, untested and initially unreliable technology.11 Early UVs did, however, prove successful in aerospace reconnaissance, decoy and target roles,12 which made them popular with the intelligence community. That, in turn, meant that much of the research and development in the area was highly classified,13 and as such it is hard to determine just how many UVs were deployed to conflicts and covert operations.14

2.1 Non-military roles
UVs tended to have an even smaller role outside of the military. The main exceptions to this general rule were exploratory UUVs and agricultural UAVs. The oceans are relatively uncluttered and do not require highly complex navigation, which made early UUV development easier.15 UUVs proved useful in undersea mapping, and later in wreck detection and submarine rescue.16

11 As Goebel states: 'The whole idea of reconnaissance drones seemed to be completely dead, but at the last moment the USAF rescued the program. One of the interesting themes in defence programs is how new military systems are often initially proposed in grand terms, with whizzy features and the latest technology. When the grand plan proves too complicated and expensive, the military then backtracks, finally ending up with a much more modest solution, often a minimal modification of an existing system. Interestingly, such compromise solutions often prove far more effective than expected.' See Greg Goebel, Unmanned Aerial Vehicles (2010) 'The Lightning Bug Reconnaissance Drones' v2.0.0 [3.0], <http://www.vectorsite.net/twuav.html> (accessed 01 March 2012).
12 Where they were not required to undertake complex navigation to avoid obstacles or hazards, and therefore did not require a large amount of command and control, making them less susceptible to jamming or spoofing. See Goebel, ibid.
13 Although Newcome postulates that part of the reason that information about drone use in conflicts like the Vietnam War was suppressed was a fear that it would affect the livelihoods of human fighter pilots by creating a push towards the roboticisation of the air force. See Laurence Newcome, Unmanned Aviation: A Brief History of Unmanned Aerial Vehicles, American Institute of Aeronautics and Astronautics (AIAA) (2004) 67–69.
14 UVs featured in conflicts such as the Vietnam War (see US OSD Roadmap, above n 8, k-1), and it is clear that they did undertake important surveillance and decoy missions. See Newcome, ibid, 69.
15 G N Roberts, 'Trends in Marine Control Systems' (2008) 32 Annual Reviews in Control 263.
16 Indeed UUVs — albeit tethered versions — gained a great deal of public attention during the 1990s with the discovery and exploration of undersea wrecks like the Titanic, the Lusitania, and the Bismarck, which could only have been made possible through robotic UV systems. In fact, the first 'golden age' in UV technology occurred under the oceans more than a decade before it did in the air.



JLIS Special Edition: The Law of Unmanned Vehicles

Vol 21(2) 2011/2012

Obviously these roles had a naval/military utility, yet they were also important for other sectors, particularly marine research and the resource industry. Despite such vehicles being unmanned during this period, the reality was that most commercial, research and military UUVs were 'tethered' to a human operator and could not truly be said to be semi-autonomous.17 Another exception to the military focus of UV development has been the aerial spraying of agricultural crops, in particular by the Japanese, who trialled unmanned helicopters as early as the 1950s.18 Although early UVs were initially more like remote controlled vehicles, by the turn of the century Japanese rotary-wing UAVs were advanced enough to navigate along preprogrammed routes without direct human oversight, and to undertake tasks such as crop spraying, agricultural monitoring or scientific mapping.19

2.2 UVs in the 21st century
The latter part of the 20th century saw the advent of the 'digital revolution', which resulted in dramatic advances in computer processing power, sensor technology and satellite telecommunications.20 These technical developments permitted a commensurate evolution in UV independence and autonomy, and by the turn of the century the technology was sufficiently advanced to generate real interest in deploying UVs outside of covert military operations.21

17 See Andrew Henderson, 'Murky Waters: The Legal Status of Unmanned Undersea Vehicles' (2006) 53 Naval Law Review 55, 57; Roberts, above n 15, 266.
18 With commercial use starting in the 1970s. See Mark Peterson, 'The UAV and the Current and Future Regulatory Construct for Integration into the National Airspace System' (2006) 71 Journal of Air Law and Commerce 521, 546.
19 Ibid.
20 Satellite technology seems to have played a large part in drone development. Before reliable satellite imagery could be obtained, drones were attractive as low risk alternatives to manned fly-overs of risky territory. However, as satellite imagery became more reliable and of better resolution it was favoured over drones as a much less provocative way of collecting intelligence data: see Goebel, above n 11, ch 5. Other factors which contributed include: central processing units aboard UVs were much more powerful and could effectively manage a wider range of functions that previously required human oversight; roboticisation and miniaturisation meant that previously manual controls could be handed over to the central processing unit; and digitisation and miniaturisation made for lighter, more efficient vehicles, which could be deployed for longer periods and over longer distances. The efficiency gains permitted a wider range of on-board sensors to be installed. Improvements in sensor technology allowed a much wider spectrum of visual and non-visual data to be collected at a higher resolution than before. Digital compression overcame previously detrimental information 'bottlenecks' and permitted much more of this data to be transmitted to the controller. For information on the 'digital revolution' see generally, Stephen Hoare, Digital Revolution (20th Century Inventions) (Raintree, 1998).

However, it was perhaps the terrorist attacks in September 2001 in the United States that served as the most important catalyst for the adoption of UVs as a key counterinsurgency tool. Of particular note is the ability of UVs to provide global, persistent surveillance; reduce the sensor-to-shooter cycle; and undertake dull, dirty and dangerous roles. These factors are discussed in greater detail below.

2.2.1 Catalysts for the UV revolution: 'Global Persistent Surveillance'
The terrorist attacks on the US in 2001 led to the so-called 'war on terror', and a decisive shift in the military strategy of the US and its allies. As its name suggests, the war on terror is one waged against asymmetric opposition — usually small groups, or even individuals, who may be dispersed, highly mobile and located in remote locations.22 The US response to these challenges was, in part, a policy of 'global persistent surveillance' which aimed to 'deny enemies sanctuary by developing capabilities for persistent surveillance, tracking, and rapid engagement'.23 This refocussing of US strategic and military policy shifted intelligence, surveillance and reconnaissance (ISR) operations from the periphery of covert operations to the centre of regular military engagements.24 The result was increased demand, funding and research into platforms that could undertake consistent, wide-scale, and high-powered ISR duties.

2.2.2 Catalysts for the UV revolution: the sensor-to-shooter cycle
A characteristic of the war on terror has been the disparity in logistical, technological and numeric strength between the US and the armed groups opposing it. Those opponents have adopted an asymmetric response, involving the use of decentralisation, force dispersion, concealment, ambush techniques and the ability to quickly disappear into remote locations or amongst civilian populations.25

21 See Peter Van Blyenburg and Philip Butterworth-Hayes, 'UVS International Status Report on US UAV Programmes' in 2005 Year Book: UAVs Global Perspective (2005) 112.
22 Anthony Cordesman, Center for Strategic and International Studies, 'The Lessons of Afghanistan: War Fighting, Intelligence, and Force Transformation' (2002) 26.
23 Donald Rumsfeld, quoted in ibid.
24 R Ackerman, 'Persistent Surveillance Comes into View' (2002) Signal Magazine, 18.
25 See Steven Metz and Raymond Millen, Insurgency and Counterinsurgency in the 21st Century: Reconceptualizing Threat and Response (2004) Strategic Studies Institute (SSI) monographs <http://handle.dtic.mil/100.2/ADA428628> (accessed 5 April 2012); Frank Hoffman, 'Complex Irregular Warfare: The Next Revolution in Military Affairs' (2006) 3(50) Orbis 395, 395–407; Mark Clodfelter, 'Airpower versus Asymmetric Enemies – A Framework for Evaluating Effectiveness' (2002) 16(3) Air and Space Power Journal 37; Montgomery C Meigs, 'Unorthodox thoughts about asymmetric warfare' (2003) 33(2) Parameters 5-6.

Countering asymmetric warfare has required that conventional forces adopt a similar level of speed and versatility. In traditional warfare there is often a significant lapse between detecting and engaging an enemy, commonly referred to as the 'sensor-to-shooter cycle'.26 Reducing the sensor-to-shooter cycle was a major concern for the conventional forces operating in the post-2001 Middle East conflicts. The longer the delay, the higher the chance the enemy would either disappear into the countryside or urban areas, or mount a surprise attack or ambush.27

2.2.3 Catalysts for the UV revolution: dirty, dull and dangerous
The growth of UV technology has also been attributed to UVs' propensity to undertake 'dull, dirty and dangerous' roles.28 As a result, UVs have become extremely popular amongst military and governmental planners and decision makers. This is not least because of the highly politicised nature of modern warfare and the belief amongst administrators and strategists that the public has a low tolerance for domestic troop casualties in foreign conflicts.29 Furthermore, troop management and efficiency are extremely important in modern military operations, which have become increasingly focused upon 'winning the peace' after the initial 'shock and awe' tactics have moved resistance into the hills or into the cities of conflict zones.30 Stabilisation requires resources on the ground to patrol civilian areas for threats, and to increase troop engagement with local populations to help build trust and support.31 UVs transfer risk from soldier to robot, permitting commanders to redeploy troops to vital human-centric roles.32

26 See Randal Bowdish, Theater-Level Integrated Sensor-to-Shooter Capability and its Operational Implications (1995) US Joint Military Operations Report <http://handle.dtic.mil/100.2/ADA293332> (accessed 5 April 2012).
27 This was especially true in war zones where insurgency forces had access to and expertise in using small surface-to-air missiles. See Cordesman, above n 22, 30.
28 US OSD Roadmap, above n 8, 2. See also, Gregory J Nardi, Autonomy, Unmanned Ground Vehicles, and the U.S. Army: Preparing for the Future by Examining the Past (2009) School of Advanced Military Studies, United States Army Command and General Staff College, Fort Leavenworth, Kansas, 10, <http://handle.dtic.mil/100.2/ADA506181> (accessed 4 April 2010).
29 Despite almost constantly being engaged in one war or another, there is a perception among many western military powers that, since the Vietnam conflict, the public has a low tolerance for domestic troop casualties arising out of foreign conflicts. See Charles Levinson, 'Israeli Robots Remake Battlefield; Nation Forges Ahead in Deploying Unmanned Military Vehicles by Air, Sea and Land', Wall Street Journal (New York, NY) 13 January 2010, A10. Although whether this is actually the case has been questioned: see Christopher Gelpi, Peter D Feaver and Jason Riefler, 'Success Matters: Casualty Sensitivity and the War in Iraq' (2006) 3(30) International Security 7.
30 Sarah Kreps, 'Debating American Grand Strategy After Major War: American Grand Strategy after Iraq' (2009) 4(53) Orbis 629.

3 A Love Affair with a Predator

In the preceding section we identified some of the main catalysts that led to the adoption of UVs in the 'war on terror'. The Predator UAV, which has been used from the outset of this conflict, provides a clear illustration of how the new political and military paradigms that have arisen as part of this war have fostered the UV revolution. The Predator UAV is a lightweight turboprop-propelled plane just over eight metres in length, first developed in the mid-1990s for the US Central Intelligence Agency (CIA).33 Each Predator UAV operates as part of a cohesive and integrated weapons system, made up of four UAVs with onboard sensors, a ground control station and a satellite communication suite.34 All parts of this weapons system can be packed for rapid deployment and transport to remote locations within a very short period of time, with human operators remaining in one location controlling UAVs in another remote location, often on another continent and in a different time zone. Like other UV systems, Predators also offer a highly flexible and customisable equipment platform. Removing the pilot from an aerial vehicle creates about 2.3 metric tonnes of extra carrying capacity,35 freeing up space and weight which can be used to retrofit a wide range of sensors or specialised equipment to suit the task at hand.36 Alternatively, they can also be fitted with weapons systems, the most popular of which is the Hellfire missile, a long-range, supersonic missile designed for 'precision'37 attacks on heavy armour.38

Prior to 2001, the Predator was used sparingly outside of covert operations, in part as a result of latency issues and a lack of integration with mainstream military forces.39 However, by 2001 communications problems were largely overcome and it became apparent that the CIA was already using a small number of Predator drones to covertly search for Osama Bin Laden in Afghanistan.40 From October 2001, Predators were flying ISR missions, and in February 2002, the Predator undertook its first operational strike, armed with Hellfire missiles. In the wake of these initial sorties, analysts lauded the Predator as a panacea for the special operating conditions required by the war on terror.41 What was most exciting for military planners was its ability to pass real-time ISR data to strike teams and decision makers, located both inside and outside of the conflict zone. Predators solve much of the 'sensor-to-shooter cycle' problem in the insurgent-focused Afghan and Iraq conflicts by providing live surveillance feeds to combat teams that are able to engage with the target instantly.42 In addition to the aforementioned benefits of UVs, the versatility of the Predator platform and its transportability have also been credited with its rapid adoption and expansion post-2001. Predators, like other UAVs, are also extremely inexpensive to operate in comparison to conventional manned equivalents.43 Furthermore, they act as 'force multipliers', allowing soldiers and operatives to have a much wider view of the battlefield than they would have previously had.44 They also reduce soldiers' workloads, allowing troop energies to be directed towards critical areas that still require active human involvement.45

31 Ali A Jalali, 'Winning in Afghanistan' (2009) 39(1) Parameters 5.
32 See Nardi, above n 28, 10.
33 The Predator was developed for the CIA by General Atomics Aeronautical Systems and is based on earlier Israeli UAV systems. See Yenne, above n 10, 56-57. For information on the Predator UAV see US OSD Roadmap, above n 8, 4. See also Bill Gunston, 'Unmanned Aircraft – Defence Applications of the RPV' (1973) 4(188) Royal United Services Institute for Defense Studies Journal 41.
34 It is for this reason that Predator and similar drone systems are often referred to as Unmanned Aerial Systems (UAS). See R J Newman, 'The Little Predator That Could' (2002) 3(85) Air Force Magazine 48.
35 This is because, not only is the pilot no longer on board, there is no longer the need for a cockpit, ejector seats, atmospheric protections and controls. Indeed removing the pilot also renders much of the armour required to protect a human occupant redundant. See Gunston, above n 33.
36 For instance, Predator drones undertaking ISR duties carry a large range of sensor equipment including high-powered colour and night vision equipped cameras, infra-red and heat sensors. See Newman, above n 34, 51.
37 Even though this term is used, it is well accepted that, whilst the targeting may be precise, the Hellfire's collateral damage may not be. See Roy Braybrook, 'Strike Drones: Persistent, Precise and Plausible' (2009) 4(33) Armada International 21.
38 Ibid.
39 Ibid.
40 Ibid.
41 Newman, above n 34, 48; Cordesman, above n 22, 62-63; Stulberg, above n 9, 251.
42 Cordesman, above n 22, 60-61.
43 United States Air Force, Unmanned Aircraft Systems Flight Plan 2009-2047 (2009) <http://www.fas.org/irp/program/collect/uav.htm> (accessed 1 February 2012) ('US Flight Plan').
44 Eyes of the Army: U.S. Army Roadmap for UAS 2010-2035 (2010) U.S. Army UAS Center of Excellence, Report no ATZQ-CDI-C, 72 <http://www.fas.org/irp/program/collect/uas-army.pdf> (accessed 20 March 2012) ('US Army Roadmap').

3.1 An expanding aerial presence – from sideline support to central strategy
Military advances, especially by technology-rich superpowers like the US, are driven by a consistent belief that scientific and industrial progress will guarantee both military supremacy and success at war.46 Thus, despite continuing caution by some military strategists, the Bush Administration made funding of high tech UAVs a 'top priority' in its 2003 budget.47 Government spending on drone programmes has increased ever since, with the Obama Administration spending US$5 billion on drones in the 2012 budget.48 The result has been a marked increase in the number49 and type of UVs used on the battlefield by the US, and a revolutionary shift in the focus of modern military operations. As Stulberg writes, '[i]t is now conventional wisdom that we stand at the dawning of the unmanned aerial vehicle (UAV) revolution in military affairs.'50 Prior to 2001, the US Department of Defense deployed fewer than 50 UAVs; by 2006 the number was well over 3,000,51 and in 2012 the Pentagon has approximately 7,500 UAVs.52 The US Air Force now trains more UAV operators than conventional pilots, reflecting the new direction of aerial warfare.53

45 The US Army views UAS' success in its ability to 'significantly augment mission accomplishment by reducing a Soldier's workload and their exposure to direct enemy contact. The UAS serve as unique tools for the commander, which broaden battlefield situational awareness and ability to see, target, and destroy the enemy by providing actionable intelligence to the lowest tactical levels.' See US Army Roadmap, ibid, 1.
46 See Jack Beard, 'Law and War in the Virtual Era' (2009) 103(3) American Journal of International Law 409, 412.
47 Newman, above n 34, 58.
48 'Predator Drones and Unmanned Aerial Vehicles (UAVs)', The New York Times (online), 5 March 2012, <http://topics.nytimes.com/top/reference/timestopics/subjects/u/unmanned_aerial_vehicles/index.html?scp=1spot&sq=unmanned%20aerial%20vehicle&st=cse> (accessed 14 March 2012).
49 Alan Brown, 'The Drone Warriors', Mechanical Engineering Magazine (online), January 2010 <http://memagazine.asme.org/Articles/2010/January/> (accessed 1 March 2012).
50 Stulberg, above n 9, 251.
51 United States Government Accountability Office, Unmanned Aircraft Systems: Improved Planning and Acquisition Strategies Can Help Address Operational Challenges (Testimony Before the Subcommittee on Tactical Air and Land Forces, Committee on Armed Services, House of Representatives, 6 April 2006) 5.
52 Levinson, above n 29; 'Predator Drones and Unmanned Aerial Vehicles (UAVs)', above n 48.

4 Current Aerial Applications

Modern UAVs can be separated into three main classes:54 micro and small; medium altitude; and high altitude, long endurance (HALE).55 Small UAVs are typically less than a metre in length, while micro UAVs are measured in centimetres. Launch is usually by hand or by catapult, with the drone flying at low altitudes and limited ranges.56 They are usually battery powered and therefore very quiet.57 Small and micro UAVs are most commonly used by ground units to provide short-range, up-to-the-minute ISR data.58 They are also favoured by intelligence bodies such as the CIA.59 Whilst this class has previously been restricted to largely ISR roles, the US Air Force is currently procuring a micro weaponised UAV known as the Switchblade, which 'launches from a small tube that can be carried in a backpack.'60

53 Ibid.
54 S A Kaiser, 'Legal Aspects of Unmanned Aerial Vehicles' (2006) 55(3) Zeitschrift für Luft- und Weltraumrecht 344, 345-346.
55 An informative list can be found at the US Flight Plan website, see above n 43. A more comprehensive overview can be found in the Goebel public domain review of UAVs, see Goebel, above n 11. See also NATO's three-class classification system as set out in Strategic Concept of Employment for Unmanned Aircraft Systems in NATO, 4 January 2010 <http://www.japcc.org/> (accessed 19 March 2012).
56 Although some of the micro rotary wing vehicles can take off of their own accord, and some micro UVs have been developed which can 'cling' to the sides of buildings then release themselves into flight. See Alexis Desbiens and Mark Cutkosky, 'Landing and Perching on Vertical Surfaces with Microspines for Small Unmanned Air Vehicles' (2009) 57 Journal of Intelligent and Robotic Systems 131.
57 James F Abatti, Small Power: The Role of Micro and Small UAVs in the Future (2005) Air Command and Staff College, 184.
58 For instance, the RQ-11 Raven can be stored in a backpack and is launched into the air by hand to allow troops in the field to 'see over the next hill', which could be over 10 kilometres away. See AeroVironment Inc, 'AeroVironment Receives $37.9 Million In Orders For Digital Raven UAS, Digital Retrofit Kits' (Press Release, 23 February 2010); AeroVironment Inc, 'War on Terrorism Boosts Deployment of Mini-UAVs' (Press Release, 08 July 2002). Both press releases are available at <http://www.avinc.com/resources/press_room/> (accessed 15 April 2010).
59 The CIA have reportedly used ultra-quiet micro-drones, 'roughly the size of a pizza platter [that] are capable of monitoring potential targets at close range, for hours or days at a stretch'. See Joby Warrick and Peter Finn, 'Amid outrage over civilian deaths in Pakistan, CIA turns to smaller missiles', Washington Post (Washington DC) 26 April 2010, A8.

Medium Altitude Long Endurance (MALE) UAVs generally operate at the same altitudes as conventional commercial aircraft.61 The Predator is a medium altitude UAV, but is now joined by a wide spectrum of flying vehicles.62 A second generation hunter-killer Predator B, for instance — also known as the 'Reaper' — is capable of reaching altitudes of 15.8 kilometres and can fly up to 36 hours before refuelling.63 It has also been designed to provide a more combat-focused platform (spawning the term 'Unmanned Combat Aerial Vehicle' (UCAV)), and can now carry laser guided bombs, Hellfire air-to-ground missiles, munitions and soon an air-to-air missile system.64 The most updated derivative of the Predator is the MQ-1C Gray Eagle (or Sky Warrior), with the capacity to carry four Hellfire missiles.65 Two turbo-fan variants of the Predator have also been designed: the Predator B 'Mariner', a maritime version of the Predator that has been adapted to fly even longer ranges for naval surveillance as well as take off and land from seaborne vessels,66 and a stealth-focussed Predator variant (the Predator C 'Avenger') which can fly at 400 knots true airspeed and is the fastest in the Predator family.67

60 The Switchblade has been developed as part of the US Air Force Lethal Miniature Aerial Munition System (LMAMS) procurement program. See 'US Air Force Awards AeroVironment $4.2m for Switchblade Loitering Munition System', Unmanned Aerial Vehicles (UAV) News (online), 16 February 2012 <http://www.unmanned.co.uk/unmanned-vehicles-news/unmanned-aerial-vehicles-uav-news/us-air-force-awards-aerovironment-4-2m-for-switchblade-loitering-munition-system/> (accessed 19 March 2012); Gary Mortimer, 'Lethal Miniature Aerial Munition System (LMAMS) to be deployed soon?', sUAS News (online), 1 January 2011 <http://www.suasnews.com/2011/01/3260/lethal-miniature-aerial-munition-system-lmams-to-be-deployed-soon/> (accessed 19 March 2012).
61 Kaiser, above n 54, 345.
62 See US OSD Roadmap, above n 8, 3-13.
63 Which can be undertaken in the air. The Reaper is also able to be fitted with additional fuel tanks, allowing a fully laden drone (including hundreds of kilos of munitions) to stay aloft for up to two days. See Goebel, above n 11.
64 The 4763-kg Reaper is cleared not only for Hellfire but also for the much heavier GBU-12 Paveway II, GBU-38 JDAM and GBU-49 Enhanced Paveway II, based on 227-kg (class) warheads. See Braybrook, above n 37.
65 Alston, above n 4, 13; Unmanned Editor, 'Specifications Data Sheet', Unmanned: Ground, Aerial, Sea and Space Systems, 1 July 2011 <http://www.unmanned.co.uk/autonomous-unmanned-vehicles/uav-data-specifications-fact-sheets/gray-eagle-uas-unmanned-aerial-vehicle-uav-specifications-data-sheet/> (accessed 13 March 2012).
66 'Ocean-Going Drones' (2006) 12(165) Aviation Week & Space Technology 56.

A range of rotary wing vehicles in this class are also in development or in active use for surveillance and targeting, with weaponised versions close to being deployed. The MQ-8B Fire Scout, for instance, is an unmanned helicopter system which is able to be launched from ocean going platforms and travels at speeds of 200 kilometres per hour at up to 6,000 metres for up to eight hours without refuelling.68 It is able to fire a range of missiles and rockets and carries day/night and multispectral sensors with targeting lasers for strikes by larger aerial vehicles.69

High Altitude and Long Endurance (HALE) UAVs fly at altitudes over nine kilometres and are designed for wide area, long-term surveillance. Typically they can stay aloft for long periods of time, providing ISR data over an extremely large target area. Given their highly covert roles, high altitude spy drones tend to be highly classified and shrouded in mystery.70 One exception is the Northrop Grumman RQ-4 Global Hawk, which can reach altitudes exceeding 19 kilometres.71 Operating at this altitude provides the craft with a surveillance range of over 100,000 square kilometres via high-powered sensors, which can see through clouds, darkness and dust.72 One military strategist described them as being 'like a low Earth orbit satellite that's present all the time.'73 The additional advantage of operating at high altitude is that the fighter-jet sized UAV is far outside the range of most air defence systems, allowing relatively low risk and constant ISR surveillance. This also frees up human operators from the need to constantly monitor for ground-based threats.

67 It internalises all storage and weapons bays and is designed to avoid visual and radar detection. The Avenger is also favoured by the Navy given its rear turbofan propulsion system is much safer in naval scenarios. See Goebel, above n 11.
68 US company Northrop Grumman is currently developing the Fire X, which will combine elements of both the MQ-8B Fire Scout and the Bell 407 helicopter, and will have a flight capacity of up to 14 hours. See website of Fire X manufacturer: 'Fire X: Medium Range Vertical Unmanned Aircraft System', <http://www.as.northropgrumman.com/products/fire-x/index.html> (accessed 18 March 2012).
69 US OSD Roadmap, above n 8, 9.
70 In 2007 for instance, a UAV resembling a sleek stealth bomber — minus the cockpit — was observed in Kandahar, and subsequently referred to as the 'Beast of Kandahar'. In 2009 the US Air Force confirmed that the UAV was in fact an 'RQ-170 Sentinel' tactical surveillance platform. No further information has been provided about the UAV. See Goebel, above n 11.
71 The record set by the Global Hawk was 19,928 metres. See Records: Experimental and New Technologies World Records, FAI Record File Num #7352 <http://records.fai.org/uav/aircraft.asp?id=2151> (accessed 18 March 2010).
72 That means that only five Global Hawks are required to provide high altitude ISR for the whole of the Afghan landmass (and of those, only three need to be aloft at one time).
73 Newman, above n 34, 52.

4.1 Swarms
As noted above, early UAV systems operated as part of a cohesive and integrated system, often with a series of unmanned vehicles (in the Predator's case, four). These were originally operated separately, but more recent technology allows for the simultaneous deployment of multiple UVs from a single control station. These 'swarms' allow a 'single operator [to] monitor a group of semi-autonomous aerial robotic weapons systems through a wireless network that connects each robot to others and to the operator.'74 Swarm technologies have been heralded as a 'milestone in UAV flight', as the best Unmanned Aerial System can be assigned to each request.75 Further, they will allow for improved response times and reduced manning requirements.76 Future swarms may also include combinations of unmanned air, sea and ground vehicles.

4.2 UCAVs
Whilst UAVs began primarily as surveillance craft, they are increasingly used for combat roles. Whilst originally this involved retrofitting UAVs with weapons systems, a large amount of effort is now going into creating combat-specific UCAVs.77 Facilitating this transition is a range of lightweight missile systems currently in development. These lighter payloads will allow the weight savings to be put towards improving the engines, armour or stealth capabilities of the drones.78 Since the outset of the war in Afghanistan in 2001, the number of UCAVs in use, as well as the situations in which they have been used, has grown exponentially. UCAVs are set to become the biggest combat system in the US military. In October 2011, a US Predator and a French warplane hit two vehicles fleeing Gaddafi's home town of Sirte, forcing the convoy to disperse, after which Gaddafi was caught by rebels.79

74 Alston, above n 4.
75 US Flight Plan, above n 43, 30.
76 '"Swarm" UAV Reconnaissance Demonstrated', Homeland Security Newswire (online), 19 August 2011 <http://www.homelandsecuritynewswire.com/swarm-uav-reconnaissance-demonstrated> (accessed 18 March 2012).
77 Braybrook, above n 37.
78 Lightweight air-to-surface missiles now under development will open the ground-attack role to far greater numbers of drone platforms. This in turn will pave the way for heavier, stealthy, dedicated unmanned combat air vehicles (UCAVs). See Braybrook, ibid.
79 'Predator Drones and Unmanned Aerial Vehicles (UAVs)', above n 48.

EAP 15

16

JLIS Special Edition: The Law of Unmanned Vehicles

Vol 21(2) 2011/2012

In parallel to the US Department of Defense UAV programme in Afghanistan and Iraq, the CIA has reportedly been running covert UCAV operations in Yemen, Pakistan80 and Somalia81 as well as ISR missions in Iran82 and Syria.83 The CIA programme in Pakistan has received significant attention due to the allegedly high number of civilian deaths caused by UCAV strikes. According to research conducted by The Bureau of Investigative Journalism (TBIJ) in 2012, there have been 260 UAV strikes since President Obama took office in 2009, with approximately 128 strikes in 2010 and 76 in 2011.84 Although there are no official statistics on the number of casualties, TBIJ research states that between 282 and 535 civilians have been “credibly reported” killed in drone attacks, including more than 60 children.85

80  According to the UK based non-government organisation, Reprieve, the CIA drone programme in Pakistan began in 2004 under the Bush administration, and has expanded dramatically under the Obama Administration. See ‘Drone Strikes’, <http://www.reprieve.org.uk/investigations/drones/> (accessed 15 March 2012); see also Andrew Orr, ‘Unmanned, Unprecedented, and Unresolved: The Status of American Drone Strikes in Pakistan Under International Law’ (2011) 44 Cornell International Law Journal 730.
81  Job Henning, ‘Embracing the Drone’, The New York Times (online), 20 February 2012 <http://www.nytimes.com/2012/02/21/opinion/embracing-thedrone.html?_r=1&scp=4&sq=unmanned%20aerial%20vehicle&st=cse> (accessed 14 March 2012); Craig Whitlock, ‘U.S. drone base in Ethiopia is operational’, The Washington Post (online), 28 October 2011 <http://www.washingtonpost.com/world/national-security/us-drone-base-inethiopia-is-operational/2011/10/27/gIQAznKwMM_story.html?hpid=z3> (accessed 14 March 2012).
82  According to media reports, Iran claims to have shot down a US RQ-170 Sentinel drone in Iranian airspace. See Saeed Kamall Dehghan, ‘Iran to exhibit US and Israeli Spy Drones’, The Guardian (online), 15 December 2011 <http://www.guardian.co.uk/world/2011/dec/15/iran-exhibit-american-spydrones> (accessed 14 March 2012).
83  Agence France Presse, ‘US drones monitor events in Syria: Report’, DefenseNews, 18 February 2012, <http://www.defensenews.com/article/20120218/DEFREG02/302180003/U-SDrones-Monitor-Events-Syria-Report> (accessed 14 March 2012).
84  David Pegg, ‘Drone Statistics Visualised’, The Bureau of Investigative Journalism (online), 10 August 2011 <http://www.thebureauinvestigates.com/2011/08/10/resources-and-graphs/> (accessed 14 March 2012).
85  Chris Woods and Christina Lamb, ‘Obama terror drones: CIA tactics in Pakistan include targeting rescuers and funerals’, The Bureau of Investigative Journalism (online), 4 February 2012 <http://www.thebureauinvestigates.com/2012/02/04/obama-terror-drones-ciatactics-in-pakistan-include-targeting-rescuers-and-funerals/> (accessed 14 March 2012); ‘Predator Drones and Unmanned Aerial Vehicles’ above n 48.

Unmanned Vehicles: A (Rebooted) History, Background and Current State of the Art

5 A Move to the Ground

Whilst UVs have become the centrepiece of modern air warfare, UGVs face a much more complex operating and navigational environment. That is not to say that UGVs are not in use by the armed forces; in fact, more ground robots (12,000 in total) are used in Afghanistan and Iraq than UAVs (approximately 7,000). However, the majority of these are remotely controlled or ‘teleoperated’86 and not semi-autonomous.87 Teleoperated UGVs are used in a wide variety of situations which pose immediate risks to human combatants; in particular ordnance disposal, urban scouting, and doorway breaching.88 Small UGVs can also be fitted with a variety of cameras and sensors to see through smoke or at night, or to detect explosives and chemical, biological or radiological agents.89 A weaponised teleoperated UGV,90 the Special Weapons Observation Remote Direct-Action System (SWORDS), was approved for use in Iraq in 2008.91 SWORDS units are nearly silent in operation and can move as fast as a running person, climb stairs and rock piles, move through wire barriers, sand, snow and water, and correct themselves if knocked over.92 Larger teleoperated vehicles have been designed to rescue and provide first aid to injured troops under fire, ‘with minimal intervention by medic or other first responder operators.’93 Others have been developed for repair and

86  See definition section above. Teleoperated UGVs are controlled much in the same way as a remote control toy car, with a human operating the vehicle a short distance away, either by sight or via on-board cameras.
87  The most common role for teleoperated UGVs in contemporary conflicts is in the neutralisation of improvised explosive devices: US OSD Roadmap, above n 8, 19.
88  Levinson, above n 29.
89  Nardi, above n 28, 40.
90  SWORDS can be fitted with a range of high velocity, sniper, or machine guns or even rocket launchers. See Stew Magnuson, ‘Armed Robots Sidelined in Iraqi Fight’, National Defence Magazine (online) May 2008, <http://www.nationaldefensemagazine.org/archive/2008/May/Pages/Armed2265.aspx?PF=1> (accessed 15 April 2012).
91  Ibid. However, it is unclear whether the unit has been used or not, as some concerns were raised about the UGV’s reliability.
92  K Jones, ‘Special Weapons Observation Remote Recon Direct Action System (SWORDS)’ in Platform Innovations and System Integration for Unmanned Air, Land and Sea Vehicles (Paper 36, Meeting Proceedings, AVT-SCI Joint Symposium) 36–1, 36–8.
93  Katie Drummond, ‘Pentagon Seeks Robo-EMS to Rescue Wounded Warriors’, Wired (online) 3 March 2010, <http://www.wired.com/dangerroom/2010/03/pentagon-seeks-robo-ems-torescue-wounded-warriors/#more-22983> (accessed 2 April 2012).

reconstruction under fire, such as moving dirt or repairing craters in runways.94 Whilst the majority of UGVs are currently teleoperated, there is a concerted effort to field more autonomous vehicles, which do not require constant human oversight and control. Autonomous or semi-autonomous land-based navigation is perhaps the most challenging of environments for UV programmers and engineers due to the plethora of ‘nontrivial navigational capabilities’ required to effectively operate in ground roles.95 However, the Israelis have made significant inroads into integrating autonomous UGVs into active military practice.96 The Guardium UGV, for instance, is a small armoured all-terrain vehicle equipped with a wide array of cameras and sensors. It can patrol to pre-programmed coordinates without human control and react to unscheduled events.97 It was deployed on the Israeli border to detect infiltrators after humans undertaking the same roles were attacked and kidnapped in 2006.98 A weaponised combat version of the Guardium has been trialled and certified by the Israeli army.99 South Korea is reportedly using a UGV similar to the Guardium to patrol its border with North Korea.100 South Korea also operates stationary robotic platforms that can detect, identify and target intruders in a completely autonomous way, if permitted.101 In the US, there has been a concerted effort by the government to bring UGV autonomy up to the level of UAVs and indeed provide for more autonomous

94  See D W Gage, ‘UGV History 101: A Brief History of Unmanned Ground Vehicle (UGV) Development Efforts’ (1995) 13(3) Unmanned Systems Magazine, 2.
95  In this respect both Russian and American space exploration programs have provided major advances to artificial intelligence systems. Indeed, the Russians, unable to afford manned moon exploration, instead placed resources into UVs, placing them at the forefront of UGV development until quite recently. See Gage, ibid, 6.
96  This can be attributed to the fact that there is an ongoing state of war in that country combined with a low tolerance for casualties amongst the populace.
97  It does so ‘in line with a set of guidelines specifically programmed for the site characteristics and security routines’. See the manufacturer website for the Guardium, <http://www.g-nius.co.il/unmanned-ground-systems/guardiumugv.html> (accessed 12 April 2012).
98  Levinson, above n 29.
99  It can carry over 1000 kilos of weapons and munitions. See G-NIUS Unmanned Ground Systems (2010) <http://g-nius.co.il/unmanned-groundsystems/avantguard.html> (accessed 12 April 2012).
100  See Brown, above n 49.
101  Ronald C Arkin, Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture (2007) Georgia Institute of Technology, 5.

and complex AI in the future.102 Currently, the US is trialling a number of medium to large UGV systems.103 These include: the Black-I Robotics unmanned crossover land vehicle, similar in weight and specifications to the Guardium UGV;104 a larger, truck-sized Multifunction Utility Logistics Equipment (MULE) UGV designed mostly for transport and operations support;105 and a heavier, six-ton UGV tank code-named the ‘Crusher’ for heavy payloads and rugged terrain.106 The Crusher can operate in semi-autonomous mode, or be remotely teleoperated by satellite link.107

6 On and Under Water: Naval UVs

6.1 Surface vehicles
Unmanned surface vehicles (USVs) are arguably the least developed of the UV family, despite the fact that the surface of the water — at least calm water — is perhaps the most easily navigable environment for a robotic AI. Indeed, robotic technology is sufficiently advanced that UV systems can be retrofitted to conventional watercraft (up to fifteen per control unit) to provide them with semi-autonomous functions.108 There have, however, been recent forays into purpose-built semi-autonomous USVs. The Israeli Protector is a nine metre sealed, rigid hull USV,109 designed to protect against seaborne terrorist attacks.110 It

102  National Research Council (US), Technology Development for Army Unmanned Ground Vehicles (2002) 1-12.
103  Office of the Secretary of Defense (US), Unmanned Systems Integrated Roadmap (2009) Report no FY2009–2034, 111-134 (‘Integrated Roadmap’).
104  Although it is also designed to undertake perimeter patrols and surveillance, the US is currently focusing much of its UGV deployment strategy on gear transport for ground units. The Black-I Robotics UGV is designed to carry packs, food, water, and ammunition for light infantry forces, which it will follow automatically through a range of terrains for up to eight-hour shifts before refueling. See Black-I Robotics <http://www.blackirobotics.com> (accessed 14 May 2012).
105  Integrated Roadmap, above n 103, 116.
106  Ibid 118.
107  Ibid.
108  The UAPS20 is an ‘Unmanned Autopilot System’ designed by an Italian company, SIEL, which can be fitted to a rigid-hulled inflatable boat to turn it into a low cost USV that can undertake relatively complex waypoint navigation as well as teleoperated control. Up to fifteen boats can simultaneously be controlled for a wide range of tasks, from harbor patrol and surveillance, to ordnance countermeasures and even as a UAV or UUV launch platform. See SIEL, <http://www.sielnet.com/index.php/products/usv> (accessed 20 April 2012).
109  The company also cites the possibility of using the system for ‘naval targets’ but does not provide any further information on how this may work, quite possibly because the most obvious weaponised use of the system would be as a boat-bomb. See RAFAEL, <http://www.rafael.co.il/Marketing/358-1037en/Marketing.aspx> (accessed 12 March 2010).

uses a water-jet engine, allowing it to travel at speeds of up to 50 knots, and can patrol in semi-autonomous mode; although its stabilised machine guns are currently teleoperated by a human controller, as is its public address system.111 It is now in full service with the Israeli Navy.112 While the US has shown some interest in small patrol USVs,113 it appears to have set its sights on developing much larger USV platforms. In 2010, the US Defense Advanced Research Projects Agency (DARPA) launched the Anti-Submarine Warfare (ASW) Continuous Trail Unmanned Vessel (ACTUV) program.114 The project seeks to develop a frigate-sized USV ‘for theatre or global independent deployment’ capable of tracking modern diesel-electric submarines. DARPA hopes for a highly autonomous vessel ‘founded on the assumption that no person steps aboard at any point in its operating cycle.’ Communications with base are to be ‘intermittent’ given the ‘global, months long deployments with no underway human maintenance or repair opportunity.’115 The ACTUV program is still in

110  Such as the use of an explosive laden motorboat against the USS Cole in 2000. See Erik Sofge, ‘Robot Boats Hunt High-Tech Pirates on the High-Speed Seas’, Popular Mechanics (online) 1 October 2009, <http://www.popularmechanics.com/technology/engineering/robots/4229443> (accessed 12 March 2012).
111  S J Corfield and J M Young, ‘Unmanned surface vehicles – game changing technology for naval operations’ in G N Roberts and Robert Sutton (eds), Advances in Unmanned Marine Vehicles (2006) IEE Control Series, 313.
112  Which operates it in a semi-autonomous manner to patrol harbors, gather ISR, lay and remove ordnance and engage in electronic warfare. See Matthew Graham, Unmanned Surface Vehicles: An Operational Commander’s Tool for Maritime Security (2008) Joint Military Operations Department, Naval War College, 10 <http://handle.dtic.mil/100.2/ADA494165> (accessed 20 April 2012).
113  See Sofge, above n 110. The US Navy is currently exploring the capabilities of its Sea Fox USV – “a remote controlled five-meter rigid hull inflatable boat” – to deploy non-lethal weapons including “a directional acoustic hailer, eye dazzling laser and flash-bang munitions.” See Unmanned Editor, ‘US Navy Equips Unmanned Surface Vehicles with Non-Lethal Weapons’, Unmanned Surface Vehicles (USV) News (online), 7 February 2012, <http://www.unmanned.co.uk/unmanned-vehicles-news/unmanned-surfacevehicles-usv-news/us-navy-equips-unmanned-surface-vehicles-with-non-lethalweapons/> (accessed 19 March 2012); see also US Navy website, <http://www.navy.mil/view_single.asp?id=114818> (accessed 19 March 2012).
114  Defense Advanced Research Projects Agency, ASW Continuous Trail Unmanned Vessel (ACTUV) Phase 1 (2010) <https://www.fbo.gov/spg/ODA/DARPA/CMO/DARPA-BAA-1043/listing.html> (accessed 20 April 2010).
115  Ibid.

progress, with phase two of a four-phase cycle set to commence in July 2012.116

6.2 Underwater vehicles
More prominent, both in military and civilian use, are USVs’ undersea cousins, UUVs. Ordnance-clearing UUVs were deployed by the allies in the early part of the second Iraq war to clear naval mines.117 As a result, a number of navies have fitted destroyer fleets with permanent on-board UUVs.118 In 2004, the US Navy mapped out a twenty-year ‘UUV Master Plan’ that would substantially integrate UUVs into all aspects of its operations.119 The UUV Master Plan envisions UUVs being used for such a wide range of undersea operations120 that current manned undersea vehicles may become redundant or extremely limited in future conflicts. These operations include: ISR collection and distribution; undersea mapping; the creation of moveable naval data and communications networks; countermeasure and decoy operations; and ‘time critical strike capabilities against undersea, surface, air and land targets’.121

116  DARPA is currently soliciting proposals for phases 2-4, which will involve designing, building and testing the vessel. See Defense Advanced Research Projects Agency, Tactical Technology Office, Anti Submarine Warfare (ASW) Continuous Trail Unmanned Vessel (ACTUV) <http://www.darpa.mil/Our_Work/TTO/Programs/AntiSubmarine_Warfare_(ASW)_Continuous_Trail_Unmanned_Vessel_(ACTUV).aspx> (accessed 18 March 2012).
117  During 2003, Australian, British and US UUVs cleared over 2.5 million square meters of the Iraqi coast of mines. Global Security Org, Intelligence Collection Programs and Systems (14 May 2008) <http://www.globalsecurity.org/intell/systems/uuv.htm> (accessed 20 April 2010).
118  Including the US and the UK in 2004: see ‘Unmanned Remote Minehunting System Installed for USS Momsen Commissioning’, Space Daily (online) 31 August 2004, <http://www.spacedaily.com/news/uav-04zzo.html>; Nicolas von Kospoth, Royal Navy Introduces New Reconnaissance UUV (24 February 2010) Defpro.focus <http://www.defpro.com/daily/details/515/> (accessed 12 April 2012).
119  Department of Navy (US), The Navy Unmanned Undersea Vehicle (UUV) Master Plan (9 November 2004) United States Navy Report <http://www.navy.mil/navydata/technology/uuvmp.pdf> (accessed 12 April 2010) (‘UUV Master Plan’).
120  Based on four pillars: ‘Force Net, Sea Shield, Sea Strike, and Sea Base’. See Henderson, above n 16, 57.
121  UUV Master Plan, above n 119.

7 The Drone Gold Rush

As a result of the demand for UV technology, market commentators have noted that there is a ‘drone gold rush’. According to the US Teal Group, the global UAV market is currently worth US$6 billion a year,122 and is forecast to rise to US$12 billion a year by 2018.123 Although the global UV market has traditionally been dominated by US124 and Israeli companies, competitors in Europe125 and the Asia-Pacific are multiplying rapidly.126 More than forty countries now have UV programs, and the competition between these countries for market and technological dominance is increasing.127 All of the major EU arms companies are now involved in UV production or prototype development.128 China is also reportedly developing its own UAV program, including a copy of the Predator UAV.129

122  iCD research estimates the global value to be US$7 billion: see Airforce-technology.com, Snapshot: The Global Market for Unmanned Aerial Vehicles <http://www.airforce-technology.com/features/feature125724/> (accessed 19 March 2012).
123  Steven Zagola, David Rockwell and Philip Finnegan, World Unmanned Aerial Vehicle Systems: Market Profile and Forecast, Executive Summary, 2011 <http://tealgroup.com/index.php?option=com_content&view=frontpage&Itemid=1> (accessed 19 March 2012) (‘Teal Group Executive Summary’).
124  In 2011, US companies built approximately 1,800 drones out of the 2,600 made worldwide. See Andrew Rettman, ‘EU firms Join Gold Rush on Drones’, EU Observer (online), 17 February 2012, <http://euobserver.com/13/115283> (accessed 19 March 2012).
125  The UK and French defence departments are currently sponsoring a joint program called Telemos, which aims to produce a medium altitude long endurance (MALE) UCAV by 2020. See the manufacturer website for BAE Systems: <http://www.baesystems.com/cs/groups/public/documents/document/mdaw/mdm3/~edisp/baes_026385.pdf> (accessed 17 March 2012). In response, German and Italian companies are working together to develop equivalent MALE technology: see Unmanned Editor, ‘Cassidian, Alenia Join Forces for UAV Projects’, Unmanned Aerial Vehicles News (online), 20 December 2011 <http://www.unmanned.co.uk/unmanned-vehicles-news/unmanned-aerialvehicles-uav-news/cassidian-alenia-join-forces-for-uav-projects/> (accessed 19 March 2012).
126  Cameron Stuart, ‘Drones, Lives and Liberties’, The Australian (Sydney), 1 March 2012, 11.
127  The market leaders in UV technology are the US, Japan and Israel, with France following closely behind. See UVS International, UAV Categorisation, in Yearbook: UAVs Global Perspective (2004) 156.
128  Teal Group Executive Summary, above n 123.
129  Noel Sharkey, quoted in Rettman, above n 124.

8 Beyond the Military – The Transition to Civilian Use

In this section we consider the civilian uses of UVs, both now and into the future. We noted above that UVs have not been used as extensively for civilian purposes as they have military ones. We also highlighted two exceptions to this general rule, the first being limited agricultural use and the second, undersea operations. Whilst the former represented only a very small component of global industrial usage, UVs played a dominant role in the latter. Indeed, it is said that the ‘golden age’ of UUV technology occurred more than a decade before the UAV revolution, when the public was provided with footage of undersea wrecks like the Titanic through the tethered cameras of robotic submersibles.130 As groundbreaking and popular as such operations were, they were made possible by a knowledge and resource pool created through commercial and industrial uses of the technology; for instance, petrochemical and mineral extraction, and subsea pipeline and cable laying and maintenance.131 Those industries have a particular interest in developing robotic technologies that could supplant humans in the undertaking of ‘dirty, dangerous or dull’ jobs in alien, high risk environments. Above the water, however, there was much less impetus for the development of expensive alternatives to human operated vehicles, and UV development has therefore historically been driven by the military sectors of wealthier nations seeking to transfer risk from human combatants to machine ones. Recently there has been a marked transition from military to civilian uses for drone technologies.
This has been driven by a number of factors:
• Inter-agency transfer: As drones have moved beyond being highly expensive prototype hardware to more mainstream military and research vehicles, there has been an increasing willingness for inter-agency transfer of drones for civilian use or trials.132
• Increasing international demand: As a result of increasing market competition and an ever wider range of countries unmanning their military sectors, the price of drones has decreased

130  In fact, the first ‘golden age’ in UV technology occurred under the oceans more than a decade before it did in the air. See Henderson, above n 16, 57.
131  Stephanie Showalter, ‘The Legal Status of Autonomous Underwater Vehicles’ (2004) 38(1) Marine Technology & Society Journal 80.
132  For instance, armies have provided drones to police forces for trials, and air forces have similarly provided UVs to search and rescue teams to deal with large-scale emergencies. See R Johnson, NASA drones aid firefighters (2008) Electronic Engineering Times 1535, 9-10; Randal C Archibold, ‘U.S. Adds Drones to Fight Smuggling’, New York Times (New York) 8 December 2009, A.25; and Graham Warwick, ‘Drug Drones’ (2009) 170 Aviation Week & Space Technology 22.

significantly, bringing them within reach of non-military bodies, which manufacturers view as an important new market.133
• Public R&D support: The massive R&D push into drone technology and computing generally has brought both know-how and inexpensive technology into the wider public arena.
• Increased access to powerful hardware platforms: Over the past two decades computing power and hardware systems have become incredibly powerful, inexpensive and, more importantly, widely available to commercial markets.134 Consumers can now purchase ‘off the shelf’ systems that are almost, if not as, complex and powerful as those available to the military.135 Conversely, the military has become increasingly reliant on commercial hardware; consequently, much of the technology used in the construction of UVs is available on the open market.136
Drone technology is increasingly within the reach of public bodies, private companies and even individuals. This trend will most likely continue. We have already set out some of the roles that UVs are being used for by such bodies, recognising that as the technology becomes more accessible a range of other applications will no doubt come online.

133  Stafford writes that when ‘commercial drones do take off, four groups of businesses would be looking to cash in. Academic researchers … [with] associations with small, specialist companies that build UAVs. Older commercial companies … have long sold drones as toys. A handful of major corporations already have a toe-hold in the market. And military contractors have perfected the secret designs of the world’s best-performing drones — those already used by air forces and spy agencies.’ See Ned Stafford, ‘Spy in the sky’ (2007) 7130(445) Nature 808.
134  David S Alberts, The Unintended Consequences of Information Age Technologies: Avoiding the Pitfalls, Seizing the Initiative (University Press of the Pacific, 2004) 26–28.
135  Indeed, modern military vehicles and platforms often rely on a mix of military grade and commercially available technology. Jay Stowsky, ‘Secrets to shield or share? New dilemmas for military R&D policy in the digital age’ (2004) 2(3) Research Policy 257.
136  As Gormley notes, ‘Military breakthroughs are increasingly resulting from commercial, rather than secret military, research’. See Dennis M Gormley, ‘Hedging Against the Cruise-Missile Threat’ (1998) 40(1) Survival 92. As the US Administration admits, ‘Technological advances in propulsion that were previously driven by military-sponsored research are now largely driven by commercial interests—fuel cells by the automotive industry, batteries by the computer and cellular industries, and solar cells by the commercial satellite industry. [UVs] are therefore more likely to rely on COTS [commercial off the shelf] or “COTS-derivative” systems.’ See US OSD Roadmap, above n 8, 52.

8.1 Border security
Border security and customs roles are particularly well suited to UAVs,137 which are now used to detect illegal transborder activities, border infringements,138 and drug139 and people smuggling.140 More often than not, these agencies utilise craft, such as the Predator drone, that are directly seconded from the military; as yet, it is rare to find UVs specifically designed for non-military surveillance.

8.2 Policing
Policing is another sector in which UVs are beginning to appear. The British police have been particularly enthusiastic about UVs and, under the auspices of the UK Home Office, have been developing a nationwide drone program since at least 2007.141 The program reportedly includes trialling

137  For instance, Reaper drones are now deployed by the international anti-piracy task force to scout for Somali pirates in the Indian Ocean. The drones are operated from a base in Germany to follow and record the movements of suspect pirate vessels. Although many boats have been captured, it has been extremely hard to prove that they were involved in piracy. The ability of the drones to capture video of suspect movements over long periods of time (up to 18 hours) without detection makes them perfect for the detection and evidence-gathering role. See Will Ross, ‘Drones Scour the Sea for Pirates’, BBC News (online) 10 November 2009 <http://news.bbc.co.uk/2/hi/africa/8352631.stm> (accessed 15 March 2012).
138  Countries like Australia that have larger border areas are reportedly trialling semi-autonomous patrols of large areas of their northern approaches. See Ari Sharp, ‘Unmanned aircraft could soon patrol borders’, The Age (online), 6 April 2010 <http://www.theage.com.au/national/unmanned-aircraft-could-soon-patrolborders-20100405-rn4l.html> (accessed 1 May 2012).
139  In late 2009, the US Department of Homeland Security expanded its use of drones into external jurisdictions, including the Caribbean and South America, to spot and track drug smugglers. See Archibold, above n 132. The US Navy is also trialling drones over unspecified countries, seeking to use them to detect submersible vehicles that have been used to smuggle drugs into the US. See Warwick, above n 132.
140  US Predator drones, for instance, have been used to patrol the Canadian and Mexican borders. See Warwick, above n 132. In Europe, the EU’s border agency, Frontex, is reportedly trialling UAV surveillance in Greece, the main entry point for asylum seekers into the EU. Rettman, above n 124.
141  Paul Lewis, ‘CCTV in the sky: police plan to use military-style spy drones’, The Guardian (online) 23 January 2010, <http://www.guardian.co.uk/uk/2010/jan/23/cctv-sky-police-plan-drones> (accessed 10 April 2012). However, note an earlier talk by the Home Office which was reported by La Franchi: see Peter La Franchi, ‘UK Home Office plans national police UAV fleet’, Flight International (online) 17 July 2007, <http://www.flightglobal.com/articles/2007/07/17/215507/uk-home-officeplans-national-police-uav-fleet.html> (accessed 10 April 2012). Police in Australia are also trialling drones which may be used for detecting drug crops and finding

medium and low altitude UAVs, with an arrest being assisted by the use of a small UAV for the first time in 2010.142 The program envisions military UAVs being modified for a wide range of civilian law enforcement activities, including ‘routine monitoring of antisocial motorists, protesters, agricultural thieves and fly-tippers’143 as well as gathering evidence of ‘vandalism, graffiti or littering.’144 At the 2012 London Olympics, unarmed UAVs will be used for crowd surveillance and security.145 In addition to drones, UK police are also using UGVs, including the Wheelbarrow Mk9 remote explosive ordnance device, while the UK National Rail and London Fire Brigade are using small UGVs to deal with acetylene rail fires.146 According to reports, other police forces have also sought to arm ground and aerial drones with Tasers for non-lethal engagement of suspects.147 Although

missing persons. See Kate Kyriacou, ‘Queensland Police trial hi-tech surveillance drones to chase criminals’, The Courier Mail (online), 14 March 2012 <http://www.couriermail.com.au/news/technology/attack-of-the-drones/storyfn7cejkh-1226298835589> (accessed 19 March 2012). Following an incident in which a police helicopter was shot down in Rio de Janeiro, police are now using Israeli UAVs to patrol favelas or shantytowns. See ‘State of the Art’ (Summer 2011) 1(2) Unmanned Systems: Mission Critical <http://issuu.com/auvsi/docs/usna_mission_critical_summer/18> (accessed 25 March 2012).

142 Although no conviction was recorded. See ‘Unlicensed police drone grounded’, BBC News (online), 16 February 2010, <http://www.clickliverpool.com/news/national-news/128901-merseysidepolice-drone-fails-to-convict-car-thief.html> (accessed 10 April 2012).

143 Ibid.

144 David Hambling, ‘Future Police: Meet the UK’s Armed Robot Drones’, Wired News (online), 10 February 2010 <http://www.wired.co.uk/news/archive/201002/10/future-police-meet-the-uk%27s-armed-robot-drones> (accessed 25 May 2012).

145 See Lewis, above n 141; Stephen Graham, ‘Olympics 2012 Security: Welcome to Lockdown London’, The Guardian (online), 12 March 2012, <http://www.guardian.co.uk/sport/2012/mar/12/london-olympics-securitylockdown-london?INTCMP=SRCH> (accessed 14 March 2012).

146 Yvonne Headington, ‘UGVs Ok with UK Police; UAVs up in the Air’ (Summer 2011) 1(2) Unmanned Systems: Mission Critical 9-11, <http://issuu.com/auvsi/docs/usna_mission_critical_summer/11> (accessed 25 March 2012).

147 See Lewis, above n 141. However, the authors could find no official verification of this. The Sheriff’s Office of Montgomery County, Texas has reportedly been operating a Shadowhawk drone with the capacity to fire a Taser gun since November 2011. It is unclear, however, whether the drone has been used in an armed capacity. See ‘Tase of Our Lives’, The Daily (online), 12 March 2012


Unmanned Vehicles: A (Rebooted) History, Background and Current State of the Art


the use of Taser drones could not be verified by the authors, two French companies market small and micro UAVs which can variously be armed with a 44mm flash-ball-gun,148 tear-gas canisters,149 or Tasers.150

8.3 Patrolling and inspection
The need to patrol large restricted areas is not limited to the military. Various industries require ground and air surveillance. For instance, semi-autonomous UGVs have been suggested for a range of industries including: nuclear and electric power plants; railway lines and tracks; sensitive industrial and research areas; oil and gas pipelines, refineries and storage areas; zoos, wildlife reserves and safaris; and even private farms and ranches.151 Semi-autonomous patrol vehicles are obviously well suited to monitoring gaols and detention centres, many of which are now privately operated.152 Dull and routine operations, such as car parking inspection, have also been highlighted as a possible role for semi-autonomous UGVs.153 Similarly, the need to inspect cars and vehicles for bombs or other hazards is not limited to the military; security firms protecting hotels, conference centres and other organisations at risk of terrorist activities are very interested in robots that can undertake these dangerous tasks.154

8.4 Emergency and hazard management
Adapted military drones have also proven successful in emergency management and fire fighting, where they can be used to monitor operations in dangerous environments.155 For instance, Predator drones with specially

<http://www.thedaily.com/page/2012/03/12/031212-news-armed-drones-1-2/> (accessed 14 March 2012).

148 ‘Eurosatory 2004 - Tecknisolar Seni designs armed mini-UAV for anti-terror operations’, Flight International (online), 22 June 2004, <http://www.flightglobal.com/articles/2004/06/22/183201/eurosatory-2004tecknisolar-seni-designs-armed-mini-uav-for-anti-terror-operations.html> (accessed 25 May 2010).

149 Ibid.

150 See iDrone Website, <http://www.idrone.fr/> (accessed 20 March 2012).

151 See Israel Aerospace Industries Ltd website: <http://www.iai.co.il/34056-31663en/Groups_Military_Aircraft_Lahav_Products_UGV.aspx> (accessed 18 April 2012).

152 Douglas McDonald, ‘Public Imprisonment by Private Means - The Re-Emergence of Private Prisons and Jails in the United States, the United Kingdom, and Australia’ (1994) 34 British Journal of Criminology 29, 29.

153 Richard Bloss, ‘By air, land and sea, the unmanned vehicles are coming’ (2007) 34(1) The Industrial Robot 12, 14.

154 Ibid.

155 Fire fighters can be blinded by smoke and debris during firefighting operations and wander into areas that are dangerous. For instance, certain regions of the fire


JLIS Special Edition: The Law of Unmanned Vehicles

Vol 21(2) 2011/2012

designed heat sensors were provided to Californian authorities to help them battle the massive wildfires that ravaged that state in 2008.156 In that case only fire surveillance was provided, but in the future, custom-built fire fighting and water bombing UAVs may be used to combat fires, removing human pilots from the high-risk environment of wildfires. In a more recent example, Global Hawk UAVs were used following the tsunami and earthquake in Japan in March 2011 to provide ‘real time data to disaster relief.’157 UVs also promise to provide ground support in areas inaccessible to rescue crews. Small teleoperated and semi-autonomous UGVs designed for reconnaissance in houses and caves are well adapted to searching earthquake and disaster zones and other hazardous terrain for survivors.158 Both the Japanese fire service159 and the Israeli military160 have been trialling rescue UVs that can retrieve injured persons from high-risk areas. Not only would these be important in troop rescue, but they could also be used to extract civilians from remote regions, disaster zones, fires or even riots.

may be too hot for humans, or areas of the ground may be covered in ash that would cause the firefighters’ boots to melt.

156 Heat detecting and radar equipment were retrofitted to the drones so that they could ‘see through’ the smoke layer to provide fire fighters with up-to-the-minute intelligence on the fire as well as any obstructions, hazards or impediments not visible to human eyes on the ground. See Johnson, above n 132, 9-10. Despite resistance in Europe, small UAVs are also being used to monitor fire ‘hot spots’ by fire services in Hungary and Spain. Lindsay Voss, ‘Unmanned Systems vs. Wildfires’ (Summer 2011) 1(2) Unmanned Systems: Mission Critical 30, 32-33 <http://issuu.com/auvsi/docs/usna_mission_critical_summer/33> (accessed 25 March 2012).

157 Saira Syed, ‘Drone Markets Target Asia for Growth’, BBC News (online), 16 February 2012 <http://www.bbc.co.uk/news/business-17028684> (accessed 15 March 2012).

158 Brian Yamauchi and Pavlo Rudakevych, ‘Griffon: A Man-Portable Hybrid UGV/UAV’ (2004) 5(31) Industrial Robot 443, 443.

159 Brian Ashcraft, ‘Just Press “Save”: Disaster search-and-rescue in robot-crazy Japan’, Popular Science (online), 14 May 2009, <http://www.popsci.com/scitech/article/2007-07/autonomous-flyingambulances-could-save-troops#> (accessed 2 February 2010).

160 David Axe, ‘Autonomous Flying Ambulances Could Save Troops’, Popular Science (online), 7 November 2007 <http://www.popsci.com/scitech/article/200707/autonomous-flying-ambulances-could-save-troops#> (accessed 2 February 2010).


8.5 Remote exploration works and repair
In the undersea environment, UUVs have been used for decades to undertake repairs to hulls, pipelines, or oil rigs.161 More autonomous UUVs are being developed which will undertake this work automatically.162 UUVs are also being used for underwater exploration, including under the US Ocean Observatories Initiative, which aims to conduct bottom-to-surface mapping of ocean activities over a period of three decades. The Initiative will operate with two major arrays off the East and West coasts of the US, as well as four stations in the Pacific and off the coasts of Greenland, Argentina and Chile. UUVs, including the Remus 600 and Slocum gliders, will be used to transmit data from approximately 800 instruments to researchers (and civilians) around the world, with the first data expected to be available in 2013.163 Repair systems are also in development on land, including for the maintenance of remote drilling stations and mineral exploration in remote areas, as well as plumbing and maintenance robots that travel through subterranean sewer pipes monitoring for weaknesses or structural breaches, automatically repairing the damage or, where that is not possible, recording it and alerting controllers.164 Israeli companies have produced a range of heavy UGVs for bulldozing and earthmoving, which are in active use to undertake structural works under fire. Whilst these are currently teleoperated, future earthmoving UGVs are likely to be automated to undertake routine maintenance of runways and fire-trails, civil engineering, resource transport, or the clearing of forest and farmland.165

8.6 Urban transport
Whilst UGVs are able to operate off-road and for limited on-road military uses, it is relatively well accepted that they are not yet ready for the nontrivial

161 Carl E Nehme, Modeling Human Supervisory Control in Heterogeneous Unmanned Vehicle Systems (PhD thesis, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, 2009) 28.

162 Ibid.

163 Brett Davis, ‘Discovery and Exploration: Ocean Observatories Initiative Takes Shape Under the Oceans’ (Winter 2011) 1(4) Unmanned Systems: Mission Critical (online) 7-11 <http://issuu.com/auvsi/docs/mission_critical_winter_2011/1> (accessed 25 March 2012).

164 Researchers at the University of California, Irvine are developing drone technology which would repair aging subterranean pipes from the inside using carbon fibre. See Tom Vasich, No Mere Pipe Dream, University of California - Irvine <http://www.uci.edu/features/2010/02/feature_piperobot_100208.php> (accessed 12 January 2012).

165 Howard Cannon, Extended Earthmoving with an Autonomous Excavator (Master’s thesis, Technical Report CMU-RI-TR-99-10, Robotics Institute, Carnegie Mellon University, 1999).


navigation required to operate on public highways and roads.166 Despite this, there have been concerted efforts to advance the technology to a level where it can safely operate in civilian traffic zones. Proponents hope that one day automated vehicles will act as taxis, reduce traffic congestion, combat global warming emissions, and reduce road fatalities.167 One of the leaders in the field, Google, has completed over 200,000 miles with its fleet of autonomous Prius vehicles.168 The Prius uses ‘artificial-intelligence software that can sense anything near the car and mimic the decisions made by a human driver.’169 It can even be programmed for different driving personalities.170 Most major automobile companies are also developing autonomous or semi-autonomous vehicles,171 such as BMW’s ConnectedDrive Connect (CDC) system, which operates using four types of sensors – radar, camera, laser scanners and ultrasound distance sensors – to detect cars in front and in

166 The nontrivial navigational requirements for civilian motor traffic are simply beyond most of today’s artificial intelligence systems. Semi-autonomous UVs must deal with complex road rules, highly congested traffic, varying road and weather conditions and non-automotive traffic such as cyclists and pedestrians. More to the point, they must deal with other vehicles that may not be strictly adhering to the same road rules they will be programmed with, along with unexpected events, emergencies or impediments (such as a child or animal straying onto the road). See, for instance, futurist and urban designer Michael Arth’s forthcoming book, The Labors of Hercules: Modern Solutions to 12 Herculean Problems (online) 2009 <http://michaelearth.com/herc_V_eco.html> (accessed 26 May 2010).

167 Luke Vandezande, ‘California may be next to legislate autocars’, AutoGuide (online), 1 March 2012, <http://www.autoguide.com/autonews/2012/03/california-may-be-next-to-legislate-autonomous-cars.html> (accessed 25 March 2012); Tom Vanderbilt, ‘Let the Robot Drive: The Autonomous Car of the Future is Here’, Wired (online), 20 January 2012, <http://www.wired.com/magazine/2012/01/ff_autonomouscars/all/1> (accessed 25 March 2012).

168 John Markhoff, ‘Google Cars Drive Themselves, In Traffic’, The New York Times (online), 9 October 2010, <http://www.nytimes.com/2010/10/10/science/10google.html?pagewanted=1&_r=2> (accessed 25 March 2012).

169 Ibid.

170 Ibid.

171 See, for example, the Chevrolet EN-V developed by General Motors, which is a two seat electric urban mobility vehicle. Audi and Volkswagen developed the Autonomous Audi TT, which completed a climb to a 14,110-foot mountain summit in 2010. Japanese company ZMP is currently selling its autonomous vehicle, Robocar, to researchers for US$84,000. See ‘State of the Art’ (Spring 2011) 1(1) Unmanned Systems: Mission Critical (online) 21, <http://issuu.com/auvsi/docs/missioncritical_spring_final_hi/23> (accessed 25 March 2012).


adjacent lanes.172 The vehicle was trialled on the German Autobahn in 2011, and is expected to go into production ‘in a few years.’173 Although conservative estimates predict that autonomous cars will be sold commercially by 2020, more enthusiastic proponents hope to have such cars on the road by 2015.174 Pre-empting this shift in the urban landscape, legislation has been implemented in the US state of Nevada requiring the adoption of regulations authorising autonomous vehicles.175 Both the US and the European Union have been funding autonomous UGV research and development since the 1980s. DARPA has attempted to encourage public involvement in UGV autonomy through the DARPA Grand Challenges, a series of task-based competitions pitting different UGVs against each other, most recently in the urban environment, for a total prize pool of US$3.5 million.176 The US Department of Transportation Intelligent Transportation Systems Joint Programme Office is developing vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) technology whereby unmanned cars rely on ‘connected and cooperative systems to communicate with the roads and each other.’ For

172 Tara Kelly, ‘BMW Self-Driving Car: Carmaker Shows off Hands-free Car on Autobahn’, The Huffington Post (online), 26 January 2012 <http://www.huffingtonpost.com/2012/01/26/bmw-self-drivingcar_n_1234362.html> (accessed 25 March 2012).

173 Peter Murray, ‘A Look at BMW’s Semi-autonomous Driving Car’, Singularity Hub (online), 2 February 2012 <http://singularityhub.com/2012/02/02/a-look-atbmws-semi-autonomous-driving-car/> (accessed 25 March 2012).

174 Ibid.

175 Peter Murray, ‘Driverless Cars Brought Closer to Reality as Nevada Passes Bill’, Singularity Hub (online), 28 June 2011 <http://singularityhub.com/2011/06/28/driverless-cars-brought-closer-toreality-as-nevada-passes-bill/> (accessed 25 March 2012). Similar bills have also been introduced in California, Hawaii, Oklahoma, Florida and Arizona. See Amanda Crawford, ‘Google’s Driverless Cars get Boost as California Mimics Nevada’, Business Week (online), 1 March 2012, <http://www.businessweek.com/news/2012-03-01/google-driverless-cars-getboost-in-california> (accessed 25 March 2012).

176 The Challenge aims to develop ‘technology that will keep warfighters off the battlefield and out of harm’s way. The Urban Challenge features autonomous ground vehicles maneuvering in a mock city environment, executing simulated military supply missions while merging into moving traffic, navigating traffic circles, negotiating busy intersections, and avoiding obstacles.’ See DARPA, Urban Challenge Overview, <http://archive.darpa.mil/grandchallenge/overview.asp> (accessed 2 April 2012). However, a civilian car maker has been eyeing the technology: see Jon Stewart, ‘Robot cars race around California’, BBC News (online), 5 November 2007 <http://news.bbc.co.uk/go/pr/fr/-/2/hi/technology/7078245.stm> (accessed 25 May 2012).


example, a vehicle could detect another car that has run a red light and respond accordingly to avoid a collision. Current V2V technology allows vehicles to avoid up to 80% of dangerous traffic scenarios; however, more work is needed to counter concerns about privacy and cyber security.177 The Department is also seeking external input through its Connected Vehicle Technology Challenge.178 The European Commission is currently funding the Safe Road Trains for the Environment (SARTRE) project, which commenced in 2009 and aims to develop safe and effective ‘road train’ technology. The system would allow individual drivers to link up to the rear of a train of vehicles controlled by a lead vehicle. Cars would be outfitted with a navigation system and a transmitter/receiver unit, allowing them to locate and join the nearest train, after which the driver could relax, sleep, or work during the commute. Upon arrival at the destination, the driver would split off from the train and retake control of the vehicle.

8.7 Drone journalism
Although domestic regulations in many countries currently limit the use of UAVs for civilian and commercial purposes,179 several news agencies are operating micro drones capable of obtaining footage from remote or dangerous areas.180 As UAVs are more fully integrated into commercial

177 Jerry Hirsch, ‘Cars that Communicate Could Improve Safety’, The Los Angeles Times (online), 20 February 2012 <http://www.latimes.com/business/money/la-fi-moconnected-vehicles-20120220,0,3927662.story?track=rss> (accessed 25 March 2012).

178 Stephanie Levy, ‘Car talk: the science and politics behind vehicles that talk to each other and the roadways’ (Spring 2011) 1(1) Unmanned Systems: Mission Critical (online) 28 <http://issuu.com/auvsi/docs/missioncritical_spring_final_hi/25> (accessed 25 March 2012).

179 For example, under existing UK regulations, only UAVs lighter than 20kg can be legally flown and operators must have a permit from the Civil Aviation Authority. See Ryan Gallagher, ‘Surveillance drone industry plans PR effort to counter negative image’, The Guardian (online), 2 February 2012 <http://www.guardian.co.uk/uk/2012/feb/02/surveillance-drone-industy-preffort> (accessed 19 March 2012). In the US, Congress passed a Bill in February 2012 which will allow for integration of privately owned drones into commercial airspace by 2015. See Brian Bennett, ‘FAA moves toward allowing unmanned drones in U.S. airspace’, Los Angeles Times (online), 8 March 2012 <http://articles.latimes.com/2012/mar/08/news/la-pn-faa-drones-us-airspace20120308> (accessed 19 March 2012).

180 For example, a Hextacopter drone was used by Australia’s Nine Network in a failed attempt to obtain aerial footage of government detention centres for asylum seekers on Christmas Island. See Paige Taylor and Nicolas Perpitch, ‘Sixty Minutes drone crashes off death cliff’, The Australian (online), 14 May 2011 <http://www.theaustralian.com.au/media/sixty-minutes-drone-crashes-offdeath-cliff/story-e6frg996-1226055615740> (accessed 19 March 2012).


airspace, drone journalism – including civilian-journalism and paparazzi-journalism – is set to increase.181

8.8 Other areas
The civil use of UAVs could be significant and extensive: private and insurance investigation; event coverage; traffic management and monitoring; fisheries protection; real-time disaster reconnaissance and management; aerial surveillance by Surf Life Saving groups;182 coverage of large public events; mechanised agriculture; power line surveying; aerial photography; film and cinematography; surveillance of foreign Embassies and Consulates;183 scientific research; environmental monitoring and so on.

Conclusion
Stulberg, quoted above, noted in 2007 that we are at the ‘dawning’ of a UV revolution. It is now safe to say that the revolution is very much upon us, certainly in the military sector, but increasingly in the civilian one. Even in the two years since the précis to this special edition, upon which this article is based, was written, there have been significant advances in UV technology, the way it is used and where it is deployed. As the UK Ministry of Defence reported in 2011:

[UVs] have already changed, and will continue to change, the way that we conduct warfare. Associated technologies are developing at an unprecedented rate and the relentless nature and speed of these advancements make it hard to assimilate, analyse and fully understand the implications: this makes it difficult to plan clearly and confidently for the future.184

181 The first instance of civilian drone journalism to gain international attention was in 2011, when a freelance journalist used a small drone to take bird’s-eye footage of a violent protest in Warsaw. See Mark Corcoran, ‘Drone Journalism Takes Off’, ABC News (online), 21 February 2012 <http://www.abc.net.au/news/2012-0221/drone-journalism-takes-off/3840616> (accessed 19 March 2012).

182 Surf Life Saving Australia is trialling UAVs to monitor beaches for sharks and civilians in trouble. See Cameron Stuart, ‘Drones, Lives and Liberties’, The Australian, 1 March 2012, 11.

183 Unarmed UAVs have been trialled by the US State Department to help protect American Embassies and Consulates in Iraq. See ‘Predator Drones and Unmanned Aerial Vehicles (UAVs)’, The New York Times (online), 5 March 2012 <http://topics.nytimes.com/top/reference/timestopics/subjects/u/unmanned_aerial_vehicles/index.html> (accessed 15 March 2012).

184 UK Ministry of Defence, The UK Approach to Unmanned Aircraft Systems, Joint Doctrine Note 2/11 (JDN 2/11), 30 March 2011, Concl-1.


It is impossible to predict completely the true form of these advances, or the impact they will have on society. It is also important not to overestimate their impact or their risks. Modern society has proved remarkably adept at integrating and normalising technological developments, especially once any moral panic relating to their introduction subsides. On the other hand, the negative impacts of some technological advancements have only become clear subsequent to their introduction and integration into society, which makes them much harder to regulate and control. Ensuring that such risks are managed in a balanced manner which permits us to benefit from the advances requires prospective consideration, deliberation and regulation. That can be particularly challenging when such advances are so ‘speed[y]’ and ‘relentless’. However, if we do not at least make the attempt we might find ourselves overrun by the technology before we can translate the discussion into effective action (assuming any action is needed). The remainder of this special edition is therefore dedicated to predicting and evaluating the legal issues arising from this technological revolution whose dawn already appears to have passed.


Lethal Robotic Technologies: The Implications for Human Rights and International Humanitarian Law
COMMENT BY PHILIP ALSTON*

1 Introduction

Remote-controlled aerial weapons systems and their ground-based equivalents are already a commonplace. Those best known to the general public are the unmanned aerial vehicles, or drones, whose use has grown exponentially in recent years.1 The next major development is the advent of armed, robotic weapons systems that can operate more or less autonomously. Over the past decade, the number and type of unmanned vehicle systems (UVS) developed for, and deployed in, armed conflict and law-enforcement contexts has grown at an astonishing pace. The speed, reach, capabilities and automation of robotic systems are all rapidly increasing. Unmanned technologies already in use or currently in advanced stages of development — including unmanned airplanes, helicopters, aquatic and ground vehicles2 — can be controlled remotely to carry out a wide array of tasks: surveillance, reconnaissance, checkpoint security, neutralisation of an improvised explosive device, biological or chemical weapon sensing, removal of debris, search and rescue, street patrols, and more.3 They can also be equipped with weapons to be used against targets or in self-defence. Some of these technologies are semi-automated, and can, for example, land, take off, fly, or patrol without human control. Robotic sentries, including towers equipped with surveillance capacity and machine guns, are in use at the borders of some countries. In the very near future, the technology will exist to create

* John Norton Pomeroy Professor of Law, New York University School of Law. The author was UN Special Rapporteur on extrajudicial, summary or arbitrary executions from 2004 until 2010 and this article draws on work originally done in that context. I am very grateful to Hina Shamsi and Sarah Knuckey for their assistance.

1 See generally, Philip Alston, ‘The CIA and Targeted Killings beyond Borders’ (2011) 2 Harvard National Security Journal 283.

2 The technical literature breaks down the different categories of UVS according to the environment in which they are used. Thus unmanned ground vehicles (UGVs) are used on land, unmanned air systems (UASs) in the air, and unmanned maritime systems (UMS) in the water. For a detailed description of the different types of systems under development by the United States military see Office of the Secretary of Defense, Department of Defense, FY 2009–2034 Unmanned Systems Integrated Roadmap, 6 April 2009, Annex H, 193, <www.acq.osd.mil/psa/docs/UMSIntegratedRoadmap2009.pdf> (‘DOD Roadmap 2009’).

3 For a highly accessible overview see P W Singer, Wired for War: The Robotics Revolution and Conflict in the 21st Century (Penguin, 2009); see also Steve Featherstone, ‘The Coming Robot Army’, Harpers (February 2007).


robots capable of targeting and killing either with minimal human involvement or without the need for any direct human control or authorisation. Some of this technology is either unambiguously beneficial or can be used to clearly positive effect, including, most importantly, saving the lives of civilians by making the targeting of combatants more accurate and reducing collateral damage, and limiting military personnel casualties by reducing their battleground presence and exposure. However, the rapid growth of these technologies, especially those with lethal capacities and those with decreased levels of human control, raises serious concerns that have been the subject of surprisingly little attention on the part of human rights or humanitarian actors, although some military lawyers, philosophers, ethicists and roboticists have begun to address some of the issues.4 The general lack of international attention to this issue is understandable. Other humanitarian issues — human-induced starvation in Somalia, killing and sexual violence in the Democratic Republic of the Congo, drug and gang-related killings in Mexico, or the unfolding revolutions of the Arab Spring — seem far more immediately pressing. Moreover, the resources, time, and staffing capacities in the United Nations, non-governmental organisations and think tanks focusing on this range of issues are always stretched and their capacities to think pro-actively are accordingly limited. In addition, until such weapons are actually deployed, anything that smacks of science fiction seems more at home in the movies than in analyses of existing governmental actions or policies. The extensive attention devoted both by specialists and by the media to the challenges posed by cyberwarfare and the need to elaborate upon the rules that apply in determining the appropriate types of response to attacks

4 See, for example, Summary of Harvard Executive Session of June 2008, Unmanned and Robotic Warfare: Issues, Options and Futures, 14 <www.boozallen.com/media/file/Unmanned_and_Robotic_Warfare.pdf> (‘2008 Harvard Session’); Ronald Arkin, Governing Lethal Behaviour in Autonomous Robots (CRC Press, 2009); Peter Asaro, ‘How Just Could a Robot War Be?’ in Adam Briggle, Katinka Waelbers and Philip Brey (eds), Current Issues in Computing and Philosophy (IOS Publishing, 2009); William H Boothby, Weapons and the Law of Armed Conflict (Oxford University Press, 2009); Jason Borenstein, ‘The Ethics of Autonomous Military Robots’ (2008) 2(1) Studies in Ethics, Law and Technology <http://www.degruyter.com/view/j/selt.2008.2.1/selt.2008.2.1.1036/selt.2008.2.1.1036.xml?format=INT>; Charles J Dunlap, Jr, ‘Technology: Recomplicating Moral Life for the Nation’s Defenders’ (1999) 24 Parameters: US Army War College Quarterly 24; Noel Sharkey, ‘Automated Killers and the Computing Profession’ (2007) 40(11) Computer 124, doi: 10.1109/MC.2007.372; Noel Sharkey, ‘Death Strikes from the Sky: The Calculus of Proportionality’ (2009) 28(1) IEEE Technology and Society 16, doi: 10.1109/MTS.2009.931865; Robert Sparrow, ‘Robotic Weapons and the Future of War’ in Paolo Tripodi and Jessica Wolfendale (eds), New Wars and New Soldiers: Military Ethics in the Contemporary World (Ashgate, 2011); Robert Sparrow, ‘Predators or Plowshares? Arms Control of Robotic Weapons’ (2009) 28(1) IEEE Technology and Society 25, doi: 10.1109/MTS.2009.931862; Patrick Lin, George Bekey and Keith Abney, Autonomous Military Robotics: Risk, Ethics, and Design (report prepared for the United States Department of the Navy, 2008) <http://ethics.calpoly.edu/ONR_report.pdf>.


on the internet and on militarily or strategically important computer hardware systems, stands in marked contrast to the lack of interest in robotic technologies.5

It is striking to recall that the earliest thinking done about these issues put human rights concerns at the forefront. As early as the 1930s, the science fiction writer Isaac Asimov had begun to formulate what came to be called his Three Laws of Robotics. He expressed these in the following terms in a 1942 short story:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.6

In his later writings he added a fourth law, to which he gave primacy over the others by naming it the zeroth law:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.7

Despite this very early flagging of the potential for robotics to wreak havoc in what we now understand to be human rights terms, today’s human rights community has, at least until very recently, tended to view advances in robotics as an exotic topic that does not need to be addressed until the relevant technologies are actually in use. Several factors might help to account for this reticence. First, much of the information about these developments remains confined to military research establishments and specialist scientific literature. Moreover, much of it remains secret, and often the only information available to a researcher is what has been leaked to the media: enough to generate concern but not nearly enough to provide a foundation for further exploration. Second, understanding the technologies requires expertise beyond that of most human rights experts.
At best, these might be seen as issues that fall within the domain of those dealing with the laws of armed conflict, rather than human rights. While the past couple of decades have seen a remarkable degree of integration of human rights and humanitarian law, even this convergence is far from complete and the overall field still tends to rely to a surprising extent upon reconciling

5 This contrast was commented upon in Kenneth Anderson, ‘The Rise of International Criminal Law: Intended and Unintended Consequences’ (2009) 20 European Journal of International Law 331, 345, n 29.

6 Three Laws of Robotics (2012) Wikipedia <http://en.wikipedia.org/wiki/Three_Laws_of_Robotics>.

7 Ibid.


different sources of expertise rather than on individual experts who have thoroughly mastered both areas of law. It is hardly surprising, then, that an understanding of robotics and other new technologies has yet to be combined with the requisite legal expertise. Third, the attractions of greater use of robotic technologies greatly overshadow, in the public mind, the potential disadvantages. The minimisation of casualties on the side of the technology wielders, and the prospect of ever-greater precision and discernment, hold obvious appeal. And finally, there is a North-South dimension, in that the North has the money and the technical know-how to develop the technologies, while many of the negative consequences of their use will fall much more heavily on poorer countries in the South.

But the human rights community is not alone in being slow to focus on the ethical dimensions of robotic technologies. A recent survey asked the top 25 stakeholders in the leading professional trade group dealing with robotics, the Association for Unmanned Vehicle Systems International (AUVSI),8 whether they could foresee ‘any social, ethical, or moral problems’ emanating from the development of unmanned systems. Sixty per cent of the respondents answered ‘No’.9 Additional obstacles to research into the ethical or human rights dimensions of robotics include the extent to which existing funding is dominated by defence-related initiatives by either governments or private companies with a stake in the outcome, and a reluctance on the part of foundations to fund research that still appears to them to be speculative or at least futuristic in nature.10

The analysis that follows is predicated on three principal assumptions.
The first is that the new robotic technologies are developing very rapidly, and that the unmanned, lethal-weapon-carrying vehicles currently in operation will, before very long, be operating on an autonomous basis in relation to many and perhaps most of their key functions, including in particular the decision actually to deploy lethal force in a given situation. The second is that these technologies have very important ramifications for human rights in general, and for the right to life in particular, and that they raise issues that need to be addressed urgently, before it is too late. The third is that, although a large part of the research and technological innovation currently being undertaken is driven by military and related concerns, there is no inherent reason why human rights and humanitarian law considerations cannot be proactively factored into the design and operationalisation of the new technologies. But this will not happen unless and until the human rights

8  The Association identifies itself as ‘the world's largest non-profit organization devoted exclusively to advancing the unmanned systems and robotics community’. See AUVSI, FAQ for Media (2012) <http://www.auvsi.org/news/mediaresources/faqformedia/>.

9  The survey, conducted by Kendall Haven, is reported in Peter Singer, ‘The Ethics of Killer Applications: Why is it So Hard to Talk About Morality When it Comes to New Military Technology?’ (2010) 9(4) Journal of Military Ethics 299, 301, doi: 10.1080/15027570.2010.536400.

10  Ibid 304-306.


Lethal Robotic Technologies: Implications for Human Rights and International Humanitarian Law


community presses the key public and private actors to make sure it does; and because the human rights dimensions cannot be addressed in isolation, the international community urgently needs to address the legal, political, ethical and moral implications of the development of lethal robotic technologies.

2  Trends in the Development of Lethal Robotic Technology

The use of unmanned vehicle systems (UVSs), including for lethal purposes, in the context of war is by no means confined to the twenty-first century.11 As long ago as the Second World War, Germany used bombs attached to tank treads which were detonated by remote control, while the United States used radio-piloted bomber aircraft packed with explosives. The perceived need for intensive aerial reconnaissance, first of the Soviet Union and later of China, also led the US Air Force, in close collaboration with the Central Intelligence Agency, to pour billions of dollars into research on unmanned aerial vehicles starting in the early 1960s.12 While the subsequent history was one of competition among and within agencies, especially over the priority to be given to unmanned vehicles, manned aircraft and satellites, work on UVSs has continued more or less systematically for over half a century.

But despite the extent of these precedents, the use of unmanned systems has dramatically increased since the attacks of 11 September 2001, and particularly since the start of the conflicts in Afghanistan and Iraq. These developments have been both accompanied and driven by an enormous growth in military research and development focused specifically on these systems. Military experts have noted that the two conflicts are serving as real-time laboratories of ‘extraordinary development’ for ‘robotic warfare’.13

There are various ways of classifying and characterising UVSs. In relation to airborne UVSs, the United States Department of Defense tends to speak of Unmanned Aerial Systems (UASs), while the equivalent British Ministry speaks of ‘unmanned aircraft’ but also recommends that, for public or media discourse, the preferred terms are Remotely Piloted Aircraft or Remotely Piloted Aircraft System, with the latter describing the overall delivery system.14

11  See generally Thomas P Ehrhard, Air Force UAVs: The Secret History (Mitchell Institute for Airpower Studies, 2010).

12  Ehrhard notes that the CIA, along with its sister agencies, such as the top-secret National Reconnaissance Office, funded more than 40 per cent of the total investment in research on unmanned aerial vehicles from 1960 through 2000: ibid 5.

13  2008 Harvard Session, above n 4, 2.

14  UK Ministry of Defence, Joint Doctrine Note 3/10 Unmanned Aircraft Systems: Terminology, Definitions and Classification (May 2010) <http://www.mod.uk/NR/rdonlyres/FBC33DD1-C111-4ABD-9518A255FE8FCC5B/0/JDN310Amendedweb28May10.pdf> (‘UK Joint Doctrine Note’).


Within this overall category, the most common classifications are according to function, size, or level of automation. In terms of function, a distinction is drawn between those UASs designed primarily for reconnaissance, force application or protection activities, although such divisions are very flexible given the multi-purpose nature of many of the vehicles.

In terms of size, a distinction is sometimes drawn between three different classes, primarily according to weight. Class I systems, weighing less than 150 kg, are generally ‘employed at the small unit level or for force protection/base security’.15 Class II systems, weighing between 150 kg and 600 kg, are ‘often catapult-launched, mobile systems that usually support brigade-level, and below, Intelligence, Surveillance, Target Acquisition and Reconnaissance requirements’, and generally operate at altitudes below 10,000 feet.16 Class III systems, weighing more than 600 kg, operate at high altitude and have the greatest range, endurance and transit speeds. They are used for ‘broad area surveillance and penetrating attacks.’17

In terms of automation levels, experts generally distinguish three separate types of systems depending on the degree of human involvement in their operation and functioning. The most common are remotely controlled vehicles that, although unmanned, are nevertheless fully controlled by human operators. These so-called ‘man-in-the-loop’ systems can be operated from nearby or at a great distance. Thus some unmanned aerial vehicles are controlled by operators based thousands of miles from the scene of the activity, while others are operated at much closer range. The second category, ‘man-on-the-loop’ vehicles, are automated and carry out programmed activities without needing any further human involvement once they have been launched.
The third category consists of autonomous vehicles (‘man-out-of-the-loop’) which operate without human input in carrying out the range of functions for which they have been pre-programmed. But it is this third category that gives rise to the most significant definitional disagreements. Noel Sharkey’s contribution in this volume describes the different classifications used by the United States Navy and the United States Air Force, but both have in common a definition that requires ‘autonomous’ systems to have some attributes of human intelligence.18 In contrast, the British Ministry of Defence offers what it describes as a simple definition:

An autonomous system is capable of understanding higher level intent and direction. From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control, although these may still be present. Although the overall activity

15  Ibid 2-5.

16  Ibid.

17  Ibid 2-6.

18  Noel Sharkey, ‘Automating Warfare: Lessons Learned from the Drones’ (2011) 21(2) Journal of Law, Information and Science, doi: 10.5778/JLIS.2011.21.Sharkey.1.


of an autonomous unmanned aircraft will be predictable, individual actions may not be.19

The same report adds that an autonomous system ‘must be capable of achieving the same level of situational understanding as a human.’20 While the British definition will be difficult to satisfy, the United States approaches are eminently achievable. Thus the 2009 US Air Force review of the future reflects with confidence upon the prospect of ‘a fully autonomous capability, [involving] swarming, and Hypersonic technology to put the enemy off balance by being able to almost instantaneously create effects throughout the battlespace.’ Linked to this will be ‘[t]echnologies to perform auto air refuelling, automated maintenance, automatic target engagement, hypersonic flight, and swarming’. The report comments that the ‘end result would be a revolution in the roles of humans in air warfare’.21

While various countries are using and developing robotic technologies, the clear leader in the field continues to be the United States. In 2001 the US Congress set two ambitious goals for the military in relation to the use of unmanned, remotely controlled technology. It required the military to seek to ensure that, by 2010, one-third of the ‘operational deep strike force aircraft fleet’ would be unmanned and that, by 2015, one-third of the operational ground combat vehicles would also be unmanned.22 Between 2000 and 2008, the number of United States unmanned aircraft systems increased from less than 50 to over 6,000.23 Similarly, the number of unmanned ground vehicles deployed by the United States Department of Defense increased from less than 100 in 2001 to nearly 4,400 by 2007,24 and 8,000 by 2011.25 The military

19  UK Joint Doctrine Note, above n 14, 2-3.

20  Ibid.

21  United States Air Force, Unmanned Aircraft Systems Flight Plan 2009-2047 (2009) 50 <http://www.fas.org/irp/program/collect/uas_2009.pdf> (‘Unmanned Aircraft Systems Flight Plan 2009-2047’).

22  National Defense Authorization Act for Fiscal Year 2001, Pub L No 106–398, § 220(a), 114 Stat 1654, 1654A-38 (2000).

23  See United States Government Accountability Office, Unmanned Aircraft Systems: Additional Actions Needed to Improve Management and Integration of DOD Efforts to Support Warfighter Needs (November 2008) Report to the Subcommittee on Air and Land Forces, Committee on Armed Services, House of Representatives <http://www.gao.gov/new.items/d09175.pdf>.

24  Department of Defense, Report to Congress: Development and Utilization of Robotics and Unmanned Ground Vehicles (October 2006) 11 <http://www.techcollaborative.org/files/JGRE_UGV_FY06_Congressional_Report.pdf> (‘Development and Utilization of Robotics and Unmanned Ground Vehicles’); US law requires that, by 2015, one third of US operational ground combat vehicles be unmanned. See Office of the Secretary of Defense, Unmanned Systems Roadmap 2007-2032 (2007) 6 <http://www.fas.org/irp/program/collect/usroadmap2007.pdf>; For fiscal year 2010, the US Department of Defense sought a budget of $5.4 billion for unmanned systems (including systems for use on land, in the air, and at sea), an increase of 37.5 per cent over the past two years. See ‘Pentagon’s Unmanned Systems


budget estimates for the period 2009-2013 foresee expenditures of $18.9 billion on UVS activities.26 Private industry is also a key driver in the field, as illustrated by the presence of around 100 companies working on unmanned systems technologies at one of the major defence industry conferences in September 2011.27

At present, the robotic weapons technologies most in use are systems that are remotely, but directly, operated by a human being. A well-known example is the ‘BomBot’, a vehicle which can be driven by remote control to an improvised explosive device, drop an explosive charge on the device, and then be driven away before the charge is detonated.28 Another example is the Special Weapons Observation Reconnaissance Detection System (SWORDS) and its successor, the Modular Advanced Armed Robotic System (MAARS). SWORDS is a small robot that can be mounted with almost any weapon that weighs less than 300 pounds, including machine guns, rifles, grenade launchers and rocket launchers, and can travel in a variety of terrains.29 It can be operated by remote control and video cameras from up to two miles away, and be used for street patrols and checkpoint security as well as to guard posts. MAARS is similar, but can carry more powerful weapons and can also be mounted with less-than-lethal weapons, such as tear gas.30

Sentry systems also exist which can patrol automatically around a sensitive storage facility or a base. The Mobile Detection Assessment and Response System (MDARS), for example, is a small robotic patrol force on wheels designed to relieve personnel of the repetitive and sometimes dangerous task of patrolling exterior areas, and which can autonomously perform random patrols.31

Spending Tops $5.4 billion in FY2010’, Defense Update (online), 14 June 2009 <http://defense-update.com/newscast/0609/news/pentagon_uas_140609.html>.

25  Joan Michel, ‘Unmanned Ground Vehicles in ISR Missions’ (2011) 1(3) Tactical Intelligence, Surveillance and Reconnaissance 5 <http://www.kmimediagroup.com/files/TISR_1-3_final.pdf>.

26  DOD Roadmap 2009, above n 2, 4.

27  Danielle Lucey, Unmanned Systems Make a Bang at DSEi [Defence and Security Equipment International] 2011 (13 September 2011) Association for Unmanned Vehicle Systems International <http://www.auvsi.org/news/>.

28  Development and Utilization of Robotics and Unmanned Ground Vehicles, above n 24, 12.

29  Singer, above n 3, 29-32.

30  Ibid; see also Seth Porges, ‘Real Life Transformer Could Be First Robot to Fire in Combat’, Popular Mechanics (1 October 2009) <http://www.popularmechanics.com/technology/military/4230309>.

31  MDARS — 21st Century Robotic Sentry System, General Dynamics Robotics Systems <http://www.gdrs.com/about/profile/pdfs/0206MDARSBrochure.pdf>.


In terms of aerial weaponry, the level of automation that generally exists in currently deployed systems is limited to the ability of, for example, an unmanned combat aerial vehicle or a laser-guided bomb to be programmed to take off, navigate or de-ice by itself, or with only human monitoring (as opposed to control). In June 2010, trials were held in which helicopters carried out fully autonomous flights,32 and later the same year test aircraft at Fort Benning, Georgia, autonomously identified targets and took the programmed action.33

For currently existing systems that have lethal capability, the choice of target and the decision to fire the weapon are made by human beings, and it is a human being who actually fires the weapon, albeit by remote control. With such weapons systems there is, in military terminology, a ‘man in the loop’, so that the determination to use lethal force, as with any other kind of weapon, lies with the operator and the chain of command. Examples of such semi-automated weapons systems currently in use include the Predator and Reaper drones34 deployed in the conflicts in Iraq and Afghanistan by the United States and the United Kingdom, and Israeli Harpy drones. Systems that would replace this generation of technology include the Sky Warrior, an unmanned aircraft system capable of taking off and landing automatically, with the capacity to carry and fire four Hellfire missiles.35

‘Swarm’ technologies are also being developed to enable a small number of military personnel to control a large number of machines remotely. One system under development envisions that a single operator will monitor a group of semi-autonomous aerial robotic weapons systems through a wireless network that connects each robot to the others and to the operator. Each robot within a ‘swarm’ would fly autonomously to a designated area, and ‘detect’ threats and targets through the use of artificial intelligence, sensory information and image processing.36

32  Olivia Koski, ‘In a First, Full-Sized Robo-Copter Flies With No Human Help’, Wired (online), 14 July 2010 <http://www.wired.com/dangerroom/2010/07/in-afirst-full-sized-robo-copter-flies-with-no-human-help/>.

33  Peter Finn, ‘A Future for Drones: Automated Killing’, Washington Post (online), 19 September 2011 <http://www.washingtonpost.com/national/national-security/a-future-fordrones-automated-killing/2011/09/15/gIQAVy9mgK_story.html>.

34  Unmanned Aircraft Systems Flight Plan 2009-2047, above n 21, 26.

35  See descriptions at General Atomics Aeronautical, Sky Warrior (2012) <http://www.ga-asi.com/products/aircraft/er-mp-uas.php>; Defense Update, Sky Warrior Goes into Production to Equip U.S. Army ER/MP Program (9 July 2010) <http://www.defence-update.net/wordpress/20100709_sky_warrior_lrip.html>.

36  Unmanned Aircraft Systems Flight Plan 2009-2047, above n 21, 33; a group of European firms, led by Dassault, is developing similar technology for the European market: Erik Sofge, ‘Top 5 Bomb-Packing, Gun-Toting War Bots the U.S. Doesn’t Have’, Popular Mechanics (online), 1 October 2009 <http://www.popularmechanics.com/technology/military/4249209>.


Robotic technology is also becoming faster and more capable of increasingly rapid response. Military strategic documents predict the development of technology that reduces the time needed for machines to respond to a perceived threat with lethal force to micro- or nanoseconds. Increasingly, humans will no longer be ‘in the loop’ but rather ‘on the loop’ — monitoring the execution of certain decisions.37 The speed of the envisioned technology would be enhanced by networking among unmanned machines, which would be able to ‘perceive and act’ faster than humans can.

From a military perspective, autonomous robots have important attractions. Building on Sharkey’s analysis,38 there are various reasons why they may be preferred to the current generation of weapons. First, autonomous systems may be less expensive to manufacture and require fewer personnel to run them. Second, reliance upon satellite or radio links to pilot remote-controlled vehicles makes them vulnerable to interference with the relevant frequencies. Third, there are economies of scale in operating multiple autonomous vehicles at the same time through the same system. Fourth, the 1.5 second delay involved in communicating directly through remote-control technology is problematic in the context of air-to-air combat. An autonomous machine, on the other hand, can make rapid decisions and implement them instantly.

To date, armed robotic systems operating on any more than a semi-automated basis have not been used against targets. Senior military personnel in key user states such as the United Kingdom have suggested that humans will, for the foreseeable future, remain in the loop on any decisions to use lethal force.39 This is also the line generally taken by the United States Department of Defense.
A major recent review concluded that, for a significant period into the future, the decision to pull the trigger or launch a missile from an unmanned system will not be fully automated, although many aspects of the firing sequence will be; the final decision to fire is not likely to be fully automated ‘until legal, rules of engagement, and safety concerns have all been thoroughly examined and resolved’.40 But these official policy statements appear in a different light when one listens to the views of the personnel more closely involved in these programs, such as when they are addressing enthusiastic audiences at robotics industry gatherings. For example, Lt Gen Rick Lynch, commander of the US Army Installation Management Command, stated in August 2011 that he was ‘an advocate of autonomous vehicle technology. … There’s a place on the battlefield for tele-operated systems, [but] we have to continue to advocate for pursuit of autonomous vehicle technology.’41 And most of those working in the industry seem to be firmly convinced that the advent of autonomous lethal robotic systems is well under way and that it is really only a matter of time before autonomous engagements of targets take place on the battlefield.42

A number of countries are already reportedly deploying or developing systems with the capacity to take humans out of the lethal decision-making loop. For example:

• Since approximately 2007, Israel has deployed remote-controlled 7.62 mm machine-guns mounted on watchtowers every few hundred yards along its border with Gaza as part of its ‘Sentry Tech’ weapons system, also known as ‘Spot and Shoot’ or, in Hebrew, ‘Roeh-Yoreh’ (Sees-Fires).43 This ‘robotic sniper’ system locates potential targets through sensors and transmits information to an operations command centre, where a soldier can locate and track the target and shoot to kill.44 Dozens of alleged ‘terrorists’ have been shot with the Sentry Tech system.45 The first reported killing of an individual with Sentry Tech appears to have taken place during Operation Cast Lead in December 2008.46 Two alleged ‘terrorists’ were killed by the system in December 2009,47 and another person was killed and four injured by Sentry Tech in March 2010; according to media accounts it is unclear whether the dead and injured were farmers or gunmen.48 Future plans envision a ‘closed loop’

37  Unmanned Aircraft Systems Flight Plan 2009-2047, above n 21, 41.

38  Noel Sharkey, ‘Saying “No!” to Lethal Autonomous Targeting’ (2010) 9 Journal of Military Ethics 369, 377-378.

39  British Air Marshal Steve Hillier sees ‘an enduring requirement for a human in the loop for decision-making. When you get to attack, you need someone to exercise judgement’: quoted in Craig Hoyle, Farnborough: UK Unmanned Air Vehicles (2010) <http://www.flightglobal.com/articles/2010/07/13/344077/farnborough-ukunmanned-airvehicles.html>.

40  DOD Roadmap 2009, above n 2.

41  Cheryl Pellerin, ‘Robots Could Save Soldiers’ Lives, Army General Says’, American Forces Press Service (online), 17 August 2011 <http://www.defense.gov/news/newsarticle.aspx?id=65064>.

42  See Singer, above n 9; and Ronald C Arkin, Alan R Wager and Brittany Duncan, ‘Responsibility and Lethality for Unmanned Systems: Ethical Pre-mission Responsibility Advisement’ (GVU Technical Report GIT-GVU-09-01, GVU Center, Georgia Institute of Technology, 2009).

43  Robin Hughes and Alon Ben-David, ‘IDF Deploys Sentry Tech on Gaza Border’, Jane’s Defence Weekly (6 June 2007).

44  Noah Schachtman, ‘Robo-Snipers, “Auto Kill Zones” to Protect Israeli Borders’, Wired, 4 June 2007 <http://www.wired.com/dangerroom/2007/06/for_years_and_y/>.

45  Anshell Pfeffer, ‘IDF’s Newest Heroes: Women Spotters on Gaza’s Borders’, Haaretz (online), 3 March 2010 <http://www.haaretz.com/print-edition/news/idf-s-newest-heroeswomenspotters-on-gaza-border-1.264024>.

46  ‘Israeli War-Room “Look-Out” Girls Use New “See-Shoot” Remote Control’, BBC Monitoring Middle East, 9 January 2009.

47  Yaakov Katz, ‘IDF Unveils Upgrades to Gaza Fence’, Jerusalem Post (online), 3 March 2010 <http://www.jpost.com/Israel/Article.aspx?id=170041>.

48  Ali Waked, ‘Palestinians: 1 Dead, 4 Injured From IDF Fire in Gaza’, Ynet News (online), 1 March 2010 <http://www.ynetnews.com/articles/0,7340,L-3856218,00.html>.


system, in which no human intervention would be required in the identification, targeting and kill process.49

• The Republic of Korea has developed the SGR-1, an unmanned gun tower that, beginning in July 2010, has been performing sentry duty on an experimental basis in the demilitarised zone between the Democratic People’s Republic of Korea and the Republic of Korea.50 The SGR-1 uses heat and motion detectors and pattern recognition algorithms to sense possible intruders; it can alert remotely located command centre operators, who can use the SGR-1’s audio and video communications system to assess the threat and make the decision to fire the robot’s 5.5 millimetre machine gun.51 Media accounts indicate that, although the decision to use lethal force is currently made by human commanders, the robot has been equipped with the capacity to fire on its own.52

Such automated technologies are becoming increasingly sophisticated, and artificial intelligence reasoning and decision-making abilities are actively being researched and receive significant funding. States’ militaries and defence industry developers are working towards ‘fully autonomous capability’, such that technological advances in artificial intelligence will enable unmanned aerial vehicles to make and execute complex decisions, including the identification of human targets and the ability to kill them.53 A 2003 study commissioned by the United States Joint Forces Command reportedly predicted the development of artificial intelligence and automatic target recognition that will give robots the ability to hunt down and kill the enemy with limited human supervision by 2015.54 Among the envisioned uses for fully automated weapons systems are: crowd control systems that range from non-lethal through to lethal; dismounted offensive operations; and armed reconnaissance and assault operations.55 One already developed ground robot, the Guardium UGV, is a high-speed vehicle that can be

49  ‘Remotely Controlled Mechanical Watchtowers Guard Hostile Borders’, Homeland Security Newswire (online), 19 July 2010 <http://homelandsecuritynewswire.com/remotely-controlled-mechanical-watchtowers-guard-hostile-borders>; Schachtman, above n 44; Jonathan Cook, ‘Israel Paves the Way for Killing by Remote Control’, The National (Abu Dhabi), 13 July 2010.

50  Kim Deok-hyun, ‘Army Tests Machine-gun Sentry Robots in DMZ’, Yonhap News Agency (online), 13 July 2010 <http://english.yonhapnews.co.kr/national/2010/07/13/14/0301000000AEN20100713007800315F.HTML>.

51  Ibid; Jon Rabiroff, ‘Machine gun-toting robots deployed on DMZ’, Stars and Stripes (online), 12 July 2010 <http://www.stripes.com/news/pacific/korea/machinegun-toting-robots-deployed-on-dmz-1.110809>.

52  Sofge, above n 36.

53  Unmanned Aircraft Systems Flight Plan 2009-2047, above n 21, 50.

54  Featherstone, above n 3.

55  DOD Roadmap 2009, above n 2, 10.


weaponised and used for combat support as well as border patrols and other security missions, such as perimeter security at airports and power plants.56

The United States has also recently stepped up its commitment to robotics through several initiatives that are likely to expand and expedite existing trends in this area. In June 2011 President Obama launched the National Robotics Initiative in an effort to ‘accelerate the development and use of robots in the United States’ on the basis of a partnership among various federal government agencies, including the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), the National Institutes of Health (NIH), and the US Department of Agriculture (USDA). The objective is ‘to advance the capability and usability’ of next generation robotics systems and ‘to encourage existing and new communities to focus on innovative application areas’ by, inter alia, addressing ‘the entire life cycle from fundamental research and development to industry manufacturing and deployment.’57 At the same time, Obama launched the Advanced Manufacturing Partnership, which will invest $500 million in manufacturing and emerging technologies and commits at least $70 million of that amount to research in the robotics and unmanned systems industries. While much of the official rhetoric focuses on the peaceful uses of such technologies, the President effectively acknowledged the double-edged sword involved when he wryly observed at the time that ‘[t]he robots you make here seem peaceful enough for now.’58

While the United States is the central player in both the research and deployment of robotic military technologies, the survey above illustrates that it is by no means alone.
Various other member States of the Organisation for Economic Co-operation and Development (OECD) have also developed or are developing unmanned systems, including Australia, Canada, France, Germany, Israel, the Republic of Korea and the United Kingdom.59 In addition, over fifty countries have reportedly either purchased surveillance drones or begun their own programs. Given the flexibility and adaptability of drones, those designed for surveillance can readily be used for lethal activities as well. In terms of independent initiatives, Iran has already showcased an unmanned aerial vehicle with a range of 1,000 km (620 miles),

56  GNIUS Unmanned Ground Systems, Guardium UGV, described at <http://www.gnius.co.il/unmanned-ground-systems/guardium-ugv.html> and <http://www.defenseupdate.com/products/g/guardium.htm>.

57  National Science Foundation, National Robotics Initiative <http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=503641&org=CISE>.

58  Stephanie Levy, Obama Announces Manufacturing Plan that Includes Unmanned Systems, Association for Unmanned Vehicle Systems International (24 June 2011) <http://www.auvsi.org/auvsi/news/#AMP>.

59  See Development and Utilization of Robotics and Unmanned Ground Vehicles, above n 24, 47 (describing research and development activities directed towards developing military capabilities for robotics and unmanned ground vehicles of United States’ allies).


capable of carrying four cruise missiles.60 China has begun a major program which yielded 25 different models of unmanned aerial vehicles on display in November 2010. As the Wall Street Journal reported at the time, ‘the large number of UAVs on display illustrates clearly that China is investing considerable time and money to develop drone technology, and is actively promoting its products on the international market.’61 These include ‘attack drones’ being sold to countries such as Pakistan.62 And more recently, non-state actors have begun to assert their capacity to make use of robotic technologies. The Ansar al-Islam militant group in Iraq has released an online video demonstrating its ability to produce and deploy crude robotic technology.63

3  Concerns

Despite the astonishing speed at which robotic technologies have developed, whether for lethal purposes or with clear lethal capacity, the public debate over the legal, ethical and moral issues arising from their use is at a very early stage, and little consideration has been given to the international legal framework necessary for dealing with the resulting issues.

There are many possible advantages flowing from the use of existing and developing technologies.64 They may be able to act as ‘force multipliers’, greatly expanding the capacity or reach of a military, and robots may be sacrificed or sent into hazardous situations that are too risky for human soldiers. They may be less economically costly than deploying humans, and, indeed, their destruction does not result in the ending of irreplaceable human life. As noted in a United States Government report, more and more robots

60 William Yong and Robert F Worth, ‘Iran’s President Unveils New Long-range Drone Aircraft’, New York Times (online), 22 August 2010 <http://www.nytimes.com/2010/08/23/world/middleeast/23iran.html>.

61 Jeremy Page, ‘China’s New Drones Raise Eyebrows’, Wall Street Journal (18 November 2010). See also ‘China building an army of unmanned military drones “to rival the US”’, MailOnline (online), 5 July 2011 <http://www.dailymail.co.uk/news/article-2011533/China-building-army-unmanned-military-drones-rival-U-S.html>.

62 See ‘China develops military drones for Pakistan’, 7 July 2011, <http://www.china-defense-mashup.com/china-develops-military-drones-for-pakistan.html> (‘”The United States doesn’t export many attack drones, so we’re taking advantage of that hole in the market,” said Zhang Qiaoliang, a representative of the Chengdu Aircraft Design and Research Institute, which manufactures many of the most advanced military aircraft for the People’s Liberation Army.’).

63 Noah Shachtman, ‘Iraq Militants Brag: We’ve Got Robotic Weapons, Too’, Wired, 4 October 2011, <http://www.wired.com/dangerroom/2011/10/militants-got-robots/>.

64 For more discussion of these arguments, see Ronald Arkin, Governing Lethal Behaviour in Autonomous Robots (CRC Press, 2009); Lin, Bekey and Abney, above n 4.


Lethal Robotic Technologies: Implications for Human Rights and International Humanitarian Law


are being destroyed or damaged in combat instead of servicemen and women being killed or wounded, and this is the preferred outcome.65 Robots may be able to use lethal force more conservatively than humans (because they do not need to have self-preservation as a foremost drive),66 and their actions and responses may be faster, based on information processed from more sources, and more accurate, enabling them to reduce collateral damage and other mistakes made by humans. They may also be able to avoid mistakes or harm resulting from human emotions or states, such as fear, tiredness, and the desire for revenge, and, to the extent that machines are equipped with the ability to record operations and monitor compliance with legal requirements, they may increase military transparency and accountability. But these hypothetical advantages may not necessarily be reflected in the design or programming of actual technologies. And the reality, to date, is that technological developments have far outpaced even discussions of the humanitarian and human rights implications of the deployment of lethal robotic technologies. Kenneth Anderson has characterised the turn to robotic technologies by the United States as one of the unintended consequences of the increasing reach and effectiveness of the evolving regime of international criminal law. In his view, the emphasis on robotics has been driven, in significant part, by the need to respond to what he sees as the loss of reciprocity in international humanitarian law, especially in the context of asymmetric warfare. He gives the two standard examples to illustrate these trends: the use of human shields, and the tendency of some belligerent forces to conceal themselves in the midst of civilian populations. 
In terms of the latter problem, he seems not to be referring to the use of CIA or other Special Operations forces operating outside of uniform in civilian dominated areas, but rather to the practices of groups such as the Taliban and al-Qaeda with whom the United States is engaged in an armed conflict. Although he does not very clearly spell out the causal chain that explains how it is that international criminal law has generated ‘pressures to create whole new battlefield technologies’, there seem to be two elements.67 The first is that the new technologies facilitate more precise and accurate targeting and thus diminish the likelihood of allegations that the principles of distinction or proportionality have been violated. The second element consists of ‘emerging interpretations of law governing detention, interrogation, and rendition’ which create strong incentives to kill rather than capture individuals who are suspected of participating in hostilities. He explains this link by observing that ‘targeted killing via a standoff robotic platform [is] legally less messy than the problems of detention.’68

65 Development and Utilization of Robotics and Unmanned Ground Vehicles, above n 24, 9; see also DOD Roadmap 2009, above n 2, 10.

66 Ronald C Arkin, ‘Ethical Robots in Warfare’ (2009) 28(1) Technology and Society Magazine 30, 32, doi: 10.1109/MTS.2009.931858.

67 Anderson, above n 5, 345.

68 Ibid 346.


To the extent that there is an implication that the applicable law and the principles governing both the lethal use of force and targeting are more easily circumvented by using remote technology, this justification for the move to such approaches should raise alarm bells within the humanitarian community. Many concerns arise out of the move to robotic technologies for lethal purposes, and the following list is no more than an initial survey of the issues that require far more in-depth examination in the years ahead.69

3.1 The Problem of Definitions
As noted above,70 an initial hurdle in addressing the legal and ethical ramifications of these technologies is the lack of a uniform set of definitions of key terms such as ‘autonomous’, ‘autonomy’ or ‘robots’. Uses of these terms vary significantly among the militaries of different States, as well as among defence industry personnel, academics and civilians.71 Confusion can result, for example, from differences over whether ‘autonomous’ describes the ability of a machine to act in accordance with moral and ethical reasoning, or whether it simply refers to the ability to take action independently of human control (eg a programmed drone that can take off and land without human direction; a thermometer that registers temperatures).72 As the international community begins to debate robotic technologies, it will need at least to seek a shared understanding of the systems and their characteristics.

69 For more discussion of these arguments, see, eg, Asaro, above n 4; Borenstein, above n 4; Sharkey, ‘Automated Killers and the Computing Profession’, above n 4; Sharkey, ‘Death Strikes from the Sky: The Calculus of Proportionality’, above n 4; Sparrow, ‘Robotic Weapons and the Future of War’, above n 4; Sparrow, ‘Predators or Plowshares?’, above n 4.

70 See text accompanying notes 18-21 above.

71 As pointed out by the UK Ministry of Defence, ‘The rapid, at times almost chaotic, development of UAS [unmanned aircraft systems] over the last 10 years has led to a range of terminology appearing in both military and civilian environments. As a result, some legacy terminology has become obsolete, while differing national viewpoints have made it difficult to achieve standardisation on new terms. ... Similarly, unmanned aircraft (UA)-related concepts such as autonomous and automated suffer from widely differing definitions, even within the United Kingdom. ... All of these areas have the potential to cause confusion or misunderstanding when unmanned aircraft issues are discussed between military, industrial and academic audiences.’ See UK Joint Doctrine Note, above n 14, v; see also Office of the Under Secretary of Defense, Joint Robotics Program Master Plan FY2005: Out Front in Harm’s Way (Office of the Under Secretary of Defense, 2005) (‘Joint Robotics Program Master Plan FY2005’); Lin, Bekey and Abney, above n 4, 4-5; Singer, above n 3, 67 (defining ‘robot’).

72 Compare, for example, definitions of ‘autonomous’, ‘semi-autonomous’ and ‘automation’ in Joint Robotics Program Master Plan FY2005, above n 71, and UK Joint Doctrine Note, above n 14.


3.2 International and criminal responsibility
One of the most important issues flowing from increased automation is the question of responsibility for civilian casualties or other harm or violations of the laws of war. As analysed at length in various prior reports by the Special Rapporteur on extrajudicial, summary or arbitrary executions,73 international human rights and humanitarian law, as applied in the context of armed conflict or law enforcement, set standards that are designed to protect or minimise harm to civilians, and set limits on the use of force by States’ militaries, police or other armed forces. When these limits are violated, States may be internationally responsible for the wrongs committed, and officials or others may bear individual criminal responsibility. Both the international human rights and humanitarian law frameworks are predicated on the fundamental premise that they bind States and individuals, and seek to hold them to account. Where robots are operated by remote control and the ultimate decision to use lethal force is made by humans, individual and command responsibility for any resulting harm is generally readily determinable. However, as automation increases, the frameworks of State and individual responsibility become increasingly difficult to apply. Who is responsible if a robot kills civilians in violation of applicable international law? The programmer who designed the program governing the robot’s actions, any military officials who may have approved the programming, a human commander assigned responsibility for that robot, a soldier who might have exercised oversight but opted not to do so? What if the killing is attributed to a malfunction of some sort? Is the government which deployed the robot responsible, or the principal engineer or manufacturer, or the individual who bore ultimate responsibility for programming, or someone else? What level of supervision does a human need to exercise over a robot in order to be responsible for its actions? 
Are circumstances conceivable in which robots could legitimately be programmed to act in violation of the relevant international law, or conversely, could they be programmed to automatically override instructions that they consider, under the circumstances, to be a violation of that law? Are there situations in which it would be appropriate to conclude that no individual should be held accountable, despite the clear fact that unlawful actions have led to civilian or other deaths?

3.3 Ethical dimensions
Some argue that robots should never be fully autonomous — that it would be unethical to permit robots to autonomously kill, because no human would clearly be responsible, and the entire framework of accountability would

73 See, for example, Philip Alston, Civil and political rights, including the questions of disappearances and summary executions: Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, UN Doc E/CN.4/2005/7 (22 December 2004); Philip Alston, Interim report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, UN Doc A/61/311 (5 September 2006); and Philip Alston, Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, UN Doc A/HRC/14/24/Add.6 (28 May 2010).


break down. Others, such as Ronald Arkin, argue that it will be possible to design ethical systems of responsibility.74 In his view, robots could be better ethical decision-makers than humans because they lack emotion and fear, and could be programmed to ensure compliance with humanitarian law standards and applicable rules of engagement. Still others respond that such thinking is predicated on unproven assumptions about the nature of rules and how robots may be programmed to understand them, and that it underestimates the extent to which value systems and ethics inform the application of the rules in ways that robots cannot replicate.75 In order to understand how to apportion responsibility for violations of the law, say some ethicists, more research needs to be done both to understand how and why humans themselves decide to follow the law and ethical rules, and to establish the extent to which robotic programming mimics or differs from human decision-making. Perhaps most troublingly from an international law perspective, some have indicated that unmanned systems are not being designed to support investigation, raising additional transparency and accountability concerns. They do not archive information, and they leave open the possibility of soldiers pointing to the machine and declaring, ‘I’m not responsible — the machine is’.76 In order to comport with States’ international law obligation to provide accountability for the use of lethal force, any unmanned weapons system, regardless of the degree of automation, must not hinder — and indeed should facilitate — States’ ability to investigate wrongful conduct.

3.4 Safeguards and standards for deployment
Another significant problem concerns the ability of robots to comply with human rights and humanitarian law, and the standards relevant to programming and the development of technology for deployment. What standards or testing must be conducted before armed machines are able to

74 Arkin, Wagner and Duncan, above n 42; Ronald C Arkin, Patrick Ulam and Brittany Duncan, ‘An Ethical Governor for Constraining Lethal Action in an Autonomous System’ (GVU Technical Report GIT-GVU-09-02, GVU Center, Georgia Institute of Technology, 2009); see also R C Arkin, The Case for Ethical Autonomy in Unmanned Systems (2010) Georgia Institute of Technology <http://www.cc.gatech.edu/ai/robot-lab/online-publications/Arkin_ethical_autonomous_systems_final.pdf>; R C Arkin, Moral Emotions for Robots (2011) Georgia Institute of Technology <http://www.cc.gatech.edu/ai/robot-lab/online-publications/moral-final2.pdf>; and R C Arkin, P Ulam and A R Wagner, ‘Moral Decision-making in Autonomous Systems: Enforcement, Moral Emotions, Dignity, Trust, and Deception’ (2012) 100(3) Proceedings of the IEEE 571, doi: 10.1109/JPROC.2011.2173265, also available at <http://www.cc.gatech.edu/ai/robot-lab/online-publications/IEEEethicsv17.pdf>.

75 For example, Peter Asaro, ‘Modeling the Moral User’ (2009) 28 IEEE Technology and Society 20, doi: 10.1109/MTS.2009.931863; Sharkey, ‘Death Strikes from the Sky: The Calculus of Proportionality’, above n 4; Sparrow, ‘Robotic Weapons and the Future of War’, above n 4.

76 2008 Harvard Session, above n 4, 8.


conduct crowd control, patrol in civilian populated areas, or be enabled to decide to target an alleged combatant? While any kind of technology has the potential to malfunction and result in lethal error, the particular concern with the rapid development of robotic weapons is whether — and the extent to which — technical safeguards are built into the systems to prevent the inadvertent or otherwise wrongful or mistaken use of lethal force. What programming or other technical safeguards have been and should be put in place to ensure that the precautions required by international humanitarian law are taken? What programming safeguards would international humanitarian law require? In part this debate will revolve around the interpretation given to the requirements specified in Article 36 of Additional Protocol I to the Geneva Conventions that provides: In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.77 Since the United States is not a party to this treaty, the issue thus becomes whether its requirements have become part of customary law. Troublingly, military and civilian experts acknowledge that robotic development in general is being driven by the defence industry, and that few systems in the field have been subjected to rigorous or standardised testing or experimentation.78 The United States military, for example, admits that in the interests of saving military lives in the conflicts in Iraq and Afghanistan, robotic systems may be deployed without the requisite testing for whether those systems are, in fact, reliable.79

77 Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts, opened for signature 12 December 1977, 1125 UNTS 3 (entered into force 7 December 1978) (‘Additional Protocol I’). For analyses of the requirements of Article 36 see Justin McClelland, ‘The Review of Weapons in accordance with Article 36 of Additional Protocol I’ (2003) 850 International Review of the Red Cross 397; and Isabelle Daoust, Robin Coupland and Rikke Ishoey, ‘New Wars, New Weapons? The Obligation of States to Assess the Legality of Means and Methods of Warfare’ (2002) 846 International Review of the Red Cross 345.

78 2008 Harvard Session, above n 4, 2.

79 DOD Roadmap 2009, above n 2, 39-40 (‘The current commitment of combat forces has seen a number of unmanned systems fielded quickly without the establishment of the required reliability and maintainability infrastructure that normally would be established prior to and during the fielding of a system. This was justifiably done as a conscious decision to save Warfighter’s lives at the risk of reliability and maintainability issues with the equipment fielded.’).


3.5 The principle of distinction
In the context of armed conflict generally, and especially in urban areas, military personnel often have difficulty discriminating between those who may be lawfully targeted — combatants or those directly participating in hostilities — and civilians, who may not. Such decision-making requires the exercise of judgement, sometimes in rapidly changing circumstances and in a context which is not readily susceptible of categorisation, as to whether the applicable legal requirements of necessity and proportionality are met, and whether all appropriate precautions have been taken. It is not clear what criteria would be used to determine whether a robot is ever capable of making such decisions in the manner required, or how to evaluate the programs that might purport to have integrated all such considerations into a given set of instructions to guide a robotic technology. In addition, there is the concern that the development of lethal capacity has outpaced the development of safeguards against technical or communications error. For example, military strategic planning documents caution that it ‘may be technically feasible’ for unmanned aerial systems to have nuclear strike capability before safeguards are developed for the systems, and that ethical discussions and policy decisions must take place in the near term in order to guide the development of future unmanned aerial systems capabilities, rather than allowing the development to take its own path.80 There are also questions about how and when the benefits of speedy processing of intelligence and other data are outweighed by the risks posed by hasty decision-making. Man-on-the-loop systems, for instance, raise the concern that technology is being developed that is beyond humans’ capacity to supervise effectively and in accordance with applicable law.
With respect to swarm technologies, some research has found that human operators’ performance levels are reduced by an average of 50 per cent when they control even two unmanned aircraft systems at a time.81 The research suggests that the possibility of lethal error rises as humans play a ‘supervisory’ role over a larger number of machines. Unless adequate precautions are taken and built into systems, the likelihood increases that mistakes will be made which will amount to clear violations of the applicable laws. A related concern is what safeguards should or must be put in place to prevent ultimate human control of robots from being circumvented, and what

80 Unmanned Aircraft Systems Flight Plan 2009-2047, above n 21, 41.

81 P W Singer, ‘Robots at War: The New Battlefield’ (Winter 2009) Wilson Quarterly; see also Jessie Y C Chen et al, Human-Robot Interface: Issues in Operator Performance, Interface Design, and Technologies (July 2006) United States Army Research Laboratory, ARL-TR-3834, <http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA451379> (discussing research findings on benefits and drawbacks of automation).


safeguards can be implemented to prevent lethal robots from being hacked or used by, for example, insurgent or terrorist groups.

3.6 Civilian support
An important political consideration is whether the widespread use of robots in civilian settings, such as for law enforcement in cities, or in counterinsurgency operations, would alienate the very populations they were meant to assist. Over-reliance on technology increases the risk that policymakers and commanders will focus on the relatively easy use of armed or lethal tactics to the detriment of all the other elements necessary to end a conflict, including winning hearts and minds, and that policymakers will overestimate the ability of new technologies to achieve sustainable peace. In addition, while robots may have the benefit of not acting based on emotion, they also do not have the kind of sympathy, remorse or empathy that often appropriately tempers and informs the conduct of fighters and their commanders.

3.7 Use of force threshold and jus ad bellum considerations
To the extent that decisions about whether to go to war are limited by the prospect of the loss of the lives of military personnel, and the high economic cost of warfare, robotic armies may make it easier for policymakers to choose to enter into an armed conflict, increasing the potential for violating jus ad bellum requirements. This may be particularly the case where the other side lacks the same level of technology. Similarly, within the context of armed conflict, insofar as robots are remotely controlled by humans who are themselves in no physical danger, there is the concern that an operator’s location far from the battlefield will encourage a ‘Playstation’ mentality to fighting and killing, and the threshold at which, for example, drone operators would be willing to use force could potentially decrease. Thus, the international community should consider whether and when reduced risk to a State’s armed forces resulting from the extensive use of robotic technologies might unacceptably increase the risk to civilian populations on the opposing side. While United States commentators have tended to reject these linkages, the most recent British Ministry of Defence report asks specifically whether ‘[i]f we remove the risk of loss from the decision-makers’ calculations when considering crisis management options, do we make the use of armed force more attractive?’ In support of a potentially affirmative response it quoted a comment made by General Robert E Lee following the Battle of Fredericksburg, which involved very heavy casualties on both sides: ‘It is well that war is so terrible — otherwise we would grow too fond of it.’82

4 Responses to Date by the Key International Actors

I noted at the outset of this article that the key actors within the international human rights regime have tended to be reticent about tackling emerging

82 Quote taken from James M McPherson, Battle Cry of Freedom (1988) 551, cited in UK Joint Doctrine Note, above n 14, 5-9.


issues especially in relation to technological developments. This reticence is on full display in relation to the development of potentially autonomous killing machines. Without seeking to present an exhaustive accounting of what has or has not been done to date, it is instructive to note the positions adopted by some of the principal actors and, in particular, governments, the United Nations, and the key NGOs, including the International Committee of the Red Cross (ICRC).

4.1 Governments and the United Nations political bodies
In October 2010, these issues were presented to the United Nations General Assembly in a report that I prepared in my capacity as UN Special Rapporteur on extrajudicial executions.83 In the report I recommended that urgent consideration should be given to the legal, ethical and moral implications of the development and use of robotic technologies, especially but not limited to uses for warfare. I suggested that the emphasis should be not only on the challenges posed by such technological advances, but also on the ways in which proactive steps can be taken to ensure that such technologies are optimised in terms of their capacity to promote more effective compliance with international human rights and humanitarian law. The report also called for greater definitional uniformity in relation to the types of technology being developed, and underlined the need for empirical studies to better understand the human rights implications of the technologies. In addition, it called for more reflection on the question of whether lethal force should ever be permitted to be fully automated. In response to these recommendations, which were presented in the context of a report that also dealt with other issues, the delegations of Canada,84 Liechtenstein,85 and Switzerland86 addressed the issue of robotics and human rights, but only in passing. For its part, the United States reiterated its view that matters which it classified as belonging solely to armed conflicts should not be discussed in human rights forums such as the UN Human Rights

83 See Philip Alston, Interim report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, UN Doc A/65/321 (23 August 2010). The report was formally presented to the General Assembly by my successor as Special Rapporteur, Professor Christof Heyns.

84 The Canadian representative, Ms Boutin, asked ‘what actions States could take to counter concerns regarding the misuse of technology raised in the report’. UN GAOR, 65th sess, 27th mtg, 22 October 2010, UN Doc A/C.3/65/SR.27 (6 December 2010) 13-14 [13].

85 Mr Kerschischnig, representing Liechtenstein, asked ‘what measures should be taken at an international and national level to address’ the issue of human rights and robotic technologies. Ibid 4 [15].

86 The Swiss representative, Mr Vigny, asked whether there were ‘any States that had met the obligation to determine whether their use would be prohibited by article 36 of Additional Protocol 1 of the Geneva Conventions or by any other international norm’. He also asked the Special Rapporteur if he considered ‘it possible that a robot could be developed that would be a more ethical decision maker than a human’. Ibid 3 [8].


Council or the UN General Assembly.87 No substantive debate ensued and the General Assembly very clearly declined to take any action on the relevant recommendations contained in the report, despite the fact that many of the other issues identified in that and earlier reports were reflected in the resolution that was ultimately adopted in December 2010.88 Lacking an authorisation to pursue the issue, and in the absence of any significant civil society pressure to do something, the Office of the High Commissioner for Human Rights has yet to take any measures to address the issues surrounding robotic technologies. I will return below to the Office’s important potential role in this area.

4.2 Expert human rights bodies
For good reasons, most expert human rights bodies, whether courts or committees, tend to deal with issues in retrospect, or in other words after violations of applicable norms are alleged to have taken place. There are, however, powerful reasons why issues relating to robotic technologies should already be on the radar screens of bodies such as the European Court of Human Rights and the Human Rights Committee. While the United States Government, almost alone, continues to contest the relevance of human rights norms in situations that the government itself characterises as involving an armed conflict, and while a handful of scholars support such views,89 the reality is that human rights bodies are already engaged in significant ways in such issues and are insisting upon a nuanced approach to the issue.90

4.3 NGOs, including the ICRC
Of the major international human rights NGOs, neither Amnesty International nor Human Rights Watch (HRW) has taken up the issue of

87 Rather than addressing the substance of the report or the proposals it contained, Mr Baños, speaking on behalf of the United States of America, stated that ‘the Special Rapporteur had gone beyond his mandate in his comments on operations during armed conflict and many of the findings and conclusions in his final report seemed to be based on a fundamental confusion over the applicable framework or an imprecise reading of the substantive law, and had failed to take into consideration that the lawful use of force in armed conflict or in self-defence, in line with international humanitarian law, did not constitute an extrajudicial killing.’ Ibid 4 [14].

88 Extrajudicial, summary or arbitrary executions, GA Res 65/208, UN GAOR, 65th sess, 71st plen mtg, Agenda item 68(b), UN Doc A/RES/65/208 (21 December 2010).

89 See Philip Alston, Jason Morgan-Foster and William Abresch, ‘The Competence of the UN Human Rights Council and its Special Procedures in relation to Armed Conflicts: Extrajudicial Executions in the “War on Terror”’ (2008) 19 European Journal of International Law 183.

90 See Al-Skeini & Ors v The United Kingdom (European Court of Human Rights, Grand Chamber, Application No 55721/07, 7 July 2011).


robotic technologies. HRW at least has a separate program focused on the relationship between arms and human rights, but it has tended to focus almost exclusively on the issues of landmines and cluster munitions.91 The most important initiative to date has been taken by a group of academics and practitioners from a range of relevant disciplines who, in 2009, created the International Committee for Robot Arms Control (ICRAC). The organisation’s mission statement indicates that its principal goal is to stimulate a wide-ranging discussion on the risks presented by the development of military robotics and the desirability of what they term an ‘arms control regime’ to reduce the resulting threat.92 In an endeavour to spell out more clearly what this might mean in practice, the Committee organised an expert workshop in September 2010 on ‘limiting armed tele-operated and autonomous systems’. The workshop adopted, by majority vote rather than consensus, a statement that underscored the long-term risks of these developments, asserted that it is ‘unacceptable for machines to control, determine, or decide upon the application of force or violence in conflict or war’, and insisted that there should always be a human being responsible and accountable for any such decisions. The group also noted that state sovereignty is violated as much by an unmanned vehicle as by a manned one. It further endorsed the view that these systems help to accelerate the pace and tempo of warfare, and that their use ‘encourages states, and non-state actors, to pursue forms of warfare that reduce the security of citizens of possessing states.’ The statement concluded by calling for the creation of a new arms control regime designed to regulate the use of such systems.93 I shall return below to that proposal.
By far the most important NGO in this field is the International Committee of the Red Cross (ICRC), which has been central at every stage in developing, promoting, implementing, and monitoring standards governing the use of weapons in armed conflict. To date, it has said surprisingly little about robotic technologies. In September 2011 the President of the ICRC delivered a speech in which he acknowledged the need to focus on robotic weapons systems in the future.94 His analysis succeeded in leaving almost all

91 For an indication of the relatively narrow focus of the arms program at HRW, see Human Rights Watch, Arms (2012) <http://www.hrw.org/category/topic/arms>. Indeed, the single reference to robots on the entire website of HRW comes in a report on political prisoners in Burma, in which one of them comments that ‘[w]e eat and sleep like robots.’ Human Rights Watch, Burma’s Forgotten Prisoners (September 2009) 11 <http://www.hrw.org/sites/default/files/reports/burma0909_brochure_web.pdf>.

92 International Committee for Robot Arms Control, Mission Statement (2011) <http://www.icrac.co.uk/mission.html>.

93 The statement of the 2010 Expert Workshop on Limiting Armed Tele-Operated and Autonomous Systems, Berlin, 22 September 2010 <http://www.icrac.co.uk/Expert%20Workshop%20Statement.pdf>.

94 Jakob Kellenberger, ‘International Humanitarian Law and New Weapon Technologies’ (Keynote address delivered at the 34th Round Table on Current Issues of International Humanitarian Law, San Remo, 8-10 September 2011)


positions open. He began by evincing a degree of scepticism as to whether the development of truly autonomous systems will prove possible. But he immediately went on to acknowledge that if they do come to pass, they will ‘reflect a paradigm shift and a major qualitative change in the conduct of hostilities’ as well as raising ‘a range of fundamental legal, ethical and societal issues’. He determinedly kept open the possibility that such systems might be capable of enhancing the level of protection available by noting that ‘cyber operations or the deployment of remote-controlled weapons or robots might cause fewer incidental civilian casualties and less incidental civilian damage compared to the use of conventional weapons.’ He pointed in particular to the possibility that a greater level of precaution could be achieved in practice because of the nature of such weapons and the contexts in which they might be deployed. His approach thus reflected a characteristically prudent response which left open all possibilities and ended by calling for more discussion, with an important role to be played by the ICRC:

[I]t is important for the ICRC to promote the discussion of these issues, to raise attention to the necessity to assess the humanitarian impact of developing technologies, and to ensure that they are not prematurely employed under conditions where respect for the law cannot be guaranteed.95

The President did not suggest at any point that new legal standards or mechanisms might be needed, nor did he make any specific proposals designed to move the agenda forward. But the fact that the challenge posed by robotic technologies was acknowledged, and addressed in at least some detail, constitutes an important advance.

5 The Way Forward

The states at the forefront of developing lethal robotic technologies are not the ones that will take the initiative in addressing the legal and ethical issues within a framework governed by international law and monitored by independent organisations. Nor are the states that are the targets, or victims, of the use of such technologies likely to be well placed to lead an effort to stimulate international reflection on the challenges that arise. As a result, the responsibility will inevitably fall upon an international actor to take the lead. The International Committee of the Red Cross is undoubtedly the key player, and it has now, somewhat belatedly, begun to explore the issues. There is, however, also an important role to be played by the United Nations as the principal proponent of international human rights law. Accordingly, it would be highly appropriate and desirable for the UN Secretary-General to convene a group of military and civilian representatives from governments, as well as leading authorities in human rights and

<http://www.icrc.org/eng/resources/documents/statement/new-weapon-technologies-statement-2011-09-08.htm>.

95 Ibid.


humanitarian law, applied philosophers and ethicists, and scientists and developers, to advise on measures and guidelines designed to promote the goal of accountability in relation to the development of these new technologies. The task of the group would be to consider what approaches might be adopted to ensure that such technologies comply with applicable human rights and humanitarian law requirements. This would include: consideration of the principle that any unmanned or robotic weapons system should meet safety standards at least equivalent to those of a comparable manned system; the spelling out of requirements for testing the reliability and performance of such technologies before their deployment; and measures designed to ensure that the development of such weapons systems includes recording systems and other technology that would permit effective investigation of, and accountability for, alleged wrongful uses of force.


UVs, Network-centric Operations, and the Challenge for Arms Control
COMMENT BY ARMIN KRISHNAN*

Introduction
The development of unmanned aerial vehicles (UAVs) and other unmanned vehicles (UVs) goes back at least a hundred years. During the Cold War they were used primarily for reconnaissance missions over heavily defended territories. More recently, modern armed forces have begun to arm UVs, and there is now the prospect that conventional wars (and not just nuclear wars) could be fought by remote control across continents. Defensive and reconnaissance roles for UVs seem far less ethically and legally problematic than the new offensive roles that UVs are gradually taking over. The growing autonomy of armed UVs is a particular concern, as it raises the spectre of Terminators that can decide by themselves when to attack and what target to engage. As pointed out in the précis article, although we are still a long way from fully autonomous UVs, there are good reasons to assume that full autonomy for military UVs could materialise in the future, both because of rapid technological progress and because of considerations of military effectiveness. In the absence of relevant laws restricting military UVs there could be serious negative consequences: increased dangers for civilians from out-of-control robots; an increased propensity towards the use of force resulting from a combination of low political costs and a ‘Playstation’ mentality that makes killing too easy; and even an increased danger of accidental war triggered by automated defensive systems. At the same time, Brendan Gogarty and Meredith Hagger expressed hope that ‘[t]here is still a chance to at least shape the way UVs are used and how far they proliferate within militaries and beyond.’1 This commentary focuses on the problem of arms control for UVs.
It will be argued that although an international prohibition of autonomous UVs would be highly desirable, the challenges will be enormous because of the great complexity that comes with the integration of UVs into network-centric operations and because of the decreasing size of UVs. Network-centric operations make it difficult to determine the locus of a decision and the increasing
* Visiting Assistant Professor for Security Studies, Intelligence and National Security Studies Program, University of Texas at El Paso, Texas. I would like to thank Robert Sparrow for suggesting me as a commentator in this issue of the Journal of Law, Information and Science and Brendan Gogarty for inviting me to comment on the précis article.

1 Brendan Gogarty and Meredith Hagger, ‘The Laws of Man over Vehicles Unmanned: The Legal Response to Robotic Revolution on Sea, Land and Air’ (2008) 19 Journal of Law, Information and Science 73, 144.


miniaturisation of UVs makes them very difficult to detect, both of which create difficulties for future arms control. For a future arms control treaty on armed autonomous weapons to work, there needs to be an effective treaty monitoring and verification mechanism. History has shown that arms control agreements without effective monitoring mechanisms are prone to violation, which raises doubts about the wisdom of such agreements.2 A possible solution proposed in this comment is to restrict the acceptable roles and capabilities of UVs and to preventively prohibit UVs of a very small size, regardless of whether they carry weapons.

1 UVs and Network-centric Warfare

In the mid 1990s, the US military developed the concept of a ‘system-of-systems’, which eventually became known as network-centric warfare (NCW).3 The general idea was that all military units and command posts would form a network across which information could be shared freely, creating a common operating picture and thereby greatly improving situational awareness.4 Networked forces can move and react much faster and use tactics that are impossible for armed forces that are not networked. In NCW there are three main elements that are brought together in a single computer network: sensors, which provide intelligence, surveillance, and reconnaissance (ISR); decision-making, which results from the processing, analysis, and dissemination of this information; and finally shooters, which enable the precision engagement of identified targets.5 These elements can be separated from one another or combined in one neat package. Since NCW is driven by the aim of getting inside the enemy’s decision cycle by sensing, deciding and acting faster than the enemy, the combination of sensor and shooter is an obvious solution. For example,
2 One important example is the biological weapons program of the Soviet Union, which was revealed by the defector Ken Alibek in 1992. He indicated that the Soviet Union had violated the Biological and Toxin Weapons Convention of 1972 on a truly massive scale. See Ken Alibek and Stephen Handelman, Biohazard (Random House, 2000).

3 William A Owens, ‘The Emerging U.S. System-of-Systems’ (February 1996) Strategic Forum 63.

4 Clay Wilson, ‘Network Centric Operations: Background and Oversight Issues for Congress’ (Congressional Research Service, 2007) 2-3.

5 Arthur Cebrowski and John Garstka speak of three grids that make up network centric operational architectures, which they call information grid, sensor grid, and transaction grid. Arthur Cebrowski and John Garstka, ‘Network-centric Warfare: Its Origin and Future’ (1998) 124(1) Proceedings of the Naval Institute. The Department of Defense report on Network Centric Warfare similarly speaks of three domains of warfare (physical, information, and cognitive). See Department of Defense, ‘Network Centric Warfare: Department of Defense Report to Congress’ (July 2001) 3.7-3.9 <http://cio-nii.defense.gov/docs/pt2_ncw_main.pdf> (accessed 25 April 2011).


UAVs are considered central to the vision of NCW because they improve situational awareness: they can provide real-time information about the battlefield that can be distributed by a command post to units in the field. Drones used by the Israel Defense Forces shortened the ‘sensor-to-shooter’ cycle to one minute during the 2006 Lebanon War.6 The tendency over the last ten years has been to arm teleoperated reconnaissance drones with missiles so that they can immediately engage any ‘time sensitive target’ they find. This is very important with respect to enemies such as insurgents and terrorists, who can suddenly appear and disappear in urban environments.7 However, this does not imply that the use of teleoperated drones armed with missiles is an optimal approach, or that the tactic will remain effective in the future. There are other ways to organise sensors, decision-making, and shooters within a network. One possibility is to make UVs completely autonomous so that they can respond to threats and engage targets at a speed far beyond human capability. Another possibility, which is currently more technologically viable, is to use unarmed autonomous UVs for target acquisition on behalf of networked shooters, which could be human-operated, remotely operated, or even automated. The shooter receives the coordinates or other target data directly from the sensor and launches precision-guided munitions (PGMs).8 As the attack could be carried out with hypersonic PGMs or with directed energy weapons,9 which could achieve near-instant effects, it would not matter much that the actual shooter is distant while the sensor that ‘acquires’ the target is close.
The decision to attack the target could come from anywhere in the network: from a human operator in a location different from either the sensor or the shooter, from an intelligent battle management system that controls other elements in the network, or even from the UV that discovered the target itself. This means that in an NCW environment, sensors that acquire targets for shooters de facto become part of a much larger integrated weapons system. A UV could continuously surveil an area, identify and track a target, and call in an attack from a shooter in a distant location, and all of this could happen in an extremely short time frame.
6 Barbara Opall-Rome, ‘Sensor to Shooter in 1 Minute: Inside the Israeli Air Campaign Against Hezbollah Targets’ (2006) 21(38) Defense News 1.

7 Gogarty and Hagger, above n 1, 81.

8 The US was developing an autonomous missile system called the Non-Line-Of-Sight Launch System, or NETFIRES, as part of the cancelled Future Combat Systems program. It was designed to be air dropped and then remote controlled by a network from which it would have received its targeting data. Although this particular program seems to be dead, the concept appears to be still viable.

9 This could include electromagnetic ‘railguns’, high-powered lasers, and even space-based weapons like the ‘rods from god’ high-precision kinetic energy projectiles. They could hit a target hundreds of kilometres away within seconds. Such weapons are still on the drawing board, but they could be a reality within 10 years.


Why is this a problem? NCW blurs the lines between sensor and shooter, target acquisition and intelligence, and intelligence and operations.10 Every element in the network becomes part of a gigantic weapons system, and the locus of any decision becomes extremely difficult to identify, at least from an outside perspective. If the primary concern is that UVs could be armed and/or autonomous, NCW makes it futile to try to prevent the negative effects described above by simply outlawing armed autonomous UVs, since similar effects can be achieved with UVs that are unarmed and have very limited autonomy. An unarmed UV can still function like an offensive weapon if it is networked with a remote shooter and can guide PGMs to a target. Since NCW in principle allows both options (humans can take control of any robotic element in the network, or they can delegate decisions to intelligent machines), it can become extremely difficult to determine whether an attack decision was made by a human or by intelligent software. For example, brilliant munitions like the Low Cost Autonomous Attack System (LOCAAS) include the option for a human operator to confirm targets or, if necessary, to retarget or abort.11 Since such systems are not yet capable of sufficiently reliable target recognition, a human can still make much better targeting decisions. In the long run, however, a single human operator may control numerous UVs at a time, meaning that much of the actual targeting would be largely automated, with minimal human supervisory control. The US Department of Defense’s Unmanned Aircraft Systems Roadmap 2005-2030 states:

human oversight of a large number of UA operating in combat must be reduced to the minimum necessary to prosecute the information war. Automated target search and recognition will transfer initiative to the aircraft, and a robust, anti-jam communications network that protects against hostile reception of data is a crucial enabler of UA swarming.12

The report also points out that rules of engagement ‘may require the intervention of a human operator’.13 However, from the outside it is impossible to know whether a human is in fact in the loop, as the complete automation of the whole process could require little more than a software switch.

10 John Ferris, ‘Networkcentric Warfare, C4ISR and Information Operations: Towards a Revolution in Military Intelligence’ (2004) 19(2) Intelligence and National Security 204; Elizabeth Stanley-Mitchell, ‘Technology’s Double-Edged Sword: The Case of U.S. Army Battlefield Digitization’ (2001) 17(3) Defense & Security Analysis 271.

11 Ronald C Arkin, Governing Lethal Behavior in Autonomous Robots (CRC Press, 2009) 24-25.

12 US Department of Defense, Office of the Secretary of Defense, Unmanned Aircraft Systems Roadmap 2005-2030 (2005) B-9.

13 Ibid A-4.


2 Miniaturisation

In 1994 Martin Libicki published one of the most original and imaginative studies in the entire Revolution in Military Affairs (RMA) debate, The Mesh and the Net: Speculations on Armed Conflict in an Age of Free Silicon.14 His argument was that on the future battlefield almost everything becomes visible through advanced sensors, and anything that is visible can be destroyed anywhere at any time. Large platforms are no longer survivable under these conditions, which creates a need for sensors and weapons that are small and can be deployed in large numbers. Battlefields could be littered with millions of microsensors and micro-weapons forming an intelligent network that is extremely robust and resilient. Libicki calls this approach to warfare Fire Ant Warfare.15 The general perception is that robots or UVs have to be very big or very sophisticated machines. This need not be the case: very large numbers of networked unsophisticated micromachines can be more dangerous and resilient than big platforms such as tanks. A single micromachine may have very limited capability in terms of payload and on-board intelligence, but collectively a swarm of such micromachines could be capable of autonomous intelligent behaviour and able to attack large weapons systems, equipment and human beings.16 The micromachines could neutralise large machines by attacking critical components such as electronics or engines. Human beings are, of course, much more vulnerable, and even something of microscopic size, such as a chemical or biological agent, can be lethal. Nanomedicine research is currently working on nanomachines that can enter a human body in order to repair cells and arteries. Obviously, the same technology could be used to cause fatal damage in a human body.17 How realistic is the vision of Fire Ant Warfare?
Currently many micro-UVs are the size of model aircraft, about 0.5 m to 1 m, but autonomous insect-size UVs are already under development.18 In fact, Sandia National Labs already
14 Martin Libicki, ‘The Mesh and the Net: Speculations on Armed Conflict in an Age of Free Silicon’ (McNair Paper No 28, National Defense University, Washington DC, 1994).

15 Ibid 28.

16 Steven Metz, ‘Armed Conflict in the 21st Century: The Information Revolution and Post-Modern Warfare’ (US Army War College Institute of Strategic Studies, 2001) 67-71; Jürgen Altmann, ‘Military Nanotechnology: Perspectives and Concerns’ (2004) 62(35) Security Dialogue 68.

17 Ajay Lele, ‘Role of Nanotechnology in Defence’ (2009) 33(2) Strategic Analysis 237.

18 The US DoD Unmanned Systems Roadmap claims ‘Microelectromechanical systems (MEMS) offer the prospect of radically reducing the size of all modalities of unmanned systems. Fingernail-size turbines and pinhead-size actuators on future, miniature aircraft could make today’s MAV prototypes appear unnecessarily large and bulky. MEMS-enabled UGVs could be deposited like


proved, with their Miniature Autonomous Robotic Vehicle (MARV) in 1996, that an autonomous microrobot the size of a cubic inch is technologically feasible.19 British Special Forces use micro-UAVs that are six inches in size to search houses, and in the future these micro-UAVs could carry explosives for attacking snipers.20 Weaponised micro-UVs will be developed in the near future, if they are not already being operationally tested.21 The effective range of such micro-UVs would be very limited because of their size, but they could be delivered to distant areas in many ways, including by larger UVs that release them within their effective range of operation. Nanotechnology may even enable machines of molecular size, which could self-assemble into larger machines or objects and perhaps even self-replicate.22 However, at the moment even the theoretical possibility of self-replicating nanobots is contested, and it may well be that this aspect of nanotechnology will remain science fiction. The trend towards ever smaller and lighter UVs has been reinforced by the recent financial crisis, which contributed to the cancellation of many expensive defence acquisition projects, such as the US Army’s Future Combat Systems program in 2009.23 As early as 2005, a US Air Force study had suggested that ‘[c]urrent advances in miniaturization are giving small UAVs capabilities comparable to their larger cousins at significantly lower costs.’24 More recently, a UK Ministry of Defence report entitled The UK Approach to Unmanned Aircraft Systems stressed the potential effectiveness of employing
unnoticed insects.’ US Department of Defense, Office of the Secretary of Defense, Unmanned Systems Roadmap 2007-2032 (2007) 45.
19 Compare Miniature Autonomous Robotic Vehicle (2003) Sandia National Labs <http://robotics.sandia.gov/Marv.html> (accessed 20 April 2011).

20 David Hambling, Military Builds Robotic Insects (2007) Wired Danger Room Blog <http://www.wired.com/science/discoveries/news/2007/01/72543> (accessed 20 April 2011).

21 The US Special Operations Command has an acquisition program for a Lethal Miniature Aerial Munition System, which according to Defense Update offers ‘the warfighter portable, non-line-of-sight precision strike capability against individual targets, ensuring high precision effect with a very low risk of collateral damage.’ US Air Force to Develop Micro-UAV Killer Drones for the Special Operations Command (2011) Defense Update <http://defense-update.com/products/l/31122010_lmams.html> (accessed 5 May 2011).

22 J Storrs Hall calls the concept of nanobots that can self-assemble into any object ‘utility fog’: objects could appear or disappear like a hologram. J Storrs Hall, Nanofuture: What Is Next for Nanotechnology? (Prometheus Books, 2005) 188-193.

23 John Markoff, ‘War Machines: Recruiting Robots for Combat’ (27 November 2010) New York Times (online) <http://www.nytimes.com/2010/11/28/science/28robot.html?scp=11&sq=john%20markoff&st=cse> (accessed 28 April 2011).

24 James Abatti, ‘Small Power: The Role of Small and Micro-UAVs in the Future’ (2005) Air War College 172.


large numbers of low-cost, simple, smaller UVs instead of smaller numbers of big UVs.25 Smaller UVs are clearly the way forward. Micro-UVs pose very serious challenges to arms control because they are so easy to hide. Unlike large weapons systems, which can be monitored with remote sensors, small UVs will not be discoverable on a satellite image. They could be secretly deployed by manned aircraft or larger UVs for ISR or offensive purposes. A large-scale use of very small micro-UVs would most likely produce visible results and yield evidence against the country that deployed them on the battlefield, but research and design of micro-UVs, and their possession, may go completely unnoticed unless there were a fairly intrusive weapons inspection regime with on-site inspections.

3 Manhunting

Philip Alston, the UN Special Rapporteur on Extrajudicial Executions, has pointed out that the use of drones for targeted killings of terrorists has become a major trend in the past decade because of advances in robotic technologies.26 It is certainly no coincidence that targeted killing has, up to now, been the primary role of armed UVs in ongoing armed conflicts. One reason for this is the technological limitations of current UVs: they are not yet a match for more sophisticated manned platforms, which currently severely limits their role in conventional warfare. Another reason is the changing nature of war. Traditional interstate conflicts have become very rare, and some would say they are on the verge of extinction, while unconventional conflicts now very much characterise contemporary warfare.27 The new enemies are terrorists, insurgents, and criminals, who have, apart from their own lives, very little to offer in terms of infrastructure and equipment that can be attacked.28 The hope is that by identifying and neutralising key individuals, the enemy and their operations can be disrupted and eventually defeated by depriving them of their most skilled and experienced leaders and operatives.29 At least for now, manhunting, the search for and neutralisation (kill or capture) of specific individuals, represents the current reality and possibly the future of warfare.30 UVs play
25 UK Ministry of Defence, The UK Approach to Unmanned Aircraft Systems (2011) 3-9 to 3-11.

26 Philip Alston, Interim Report of the Special Rapporteur on Extrajudicial, Summary, or Arbitrary Executions, 65th sess, UN Doc A/65/321 (23 August 2010).

27 There are currently about ten ongoing armed conflicts and none of them is a conventional war in which regular forces fight each other.

28 Michael Gross, Moral Dilemmas of Modern War: Torture, Assassination, and Blackmail in an Age of Asymmetric Conflict (Cambridge University Press, 2010) 112.

29 Daniel Byman, ‘Do Targeted Killings Work?’ (2006) 85(2) Foreign Affairs 95, 103-105.

30 George Crawford, Manhunting: Reversing the Polarity of Warfare (Publish America, 2008) 27-29.


an important role in manhunting for three main reasons: they provide continuous surveillance; they enable precision attacks in terrain that is difficult to access; and they minimise risk to military personnel. There are many legal issues with respect to the practice of targeted killing, regardless of whether it is carried out by Special Forces, a sniper, or an armed UV, and these have already been thoroughly analysed by legal experts such as Nils Melzer, Avery Plaw, and Kenneth Anderson.31 This is not the place to rehash these well-developed arguments. However, there are some other concerns about targeted killing that are more specific to the use of armed drones, as highlighted by Philip Alston in his report.32 Alston identifies two specific concerns with respect to the use of armed drones in armed conflict: autonomy and accountability. Autonomous armed UVs could endanger innocent civilians, as they may not be able to comply with international humanitarian law because of technological limitations, for example with respect to automated target recognition (ATR). Although this concern is generally valid, it seems very unlikely that the question of the autonomy of UVs would even be relevant in the context of targeted killing. After all, the intention is to kill very specific individuals and there is no pressing operational need to fully automate this process. Automation makes sense if the number of targets is large or if a very high speed of action is required. For example, a high degree of automation is crucial for defeating another automated system, such as an air defence system, which could respond extremely fast; but it is unnecessary for attacking a human being, who may not even be aware of being targeted and who will in any case not be able to respond significantly faster than a human drone operator.
However, another aspect of autonomy, which goes beyond target selection, is allowing a UV to autonomously search for, pursue, and kill a pre-selected target. This would indeed be the Terminator scenario: one could program a drone to find and kill one particular individual, and the drone could use biometrics or similar methods to reliably identify and kill that individual without causing harm to anybody else. If this could be done covertly and without leaving many traces, it would create a nightmare world in which anybody could be quietly eliminated from a distance. This scenario would greatly amplify the accountability issue that already exists with respect to the lack of transparency surrounding CIA drone strikes in Pakistan.33 Although one can clearly question the legality of these drone strikes, especially with respect to the issue of collateral damage and the
31 Nils Melzer, Targeted Killing in International Law (Oxford University Press, 2008); Avery Plaw, Targeting Terrorists: A License to Kill? (Ashgate, 2008); Kenneth Anderson, ‘Targeted Killing in U.S. Counterterrorism Strategy and Law’ (2009) Working Paper of the Series Counterterrorism and American Statutory Law.

32 Alston, above n 26, 10-11.

33 Gogarty and Hagger, above n 1, 102.


proportionality of the use of force,34 there is at least some public accountability. The current drone strikes are visible to the public precisely because they cause significant collateral damage and are therefore regularly reported by the press. But once drones and the weapons they carry become smaller and more precise, radically reducing collateral damage,35 there may be no visible effect other than the sudden death of an alleged terrorist. If targeted killing could be carried out with zero collateral damage, it would immediately quiet criticism, and targeted killing could become an even more widely used counterterrorism tactic. At this point military drone strikes would also become indistinguishable from assassination, and this is in itself problematic, as it implies killing in a perfidious manner. International humanitarian law explicitly prohibits perfidy,36 and the assassination of specific members of the enemy is contrary to customary law, as it is considered dishonourable.37 Letter bombs and poisons could be much more discriminate methods of attacking the enemy, but because of their perfidious and dishonourable nature these practices are prohibited.38 I would argue that weaponised micro-UVs are no less perfidious and dishonourable than letter bombs and poisons, especially if they enable the killing of specific individuals from a great distance and with zero risk. The method of killing affects the morality and legality of a targeted killing, and this does not necessarily concern the issue of collateral damage and endangering innocent bystanders.

4 Proposed Solutions

The wider issues connected to the emergence of network-centric operations are very difficult to address legally. When it comes to developing strategies for the effective international arms control of UVs, the challenges may even be insurmountable. The technology for UAVs is already easily accessible to modern armed forces. Moreover, the capabilities of UVs will improve over time as a result of making them part of an integrated military network into which various computerised systems and weapons can be plugged as needed. Thus it may not make much sense to look at a UV as a single weapons system with a clear and identifiable capability. In the coming age of NCW it is important to understand not only one platform or system as a single item, but, more importantly, the greatly enhanced capability that comes with that platform or system forming part of a larger network. This insight is highly relevant for arms control in an age of network-centric operations. It may therefore not be an effective solution to seek to outlaw armed autonomous UVs, as suggested by arms control expert Jürgen Altmann.39 From an arms control perspective it would be very difficult to determine from the outside whether a UV is autonomous or teleoperated. It may even be possible to convert a teleoperated UV into an autonomous one, without any visible change on the outside, simply by upgrading the control software. Weapons inspectors would need access to the control software to ascertain the true nature of the UV, and no nation would grant access to the inner workings of its weapons systems because this would potentially expose secret military technology. It might therefore be necessary to prohibit all types of armed UVs in order to ensure effective monitoring of a future arms control treaty on UVs. This would be politically difficult to achieve, since several types of armed UVs have already entered service in countries like the US, Britain, Israel and Russia, while many other countries are developing armed UVs. Existing armed UVs would need to be disarmed, and research and development in this field would need to be discontinued, which does not seem very likely.

34 Noel Sharkey, ‘Death Strikes From the Sky: The Calculus of Proportionality’ (2009) 28(1) IEEE Technology and Society Magazine 16, 17, doi: 10.1109/MTS.2009.931865.

35 J Warrick and P Finn, ‘Amid Outrage Over Civilian Deaths in Pakistan, CIA Turns to Smaller Missiles’, Washington Post (online) 26 April 2010 <http://www.washingtonpost.com/wpdyn/content/article/2010/04/25/AR2010042503114.html?sid=ST2010042503646>.

36 Additional Protocol I to the 1949 Geneva Conventions, Art 37: ‘It is prohibited to kill, injure or capture an adversary by resort to perfidy. Acts inviting the confidence of an adversary to lead him to believe that he is entitled to, or is obliged to accord, protection under the rules of international law applicable in armed conflict, with intent to betray that confidence, shall constitute perfidy.’

37 Leslie Green, The Contemporary Law of Armed Conflict (Manchester University Press, 2000) 144.

38 The Hague Convention (1907) Section IV, Arts 22 and 23 prohibit the use of poisons. According to Larry May, this prohibition has little to do with concern about unnecessary suffering; rather, it exists because the use of poisons is dishonourable. See Larry May, War Crimes and Just War (Cambridge University Press, 2007) 137.
What can be done is to seek limitations on the overall capabilities of armed UVs in terms of their range, payload, and endurance, in order to reduce dangers to civilians and to improve crisis stability.40 The maximum range of armed UVs should not exceed several hundred kilometres; they should not carry large amounts of munitions that would enable them to attack numerous targets in a single mission; and their endurance should not vastly exceed that of manned vehicles. These limitations would also reduce the overall damage that could be caused in a future ‘robot war’ involving large numbers of UVs and stand-off precision guided munitions. Armed UVs should have a neutralisation mechanism that disables their weapons after a certain amount of time if the UV loses contact with its command post. Similarly, a UV’s weapons should automatically be disabled if it leaves the assigned ‘kill box’, the narrow segment of the battlespace for which the use of armed UVs has been authorised. Such limitations would be much more acceptable to nations that currently have or are developing armed UVs, since the technology for offensive autonomous UVs is still far off and their development is still too expensive.

Considering the possibility that unarmed UVs may be used for target acquisition and directly networked to shooters in remote locations, so that they effectively form part of a larger weapons system, it is important to consider the particular missions or roles of UVs in such circumstances. Some roles and missions are less problematic than others. Defensive roles such as air and missile defence would be a legitimate mission for highly automated systems, and indeed such automated defensive systems have already been deployed.41 The surveillance and defence of borders and certain sensitive sites by UVs can theoretically trigger an accidental war, but so can any border incident involving human border guards. If the capabilities of such UVs and fixed sentry systems were adequately limited, the severity of incidents would also be limited, as would the danger of accidental war. Some offensive roles for UVs are also less problematic, such as pre-programmed strike missions against targets that have been selected in advance by human military planners, as such missions are already carried out by unmanned systems (ballistic and cruise missiles). There is no good reason for prohibiting such a use of UVs unless one would want to prohibit ballistic and cruise missiles as well. The suppression of enemy air defences is another role that autonomous UVs are destined to take over as such missions become too dangerous for manned aircraft. This would require the UV to have the capability to respond dynamically to suddenly appearing surface-to-air threats, and perhaps even aerial threats. As speed is critical in such missions, a UV would have to be able to respond on its own, without having to wait for a human operator to determine whether the response is appropriate.

39 Jürgen Altmann, ‘Preventive Arms Control for Uninhabited Military Vehicles’ in R Capurro and M Nagenborg (eds), Ethics and Robotics (IOS Press, 2009) 69.

40 Robert Sparrow, ‘Predators or Plowshares? Arms Control for Robotic Weapons’ (2009) 28(1) IEEE Technology and Society Magazine 25, 27, doi: 10.1109/MTS.2009.931862.
However, loitering missions in which UVs could engage any targets of opportunity should be restricted to smaller segments of the battlespace where all targets are likely to be legitimate, because the area has been cleared of civilians or no civilians are present. The miniaturisation of UVs is a matter of great concern in terms of arms control, certain missions, and the long-term risk that micro-UVs (and potentially nanobots) could represent to society. Most problematic is the possibility of using insect-size (or smaller) micro-UVs for hunting and eliminating specific individuals. It has been reported in the press that Israel intends to develop a ‘bionic hornet’ that can seek out and kill terrorists.42 This technology may only materialise in ten or twenty years, but the prospect of this development is troubling in many respects. It is a perfidious way of killing, similar to the use of poisons. A robotic assassination device such as a weapon-carrying micro-UV goes against the prohibition of perfidy and should clearly be outlawed. Since a micro-UV could effect the killing of an individual by clandestinely following them and by guiding precision weapons to them, it may matter little whether the micro-UV carries a weapons payload itself. The possibility of a future capability to clandestinely assassinate people over large distances creates much more serious challenges in terms of transparency and accountability than the current practice of missile strikes from drones. It could be difficult to identify whether an attack has occurred at all, or who was responsible for it. The best arms control strategy would be to prohibit all micro-UVs below a certain size.43 This, of course, raises the question of how such a prohibition could be monitored. Some have suggested that nanotechnology (NT) should be leveraged for arms control, or that NT can provide effective countermeasures to new NT-enabled threats.44 For example, micro- and nano-sensors could be used for detecting micro-UVs and nanobots. So there are long-term dangers as well as opportunities with respect to weapons miniaturisation and NT in the field of arms control.

41 Gogarty and Hagger, above n 1, 139.

42 David Crane, ‘Israel Developing “Bionic Hornet” to Target and Kill Enemy Combatants’ (2006) Defense Review <http://www.defensereview.com/israeldeveloping-bionic-hornet-to-target-and-kill-enemy-combatants> (accessed 20 April 2011).

Conclusion
There is still great uncertainty about the future of UVs, network-centric warfare, and the future availability of emerging technologies such as NT and artificial intelligence. It has been pointed out that there is not much experience with network-centric operations and that it is unclear how vulnerable a ‘system-of-systems’ is to electromagnetic pulse, high-powered microwaves, computer network attacks, and space warfare. Similarly, autonomous UVs are still at an experimental stage, and without the capability of autonomous operation UVs could very well turn out to be a technological dead end once the states that currently operate armed UVs have to face much more technologically sophisticated enemies. For this reason it may be too early to say whether the American vision of warfare, which is largely characterised by network-centrism and robotics, will turn out to be accurate. However, current trends indicate that many modern armed forces are becoming increasingly networked and that new weapons systems and sensors are being designed to plug easily into larger networks with which they can share information and which can also control them. This is particularly important for UVs, especially since they will lack the ability to operate successfully on their own for the foreseeable future. While larger size offers advantages such as greater speed, range, and firepower, the trend in UVs is clearly towards smaller systems that are cheaper, stealthier, and more resilient. Smaller UVs also make more sense for military operations against enemies that do not have any heavy equipment or large infrastructure that could be attacked. Accordingly, there is a desire for micro-UVs that could be dispersed in large numbers over a city and that could autonomously search for, and possibly neutralise, specific individuals without causing collateral damage.
As long as manhunting remains an important objective and strategy for the US military in fighting its current wars, UVs and other robotic weapons will remain an attractive option for hunting and attacking individuals with high precision. For the sake of transparency and accountability, and because of the generally perfidious nature of attacks on individuals with future robotic assassination devices, it will be imperative to prohibit this practice under international law. At the very least there should be legal limitations on the acceptable capabilities of armed UVs and the circumstances in which they can be used, as well as robust mechanisms that ensure a sufficient level of transparency and accountability with respect to the use of lethal force.

43 Altmann, above n 16, 74.

44 Storrs Hall, above n 22, 238.


Regulating the Use of Unmanned Combat Vehicles: Are General Principles of International Humanitarian Law Sufficient?
COMMENT BY MEREDITH HAGGER AND TIM MCCORMACK*

Abstract
Some weapons are prohibited by a specific multilateral treaty regime and others by customary law. Neither source of prohibition applies to unmanned combat vehicles (UCVs). In the absence of a specific legal prohibition, UCVs can lawfully be deployed in armed conflict provided their use is consistent with the so-called general principles of international humanitarian law (IHL). These general principles limit or restrict the circumstances in which UCVs can lawfully be deployed. In combat operations, militaries utilising UCV technology are closely scrutinised and generally do try to ensure compliance with IHL. The real concerns lie with the dubious usage of UCVs in covert operations, where the IHL framework seems to provide a conveniently permissive legal regime, where there is an apparent absence of any effective review of compliance with IHL, and where there is no accountability for alleged violations of the law. In some circumstances it is highly questionable whether IHL is the applicable legal framework.

Introduction
The use of unmanned combat vehicles (UCVs) in armed conflict is not a particularly new phenomenon. Nevertheless, the forms that such weapons now take, as well as the prevalence of their use, have increased exponentially in the past few years. This dramatic increase has, in turn, been accompanied by a variety of opinions as to the legality and desirability of battlefield use of UCVs. The intention of this paper is to identify and assess the sufficiency of the legal framework applicable to UCVs in armed conflict. The so-called ‘general principles of international humanitarian law’ exist to ensure that categories of weapons not otherwise prohibited by conventional or customary international law are nevertheless subject to laws that aim to alleviate human suffering in armed conflict. Incongruence sometimes exists between theory and practice, and we will discuss some of the limitations on the efficacy of these ‘general principles’ in regulating the use of convenient weapons with high degrees of military utility. In the Nuclear Weapons Advisory Opinion, the International Court of Justice (ICJ) considered whether there was a general prohibition on the use of nuclear weapons under international law.1 Although the Opinion has been criticised by many,2 it provides one of the most detailed judicial considerations of the circumstances in which a category of weapons will be illegal per se under international law. In circumstances where a particular category of weapon is not illegal per se, the Opinion affirms the applicability of the general principles of international humanitarian law (IHL) to the use of such weapons, including nuclear weapons, in armed conflict. The Opinion, then, constitutes a useful starting point for identifying the law applicable to UCVs in armed conflict. This paper commences with a brief overview of the ICJ’s decision in the Nuclear Weapons Advisory Opinion, and then applies the relevant legal principles identified in that case to UCV technology in current use. Given that there is currently no treaty or customary prohibition on UCV technology, the use of these weapons in armed conflict is regulated by the general principles of IHL. In order to explore whether this legal framework is sufficient, the paper considers the circumstances that led to international prohibitions on the use of cluster munitions and landmines. These two examples demonstrate the way the international community may react when general principles of IHL are deemed insufficient to regulate the effects of a particular weapon. They also enable a comparison between the current level of opposition to UCVs and the campaigns that led to the international prohibition of weapons in the past. The final part of the paper analyses criticism of the increasing incidence of UCV attacks, and concludes that the principal focus of concern is not generally on military utilisation of the technology in the theatre of combat operations. Rather, of much greater concern are covert targeted killing programs which currently appear to operate beyond the law, with a complete absence of transparency and accountability.

* Tim McCormack is an Adjunct Professor of Law at the University of Tasmania and is the Special Adviser on International Humanitarian Law to the Prosecutor of the International Criminal Court in The Hague. The views expressed in this article are those of the authors alone.

1 ICJ Advisory Opinion on Nuclear Weapons

In 1994, the General Assembly of the United Nations requested an Advisory Opinion of the International Court of Justice on whether the threat or use of nuclear weapons is permitted in any circumstances under international law.3 The resulting Opinion of the Court implicitly recognised that there are broadly two categories of weapons.4 The first category consists of those weapons that are illegal per se in international law. The use of such weapons is never permitted, irrespective of the circumstances in which they are used. The second category consists of those weapons that are not illegal per se and so,

1 Legality of the Threat or Use of Nuclear Weapons (Advisory Opinion) [1996] ICJ Rep 226, [51] (‘Nuclear Weapons Advisory Opinion’).

2 See, for example, various articles analysing the Nuclear Weapons Advisory Opinion and discussing its implications in the symposium issue of (1997) 316 International Review of the Red Cross.

3 General and Complete Disarmament, GA Res 49/75, 90th plen mtg, UN Doc A/Res/49/75 (1994).

4 Nuclear Weapons Advisory Opinion, [1996] ICJ Rep 226.


consequently, may be used in armed conflict, subject to legal regulation. The basis of the Court’s discussion was the premise that ‘the illegality of the use of certain weapons as such does not result from an absence of authorization but, on the contrary, is formulated in terms of prohibition.’5

1.1 Specific prohibition on the use of nuclear weapons under international law
The Court thus began by considering whether there was a specific prohibition in international law on the use of nuclear weapons. Clearly, such a prohibition could be derived from a multilateral treaty,6 or from customary law.7 In relation to the former source, the Court found that there was no multilateral treaty specifically prohibiting the threat or use of nuclear weapons in all circumstances.8 The Court acknowledged that various agreements dealing with nuclear weapons ‘certainly point to an increasing concern in the international community with these weapons.’9 However, the Court found that these treaties, by themselves, do not constitute a prohibition on the threat or use of nuclear weapons.10 The leading multilateral treaty dealing with nuclear weapons is the 1968 Nuclear Non-Proliferation Treaty (‘NPT’).11 Unlike the 1972 Biological Weapons Convention12 or the 1993 Chemical Weapons Convention,13 the NPT does not constitute a comprehensive ban on all nuclear weapons. Nevertheless, for 184 of the 189 current States Parties, the NPT does constitute a comprehensive ban on nuclear weapons. Each of those 184 states has either ratified or acceded to the treaty as a ‘non-nuclear weapon State Party’ and, as such, is obliged not to ‘receive, manufacture or acquire’ nuclear weapons, not to possess them and not to ‘transfer’ them to any other state, nor to assist any other state to ‘manufacture or acquire’ nuclear weapons.14 Iran is a non-nuclear weapons State Party to the NPT and has been accused of violating its treaty obligations in the development of its nuclear program.15 North Korea, prior to its withdrawal from the NPT in 2003, was similarly accused of violating its obligations as a non-nuclear weapons state under the NPT.16 Since its withdrawal, North Korea has been the subject of Security Council resolutions deploring both its withdrawal from the NPT and its nuclear weapons program, including the nuclear test conducted in 2009, and demanding that North Korea immediately retract its announcement of withdrawal.17 Israel, India and Pakistan, in contrast, have never participated in the NPT and so are not bound by the treaty’s prohibitions. Despite the fact that the overwhelming majority of States Parties to the NPT have subjected themselves to a comprehensive treaty ban on nuclear weapons, five States Parties are exempt from the prohibition and are entitled to retain their nuclear weapons stockpiles as ‘nuclear weapon State Parties’. Those five states had all ‘manufactured and exploded a nuclear weapon or other nuclear explosive device prior to 1 January 1967’18 and, coincidentally, happen to be the five permanent members of the UN Security Council. Because five states are permitted to retain nuclear weapons under the terms of the NPT, the ICJ could not declare the threat or use of nuclear weapons illegal in all circumstances by virtue of a multilateral treaty prohibition.19 As for the existence of a customary prohibition on the threat or use of nuclear weapons, the Court acknowledged the argument advanced by some states that there had been a considerable period of time during which states in possession of nuclear weapons had refrained from using them.20 However, the Court concluded that ‘members of the international community are profoundly divided on the matter of whether non-recourse to nuclear weapons over the past 50 years constitutes expression of opinio juris [as to the existence of a prohibition on use].’21 The divided opinion as to the legal effect of non-use of nuclear weapons undermined any evidence of the requisite opinio juris to support the existence of a customary prohibition.22

5 Ibid [52].

6 Ibid [53].

7 Ibid [64].

8 Ibid [61]-[63].

9 Ibid [62].

10 Ibid [63].

11 Treaty on the Non-Proliferation of Nuclear Weapons, opened for signature 1 July 1968, 729 UNTS 161 (entered into force 5 March 1970) (‘NPT’).

12 Convention on the Prohibition of the Development, Production and Stockpiling of Bacteriological (Biological) and Toxin Weapons and on their Destruction, opened for signature 10 April 1972, 1015 UNTS 163 (entered into force 26 March 1975) (‘Biological Weapons Convention’).

13 Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on Their Destruction, opened for signature 13 January 1993, 1974 UNTS 45 (entered into force 29 April 1997) (‘Chemical Weapons Convention’).

14 NPT, Articles I and II. For more detail on the NPT as a comprehensive treaty ban for the overwhelming majority of States Parties, see T McCormack, ‘A Non-Liquet on Nuclear Weapons: The ICJ Avoids the Application of General Principles of International Humanitarian Law’ (1997) 316 International Review of the Red Cross 76.

15 See for instance: Implementation of the NPT Safeguards Agreement in the Islamic Republic of Iran, International Atomic Energy Agency Board of Governors, Resolution Adopted 4 February 2006 (Gov/2006/14); Implementation of the NPT Safeguards Agreement and Relevant Provisions of Security Council Resolutions 1737 (2006), 1747 (2007), 1803 (2008) and 1835 (2008) in the Islamic Republic of Iran, International Atomic Energy Agency Board of Governors, Resolution Adopted 27 November 2009 (Gov/2009/82); SC Res 1747, UN SCOR, 5647th mtg, UN Doc S/Res/1747 (24 March 2007); SC Res 1803, UN SCOR, 5848th mtg, UN Doc S/Res/1803 (3 March 2008); SC Res 1835, UN SCOR, 5984th mtg, UN Doc S/Res/1835 (27 September 2008); SC Res 1929, UN SCOR, 6335th mtg, UN Doc S/Res/1929 (9 June 2010).

16 SC Res 825, UN SCOR, 3212th mtg, UN Doc S/Res/825 (11 May 1993); Implementation of the NPT Safeguards Agreement Between the Agency and the Democratic People’s Republic of Korea, International Atomic Energy Agency General Conference Resolution adopted 20 September 2002 (GC(46)/RES/14); Implementation of the NPT Safeguards Agreement Between the Agency and the Democratic People’s Republic of Korea, International Atomic Energy Agency General Conference Resolution adopted 19 September 2003 (GC(47)/RES/12). For an overview of North Korea’s conduct with respect to the NPT, see: Application of Safeguards in the Democratic People’s Republic of Korea, Report by the Director General of the International Atomic Energy Agency to the Board of Governors and General Conference (2 September 2011, GOV/2011/53-GC(55)/24).

17 SC Res 1695, UN SCOR, 5490th mtg, UN Doc S/Res/1695 (15 July 2006); SC Res 1718, UN SCOR, 5551st mtg, UN Doc S/Res/1718 (14 October 2006); SC Res 1874, UN SCOR, 1874th mtg, UN Doc S/Res/1874 (12 June 2009).

18 Treaty on the Non-Proliferation of Nuclear Weapons, opened for signature 1 July 1968, 729 UNTS 161 (entered into force 5 March 1970) Article IX(3).

1.2 Application of other legal regimes to the use of nuclear weapons
Having found no specific prohibition on the use of nuclear weapons under conventional or customary international law, the Court considered whether there were other areas of international law from which a prohibition on the use of nuclear weapons might be derived. In this regard, the Court considered human rights law, the law pertaining to genocide and to environmental protection, and international humanitarian law. The Court found that human rights law and the law dealing with genocide and environmental protection may be used to determine that a particular use of a weapon is illegal in certain circumstances.23 However, a general prohibition on the use of nuclear weapons, and thus arguably of any other weapon, could not be derived from these areas of law.24 Although the Court recognised that general principles of IHL could theoretically render the use of a weapon illegal in all circumstances, it found that there is no general prohibition of nuclear weapons in IHL. Of course, any use of nuclear weapons in armed conflict must conform to the general principles of this body of law: principles which apply to the use of every category of weapon in armed conflict, irrespective of whether there is a specific prohibition on the use of that weapon.25 This aspect of the Court’s decision will be considered in more detail below. Suffice to say here that the Court considered that the use of nuclear weapons ‘seems scarcely reconcilable’ with the general principles of IHL.26 Nevertheless, the Court was unable to ‘conclude with certainty’ that the use of nuclear weapons would be at variance with such principles ‘in any circumstance.’27

19 Nuclear Weapons (Advisory Opinion), [1996] ICJ Rep 226, [61]-[63].

20 Ibid [67].

21 Ibid.

22 Ibid.

23 Ibid. See the Court’s decision at [25] (human rights law), [26] (genocide) and [33] (environmental protection law).

24 Ibid [25]-[26], [33].

25 Ibid [79].

26 Ibid [95].

1.3 The regulation of the use of weapons under international humanitarian law
IHL regulates the conduct of military hostilities in order to limit the effects of armed conflict, by restricting the methods and means of warfare and protecting persons who are not, or are no longer, participating in hostilities. The principles of IHL are codified in various treaties, principally the Hague and Geneva Conventions.28 In addition, as the ICJ noted in the Nuclear Weapons case, many of these rules also ‘constitute intransgressible principles of international customary law.’29 In relation to the application of IHL to specific weapons, Judge Higgins stated in the Nuclear Weapons Advisory Opinion that:

The role of humanitarian law (in contrast to treaties of specific weapon prohibition) is to prescribe legal requirements of conduct rather than to give “generalized” answers about particular weapons. I do not, however, exclude the possibility that such a weapon could be unlawful by reference to the humanitarian law, if its use could never comply with its requirements — no matter what specific type within that class of weapon was being used and no matter where it might be used.30

While Judge Higgins dissented in the case, her position on the applicability of IHL seems to be consistent with that of the majority opinion. Further, the Court noted that general principles of IHL do not solely regulate ‘weaponry of an earlier time’.31 Instead, ‘the intrinsically humanitarian character of the legal principles in question … permeates the entire law of armed conflict and applies to all forms of warfare and to all kinds of weapons, those of the past, those of the present and those of the future.’32 The Court considered that, in the absence of a specific prohibition, the use of a particular weapon could still be illegal if that use always violated one of the general principles of IHL, considered below.

27 Ibid.

28 For instance, Hague Convention (IV) Respecting the Laws and Customs of War on Land and Its Annex: Regulations Concerning the Laws and Customs of War on Land, adopted 18 October 1907 (entered into force 26 January 1910); Geneva Convention (I) for the Amelioration of the Condition of Wounded and Sick in Armed Forces in the Field, opened for signature 12 August 1949, 75 UNTS 85 (entered into force 21 October 1950); Geneva Convention (II) for the Amelioration of the Condition of Wounded, Sick and Shipwrecked Members of Armed Forces at Sea, opened for signature 12 August 1949, 75 UNTS 85 (entered into force 21 October 1950); Geneva Convention (III) Relative to the Treatment of Prisoners of War, opened for signature 12 August 1949, 75 UNTS 135 (entered into force 21 October 1950); Geneva Convention (IV) Relative to the Protection of Civilian Persons in Time of War, opened for signature 12 August 1949, 75 UNTS 287 (entered into force 21 October 1950); Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts, opened for signature 12 December 1977, 1125 UNTS 3 (entered into force 7 December 1978) (‘Additional Protocol I’).

29 Nuclear Weapons (Advisory Opinion), [1996] ICJ Rep 226, [79].

30 Ibid [26] (Dissenting Opinion of Judge Higgins).

31 Ibid [86], quoting the written statement from New Zealand. See also Dissenting Opinion of Judge Weeramantry at 444.

1.4 Prohibition on causing superfluous injury and unnecessary suffering
One of the general principles of IHL considered by the Court was the customary prohibition on the use of weapons which cause superfluous injury or unnecessary suffering to combatants.33 In his dissenting opinion, Judge Shahabuddeen explained the operation of the prohibition as follows: ‘the use of a weapon which caused the kind of suffering that poison gas caused was simply repugnant to the public conscience, and so unacceptable to States whatever might be the military advantage sought to be achieved.’34 He further gave the example of dum-dum bullets, which were ‘deliberately crafted so as to cause unnecessary suffering’, and which fell foul of this prohibition.35 The prohibition is codified in Article 35(2) of the 1977 Additional Protocol I to the Geneva Conventions, and also in Article 23 of the 1907 Hague Regulations Respecting the Laws and Customs of War on Land.36 It is also universally recognised as a customary norm in both international and noninternational armed conflicts.37 The Rome Statute, for example, includes the war crime of ‘employing weapons, projectiles and material and methods of warfare which are of a nature to cause superfluous injury or unnecessary suffering’ in the subject matter jurisdiction of the International Criminal Court.38 Many assume that the prohibition is devoid of practical meaning in the absence of a multilateral agreement amongst states prohibiting a category of weapons.39 No category of weapon has been explicitly prohibited by the

32 Nuclear Weapons (Advisory Opinion), [1996] ICJ Rep 226, [86].
33 Ibid [78].
34 Ibid 403 (Dissenting Opinion of Judge Shahabuddeen).
35 Ibid. See also International Declaration Respecting Expanding Bullets, signed at The Hague, 29 July 1899 (entered into force 4 September 1900).
36 Additional Protocol I; Hague Convention (IV) Respecting the Laws and Customs of War on Land and Its Annex: Regulations Concerning the Laws and Customs of War on Land, adopted 18 October 1907 (entered into force 26 January 1910).
37 Jean-Marie Henckaerts and Louise Doswald-Beck, Customary International Humanitarian Law, Volume I: Rules (Cambridge University Press, 2005) 237.
38 Rome Statute of the International Criminal Court, opened for signature 17 July 1998, 2187 UNTS 90 (entered into force 1 July 2002). See Article 8(2)(b)(xx) in the context of an international armed conflict.
39 See for instance the submissions of France and Russia to the ICJ in the Nuclear Weapons case: ‘Written Statement and Comments of the Russian Federation on the Issue of the Legality of the Threat or Use of Nuclear Weapons’, Submission in Nuclear Weapons (Advisory Opinion), [1996] ICJ Rep 226, 19 June 1995, 13; French submission cited in Henckaerts and Doswald-Beck, above n 37, 243.

EAP 7

Regulating the Use of Unmanned Combat Vehicles

81

international community on the basis of the customary rule reflected in Article 35(2) of Additional Protocol I. However, anecdotal evidence suggests that some states have unilaterally prohibited the development of certain weapons systems on the basis of legal opinion that such weapons would offend the customary prohibition.40 Details of proposed new weapons systems are usually classified and kept out of the public domain for commercial-in-confidence or national security reasons (or both), so it is impossible to substantiate the application of the customary prohibition to prevent the development of new weapons systems. In any case, for present purposes that impossibility is immaterial. The potential for low-yield tactical nuclear weapons to be used consistently with general principles of IHL is sufficient to find that nuclear weapons are not ipso facto of a nature to cause superfluous injury or unnecessary suffering. Indeed, the Court was unable to find that nuclear weapons automatically fell foul of this customary prohibition, although it stopped short of identifying specific scenarios in which nuclear weapons may be used in ways which do not cause superfluous injury or unnecessary suffering.41

1.5 The Martens Clause
The Court also referred to the so-called Martens Clause as an ‘effective means of addressing the rapid evolution of military technology.’42 The Martens Clause was originally included in the 1899 Hague Convention II with Respect to the Laws and Customs of War on Land,43 and a revised version was subsequently included in the 1977 Additional Protocol I as follows:

In cases not covered by this Protocol or by other international agreements, civilians and combatants remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from the dictates of public conscience.44

40 For instance, Colonel Hays Parks, formerly of the US military, has stated several times in public fora that when he was responsible for the legal review of proposed new weapons systems at the Pentagon he personally refused to authorise a number of proposed new systems on the basis that he believed they would violate the customary prohibition (Tim McCormack’s recollections of Hays Parks’ comments at a number of IHL and weapons conferences).
41 Nuclear Weapons (Advisory Opinion), [1996] ICJ Rep 226, [95].
42 Ibid [78].
43 Convention II with Respect to the Laws and Customs of War on Land and Its Annex: Regulation Concerning the Laws and Customs of War on Land, adopted 29 July 1899 (entered into force 4 September 1900), Preamble. See also Hague Convention (IV) Respecting the Laws and Customs of War on Land and Its Annex: Regulations Concerning the Laws and Customs of War on Land, adopted 18 October 1907 (entered into force 26 January 1910) Preamble.
44 Additional Protocol I, Article 1(2).


JLIS Special Edition: The Law of Unmanned Vehicles


The Court noted that the Martens Clause, as expressed in Additional Protocol I, was an ‘expression of pre-existing customary law’.45 The legal effects of the Martens Clause have been subject to a number of different interpretations, as demonstrated by the variety of opinions expressed in the oral and written submissions of various states in the Nuclear Weapons case.46 At its widest, the Martens Clause has been interpreted to mean that, in the absence of a treaty provision covering particular conduct, conduct during an armed conflict will be subject to customary rules and to the requirements of humanity and public conscience.47 The Court did not specifically consider how the Martens Clause would apply to nuclear weapons, and thus provided little guidance on the scope of the Clause. In the International Criminal Tribunal for the former Yugoslavia (ICTY) case of Prosecutor v Kupreskic, the Trial Chamber noted that:

This Clause may not be taken to mean that the “principles of humanity” and the “dictates of public conscience” have been elevated to the rank of independent sources of international law, for this conclusion is belied by international practice. However, this clause enjoins, as a minimum, reference to those principles and dictates any time a rule of international humanitarian law is not sufficiently rigorous or precise: in those instances the scope and purport of the rule must be defined with reference to those principles and dictates.48

Meron argues that the decision of the ICJ in the Nuclear Weapons case indicates that while the Martens clause ‘should be taken into consideration in evaluating the legality of weapons and methods of war … except in extreme cases, its references to principles of humanity and dictates of public conscience cannot, alone, delegitimize weapons and methods of war, especially in contested cases.’49

45 Nuclear Weapons (Advisory Opinion), [1996] ICJ Rep 226, [84].
46 See, for instance, ‘Written Statement and Comments of the Russian Federation on the Issue of the Legality of the Threat or Use of Nuclear Weapons’, Submission in Nuclear Weapons (Advisory Opinion), [1996] ICJ Rep 226, 19 June 1995, 13; ‘Letter Dated 16 June 1995 From the Legal Adviser to the Foreign and Commonwealth Office of the United Kingdom of Great Britain and Northern Ireland, together with Written Comments of the United Kingdom’, Submission in Nuclear Weapons (Advisory Opinion), [1996] ICJ Rep 226, 16 June 1995, 3.58.
47 See Rupert Ticehurst, ‘The Martens Clause and the Laws of Armed Conflict’ (1997) 317 International Review of the Red Cross; Antonio Cassese, ‘The Martens Clause: Half a Loaf or Simply Pie in the Sky?’ (2000) 11(1) European Journal of International Law 187, 191, doi: 10.1093/ejil/11.1.187.
48 Prosecutor v Kupreskic et al (Judgment) (International Criminal Tribunal for the Former Yugoslavia, Trial Chamber II, Case No IT-95-16-T, 14 January 2000) § 525.
49 Theodor Meron, ‘The Martens Clause, Principles of Humanity, and Dictates of Public Conscience’ (2000) 94(1) American Journal of International Law 78, 88.

1.6 Obligation to distinguish and prohibition on indiscriminate attacks
One of the most fundamental principles of IHL — integral to the raison d’être of this body of law and characterised in Additional Protocol I as a ‘basic rule’50 — is the distinction between civilians and combatants. Civilians and civilian objects must never be the object of an attack and indiscriminate attacks are prohibited.51 Indiscriminate attacks include those in which the chosen weapons are incapable of distinguishing between military and civilian objects. The ICJ referred to this rule as a ‘cardinal’ principle of IHL.52

A number of states in the Nuclear Weapons case argued that nuclear weapons are unable to distinguish between civilians and combatants, and that their effects are largely uncontrollable, and cannot be restricted. They thus argued that recourse to nuclear weapons could never be compatible with the principles and rules of humanitarian law and must therefore be considered prohibited.53 This is a readily comprehensible position. Any use of a nuclear weapon against a military objective in close physical proximity to civilians and civilian property will surely constitute an indiscriminate attack. Even if the blast effects of the detonation of a low-yield nuclear weapon could be confined to the military objective, the release of radioactivity could not be. The devastating effects of radioactive fallout from accidents at nuclear power plants in Chernobyl and Fukushima are testament enough to this reality — even without contemplating the effects of the actual use of nuclear weapons against the cities of Hiroshima and Nagasaki.

It is presumably this reality that led the majority of the ICJ to opine that any use of nuclear weapons would be ‘scarcely reconcilable’ with the general principles of IHL.54 The characterisation ‘scarcely reconcilable’, however, suggests the possibility, however remote, that some use may be reconcilable with the general principles of IHL. The Court did not identify any such possibility although two scenarios are regularly cited.
Both involve low-yield tactical nuclear weapons deployed against targets far removed from the civilian population. The first scenario involves an attack on a naval vessel on the high seas and the second involves an attack on an underground military facility in a physical location remote from civilian residential areas (a desert or other uninhabited location). In both scenarios there may still be issues of radioactive contamination, but the theoretical possibility of conformity to general principles of IHL remains. Intriguingly, the Court did not rely on the possibility of conformity with general principles to reach its majority view that it could not declare the threat or use of nuclear weapons illegal in all circumstances. Instead, after declaring any use ‘scarcely reconcilable’ with

50 Additional Protocol I, Article 48.
51 Ibid, Article 51(2) and (4).
52 Nuclear Weapons (Advisory Opinion), [1996] ICJ Rep 226, [78].
53 Ibid [92].
54 Ibid [95].


general principles, the Court leaps to the jus ad bellum — to cases of self-defence in extremis where the very survival of the state is at stake — to justify its non-finding.55 That aspect of the decision has been criticised elsewhere.56 Given the Court’s finding that even nuclear weapons are not inherently incapable of distinguishing between combatants and civilians, it is extremely unlikely that any other category of weapon will cross this threshold of illegality. Thus it may be said that the rule on distinction acts to regulate the use of weapons, rather than constituting a blanket prohibition. It follows that any indiscriminate use of a weapon in armed conflict is a war crime, and that it is the particular circumstances of the attack, rather than the specifics of the weapon of choice, which constitute the serious violation of IHL.

2 A Specific Prohibition on Unmanned Combat Aerial Vehicles?

There is no specific international law prohibition — conventional or customary — on the use of unmanned combat vehicles (UCVs). Whilst there are a few international treaties that deal with unmanned vehicles, none of them provides anything that remotely resembles a prohibition on their use. The most prominent of these agreements are the non-binding Missile Technology Control Regime (MTCR) and the Wassenaar Arrangement, both of which are export control agreements.57 The MTCR seeks to coordinate national export licensing in order to, inter alia, prevent the proliferation of unmanned delivery systems capable of delivering weapons of mass destruction.58 The Wassenaar Arrangement similarly seeks to prevent destabilising accumulations of, inter alia, unmanned vehicle (UV) technology, by promoting ‘transparency and greater responsibility in transfers of conventional arms and dual-use goods and technologies.’59 Some commentators have suggested that unmanned aerial vehicles may also be covered by the 1987 Intermediate-Range Nuclear Forces Treaty,60 due to their

55 Ibid [97].
56 See, for example, McCormack, above n 14, 91.
57 On these agreements, see Missile Technology Control Regime, Introduction <http://www.mtcr.info/english/index.html>; and Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies (‘Wassenaar Arrangement’), Introduction (3 November 2011) <http://www.wassenaar.org/introduction/index.html>.
58 See generally, Missile Technology Control Regime, Objectives of the MTCR <http://www.mtcr.info/english/index.html>.
59 Wassenaar Arrangement, Introduction (3 November 2011) <http://www.wassenaar.org/introduction/index.html>.
60 Treaty Between the United States of America and the Union of Soviet Socialist Republics on the Elimination of Their Intermediate-Range and Shorter-Range Missiles, signed 8 December 1987, 27 ILM 183 (entered into force 1 June 1988) (‘INF Treaty’). The US Department of Defence suggested in 2001 that the development of UCAVs would need to comply with the INF Treaty. See US Office of the Secretary of Defence, Unmanned Aerial Vehicles Roadmap 2000-2025 (April 2001) Section 6.4.3 Treaty Considerations; Laurence R Newcome, Unmanned Aviation: A Brief History of Unmanned Aerial Vehicles (American Institute of Aeronautics and Astronautics, 2004) 6; Dennis Gormley and Richard Speier, Controlling Unmanned Air Vehicles: New Challenges (19 March 2003) Paper Commissioned by the Non-Proliferation Education Center <cns.miis.edu/npr/pdfs/102gorm.pdf>.


similarity to cruise missiles, or by the 1992 Treaty on Conventional Armed Forces in Europe, as they fall within the definition of ‘combat aircraft.’61 While both of these treaties seek to limit the overall number of particular weapons possessed by member states, neither imposes particularly onerous limits, even assuming unmanned aerial vehicles are covered. In addition, both treaties have only limited participation (the US, European states and Russia). The absence of a specific treaty prohibition is congruent with the lack of any evidence that UCVs are prohibited by a customary norm. The widespread use of UCVs in armed conflict (there are currently hundreds of UCVs in use in Iraq, Afghanistan, Pakistan and the Occupied Palestinian Territories,62 for example, and more than 40 countries are reportedly developing or purchasing military robots63) suggests that states do not believe that UCVs are prohibited by customary international law.

2.1 General principles of international humanitarian law regulating the use of UCAVs
Unmanned combat aerial vehicles (UCAVs) are capable of carrying a variety of different munitions, including Hellfire missiles; guided bomb units (GBU) such as the Paveway 12; and lighter missiles, including the 33kg fibre-optic guided Rafael Spike-ER.64 In addition, commentators have noted that UCAVs could be used as platforms for launching nuclear weapons, as well as chemical and biological weapons.65 It is thus necessary to distinguish between the weapons carried by the UCAV, and the unmanned platform itself. As Boothby notes, ‘the weapon that is being carried on and used by, or guided

61 Treaty on Conventional Armed Forces in Europe, signed 19 November 1990, 30 ILM 1 (entered into force 17 July 1992). See Jurgen Altmann, ‘Preventive Arms Control for Uninhabited Military Vehicles’ (2009) Ethics and Robotics 69; Laurence R Newcome, Unmanned Aviation: A Brief History of Unmanned Aerial Vehicles (American Institute of Aeronautics and Astronautics, 2004) 6; Gormley and Speier, above n 60, 7.
62 Peter Singer, ‘Robots at War: The New Battlefield’ (2009) 33(1) Wilson Quarterly 30, 31-32.
63 Noel Sharkey, ‘Saying “No!” to Lethal Autonomous Targeting’ (2010) 9(4) Journal of Military Ethics 369, 370, <http://dx.doi.org/10.1080/15027570.2010.537903>; Ibid.
64 See Roy Braybrook, ‘Strike Drones: Persistent, Precise and Plausible’ (2009) 33(4) Armada International 21, 21; United Kingdom Ministry of Defence, Joint Doctrine Note 2/11, The UK Approach to Unmanned Aircraft Systems (30 March 2011) <http://www.mod.uk/DefenceInternet/MicroSite/DCDC/OurPublications/JDNP/Jdn211TheUkApproachToUnmannedAircraftSystems.htm>.
65 Philip Alston, Interim Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, UN Doc A/65/321 (23 August 2010) [40].


by, the UCV will need to be the subject of a separate legal review.’66 Thus, as long as overall operational control and the decision to use lethal force remain with a human controller, UCAVs do not appear to raise any issues separate from those raised by the weapons systems attached. Evidently, there is no basis for arguing that UCAV technology currently in use is inherently of a ‘nature to cause superfluous injury or unnecessary suffering’ or that the ‘dictates of public conscience’ militate against the use of UCAVs. In addition, the weapons deployed on UCAVs in armed conflict to date have generally been considered ‘precision’ weapons, capable of a high degree of accuracy and able to home in on the intended target — the quintessential embodiment of the rule of distinction.

In 2009, Human Rights Watch, for example, issued a report into the use of drone-launched missiles by Israel in the Gaza Strip.67 This report noted that Israeli drones are equipped with high-resolution cameras and advanced sensors, and that the ‘missile launched from a drone carries its own cameras that allow the operator to observe the target from the moment of firing to impact’ and divert the weapon ‘if doubts arise about a target after the missile has been launched.’68 The report continued, ‘with these advanced visual capabilities, drone operators who exercised the proper degree of care should have been able to tell the difference between legitimate targets and civilians [emphasis added].’69 Finally, the report considered that given ‘the weapon’s highly discriminate nature’ authorities should review every mission ‘involving drone-launched missiles in which civilians were wounded or killed.’70 Similarly, the Goldstone Report, which investigated possible breaches of IHL in the Gaza conflict, stated that the UCAVs used by Israel in the 27 December 2008 attack against the Namar wells group ‘are capable of a high degree of precision.’71 There can be no suggestion that the UCAVs currently in use in armed conflict are incapable of discriminating between civilians and combatants.

Arguably, the more significant issue is the question of accountability for breaches of IHL when UCAVs are used (or the weapons they are carrying are deployed) in an indiscriminate attack. According to Michael Schmitt:

66 William H Boothby, Weapons and the Law of Armed Conflict (Oxford University Press, 2009) 230.
67 Human Rights Watch, Precisely Wrong: Gaza Civilians Killed by Israeli Drone-Launched Missiles (30 June 2009), 4 <http://www.hrw.org/en/reports/2009/06/30/precisely-wrong-0>.
68 Ibid.
69 Ibid.
70 Ibid.
71 United Nations Human Rights Council, Human Rights in Palestine and Other Occupied Arab Territories, Report of the United Nations Fact-Finding Mission on the Gaza Conflict, A/HRC/12/48 (24 September 2009) 985.


Since drones employ precision guided munitions such as laser-guided missiles or the JDAM, they are self-evidently not indiscriminate means of warfare. On the contrary, they are far more capable of being aimed at targets than many other weapons systems commonly employed on the battlefield. However, the indiscriminate use of a discriminate weapon is unlawful.72

We will return to the issue of accountability for breaches of IHL in the use of UCAVs later.

2.2 The rule on proportionality
In addition to distinguishing between combatants and civilians and avoiding indiscriminate attacks, military decision makers must also respect the rule on proportionality. Whenever an attack is directed at a military objective, any expected impact of that attack on civilians or civilian objects must be proportionate to the expected military gain from the attack.73 The rule on proportionality, another cardinal principle of IHL, is codified in Additional Protocol I, and is also a well-established principle of customary international law.74 According to Boothby:

This proportionality rule has, however, no direct applicability to the legitimacy of a weapon. It is not a criterion against which the legitimacy of a weapon can sensibly be considered, because what is proportionate can only meaningfully be determined in relation to an attack on a particular occasion, perhaps at a specific time, using particular weapons and specified attack profiles. The case-specific nature of these factors means that the proportionality rule is not something of direct relevance to weapons law.75

It is most likely for this reason that the Court in the Nuclear Weapons case did not consider the principle of proportionality in any detail in its analysis of whether nuclear weapons are illegal per se. The analysis here is similar to that for the rule on distinction and the prohibition on indiscriminate attacks. Any use of high-yield nuclear weapons against or in close physical proximity to a densely populated civilian residential area would surely constitute disproportionate military force. The expected widespread deaths of civilians from blast and heat, the expected longer-term consequences of exposure of

72 Michael N Schmitt, ‘Drone Attacks Under the Jus ad Bellum and Jus in Bello: Clearing the Fog of Law’ (2010) 13 Yearbook of International Humanitarian Law 311, 321, doi: 10.1007/978-90-6704-811-8_9.
73 Nuclear Weapons (Advisory Opinion), [1996] ICJ Rep 226, [20] (Dissenting Opinion of Judge Higgins); HCJ 769/02, The Public Committee against Torture in Israel v The Government of Israel (11 December 2005) 42-43.
74 HCJ 769/02, The Public Committee against Torture in Israel v The Government of Israel (11 December 2005) 42; Prosecutor v Kupreskic et al (Judgment) (International Criminal Tribunal for the Former Yugoslavia, Trial Chamber II, Case No IT-95-16, 14 January 2000).
75 Boothby, above n 66, 79.


survivors to radioactive contamination, and the expected destruction of civilian property and radioactive contamination of civilian land, infrastructure and buildings would clearly be excessive in relation to any expected military advantage. Although nuclear holocaust is a quintessentially disproportionate scenario, nuclear weapons could also theoretically be used entirely consistently with the rule on proportionality (as per the scenarios outlined above). The relevant test for proportionality is not applied to the technical specifications of the particular weapon, but to the manner in which it is used. That is as true for nuclear weapons as it is for UCAVs and for all the various weapons systems that can be deployed on a UCAV. Any disproportionate use of weapons deployed through the medium of a UCAV would constitute a war crime, and those responsible for the disproportionate attack could be held criminally responsible for the violation of IHL.

In 2005, in a case commonly referred to as the ‘targeted killings’ case, the Israeli Supreme Court considered the legality of the policy of ‘preventative strikes’ carried out by Israel against Palestinian militants in the Occupied Palestinian Territories.76 Some of these strikes have been carried out using drones.77 The plaintiffs in that case argued that the ‘targeted killings policy does not withstand the proportionality requirement,’ and ‘does not discriminate between terrorists and innocent persons.’78 The Court considered that IHL applied to the killings,79 and concluded that ‘we cannot determine that a preventative strike is always legal, just as we cannot determine that it is always illegal.’80 Instead, the Court emphasised that proportionality must be judged on a ‘case by case’ basis.81 This decision indicates that the drones currently in use by Israel do not, in and of themselves, violate the principle of proportionality. Rather, the relevant consideration is the way they are used in particular circumstances.

2.3 Future developments of UCVs and proportionality
The most remarkable characteristic of UV technology is its potential to be deployed in the place of a human decision maker.82 This aspect of the technology is what distinguishes it from traditional weapons, and perhaps

76 HCJ 769/02, The Public Committee against Torture in Israel v The Government of Israel (11 December 2005).
77 Human Rights Watch, above n 67, 4.
78 HCJ 769/02, The Public Committee against Torture in Israel v The Government of Israel (11 December 2005) 8.
79 Ibid 21.
80 Ibid 60.
81 Ibid 46.
82 For an overview of the current state of UV technology, and the potential for autonomy, see Sharkey, above n 63; Singer, above n 62, 36; Alston, above n 65; United Kingdom Ministry of Defence, above n 64, 5-4.


what requires a consideration of the principle of proportionality when determining its legality. As various commentators have argued, UCVs will gradually become more autonomous and will one day be capable of operating autonomously ‘to locate their own targets and destroy them without human intervention.’83 Estimates of when this will be possible generally range between 5 and 15 years, although some consider that it will take significantly longer.84 The UK Ministry of Defence has stated that it ‘currently has no intention to develop systems that operate without human intervention in the weapon command and control chain.’85 Similarly, the United States Air Force has stated that the decision to authorise ‘a machine to make lethal combat decisions is contingent upon political and military leaders resolving legal and ethical questions.’86 Nevertheless, if a weapon is designed to make targeting decisions without human oversight, it must be capable of accurately distinguishing between combatants and civilians, and also of determining when an attack will be proportionate. If it is never capable of doing so, then, using the Court’s reasoning in the Nuclear Weapons case, its use must be considered illegal. 
Sharkey has argued that ‘there is no way for a robot to perform the human subjective balancing act required to make proportionality decisions.’87 He points out that there is no clear, objective method for determining what is proportionate, and that it is difficult, if not impossible, to ‘calculate a value’ for the actual military advantage gained from any particular attack.88 He argues that UVs would face similar problems with the related principle of distinction, discussed above, due to the difficulties in determining who constitutes a civilian in modern armed conflict.89

On the other hand, Alston has pointed out that robots may be able to use lethal force more conservatively than humans, and reduce collateral damage and other mistakes made by humans: ‘they may also be able to avoid mistakes or harm resulting from human emotions or states, such as fear, tiredness, and the desire for revenge.’90 Whether this is the case remains to be seen. In any event, it is worth recalling that Additional Protocol I contains an obligation on state parties to determine whether the use of a new weapon would ‘in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable.’91

83 Sharkey, above n 63, 376.
84 United Kingdom Ministry of Defence, above n 64, 5-4.
85 Ibid 5-10.
86 United States Air Force, Unmanned Aircraft Systems Flight Plan 2009-2047 (18 May 2009), 41 <http://www.fas.org/irp/program/collect/uas_2009.pdf>.
87 Sharkey, above n 63, 369.
88 Ibid 380.
89 Ibid 379.
90 Alston, above n 65, [30]; see also Singer, above n 62, 36.
91 Additional Protocol I, Article 36.


4 Limitations to General Principles of IHL

Whatever the future developments in relation to new UCV technologies, there are myriad sources of discomfort in the increasing contemporary utilisation of UCVs — particularly UCAVs — to launch lethal attacks. The theoretical possibility that the deployment of lethal force by a UCAV can be undertaken entirely consistently with general principles of IHL will not assuage concerns about the widespread use of UCAV technology. By this we do not mean to imply that compliance with IHL is only a remote theoretical possibility. It may well be the case that many, perhaps even a majority, of killings by pilotless drones are perpetrated lawfully. Rather, our suggestion here is that there is a widely held view that the legality of a substantial proportion of such killings is dubious. The fact that general principles of IHL apply to the use of UCAVs in armed conflict, and the possibilities of criminal accountability for violation of those principles, are insufficient in and of themselves to assuage existing concerns.

The perceived limitations of general principles of IHL as the primary source of legal regulation of a category of weapons are not restricted to the utilisation of UCAV technology. Many states came to the view that general principles of IHL were inadequate to address the deleterious humanitarian consequences of the widespread use of both anti-personnel landmines and cluster munitions, for example.92 There have certainly been instances of ‘responsible’ deployment of anti-personnel landmines and cluster munitions — discriminately, proportionately and with optimum care taken to minimise civilian casualties — that were entirely consistent with general principles of IHL. These examples demonstrate that it is possible to deploy both categories of weapon in strict compliance with IHL.
However, for every example of ‘responsible’ use, there seem to be many more examples of widespread deployment of these weapons indiscriminately, disproportionately or otherwise carelessly, with diabolical consequences for affected civilian populations. In relation to cluster munitions, for example, in March 2006 the following conclusion was presented to the Group of Governmental Experts (GGE) to the Certain Conventional Weapons Convention discussions on Protocol V and the issue of explosive remnants of war (ERW) at the UN in Geneva:

It is our conclusion that Protocol V to the CCW and the existing rules of IHL are specific and comprehensive enough to deal adequately with the problem of ERW provided that those rules are effectively implemented. That proviso is an important one. It is not adequate for States that want to use cluster munitions, for example, simply to assert that their use of such weapons is consistent with general “principles” of IHL without a

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
92

See Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on Their Destruction, opened for signature 3 December 1997, 2056 UNTS 211 (entered into force 1 March 1999); Convention on Cluster Munitions, opened for signature 3 December 2008 (entered into force 1 August 2010).

!

EAP 17

Regulating the Use of Unmanned Combat Vehicles

91

genuine commitment to implement the binding legal rules effectively. Increasingly demands are made for independent scrutiny of choices of weapon, selection of targets and the conduct of military operations. Furthermore, there is a growing international expectation that those responsible for violations of the law will be held criminally accountable and will not be allowed to experience impunity for their crimes. There is much the GGE can do to encourage States Parties to the CCW to take their existing IHL obligations more seriously — including the implementation of effective measures for enforcement of violations. It is surely the case that if, following the adoption of Protocol V, the ERW problem only increases in severity and in its threat to civilian populations affected by armed conflict, many in the international community will argue for a more specific and substantive response — including, perhaps, a treaty ban on cluster munitions. The onus is on user States to demonstrate that such weapons can be used consistently with the binding obligations of IHL.93 Within four months of the presentation of this conclusion in Geneva, violence erupted in Southern Lebanon and Northern Israel between Hezbollah and Israeli military forces. 
One feature of the conflict was the controversial and widespread use of cluster munitions by Israeli forces across swathes of arable land in Southern Lebanon.94 The unusually high rate of unexploded ordnance (by some estimates up to 40% of sub-munitions) constituted a humanitarian crisis as the civilian population sought to return to towns and villages in the affected area post-conflict.95 Borrie claimed that this conflict ‘provided irrefutable evidence that cluster munitions are deeply problematic weapons in terms of their impact on civilians, even when used by a professionally trained army intimately familiar with International Humanitarian Law requirements.’96 The adverse humanitarian consequences of this particular use of cluster munitions also provided a major impetus for successful multilateral negotiations for a comprehensive treaty prohibition on cluster munitions.97

93 Tim McCormack, Sarah Finnin and Paramdeep Mtharu, International Humanitarian Law and Explosive Remnants of War: Report on States Parties’ Responses to the Questionnaire, presented to the Group of Governmental Experts’ Working Group on Explosive Remnants of War of the States Parties to the 1980 Certain Conventional Weapons Convention (United Nations, March 2006) 49.

94 See Human Rights Watch, Flooding South Lebanon: Israel’s Use of Cluster Munitions in Lebanon in July and August 2006 (16 February 2008) <http://www.hrw.org/sites/default/files/reports/lebanon0208webwcover.pdf>.

95 Ibid 44-48.

96 John Borrie, ‘The “Long Year”: Emerging International Efforts to Address the Humanitarian Impacts of Cluster Munitions, 2006-2007’ (2007) 10 Yearbook of International Humanitarian Law 251, 259.

97 See generally, ibid; Nout van Woudenberg, ‘The Long and Winding Road Towards an Instrument on Cluster Munitions’ (2008) 12(3) Journal of Conflict and Security Law 447.

We are not suggesting here that contemporary use of UCAV technology for attacks is directly analogous to the humanitarian impact of the widespread deployment of cluster munitions in the latter half of 2006. Rather, the example of one major catalyst resulting in the negotiation of a multilateral treaty ban on cluster munitions is illustrative of what can happen when the international community develops the view that general principles of IHL are not an adequate constraint on the use (or misuse) of a particular category of weapon. If the current use of UCAVs continues to escalate, those engaged in the use of the technology and its weapons systems will likely experience increasing pressure for greater transparency, accountability and possibly specific legal regulation of the use of the technology.

5 Covert Operations

Intriguingly, though, discomfort about the increasing reliance on UCAVs is generally not focused on the military utilisation of the technology in the theatre of operations. It is widely accepted that such use is regulated by IHL; that militaries utilising the technology usually attempt to ensure that each UCAV attack is consistent with IHL; and that many such attacks are scrutinised both internally (within military structures) and externally by NGOs, the media and other observers who raise concerns about the relevant rules of IHL when appropriate. It is also expected that whenever there are allegations of serious violations of IHL in relation to a UCAV-launched attack, the incident will be investigated and, if appropriate in the circumstances, charges laid. At the very least, the causing of incidental civilian deaths (even if technically permitted under IHL) will result in extensive criticism and widespread calls for transparent accountability for those responsible.

Much more discomfort emanates from the almost complete lack of transparency in the covert use of UCAV technology. Jane Mayer articulated the problem in her exposé of the CIA drone program.98 Mary Ellen O’Connell elsewhere in this issue explains that ‘[a]t least in Afghanistan … the US use of drones has been justifiable so long as the rules governing battlefield conduct have been observed’.99 In her opinion, the real problem is what she characterises as the ‘seductive’ use of drone technology to inflict death in places geographically far removed from the battlefield in circumstances where previously no lethal force would have been used.100

98 Jane Mayer, ‘The Predator War: What Are the Risks of the CIA’s Covert Drone Program?’ The New Yorker (New York) 26 October 2009. For a comprehensive criticism of the CIA’s drone program, see Philip Alston, ‘The CIA and Targeted Killing Beyond Borders’ (Working Paper No 11-64, New York University School of Law, Public Law and Legal Theory Research Paper Series, 2011).

99 Mary Ellen O’Connell, ‘Seductive Drones: Learning from a Decade of Lethal Operations’ (2011) 21(2) Journal of Law, Information and Science, EAP 10, doi: 10.5778/JLIS.2011.21.OConnell.1.

100 Ibid 26-27.
It seems that the Bush Administration strategically characterised its response to the 9/11 attacks as a (self-declared) ‘Global War on Terror’ in order to benefit from the permissive legal regime of IHL. IHL is permissive in the sense that it allows the deliberate targeting of combatants and also the incidental deaths of some civilians as ‘collateral’ damage. Outside of armed conflict (and the IHL legal regime), such deaths would best be characterised as extrajudicial killings, or murder. The characterisation of the Global War on Terror has continued under the Obama Administration, which considers itself to be in an ‘armed conflict with al-Qaeda, as well as the Taliban and associated forces.’101 The IHL legal framework is now generally used to justify drone attacks in geographic locations far removed from theatres of military operation.102 The legal requirement of an armed conflict as the condition precedent to the application of IHL as the relevant regulatory regime is conveniently overlooked.103 Instead, alleged terrorists, wherever they happen to be in the world, are apparently fair game, and a Hellfire missile fired from a CIA drone seems to be an acceptable means of dealing with them.

This appeared to be the position taken by Harold Koh in an oft-cited speech delivered in 2010, in which he explained the legal basis for the US drone attacks and the applicable legal principles governing such attacks.104 There have, however, been some attempts to interpret Koh’s speech as ‘reaffirming’ and ‘reinvigorating’ the ‘traditional’ US view that national self-defence is the appropriate legal framework.105 Kenneth Anderson, for instance, has argued that IHL does not apply to targeted killings carried out by UCAVs.106 Indeed, Anderson has stated ‘in my understanding there is not technically speaking an “armed conflict” as such.’107 Rather, he argues that

101 See Harold Koh, ‘The Obama Administration and International Law’ (Speech delivered at the Annual Meeting of the American Society of International Law, Washington, DC, 25 March 2010) <http://www.state.gov/s/l/releases/remarks/139119.htm>.

102 The application of IHL to the ‘war on terror’ has been widely criticised. See Gabor Rona, ‘Legal Frameworks to Combat Terrorism: An Abundant Inventory of Existing Tools’ (2004-2005) 5 Chicago Journal of International Law 499, 505; Human Rights Watch, ‘Targeted Killings and Unmanned Combat Aircraft Systems (Drones)’ (7 December 2010) open letter to President Barack Obama <http://www.hrw.org/sites/default/files/related_material/Letter%20to%20President%20Obama%20-%20Targeted%20Killings%20(1).pdf>. But for an analysis of the application of IHL to the CIA program see Ian Henderson, ‘Civilian Intelligence Agencies and the Use of Armed Drones’ (2010) 13 Yearbook of International Humanitarian Law 133.

103 Codified in, for instance, common article 2 of the four Geneva Conventions.

104 Koh, above n 101.

105 Kenneth Anderson, ‘Drones II’, Testimony submitted to US House of Representatives Committee on Oversight and Government Reform, Subcommittee on National Security and Foreign Affairs, Second Hearing on Drone Warfare, 28 April 2010, 13, 15.

106 Ibid 14.

107 Ibid footnote 6.

these attacks are carried out as ‘self-defence’ and that therefore the customary laws that regulate the use of self-defence apply (these are distinct from, but similar to, the principles at the foundation of IHL).108

Nevertheless, from an accountability perspective, the same problems remain regardless of which legal framework is applied. Because the CIA program is covert, there are no published policy guidelines, no certainty about the criteria for adding a name to a targeting list and no basis for knowing what, if any, scrutiny exists in relation to particular targeting decisions.109 Indeed, the US government still does not confirm or deny CIA targeted killing operations.110 As Gregory has put it, ‘the covert nature of a war conducted by a clandestine agency ensures that most of its victims are wrapped in blankets of secrecy.’111 This state of affairs stands in stark contrast to the Israeli Supreme Court’s 2005 decision on targeted killings, which emphasised the importance of judicial oversight of such operations.112 As President Barak stated in that case:

It is our duty to preserve the legality of government, even when the decisions are difficult. Even when the cannons roar and the muses are silent, the law exists, and acts, and determines what is permissible and what is forbidden; what is legal and what is illegal. As the law exists, so exists the Court, which determines what is permissible and what is forbidden, what is legal and what is illegal.113

There have been some attempts to secure judicial review of attacks that allegedly violated IHL.
At the time of writing, for instance, a campaign led by British human rights lawyer Clive Stafford Smith is seeking an international arrest warrant for John Rizzo, formerly a general counsel for the CIA, who recently admitted that he had approved one drone attack a month on targets in Pakistan.114 The warrant is sought for, inter alia, conspiracy to murder in relation to two attacks in 2009, one of which was carried out in the village of Machi Khel in North Waziristan.115 Lawyers are reportedly also ‘building cases against other individuals, including drone operators interviewed or photographed during organized press facilities.’116

108 Ibid 14.

109 See Human Rights Watch, above n 102, 4.

110 See Anderson, above n 105, 20.

111 Derek Gregory, ‘The Everywhere War’ (2011) 177(3) Geographical Journal 238.

112 HCJ 769/02, The Public Committee against Torture in Israel v The Government of Israel (11 December 2005) 61-64.

113 HCJFH 2161/96, Sharif v GOC Home Front Command, 50(4) PD 485, 491, cited in ibid, 63.

114 Peter Beaumont, ‘Campaigners Seek Arrest of Former CIA Legal Chief Over Pakistan Drone Attacks’, The Guardian (London) 15 July 2011.

115 Ibid.

116 Ibid.

While attempts to initiate judicial scrutiny of the practice may please some, the difficulties involved in prosecuting alleged offences are obvious. In addition, any prosecution of ad hoc attacks carried out in Pakistan will not, in and of itself, lead to a change in US policy to ensure the application of transparent accountability mechanisms to CIA targeted killing operations.

Beyond the concerns relating to the covert nature of drone attacks, the media and/or humanitarian NGO scrutiny that often exists in the theatre of military operations is regularly non-existent, because CIA-authorised drone attacks occur in physical territory too remote for media or NGO access,117 or even in territory where such access is prohibited by the complicit territorial government authorities. For instance, in 2010 Rogers reported that the Pakistani government ‘tightly restricts access to most conflict-affected areas’; that access to the Federally Administered Tribal Areas (FATA) is generally prohibited for foreigners; and that government permission ‘is required to visit Swat and other areas in Malakand.’118 In addition, militant groups often target ‘foreigners as well as Pakistani journalists and NGO workers.’119 As a result ‘there is very little independent, credible information emerging from areas under their control,’120 and media reporters rely on ‘unnamed Pakistani intelligence officials.’121 The covert nature of the drone attacks has had other ramifications for civilian victims.
For instance, Rogers, in a 2010 report on drone attacks in Northwest Pakistan, argued that the lack of transparency in the US drone program has also meant that civilian losses are largely ignored.122 He considered that neither US nor Pakistani authorities ‘have any standard, public procedures for investigating civilian losses from drone strikes, acknowledging or recognizing losses, or providing help for victims to recover.’123 Although the Pakistan government provides compensation for civilian casualties, this reportedly does not extend to victims of drone strikes.124

117 According to Rogers, the majority of drone strikes in Pakistan occur in North and South Waziristan, areas that are inaccessible to foreigners and most Pakistanis. See Christopher Rogers, ‘Civilian Harm and Conflict in Northwest Pakistan’ (2010) Report for the Campaign for Innocent Victims in Conflict (CIVIC), Civilians in Armed Conflict Series, 14 <http://www.civicworldwide.org/storage/civicdev/documents/civic%20pakistan%202010%20final.pdf>.

118 Ibid 13. See also Mayer, above n 98; and Beaumont, above n 114.

119 Rogers, above n 117, 13.

120 Ibid.

121 Saeed Shah and Peter Beaumont, ‘US Drone Strikes in Pakistan Claiming Many Civilian Victims, Says Campaigner’, The Guardian (London) 17 July 2011.

122 Rogers, above n 117, 62-64.

123 Ibid 63.

124 Ibid.

All these concerns were brought into sharp relief recently following a report in The Times that a US drone strike in Pakistan had killed two boys: 16-year-old Tariq Aziz and his 12-year-old cousin Waheed Khan.125 The boys were reportedly killed 72 hours after attending a tribal meeting on drone attacks, ‘where digital cameras donated by Jemima Khan … were distributed to tribal leaders to enable them to record the effects of drone strikes.’126 According to Stafford Smith, who was present at the meeting, Tariq stepped forward during the meeting and ‘volunteered to gather proof [of the drone strikes] if it would help to protect his family from future harm.’127 Stafford Smith alleged that the boys were subsequently killed while driving to ‘pick up their aunt and bring her home to the village of Norak.’128 The Pakistan media reported that ‘four militants’ were killed in the attack, citing ‘intelligence sources’.129

Aside from concerns over the accuracy of attacks and accountability for violations of IHL, the report also contains an underlying suggestion that these boys may have been targeted due to erroneous intelligence or, worse, due to their involvement in the meeting. Of course, the accuracy of these reports is almost impossible to determine, as are the reasons why these boys were targeted; herein lies the source of controversy.

Mayer, O’Connell and others lament the apathy and indifference that prevails in response to the policy. The CIA continues its attacks and we know that those attacks are perpetrated beyond the scrutiny of the law, but there is little pressure on political authorities to cease and desist from the practice. Popular opinion supports the killing of ‘terrorists’ by whatever means available — particularly where the choice of means carries no harm to the side possessing the targeting technology. The widespread celebrations in the US following the assassination of Osama bin Laden demonstrate this proposition.
Reports that an American citizen had been killed in a targeted killing in 2002 in Yemen failed to generate significant public outcry.130 Similarly, reports in 2002 that a Predator had followed and killed three Afghan villagers collecting scrap metal, due to a mistaken belief that one of the men was Osama bin Laden,131 received very little media coverage. It will be interesting to see whether the killing by Predator drone of US citizen Anwar Al-Awlaki in Yemen in September 2011 draws more public criticism in the US than earlier attacks. A spokesperson for the American Civil Liberties Union was quoted in response

125 Ben MacIntyre, ‘“Boys” Killing Belies US Claim on Drone Strikes’, The Times (London) 5 November 2011.

126 Ibid.

127 Clive Stafford Smith, ‘For Our Allies, Death From Above’, New York Times (New York) 3 November 2011.

128 Ibid.

129 MacIntyre, above n 125.

130 See Mary O’Connell, ‘Unlawful Killing with Combat Drones: A Case Study of Pakistan, 2004-2009’ (Research Paper No 09-43, Notre Dame Law School Legal Studies, 2009) in Simon Bronitt (ed), Shooting To Kill: The Law Governing Lethal Force In Context (forthcoming) 3.

131 See Mayer, above n 98.

to the news of Al-Awlaki’s death by drone as claiming that: ‘This is a programme under which American citizens far from any battlefield can be executed by their own government without judicial process, and on the basis of standards and evidence that are kept secret not just from the public, but from the courts.’132 Nevertheless, it is not clear that a sufficient number of US citizens will be outraged enough by the denial of judicial process in these circumstances to agitate for policy change.

It is difficult to contemplate the circumstances that might lead to a change in policy. At present there seems no prospect of electoral agitation against the use of drones. Perhaps policy change will only come as the decision-making elite faces the potentially devastating reality that there are no educated Taliban leaders left to negotiate with, because a generation of senior and mid-ranking leaders have been eliminated by Predators or Reapers. Or, as Stafford Smith recently stated in response to the deaths of Tariq Aziz and Waheed Khan, ‘Tariq’s extended family, so recently hoping to be our allies for peace, has now been ripped apart by an American missile — most likely making any effort we make at reconciliation futile.’133

6 Conclusion

The Nuclear Weapons Advisory Opinion indicates that, absent a comprehensive treaty or customary prohibition on the use of a particular weapon, it is unlikely that its use will be illegal per se under international law. Whilst fundamental principles of IHL can theoretically render the use of a particular weapon illegal in all circumstances, the ICJ’s Advisory Opinion demonstrates that the threshold is very high. There is currently no treaty or customary prohibition on the use of UCVs, nor do these weapons inherently contravene any of the basic principles of IHL, such as the obligation to distinguish or the prohibition on causing unnecessary suffering. As such, there is no prohibition on UCVs under international law, although their use in armed conflict must always be consistent with general principles of IHL.

The 1995 treaty prohibition on blinding laser weapons134 was unusual because the overwhelming majority of states agreed to a comprehensive prohibition before the technology had been deployed to any significant extent on the battlefield. Presumably no state wanted to bring home combatants who had been permanently blinded, and so there was a unique level of multilateral commitment to ban the technology. More often, however, weapons are deployed extensively on the battlefield before they become the subject of a specific treaty ban. There will often be some general or particular catalyst for a specific treaty ban — such as deleterious humanitarian consequences from widespread deployment of the weapons in disregard for the affected civilian population. In the case of anti-personnel landmines the catalyst was a general one — the sheer magnitude of the problem of unexploded landmines causing horrific injuries for decades after the cessation of hostilities drove the campaign for a specific treaty ban. In the case of cluster munitions, there had also been growing calls for prohibition following repeated humanitarian disasters,135 but the particular deployment of large numbers of dud sub-munitions in Southern Lebanon constituted a notable impetus for a specific treaty prohibition. In the case of both anti-personnel landmines and cluster munitions there was substantial public outcry, as well as a growing perception that general rules of IHL were manifestly not sufficient on their own to regulate the use of these weapons.

UCAVs have been used by military forces, most notably the US military, in armed conflict situations since 2001. This use has generated little controversy, aside from those who have expressed concern about the future directions of this technology — most notably the prospect of fully autonomous weapons operating in the battlefield. In contrast, the covert drone targeted killings program operated by the CIA gives rise to serious concerns, and has been roundly criticised by a number of commentators. Nevertheless, there has been comparatively little public outcry over these killings. The reasons for this are generally unclear. The lack of outcry may be a result of little public sympathy for ‘terrorists’. Perhaps it also results from the difficulties associated with monitoring and investigating covert operations carried out by drones. The geographic location of drone strikes inevitably means that the media and humanitarian NGOs are unlikely to be able to investigate and report on, let alone witness, individual attacks. In addition, it is unlikely that any military personnel (or CIA operatives) are physically present at the scene to witness the effects of a strike. Thus the only US witnesses are likely to be those from the CIA, watching the strike on screens thousands of miles from the scene of the attack. This, combined with the lack of any judicial body that reviews targeting decisions and publishes its findings, means it is very difficult for the public to accurately assess the effects of these strikes, or their compliance with IHL.

The question asked at the outset of this paper was whether general principles of IHL are sufficient to regulate the use of UCAVs. This paper has sought to argue that the weapons themselves can be, and indeed often are, used in conformity with IHL. However, it is difficult, if not impossible, to ensure that IHL is respected without consistent monitoring of attacks, investigation of potential breaches, some form of judicial review of decisions made and accountability for violations of the law. Indeed, even as early as 1949, the Geneva Conventions recognised the importance of accountability,136 and recent years have seen an increasing push towards transparent processes and accountability for violations of IHL by responsible civilian and military leaders. In this regard at least, the covert CIA targeted killing program represents a retrograde step.

132 Martin Chulov and Paul Harris, ‘Anwar al-Awlaki, al-Qaida Cleric and Top US Target, Killed in Yemen’, The Guardian (online), 30 September 2011, quoting ACLU deputy legal director Jameel Jaffer <http://www.guardian.co.uk/world/2011/sep/30/anwar-al-awlaki-killed-yemen?intcmp=239>.

133 Clive Stafford Smith, ‘For Our Allies, Death From Above’, New York Times (New York) 3 November 2011.

134 Additional Protocol to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons which May be Deemed to be Excessively Injurious or to have Indiscriminate Effects (Protocol IV, entitled Protocol on Blinding Laser Weapons), opened for signature 13 October 1995, 1380 UNTS 370 (entered into force 30 July 1998).

135 See, for example, the 2006 report by Landmine Action, The Failure to Protect: A Case for the Prohibition of Cluster Munitions (2011) <http://www.mineaction.org/downloads/1/LMAUK_failure%20to%20protect.pdf>.

136 See Geneva Convention (I) for the Amelioration of the Condition of Wounded and Sick in Armed Forces in the Field, opened for signature 12 August 1949, 75 UNTS 31 (entered into force 21 October 1950) Article 49.

Unmanned Naval Vehicles at Sea: USVs, UUVs, and the Adequacy of the Law
COMMENT BY ROB MCLAUGHLIN*

Introduction
There can be no doubt that law exercises many forms of limitation and regulation over the planning and conduct of maritime operations. But it is equally important to recognise that law is also an important weapon in the conduct of maritime operations. This is perhaps most eloquently evidenced in the juxtaposition of two highly influential treatises on the factors affecting the conduct of operations at sea. Alfred Thayer Mahan’s classic The Influence of Sea Power upon History (1890)1 is generally considered to be the naval equivalent of von Clausewitz’s On War — one of the foundational texts of the discipline, and consequently the subject of significant reference, critique and exegesis. The other side of this coin is D P O’Connell’s iconic 1975 study The Influence of Law on Sea Power2 — an overt genuflection to Mahan’s classic on maritime strategy. Both scholars — O’Connell as a celebrated scholar of international law who was a Reserve Officer in the RAN, and then the RN; Mahan as an active service US Naval Officer who was also an internationally recognised strategic theorist — were sensitive to the impacts of law upon the projection and use of seapower.

O’Connell, for example, describes the role of the Law of the Sea (LOS) and the Law of Naval Warfare (LoNW) in securing the delay imposed upon the German pocket battleship Graf Spee in Montevideo after the battle of the River Plate on 13 December 1939. Clever use of the legal opportunities inherent in the situation allowed for a deception operation and a mustering of forces such that Graf Spee was scuttled by her own Ship’s Company on 17 December 1939, rather than steamed out to what they thought would be certain destruction at the hands of a (they had been led to believe) now much stronger British force lying in wait. This was a battle, O’Connell observed, where ‘the points of law arising in the situation were weapons in the overall armoury, to be used adroitly in combination with naval force to bring the event to the desired end’.3

Mahan, as a member of the US delegation to the Hague Peace Conference in 1899, was a decisive influence in the US decision to cast the only negative vote on the proposal to prohibit use at sea of projectiles designed to spread asphyxiating or deleterious gases, which would — in his view — have limited, for no justifiable reason or outcome, the US Navy’s ability to threaten, fight and win at sea.4

The relationship between law and capability at sea is thus fundamentally bivalent — regulation is also empowerment; limitation can be weaponised. The declaration of a 12 nautical mile Territorial Sea offers a regulatory gain for the coastal state — no foreign power can send its warship through that zone to conduct intelligence collection operations. But by the same token, as technology allows, that 12 nautical mile line in the sea also delineates for other states a line from which they have an unimpeachable right to collect intelligence on that coastal state. Thus the same limitation or regulation is weaponisable in the service of two diametrically opposed interests: 12 nautical miles preserves coastal state security from intelligence collection and surveillance; 12.1 nautical miles assures other states’ access to it.

* Associate Professor, College of Law, Australian National University; mclaughlinr@law.anu.edu.au. My short contribution draws significantly upon the general themes and conclusions set out in the prefatory study by Brendan Gogarty and Meredith Hagger, and the contributions of several of the specialist authors. I come to this project with the following background, which — quite clearly — has influenced my approach: more than 20 years’ service in the Royal Australian Navy as both a Seaman and Legal Officer; and some experience with advising on legal issues related to unmanned vehicle (UV) technology, in the course of which I have consciously adopted a general tendency to measure against existing principles rather than to identify lacunae.

1 Alfred Thayer Mahan, The Influence of Sea Power upon History 1660-1783 (1890), reprinted by Dover Publications, New York, 1987.

2 D P O’Connell, The Influence of Law on Sea Power (Manchester University Press/Naval Institute Press, 1975).

Outline
In this short contribution to the debate, I will focus briefly on two discrete issues cast up by unmanned vehicle (UV) technology and its use in the maritime domain: one related to definition; and one related to a specific operational issue. But before doing so, it will be necessary that I lay out my fundamental assumption, which is that the power of general principles actually mitigates — substantially — the need to develop detailed legal regimes of governance and regulation (or at least can very adequately serve this role until we develop a fuller grasp of the practical issues that arise). This assumption drives my assessment of the more specific issues then considered. Only with this admission made may I then move to assess two specific UV maritime operations law issues amongst the armadas of such issues worthy of detailed exploration. I will not, for example, examine how UVs might access and employ the right of hot pursuit.5 Nor will I inquire into the extent to which naval warfare may have already entered the autonomous unmanned combat underwater vehicle (UCUV) age with self-propelling smart naval

3 Ibid 39.
4 A T Mahan, Peace Conference at the Hague 1899: Report of Captain Mahan to the United States Commission to the International Conference to the Hague, on Disarmament, etc, with Reference to Navies, 31 July 1899 (2008) Yale Law Library Avalon Project <http://avalon.law.yale.edu/19th_century/hag99-06.asp>; see also William E Livezey, Mahan on Sea Power (University of Oklahoma Press, revised ed, 1981) 272-274.
5 United Nations Convention on the Law of the Sea, opened for signature 10 December 1982, 1833 UNTS 3 (entered into force 16 November 1994) Article 111 (‘LOSC 1982’).

mines and torpedo mines which can travel to a designated site, identify a specific target on the basis of its acoustic signature, assess whether it has achieved the right combination of characteristics to allow prosecution of the target, and then give itself the go/no go order.6 What I do propose to focus upon are two of the more general, contextual issues that UVs highlight in terms of maritime operations law: status; and the poise and positioning of maritime forces.

1 Fundamental Assumptions

Clearly, specific technologies can (and almost always do) get ahead of the discrete legal regimes necessary to provide the more nuanced and detailed governance required by innovation — the Environmental Modification Techniques Convention (ENMOD Convention)7 is perhaps a notable exception to the rule, as is the system established to regulate disposition of the wealth expected to accrue from deep sea bed mining.8 But what can be missed in the

6 The US Navy’s CAPTOR mine is delivered to its site by torpedo. It then moors and awaits the acoustic signature of a hostile submarine, having been programmed to ignore the acoustic signatures of friendly submarines, and of surface ships. Once it detects the required acoustic signature, it launches another torpedo which targets the hostile submarine. This technology has been deployed since 1979, see Military Analyst Systems, MK 60 Encapsulated Torpedo (CAPTOR) (1998) <http://www.fas.org/man/dod-101/sys/dumb/mk60.htm>. The US Navy has also deployed (since 1983) a Submarine Laid Mobile Mine (SLMM), which will navigate itself to its predetermined location (for example in an area where it is too shallow/exposed/dangerous to attempt to lay the mine by submarine, ship, or aircraft) and then rest on the bottom awaiting a specified surface target, see Military Analyst Systems, MK 67 Submarine Launched Mobile Mine (SLMM) (1998) <http://www.fas.org/man/dod-101/sys/ship/weaps/mk-67.htm>. It is highly likely that related technology has advanced radically in the last three decades.
7 United Nations Convention on the Prohibition of Military or Any Other Hostile Use of Environmental Modification Techniques, opened for signature 18 May 1977, 1108 UNTS 151 (entered into force 5 October 1978) (‘ENMOD Convention’). It is important to distinguish the focus of the ENMOD Convention from the environmentally related provisions in other LOAC instruments — as Adam Roberts and Richard Guelff note in their introduction to the Convention, ‘Articles 35(3) and 55(1) of 1977 Geneva Protocol I prohibit the employment of means or methods of warfare which may be intended or expected to cause “widespread, long-term and severe damage to the natural environment”. This…provision is worded slightly differently from the ENMOD Convention…and has a different purpose: it is concerned with damage to the environment, whatever the weapons used. This is distinct from the manipulation of the forces of the environment as weapons, which is the central concern of the ENMOD Convention’. See Adam Roberts and Richard Guelff (eds), Documents on the Laws of War (Oxford University Press, 3rd ed, 2000, as amended and reprinted 2003) 407-417.
8 LOSC 1982 Part XI (‘The Area’). See, for example, Article 160(2)(f)(i) relating to the powers and functions of the Assembly: ‘[t]o consider and approve, upon the recommendation of the Council, the rules, regulations, and procedures on the equitable sharing of financial and other economic benefits derived from activities in the Area…taking into particular consideration the interests and needs of

Unmanned Naval Vehicles at Sea: USVs, UUVs, and the Adequacy of the Law

focus upon detail, or the search for analogies, is the power of general principles to offer sufficient governance during the inevitable hiatus that ensues whilst the impacts of a technological development are sorted through. Only in this way can we come close to ensuring that any ultimately developed regime of detailed regulation is practical, empirical, sensible, and experientially based.

I make this point because it is sometimes lost in the heat of debate over the issue of, for example, CIA drone strikes against Taliban commanders and fighters in Pakistan. These strikes are governed either by the Law of Armed Conflict (LOAC), or the law enforcement paradigm — the fact that the weapon is a drone, as opposed to a detachment of Special Forces, does not mean that different law is applied, or that new law is required. Both agents are governed by the same legal penumbra. Was it a LOAC or law enforcement based action? What is the status of the ‘shooter’? Whether the CIA controller is a ‘combatant’ in the same way as the Special Forces shooter is neither a new question, nor one that only arises in relation to unmanned combat aerial vehicles (UCAVs). Was there a breach of territorial integrity (in some situations, a legally acceptable and defensible action)? Whether the breach was of airspace by a UCAV, or of the border on land by a Special Forces force element, the issue is fundamentally the same in legal terms. Was the target properly identified? Whether the eyes-on and pattern of life observations were made by Special Forces on the ground, or by UCAV high resolution camera, the applicable LOAC tests are the same. None of these fundamental legal questions, and the general principles which govern their resolution, is displaced or nullified in application because the lethal effect was delivered by a UCAV controlled from outside Pakistan, as opposed to a Special Forces sniper in the hills above the qala.

Whilst I certainly accept that I may be in the minority on this point, I remain convinced that the general principles of the LOAC and of the LOS, for example, are currently sufficient to provide the required level of governance over use of unmanned aerial vehicles (UAVs), unmanned surface vehicles (USVs), and unmanned underwater vehicles (UUVs) in the context of maritime operations. In all relevant respects, a USV, UUV, or ship launched UAV (and their weaponised variants) is a means or method of warfare (LOAC) or a vessel/aircraft/system (LOS) to be governed in the same way as a warship, warplane, weapons system, and/or a munition. Similarly, its operator is a human to be governed in the same way as the Commanding Officer, pilot, weapons guidance officer, or forward tactical controller who plays a role in its situational disposition and projection of force. So long as there is: (1) a traceable path of control over and responsibility for its employment (as distinct from any requirement for there to be an identifiable individual to be allocated criminal liability for an unlawful consequence, and subject to criminal sanction as a result — this is a different concept); and (2) recognition of the scope for error or mistake; then the law is (and, really, should be) fundamentally the same as for existing and precursor capabilities.

developing States and peoples who have not attained full independence or other self-governing status’.


Only when the line of control and/or responsibility becomes uncertain or unidentifiable at law does the governance offered by general principles potentially become fundamentally inadequate. Even in this situation, however, it is not at all clear that because an applicable general principle cannot clearly identify the criminally liable human(s), it then necessarily follows that there is no responsible human. Just because — in a future of completely autonomous unmanned combat surface vehicles (UCSVs) — there is no individual who physically pushes the required button which launches missiles at a truck ashore carrying refugees, does not mean that there is no line of responsibility. Direct causation is but one way to trace degrees of responsibility. The fact that no court has yet determined the apportionment of responsibility, in a LOAC sense, between (for example) the software and hardware designers and maintainers, the data enterers, the intelligence analysts who set the parameters, and the overall mission commander, does not mean that it cannot do so on the basis of the law we currently have. One component of the LOAC doctrine of command responsibility for war crimes (and similar violations), for example, is the duty to inquire.9 This certainly applies after an incident (such as with an unmanned combat vehicle (UCV), including an autonomous UCV), and requires the commander to make adjustments to, or to take relevant measures in relation to, the command, control, disposition, and administration of his or her force (or a component of it) as a consequence of those inquiries. It is inconceivable (in my experience, at any rate) that the first casualty incident involving a fully autonomous UCV will not be investigated to the nth degree, and that lessons learned will not be consciously developed and incorporated into doctrine, training, and command consideration at the operational and tactical levels. After this, the door to command responsibility will clearly be open.
But even before we get to this stage, LOAC prescribes other obligations — carrying with them susceptibility to criminal sanction in the wake of noncompliance — which will also directly affect the employment of autonomous

9 See Rome Statute of the International Criminal Court, opened for signature 17 July 1998, 2187 UNTS 90 (entered into force 1 July 2002) Article 28. For discussion, see (for example) UK Ministry of Defence, The Manual of the Law of Armed Conflict (Oxford University Press, 2004) [16.36.6] – ‘Actual knowledge is clearly sufficient, but it is also sufficient if the commander “had reason to know”. This has been described as “where he had in his possession information of a nature, which at the least, would put him on notice of the risk of such offences by indicating the need for additional investigation in order to ascertain whether such crimes were committed or were about to be committed by his subordinates”.’ The quote is from Prosecutor v Delalic and Others (Celebici Case) (Judgement) (International Criminal Tribunal for the Former Yugoslavia, Appeals Chamber, Case No IT-96-21-A, 20 February 2001) [223], [239]. Similarly, see Gary D Solis, The Law of Armed Conflict: International Humanitarian Law in War (Cambridge University Press, 2010) 404 – ‘The commander’s liability is not that of an aider and abettor. Instead, it is grounded in his own negligence in acting or not acting…; the commander either failed to anticipate the criminality when she possessed specific facts that should have led her to act, or she failed to prevent criminal acts of which she knew, or under the circumstances, should have known, or she failed to take corrective action as to crimes already committed’.


UCVs, and thus potential criminal liability for the consequences of such employment. For example, the autonomous UCV will have been subject to an Additional Protocol I Article 36 weapons review well in advance of any operational employment — a process specifically designed to assess the weapon/weapons system, in the light of test and evaluation data, against both the general principles of LOAC, and any relevant specific prohibition or governance regime within LOAC.10 Two of the key criteria the expected operation of an autonomous UCV will need to be assessed against are its capacity to discriminate, and whether — in situations of expected employment — it creates disproportionate consequences. If an autonomous UCV is brought into service after such a review, but does not behave as anticipated, then this will necessitate further evaluation and review, and a consequent duty (and thus avenue to criminal liability) with respect to the future employment of that system. Similarly, each commander who directly utilises an autonomous UCV will still need to consider, on a case-by-case basis, the employment of that system in a particular targeting mission. 
To do this, the commander is required to take into account information regarding an autonomous UCV’s known parameters, performance and foibles, and any strategies for risk mitigation, as a component of his or her ‘precautions in attack’.11 The fact that a system is autonomous does not relieve the employing commander from the obligation, for example, to ‘do everything feasible to verify that the objectives to be attacked are neither civilians nor civilian objects’.12 If the commander’s response is that he or she decided to employ that autonomous UCV on the basis of accumulated wisdom as to its operating parameters in terms of capacity to distinguish, its risk mitigation penumbra with respect to this capacity, and the tactical situation, then any unanticipated outcome needs to be assessed against this background: the fact that the weapons system was autonomous does not relieve the commander of their obligation to assess and take precautions in attack. Similarly, the commander is under an obligation to ‘take all feasible precautions in the choice of means and methods of attack with a view to avoiding, and in any event to minimising, incidental loss of civilian life, injury to civilians and damage to civilian objects’.13 All other things being equal (although they rarely are), if the commander has two feasible options available to prosecute a target — an autonomous UCV and a ‘manned’ system — then he or she must still make a reasoned, reasonable,

10 Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), opened for signature 8 June 1977, 1125 UNTS 3 (entered into force 7 December 1978) Article 36 (‘Additional Protocol I’) – ‘In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party’.
11 Ibid Article 57.
12 Ibid Article 57(2)(a)(i).
13 Ibid Article 57(2)(a)(ii).


and (including in a criminal liability sense) accountable decision as to which to employ, attending to the obligation to choose that option which offers the best prospects for no, or minimised, incidental injury or collateral damage. If (for example) the autonomous UCV has a strong record, in similar circumstances, of better discrimination than the manned system, then this must factor into the commander’s decision. Similarly, if the autonomous UCV has a poorer record, in similar situations, than the manned system, then this will likewise weigh heavily in the decision. The fact that one option is autonomous does not negate the commander’s preliminary responsibility — and potential liability — for the decision to employ it in the first place. Finally, it is vitally important to remember that LOAC — in particular — is readily cognisant of the fact that a ‘wrong’ consequence (the missile hits a refugee vehicle which the autonomous UCSV had identified as a military vehicle) does not always mean that criminal liability will be allocated to someone involved in the action. Mistakes in inputs, errors in judgement, flaws in systems, the unpredictability of intervening events, and — importantly — the unanticipated cumulative effect of a series of such events (the concept of holes in a series of slices of Swiss cheese unexpectedly lining up to allow the consequence ‘passage through’ to manifestation) are acknowledged mitigators of criminal liability in LOAC. These are factors which may blur, soften, or even break otherwise clear lines of control or responsibility such as to alleviate criminal liability. 
If the law already recognises situations where responsibility is so diffuse as to preclude allocation of criminal liability to one or a number of selected humans, then the worst case scenario of a tragic consequence as a result of action by an autonomous UCSV, where it is simply not possible to allocate criminal liability to a human agent somewhere in the causative matrix, is already well contemplated at law. The obvious concern is that diffusion of responsibility to the point of inability to impose criminal liability is a recipe for impunity. But two factors should be borne in mind when considering this extremely serious and important caveat. First, in some situations, impunity will clearly be the result — but impunity as a consequence of blurred responsibility is already a consequence known to law,14 not one that will arise for the first time when an autonomous system kills civilians. Second, where the blurring of responsibility was the result of a conscious effort to generate impunity for the act, there are other paths to criminal liability — conspiracy, complicity, aiding and abetting, common purpose, and so on.15 Just because no court has yet adjudicated on the issue, on the basis of currently existing general principles, in relation to an autonomous UCSV, it does not automatically follow that it will be unable to do so.

14 For example, certain proceeds of crime laundering schemes, which carefully and consciously exploit a series of ‘lined-up’ gaps in the law, or banking laws which allow identity obfuscation practices.
15 See, for example, Criminal Code 1995 (Cth) Part 2.4: Extensions of Criminal Liability.


Finally, there is always an inherent risk that in developing and legislating new law in place of an already applicable general principle, we will see the regulatory regime actually watered down, or (and this is an equally lamentable possible outcome) that the process generates such disagreement that opportunists will seek to exploit the grey area thus created — a backward step from the application of sound general principles. There can be no doubt that the persistent surveillance and time on target of a UCAV, for example, can radically enhance our capacity to apply LOAC — the longer the pattern of life observation we can achieve, the greater the potential for certainty and discrimination in targeting. Capacity to remain on station for significantly longer periods allows for less time sensitivity and/or own force exposure to increased risk levels in that the UCAV can wait for the most opportune moment (with all the possibilities this offers in terms of reduced collateral damage and incidental injury) rather than having to take a more damaging opportunity before the manned weapons system or unit has to come off task. Clearly, this capability means that opportunities to target are increased, but if the ultimate consequence is that more fighters die, but fewer civilians die, then this is a coarse, but legally, politically, and operationally defensible outcome. If we rush to limit the capacity to utilise UVs, and more particularly UCVs, in LOAC based operations, on the basis of what may currently be determined to be unacceptably high rates of collateral damage and incidental injury, we may actually end up creating a legal anomaly as the technology becomes more capable and discriminating. This is not an unknown result: the 1899 prohibition on the use of flattening and expanding rounds against enemy forces during armed conflict16 was based on the effects of the British Mk IV dum-dum bullet.
But the prohibition fundamentally hinges on a technical description (as opposed to an effect). In the intervening century, technology has radically refined both flattening/expanding rounds, and the means to deliver them with greater accuracy, to such a degree that a related round is now the preferred round for policing. This is because it has better immediate stopping power than many full metal jacketed (standard military) rounds, and tends to stay in the body of the target rather than passing through and creating risk to bystanders (as full metal jacketed rounds can do). Yet LOAC, on the basis of a 110-year-old proscription based on very different technical capabilities, ensures that such rounds cannot be used by military forces against the enemy. That is, the very type of round used by many police services because it lessens risk to bystanders is prohibited to military forces in situations where it would serve exactly the same beneficial role — stop the fighter, but reduce risk to civilians. Perhaps a better result may have emerged from a continuing application of the existing principles found in LOAC (such as the prohibition on causing unnecessary suffering to combatants), rather than a rush to detailed technical proscription, so that the international community could have given itself the time to make a more informed assessment as to what sort of limitation might best serve and reflect the

16 ‘1899 Hague Declaration 3 Concerning Expanding Bullets’ in Roberts and Guelff, Documents on the Laws of War (Oxford University Press, 3rd ed, 2000, as amended and reprinted 2003) 63-66.


myriad principles and considerations at play. Perhaps, for the very same reason, we ought to pause long enough to look at potential unintended consequences before rushing to further, more detailed prohibitions in relation to, for example, UCVs. This is not to say that further, more detailed regimes of governance will not be required; rather it is to say that such regimes should be empirically based — that is, developed once the international community has a better handle upon what new or novel problems the technology actually presents in the course of operational employment.

2 Status

In assessing the capacity of existing law to provide governance over USV, UUV, and maritime UAV capabilities, it is important to first determine their status. The issue of independent UAV status is dealt with elsewhere in this edition, and thus I will focus only upon USVs, UUVs, and their organic systems (which may include UAVs). This is because differently nuanced regimes apply to vessels (and aircraft) entitled to ‘sovereign immunity’. The legal concept of ‘warship’ gained explicit parameters in the 1907 Hague Convention VII Relating to the Conversion of Merchant Ships into Warships, Articles 2-6. In essence, the cumulative effect of these articles requires that a warship: ‘bear the external marks which distinguish the warships of their nationality’; be commanded by an officer ‘in the service of the State and duly commissioned by the competent authorities’ and whose ‘name must figure on the list of the officers of the fighting fleet’; that ‘the crew must be subject to military discipline’; that the vessel ‘must observe in its operations the laws and customs of war’; and — for merchant vessels converted to warships — that the state ‘must, as soon as possible, announce such conversion in the list of warships’, thus implying that all warships must appear on such a list.17 This definition is essentially (although not in all respects) now replicated in the United Nations Convention on the Law of the Sea (‘LOSC 1982’) Article 29:

For the purposes of this Convention, ‘warship’ means a ship belonging to the armed forces of a State bearing the external marks distinguishing such ships of its nationality, under the command of an officer duly commissioned by the government of the State and whose name appears in the appropriate service list or its equivalent, and manned by a crew which is under regular armed forces discipline.18

Before looking to the independent status of USV/UUV, it is important to recognise one important distinction which colours what follows.
This distinction is that a ship or submarine launched USV/UUV, which is

17 Gabriella Venturini, ‘1907 Hague Convention VII Relating to the Conversion of Merchant Ships into Warships’ in Natalino Ronzitti (ed), The Law of Naval Warfare: A Collection of Agreements and Documents with Commentaries (Martinus Nijhoff, 1988) 111-128.
18 United Nations Convention on the Law of the Sea, opened for signature 10 December 1982, 1833 UNTS 3 (entered into force 16 November 1994) (‘LOSC 1982’).


controlled from its mother ship, is in fact an extension of that ship in that it is a system of that ship and thus holds a partially reflected status, rather than an entirely independent status. This is not an unusual concept in the LOS, and can equally apply to a manned system. For example, it is clear under the LOSC 1982 that a ship exercising rights of navigation, and indeed an aircraft exercising rights of overflight, in the course of Archipelagic Sealanes Passage (ASLP) under Article 53, must proceed in ‘normal mode solely for the purpose of continuous, expeditious and unobstructed transit’.19 Thus a warship, or an independent aircraft (for example a Maritime Patrol Aircraft (MPA)) must proceed expeditiously through the Archipelagic Sea Lanes (ASL). The implication is that the vessel/aircraft should not (unless incidental to normal navigation or as caused by some form of force majeure) loiter in the ASL, or make radical, delaying alterations in course (for example, in a patrolling pattern) during ASLP. The warship, as with the MPA, should essentially enter the ASL at one point and then proceed to its exit point from the ASL without undue delay. However, a warship’s organic helicopter — whilst it certainly has an independent identity for certain purposes — is for the purposes of ASLP simply a sensor system of the warship. This means that it can be deployed as a sensor out from the warship (for example to visually identify the ship which the warship’s radars detect ahead over the horizon), and then to return to the warship, without offending the requirements of ASLP that apply to the warship itself (or to the fully independent MPA overhead). The organic helicopter (as with the organic UAV) is not the ‘unit’ exercising ASLP — that is the warship. 
The helicopter is but a sensor system attached to the warship and thus can be used to do what it is designed to do in the interests of safe, continuous, and expeditious navigation.20 With this caveat thus explored, it is now possible to ask whether a USV or UUV can be a ‘warship’. The physical elements present no problem: belonging to the armed forces of a state, and bearing its normal warship markings, are easily remedied. Is a USV/UUV ‘under the command of an officer’? Certainly, ‘under the command of’ could be stretched to allow remote command, but when read together with the requirement for the warship to be ‘manned’ by a crew subject to regular armed forces discipline, this degree of elasticity can be doubted. In a purely practical sense, it is difficult to see how ‘manned’ could be stretched to include remote management and control, unless there is a (questionable) assertion that the USV/UUV is not the entirety

19 LOSC 1982, Article 53(3).
20 It is important to recognise that some States would fundamentally disagree with this analysis. However, a number of major maritime powers have asserted that this is the interpretation to be placed on the ASLP regime – see, for example, [2.5.3.1] (relating to international straits, but applied also to ASLP – [2.5.4.1]) in the US Navy’s Commander’s Handbook on the Law of Naval Operations (NWP 1-14M), July 2007, <http://www.usnwc.edu/getattachment/a9b8e92d-2c8d-4779-99250defea93325c/1-14M_(Jul_2007)_(NWP)> – ‘Surface warships may transit in a manner consistent with sound navigational practices and the security of the force, including the use of their electronic detection and navigational devices such as radar, sonar and depth-sounding devices, formation steaming, and the launching and recovery of aircraft’.


of the entity in question, and its full physical manifestation includes the controls and controller sitting ashore. However, as noted above, this does not extend to the issue of a USV/UUV controlled by a mother warship. As the German Navy’s Commander’s Handbook asserts:

However, the requirement of a vessel for being manned does not mean that unmanned vessels, e.g. drones…are not warships. Apart from the fact that they can be manned in certain situations it must be taken into account that they are controlled by a warship and thus enjoy its legal status and immunity.21

It might be asserted, however, that the US Navy’s equivalent Commander’s Handbook appears to state the reverse:

USVs and UUVs engaged exclusively in government, noncommercial service are sovereign immune craft. USV/UUV status is not dependant on the status of its launch platform.22

But this assertion must be read in its context in that it is situated in a section of the Commander’s Handbook which examines ‘Other Naval Craft’ as opposed to warships per se. This is further buttressed by the reference to status as ‘engaged exclusively in government non-commercial service’. The implication is not that warship launched and controlled USVs/UUVs have a completely independent status from that of their mother ship. Rather, the implication is that a government USV/UUV which is embarked on and controlled from a vessel which does not otherwise have sovereign immune status (such as a vessel undertaking a commercial operation, and thus not a state vessel), does not therefore suffer removal of its normal sovereign immune status, thus opening the door to any argument that this status has been replaced by a reflected image of the mother ship’s own lesser, non-sovereign immune, status. In summary, I do not believe that a USV/UUV (either semi or fully autonomous) controlled from ashore (as opposed to from a warship) is itself capable of strict characterisation as a warship.
But the reason I see no need to stretch the definition to achieve this aim is that there is an alternative path to assuring the same sovereign immune status for a state’s USV/UUV assets — these vessels are ‘a government ship operated for non-commercial purposes’.23 This concept is not expressly defined in the LOSC 1982, thus there is no essential requirement (as for warships) that it be ‘manned’, ‘commanded by...’ and so on. There is clearly no issue with a USV/UUV enjoying the status of a government vessel on non-commercial service, and because such vessels

21 German Navy, Commander’s Handbook: Legal Bases for the Operations of Naval Forces (2002) s 2.I.1.
22 US Navy, Commander’s Handbook on the Law of Naval Operations (2007) [2.3.6].
23 See, for example, LOSC 1982, Article 31.

!

EAP 11

Unmanned Naval Vehicles at Sea: USVs, UUVs, and the Adequacy of the Law


are entitled to the same sovereign immunity as warships,24 there is no need to stretch the definition of warship to protect this characterisation.

Before turning to examine a discrete area of maritime operations law which — I believe — provides an example of the sufficiency of existing general principles to govern and regulate the use of USVs/UUVs, it is worth briefly noting two regulatory issues raised in the introductory study.

The first is the issue of a proper and sufficient lookout (aimed at collision avoidance) as required under the COLREGS (also known amongst mariners as ‘the Rules of the Road’). In this respect, in my opinion, the combination of technology with existing legal regimes is sufficient to ensure adequate coverage of the field. This is for two reasons.

First, it is anticipated that most USVs will travel at higher speeds than many manned vessels, and this means that the ‘lookout’ impost is in a practical sense reduced. The physics of relative velocity solutions (RELVEL), which all mariners utilise in assessing potential collision situations, are such that vessels travelling at much higher speeds than other traffic can only be brought into collision situations within a narrow arc either side of the ship’s head. Thus the required lookout — whilst it must be maintained all round — can be much more focussed on the physically possible collision arc ahead. This is, of course, not a complete answer — as more vessels attain higher speeds, the relative speed advantage will reduce and the possible arc of collision situations will correspondingly increase on either side of the ship’s head — but it does provide some practical grounds for reduced anxiety as to the immediacy of this issue.

This leads to the second point — USVs that are remote from their controller, or are fully autonomous, will not be ‘dumb’. They will have sensors — such as radar — just as manned vessels do, and will be controlled in accordance with the data these sensors reveal.
To my mind, there is little distinction between a manned vessel navigating through restricted visibility under the control of an Officer of the Watch (OOW) standing on the bridge with his or her head buried in the radar, and a controller doing the same by reading the radar picture delivered instantaneously to their physically remote control station by the ship’s sensor suite. To take the example further, if visibility is highly restricted, the radar is unserviceable, and the OOW is conducting collision avoidance through squinted eyes and peeled ears, listening for the required restricted visibility sound signals of other vessels, there is no reason why a controller ashore who is receiving the same — if not actually enhanced — sensor information from the USV cannot conduct the navigation of the ship with the same degree of accuracy and safety.
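The relative velocity reasoning above can be made concrete with a short numerical sketch of the closest-point-of-approach (CPA) calculation that underlies a mariner’s RELVEL solution. This is purely illustrative: the function name, the units, and the example contact geometry are my own assumptions, not drawn from the COLREGS or from any actual USV control system.

```python
import math

def cpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Closest point of approach from a relative-velocity (RELVEL) solution.

    Positions are (east, north) in nautical miles; velocities in knots.
    Returns (time_to_cpa_hours, cpa_range_nm).
    """
    # Position and velocity of the contact relative to own ship.
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]
    v2 = vx * vx + vy * vy
    if v2 == 0:                              # identical velocities: range never changes
        return math.inf, math.hypot(rx, ry)
    # Time at which the relative range is smallest (clamped: a CPA in the past
    # means the range is already opening, so current range is the minimum).
    t = max(-(rx * vx + ry * vy) / v2, 0.0)
    return t, math.hypot(rx + vx * t, ry + vy * t)

# A fast USV heading north at 40 knots; a contact 10 nm dead ahead crossing east at 12 knots.
t, d = cpa((0.0, 0.0), (0.0, 40.0), (0.0, 10.0), (12.0, 0.0))
# The crossing contact cannot close: CPA is roughly 2.9 nm, a safe pass, because
# the speed differential confines genuine collision geometry to a narrow arc ahead.
```

On this hypothetical geometry the much slower crossing contact never comes within about three miles; only contacts within a narrow arc either side of the ship’s head can drive the CPA towards zero against a much faster vessel, which is the practical basis of the reduced lookout burden described above.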

24 LOSC 1982, Article 32: ‘With such exceptions as are contained in subsection A [Innocent Passage rules applicable to all ships] and in articles 30 and 31 [definition of warships, and coastal State right, in certain circumstances, to require warships to leave the territorial sea], nothing in this Convention affects the immunities of warships and other government ships operated for non-commercial purposes’; Article 96: ‘Ships owned or operated by a State and used only on government noncommercial service shall, on the high seas, have complete immunity from the jurisdiction of any State other than the flag state’.



JLIS Special Edition: The Law of Unmanned Vehicles

Vol 21(2) 2011/2012

Once we arrive at fully autonomous USVs, which make entirely their own collision avoidance decisions on the basis of sensor input and data processing via highly contextualised algorithms, the situation will become more — but not irretrievably — complicated. Whilst there is clearly a potential for it all to go very badly wrong, there is also scope for removing certain forms of human error which plague shipping collision incidents. In many ways, collision avoidance is a much simpler thing to automate than decisions as to the correct characterisation and status of a potential target. And — as with targeting gone wrong — full automation does not necessarily imply that there is no longer a traceable line of responsibility, if not criminal liability.

Finally, I should also register my initial views on two further issues raised by the editors. First, I absolutely agree that there is no requirement to read into the definition of ‘vessel’ any necessity for transporting someone or something characterisable as ‘separate’ from the vessel. The COLREGS definition is designed to cast the broadest possible net of application, for the very sound reason that the larger the pool of craft upon the sea to which they apply, the easier it is to predict their shiphandling and navigational conduct, and thus to prevent collision between them. USVs clearly transport whatever they contain (be it sensors, weapons or other systems) and proceed about the ocean with some purpose at the core of their passage. The COLREGS do not apply to a floating log or a fixed platform because neither has the capacity to make way (that is, to propel itself through the water under some form of power) and thus to act in accordance with the COLREGS. A USV, which can make way (as opposed to merely being underway — which means only that it is not aground, at anchor, or made fast to the shore), is clearly subject to the COLREGS.

The second issue is that of a vessel being described as ‘not under command’ (NUC).
The essence of NUC status is the vessel’s inability to take collision avoidance action due to some special circumstance (such as loss of propulsion), not the presence or absence of a human on board to ‘command’ it. The primary purpose of the NUC identification requirement is to warn other vessels that they must take all necessary collision avoidance action because the NUC vessel cannot take any. The ‘command’ issue concerns the capacity of the vessel to respond to shiphandling orders, not the presence of a human on board. I believe that ‘command’ — in terms of NUC status, and the vessel’s capacity to respond to shiphandling orders — could readily be exercised via remote link, so long as that link is effective in terms of capacity to undertake normal collision avoidance.

3 Poise and Positioning of Maritime Forces

When considering the strategic utilisation of USVs and UUVs in the conduct of maritime operations, utilising existing general principles may, I believe, actually reduce the existing capacity for the conduct of state vessels to provide (as they currently do) such a rich source of dramatic, but legally defensible, consequences. This is because the very capacity for manned warships to act as highly visible ‘provocations’ carries with it a concomitantly high risk of lethal response to transgression. For example, it is clear that under the LOSC 1982 submarines and other underwater vehicles, when exercising the right of innocent passage through a coastal state’s Territorial Sea, must navigate on the surface and show their flag.25

Let us suppose a manned submarine is detected dived in a Territorial Sea. Clearly, there will be a very strong and very reasonable suspicion that it is conducting some operation (such as intelligence collection) which is certainly against the interests of the coastal state (quite apart from the fact that it is a breach of innocent passage).26 Let us then suppose that — because the two states concerned differ as to their assessment of baselines — the submarine considers that it is legitimately dived because it is outside the coastal state’s Territorial Sea. The coastal state, on the other hand, is of the view that the submarine is dived within its Territorial Sea — precisely the situation faced in the 1992 collision between USS Baton Rouge and a Russian submarine.27

As the ICJ has indicated in the Oil Platforms Case,28 an attack on a warship can be readily characterised as an ‘armed attack’ for the purposes of invoking the UN Charter Article 51 right of national self-defence. That is, an attack on a warship is so serious an affront that it can be characterised as more than a mere use of force (which does not necessarily permit an immediate resort to use of force in response), such that it may constitute an armed attack on the flag state, allowing immediate recourse to use of force in response. One of the unstated reasons for this attribution of such a high level of gravity, and thus such a significant level of potential response, is that warships so clearly represent their sovereign, and thus an affront by them and to them carries with it high offence. But another reason is that a warship contains many sailors — human agents of the sovereign. If state A sank state B’s submarine, with loss of all hands, the international political consequences would be most grave indeed.
Let us now suppose that the detected vessel is a UUV — for, let us face it, once it becomes cheaper, less risky, and more effective to use a UUV for such

25 LOSC 1982, Article 20.
26 See, for example, D P O’Connell, The International Law of the Sea: Volume I (I A Shearer ed) (Clarendon Press, 1982) 297: ‘On 26 October 1961, the Soviet Union issued a press release in which it “charged foreign submarines” with violating Soviet territorial waters, and announced that they would be destroyed. The Swedish and Norwegian Navies have launched depth-charges at submerged contacts in the Swedish and Norwegian territorial sea, and the Argentinian Navy has done likewise’.
27 This collision between the dived USS Baton Rouge and a dived Russian Sierra class submarine occurred off Murmansk on 11 February 1992. See, for example, Eugene Miasnikov, ‘Submarine Collision off Murmansk: A Look from Afar’ <http://www.armscontrol.ru/subs/collisions/db080693.htm>; John H Cushman Jr, ‘Two Subs Collide off Russian Port’, New York Times (online), 19 February 1992 <http://www.nytimes.com/1992/02/19/world/two-subs-collide-off-russianport.html>.
28 Oil Platforms (Islamic Republic of Iran v United States of America) (majority judgment) [2003] ICJ Rep 161, [72]: ‘The Court does not exclude the possibility that the mining of a single military vessel might be sufficient to bring into play the “inherent right of self-defence”’ <http://www.icj-cij.org/docket/> (judgments). For an analysis of the case, see Pieter H K Bekker, ‘Oil Platforms (Iran v United States)’ (2004) 98 American Journal of International Law 550.


a task, most navies would seek to do so. It is clearly arguable that the offence created by its submerged presence in a Territorial Sea is equal in terms of the sovereign affront. There is no doubt that the coastal state may thus seek to respond with an immediate use of force aimed at destroying the UUV. But the consequences of sinking the UUV are vastly less deleterious than those associated with sinking a much more expensive, manned submarine, and the loss of life that would attend that act. If a UUV is lost, the raw asset cost, and more importantly the domestic outcry over loss of life which would drive governmental action towards further escalation, will be markedly less. Leaving aside the significant and admirable restraint shown by the South Korean government in the wake of the sinking of the Cheonan on 26 March 2010 by a North Korean submarine-launched torpedo, with the loss of 46 lives,29 there can be no doubt that the sinking of a USV or UUV in the same circumstances would not have generated anything like the possibilities for escalated conflict, belligerent military action, and further loss of life that were so readily available in that situation. Thus whilst increasing use of USVs or UUVs to conduct maritime surveillance could, of course, actually increase the frequency of incidents permitting use of force against these vessels, the significantly lesser consequences that result — in that no crew are killed — also radically reduce the inherent risks of escalation involved with an armed response against the platform. The use of UUVs/USVs in such politically and strategically sensitive and contested maritime operations will serve to transfer risk away from human crews, and thus lessen the ‘stakes’ inherent (for both sides) in the application of legitimate use of force countermeasures.
Thus applying the existing general principles of law — in this case the relevant provisions of the LOS, and the law relating to use of force in accordance with the UN Charter — could actually return a significantly less dangerous, and thus less politically and legally explosive, result.

Conclusion
There is little doubt that the emergence of UV issues is challenging, and will continue to challenge, the law applicable to maritime operations. But — and I hope it is evident that my view is not simply that of a Luddite, but is also one based on principle — I firmly believe that an incrementalist approach is the only viable means of addressing these challenges. It is certainly not defensible to say that we already have all the law we need — but this is because we cannot know for certain what innovations in capability, or public opinion, will emerge in the medium term, not because our current LOAC and LOS regimes are incapable of adequately and sensibly responding. We are not confronted (in my view) with a significant legal vacuum. Nor are we confronted with the imminent consequence that failure to immediately

29 See the Joint Civilian-Military Investigation Group statement, ‘Investigation Result on the Sinking of ROKS “Cheonan”’, 20 May 2010 <http://www.globalsecurity.org/military/library/report/2010/100520_jcmigroks-cheonan/100520_jcmig-roks-cheonan.htm>; Victor Cha, ‘The Aftermath of the Cheonan’, Centre for Strategic and International Studies, 25 May 2010 <http://csis.org/publication/aftermath-cheonan>.


develop new law will result in unregulated, uncontrolled conduct. As I have attempted to indicate, there is great strength, stability, and sense in evolving existing regimes of governance, as opposed to rushing to create new law. The application of existing general principles, rather than the development of untested regimes of detailed governance, should thus be encouraged to the fullest extent possible — at least until we have learned, from practical experience, a little more about the realities that attend UV use in maritime operations.


Seductive Drones: Learning from a Decade of Lethal Operations
COMMENT BY MARY ELLEN O’CONNELL*

Introduction
Late on 1 May 2011, in Washington, DC, President Barack Obama announced that United States Special Forces had killed Osama bin Laden in his home in the quiet Pakistani town of Abbottabad.1 As details emerged,2 human rights experts began to question the legality of the killing, specifically whether bin Laden could have been captured, rather than killed.3 Just a few days later, the New York Times reported that the US had carried out drone strikes in Pakistan and Yemen, killing as many as 15 persons in Pakistan and two in Yemen.4 Plainly, no person killed in the drone strikes in the days following events in Abbottabad was as dangerous as bin Laden.5 Yet, few questioned those

killings, in contrast to bin Laden’s. Could the targeted persons have been captured rather than killed?

A decade after the United States first began using drones for lethal operations, complacency about their use may be taking hold. Many Americans, as well as citizens of other countries, may now accept that killing with drones far from armed conflict hostilities is both a legally and morally acceptable practice. Our growing complacency with respect to drone attacks is the concern of this contribution to the JLIS special issue.

Most legal experts would conclude today that unmanned combat vehicles (UCVs), including unmanned aerial systems or ‘drones’, fit the legal regime already in place for manned weapons systems. The United Kingdom Ministry of Defence’s Joint Doctrine Note 2/11, The UK Approach to Unmanned Aircraft Systems, states: ‘Most of the legal issues surrounding the use of existing and planned systems are well understood and are simply a variation of those associated with manned systems.’6 Yet, is this really the case? Is it true that existing law regulating manned systems is adequate to regulate killing with UCVs? As will be discussed below, after a decade of killing, we have indications that the availability of drones is resulting in resort to military force that would not otherwise occur. Depending on what is behind this phenomenon, legal scholars may need to take a fresh look at the law governing the use of armed UCVs. This contribution aims only to raise awareness of the issue, to invite more investigation, and to urge caution with respect to killing with UCVs based on the information we currently possess.

At least two sets of data indicate a problem. First, we have evidence from psychological studies that killing at a distance using unmanned launch vehicles may lower the inhibition to kill on the part of operators.7 Second, we have a decade of evidence of US presidents deploying military force where such force was unlikely to be used prior to the

* Robert and Marion Short Chair in Law and Research Professor of International Dispute Resolution, Kroc Institute, University of Notre Dame. With thanks for excellent research assistance to Conor McGuinness, Notre Dame JD expected 2012.
1 See The White House, Office of the Press Secretary, ‘Remarks by the President on Osama Bin Laden’ (Press Release, 2 May 2011) <http://www.whitehouse.gov/the-press-office/2011/05/02/remarks-presidentosama-bin-laden>.
2 National Public Radio (NPR) Staff and Wires, ‘Officials: Bin Laden Was Unarmed When He Was Shot’ (3 May 2011) NPR <http://www.npr.org/2011/05/03/135948047/u-s-considers-whether-to-releasebin-laden-photos>.
3 See Stephanie Nebehay, ‘UN Rights Boss Asks US for Facts on bin Laden Killing’, Reuters (online), 3 May 2011 <http://www.reuters.com/article/2011/05/03/us-binladen-un-rightsidUSTRE7425PR20110503>.
4 Pir Zubair Shah, ‘Drone Strike Said to Kill At Least 8 In Pakistan’, New York Times (online), 7 May 2011, A9 <http://www.nytimes.com/2011/05/07/world/asia/07drone.html>; Mark Mazzetti, ‘American Drone Strike in Yemen Was Aimed at Awlaki’, New York Times (online), 6 May 2011 <http://www.nytimes.com/2011/05/07/world/middleeast/07yemen.html>; Tom Finn, ‘I Fear for my Son, Says Father of Anwar al-Awlaki, Tipped as New Bin Laden’, The Guardian (online), 8 May 2011 <http://www.guardian.co.uk/world/2011/may/08/anwar-awlaki-yemen-alqaida>.
5 Mazzetti reports that the target of the drone strike in Yemen was Anwar al-Awlaki, above n 4. Al-Awlaki is generally accepted to be a spiritual leader of the small group of al Qaeda members known as ‘Al Qaeda in the Arabian Peninsula’ or ‘AQAP’ and to run a Jihadist website. Yet, he is not considered one of AQAP’s ‘100 to 200 hardcore fighters’. See Finn, above n 4. See also, Al-Aulaqui v Obama et al, US District Court D.D.C., No 10-cv-01469 (JDB) (2010), especially Expert Declaration of Bernard Haykel.
6 United Kingdom Ministry of Defence, Joint Doctrine Note 2/11, The UK Approach to Unmanned Aircraft Systems (30 March 2011) Ministry of Defence, [502] <http://www.mod.uk/NR/rdonlyres/F9335CB2-73FC-4761-A428DB7DF4BEC02C/0/20110505JDN_211_UAS_v2U.pdf>, citing Tony Gillespie and Robin West, Requirements of Autonomous Unmanned Air Systems set by Legal Issues (14 December 2010) Defence Science and Technology Laboratories <http://www.dodccrp.org/html4/journal_v4n2.html>; See also, Philip Alston, Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Addendum, Study on Targeted Killings (28 May 2010) UN Doc A/HRC/14/24/Add.6, 24 (‘Alston Report’).
7 Mary Ellen O’Connell, ‘Unlawful Killing with Combat Drones: A Case Study of Pakistan, 2004-2009’ (Research Paper No 09-43, Notre Dame Law School Legal Studies, 2009) in Simon Bronitt (ed), Shooting To Kill: The Law Governing Lethal Force in Context (forthcoming) 8-9 <http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1501144>.


development of UCVs.8 This evidence indicates that the availability of UCVs lowers political and psychological barriers to killing. At the same time, an increasing number of international law specialists are arguing that it is lawful to kill terrorism suspects wherever they are found, or to kill them if they are found in ‘weak states’.9 These arguments seem intended to support policy decisions already taken, rather than providing rigorous analysis of the relevant international law.

International law establishes a high bar to lawful resort to lethal force. That high bar is derived from the Just War Doctrine and so reflects not just a legal norm, but a moral norm as well. Much policy on resort to lethal force, by contrast, appears to be related to Realist power politics ideology rather than international legal authority. Within Realism, resort to lethal force, killing, is acceptable to send a message of strength or to promote the perception of power in the form of military power. Even among policy makers not committed to Realist power projection there may be a belief in the utility of military force to suppress terrorism that is not warranted by the record.

Part 1 below describes the past decade of lethal operations with drones. Part 2 contrasts legal analysis based on solid international legal authority with scholarship taking a permissive view of the right to kill. Part 3 considers the evidence that the availability of UCVs is lowering political and psychological barriers to killing. Given this evidence, the contribution concludes that, at the least, international law experts should be demanding strict compliance by states in the case of killing with UCVs. Indeed, far from relaxing the rules, consideration should be given to raising the standards when it comes to deploying armed UCVs.

1 A Decade of Lethal UCV Operations

Unmanned systems are found in myriad forms today, performing innumerable functions.10 The focus here is on unmanned weapons systems, especially the systems that began to come online by the year 2000. It was in that year that the United States apparently adapted the Predator, an unmanned aerial surveillance vehicle, for use in lethal operations.11 The

Predator was re-engineered to carry two Hellfire missiles and subsequently deployed in the Afghanistan War that began on 7 October 2001.12 In 2007, the US began buying the Reaper or MQ-9, a drone designed from the start to be an attack vehicle.13 The Reaper is similar in design and function to the Predator but, among other things, can deploy more firepower.14 The Reaper can carry up to 14 Hellfire missiles, as well as bombs weighing up to 500 pounds. In late 2010, the Air Force announced it would stop buying Predators to focus solely on Reapers.15 It currently has dozens of Reapers, with plans to buy hundreds more during 2011.16

While there is some evidence that the administration of President Bill Clinton had been prepared to use a Predator to kill Osama bin Laden in 2000,17 the media reports that Predators were first used to kill during the Afghanistan War. In November 2001, journalists reported that Mohammed Atef had been killed in a targeted killing mission that likely involved CIA and US Air Force personnel. Atef was considered to be the military head of al Qaeda at the time.18 The US used a drone to launch missiles at his home near Kabul. In addition to Atef, seven other persons were killed in the attack.19 The US has continued to use drones in combat operations in Afghanistan, where the conflict evolved from a war of self-defense in 2001 to a counter-insurgency or civil war by mid-2002.20

8 See below, at EAP 4-9.
9 See below, at EAP 15, 17-22.
10 For an excellent overview of UCVs, especially respecting legal issues, see Brendan Gogarty and Meredith Hagger, ‘The Law of Man over Vehicles Unmanned: The Legal Response to Robotic Revolution on Sea, Land, and Air’ (2008) 19 Journal of Law, Information and Science 73. See also, Peter W Singer, Wired for War: The Robotics Revolution and Conflict in the Twenty-First Century (Penguin, 2009) (the standard descriptive work on UCVs).
11 Eighth Public Hearing of the National Commission on Terrorist Attacks Upon the United States: Counterterrorism Policy (written statement of George Tenet, 24 March 2004) National Commission on Terrorist Attacks Upon the United States, 16 <http://govinfo.library.unt.edu/911/hearings/hearing8/tenet_statement.pdf> (‘Tenet Statement’).
12 Ibid 15.
13 Spencer Ackerman, Air Force is Through with Predator Drones (14 December 2010) Wired <http://www.wired.com/dangerroom/2010/12/air-force-is-through-withpredator-drones/>.
14 Christopher Drew, ‘Drones Are Weapons of Choice in Fighting Qaeda’, New York Times (online), 16 March 2009 <http://www.nytimes.com/2009/03/17/business/17uav.html?_r=1&hp>.
15 Ackerman, above n 13.
16 Ibid.
17 Tenet Statement, above n 11, 15-16.
18 Eric Schmitt, ‘Threats and Responses: The Battlefield; US Would Use Drones to Attack Iraqi Targets’, New York Times (online), 6 November 2002 <http://www.nytimes.com/2002/11/06/world/threats-responses-battlefield-uswould-use-drones-attack-iraqi-targets.html>; See also, Peter Bergen and Katherine Tiedemann, The Drone War, Are Predators our Best Weapon or Worst Enemy? (3 June 2009) The New Republic, 22 <http://www.tnr.com/article/the-drone-war>.
19 Reports Suggest Al Qaeda Military Chief Killed (17 November 2001) CNN <http://articles.cnn.com/2001-11-17/world/ret.atef.reports_1_qaeda-airstriketerrorist-network?_s=PM:asiapcf>.
20 See below n 48 and accompanying text.


In November 2002, the CIA, operating from the tiny African country of Djibouti, used a drone to kill Abu Ali Al Harithi in Yemen.21 The US had evidence tying Harithi to the 2000 attack on the USS Cole in the Port of Aden. The drone carried laser-guided Hellfire missiles, which struck a passenger vehicle carrying Harithi and five others. All six passengers in the vehicle were killed, including a 23-year-old American citizen from near Buffalo, New York. The CIA confirmed the identity of the victims by sending agents to the scene by helicopter right after the attack. The agents rappelled to the ground to collect DNA samples from the bodies.22 US officials said one of the six fatalities was Harithi, the suspect in the Cole attack.23

The US has carried out a number of other military attacks in Yemen since 2002, but it only resorted to drones for a second time in May 2011, during the turmoil of the pro-democracy demonstrations against President Saleh. The target of the May attack was Anwar al-Awlaki, a US citizen and Muslim cleric, considered to be a member of al Qaeda. He is thought to be a propagandist for the group, not a fighter. The drone attack missed Awlaki but killed two other persons.24

The US also used armed drones in the invasion of Iraq that began in March 2003.25 It continued to do so until the end of major combat operations in 2010. In 2004, the US began using drones to attack individuals in Pakistan. During the course of that year the US carried out three attacks. In 2008, the last year of the Bush Administration, there were about 30 drone attacks in Pakistan.26

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
21

Doyle McManus, ‘A US License to Kill, a New Policy Permits the CIA to Assassinate Terrorists, and Officials Say a Yemen Hit Went Perfectly. Others Worry About Next Time,’ Los Angeles Times (online), 11 January 2003, A1, <http://articles.latimes.com/2003/jan/11/world/fg-predator11>. Dina Temple-Raston, The Jihad Next Door: The Lackawanna Six and Rough Justice in the Age of Terror (Public Affairs, 2007) 196-197. McManus, above n 21; Jack Kelly, ‘US Kills Al-Qaeda Suspects in Yemen; One Planned Attack on USS Cole, Officials Say’, USA Today (online), 5 November 2002, A1 <http://www.usatoday.com/news/world/2002-11-04-yemen-explosion_x.htm>; John J Lumpkin, ‘Administration Says That Bush Has, in Effect, a License to Kill; Anyone Designated by the President as an Enemy Combatant, Including US Citizens Can Be Killed Outright, Officials Argue’, St Louis Post-Dispatch (St Louis) 4 December 2002, A12. Margaret Coker, Adam Entous and Julian E Barnes, ‘Drone Targets Yemini Cleric’, The Wall Street Journal (online), 7 May 2011 <http://online.wsj.com/article/SB10001424052748703992704576307594129219756. html>. See also, above n 5 and accompanying text. Brian M Carney, ‘Air Combat by Remote Control’, The Wall Street Journal (online), 12 May 2008 <http://online.wsj.com/article/SB121055519404984109.html#printMode>. Phil Stewart and Robert Birsel, ‘Under Obama, Drone Attacks on the Rise in Pakistan’, Reuters (online), 12 October 2009 <http://www.reuters.com/article/2009/10/12/idUSN11520882>. (‘There have been 39 drone strikes in Pakistan since Obama took office not quite nine months ago, according to a Reuters tally of reports from Pakistani security officials, local

22 23

24

25

26

!

EAP$5!

Seductive$Drones:$Learning$from$a$Decade$of$Lethal$Operations$

121$

President Obama quickly authorised an escalation. Within three days of his inauguration, he authorised CIA drone strikes that killed an estimated 20 people, including the residents of a house that was hit accidently: ‘The blast killed [a] tribal leader’s entire family, including three children, one of them five years old.’27 Despite this tragedy, Obama allowed nearly twice as many attacks in Pakistan in 2009 than there were in 2008. The number doubled again in 2010.28 In 2009, about thirty persons were killed on average in each attack. In 2010, the number killed per attack dropped to 6-7 persons. Nevertheless, of the hundreds of persons killed in 2010, only two were on the CIA’s target list.29 The US has used combat drones in Somalia, probably starting in late 2006, during the Ethiopian invasion when the US assisted Ethiopia in its failed attempt to install a new government in that volatile country. The US has also killed fleeing terrorist suspects using helicopter gunships. Following Ethiopia’s withdrawal, the US has continued to carry out lethal operations in Somalia, but it is unclear how many of these have involved drones as opposed to those that involved manned aircraft.30 The press reported attacks by Special Forces from helicopters in September 2009 that killed four.31 With respect to Somali pirates, the US has used law enforcement methods, even

26 (cont’d) government officials and residents. That compares with 33 strikes in the 12 months before Obama was sworn in on Jan. 20’ [paragraph break omitted]). See also Bergen and Teidemann, above n 18.

27 Jane Mayer, ‘The Predator War: What are the Risks of the C.I.A.’s Covert Drone Program?’, The New Yorker (online), 26 October 2009 <http://www.newyorker.com/reporting/2009/10/26/091026fa_fact_mayer>.
28 Peter Bergen, An Analysis of US Drone Strikes in Pakistan, 2004-2011 (10 January 2011) New America Foundation: Counterterrorism Strategy Initiative <http://counterterrorism.newamerica.net/drones>.
29 Greg Miller, ‘Increased US Drone Strikes in Pakistan Killing Few High-Value Militants’, Washington Post (online), 21 February 2011 <http://www.washingtonpost.com/wp-dyn/content/article/2011/02/20/AR2011022002975.html>.
30 S Bloomfield, Somalia: The World’s Forgotten Catastrophe (9 February 2008) The Independent <http://www.independent.co.uk/news/world/africa/somalia-the-worlds-forgotten-catastrophe-778225.html>; see also US Missile Strike Hits Town in Somalia (3 March 2008) CBS News <http://www.cbsnews.com/stories/2008/03/03/world/main3898799.shtml>; A Strike Against Al-Qaeda’s Hornet’s Nest (1 September 2007) Spiegel Online <http://www.spiegel.de/international/0,1518,458597,00.html>.
31 Karen de Young, ‘Special Forces Raid in Somalia Killed Terrorist With Al-Qaeda Links, US Says’, Washington Post (online), 15 September 2009 <http://www.washingtonpost.com/wp-dyn/content/article/2009/09/14/AR2009091403522.html>. Apparently the US is planning to escalate drone attacks in Somalia. See Mark Mazzetti and Eric Schmitt, ‘U.S. Expands Its Drone War Into Somalia’, New York Times (New York), 2 July 2011, A1.

JLIS Special Edition: The Law of Unmanned Vehicles
Vol 21(2) 2011/2012

sending FBI agents to arrest individuals involved in the killing of American citizens.32 In these operations, drones have been used for surveillance only. The US has also deployed armed drones in Libya in the civil war that began in mid-February 2011. Almost a month after the United Nations Security Council authorised military intervention to protect civilians, the US decided to deploy combat drones.33 President Obama was looking for a way to remain engaged with NATO partners in Libya, while appearing not to be part of yet another armed conflict in a Muslim country. The solution was to send armed drones.34

In sum, media reports establish that during the last decade the US has used UCVs in lethal operations in the following countries: Afghanistan, Iraq, Libya, Pakistan, Somalia, and Yemen. The killings were part of armed conflict hostilities in Afghanistan, Iraq, and Libya, but generally not in Pakistan, Somalia, or Yemen. The total number of persons killed in these attacks is unknown.

2 Law on UCV Lethal Operations

In international law, the use of lethal force is governed by two separate legal regimes: the law of peace and the law of armed conflict. In peace, international law, especially international human rights law, limits the amount of lethal force national authorities may use in responding to violent crime.35 During armed conflict hostilities, authorised persons face fewer restrictions on the use of lethal force than they do outside such hostilities. Acts of terrorism do sometimes occur during armed conflict, permitting the use of lethal force against terrorists under armed conflict rules.

32 Carol Cratty, Sources: FBI Agents in Somalia for Arrest of Alleged Pirate (14 April 2011) CNN <http://articles.cnn.com/2011-04-14/us/somali.pirate.leader.indicted_1_somali-pirate-macay-and-bob-riggle-pirate-leader?_s=PM:US>; Pirate Who ‘Wanted to Kill Americans’ Gets 33 Years for Hijacking US Ship (2 February 2011) msnbc.com <http://www.msnbc.msn.com/id/41615693/ns/us_news-crime_and_courts/t/pirate-who-wanted-kill-americans-gets-years-hijacking-us-ship/>.
33 ‘US Sends Drones to Libya, Battle Rages for Misrata’, Reuters (online), 21 April 2011 <http://www.reuters.com/article/2011/04/21/us-libya-idUSTRE7270JP20110421>. Between April and mid-June 2011, the US had attacked about 30 times in Libya with drones. See Charlie Savage and Thom Shanker, ‘Scores of U.S. Strikes in Libya Followed Handoff to NATO’, New York Times (online), 20 June 2011 <http://www.nytimes.com/2011/06/21/world/africa/21powers.html>.
34 Ibid.
35 Note, for example, the protests over excessive force used by governments against pro-democracy demonstrators in Bahrain, Egypt, Libya, Syria, Tunisia, and Yemen.


In general, however, acts of terrorism are criminal acts, subject to peacetime rules on the use of lethal force.36 Until 9/11, the United States observed this line, still respected by most states, between terrorist crimes and armed conflict hostilities. Indeed, just a few months before 9/11, the US Ambassador to Israel, Martin Indyk, stated on Israeli television, in connection with Israel’s targeted killing of suspected terrorists: ‘The United States government is very clearly on the record as against targeted assassinations. They are extrajudicial killings, and we do not support that.’37 The US position has generally been to treat terrorists as criminals.38 Following attacks by al Qaeda on American targets in 1993, 1998, and 2000, the US used the criminal law and law enforcement measures to investigate, extradite, prosecute and punish persons linked to the attacks.39

36 See Final Report of the Use of Force Committee, The Meaning of Armed Conflict in International Law (August 2010) International Law Association <http://www.ila-hq.org/en/committees/index.cfm/cid/1022>; Study on Targeted Killings, UN Doc A/HRC/14/24/Add.6, 3, 54, 85-86 (‘Alston Report’); Report of the Eminent Jurists Panel on Terrorism, Counter-Terrorism and Human Rights, Assessing Damage, Urging Action (16 February 2009) International Commission of Jurists, 15 <http://www.icj.org/dwn/database/EJP-Report.pdf>; European Commission for Democracy Through Law (Venice Commission), Opinion: On the International Legal Obligations of Council of Europe Member States in Respect of Secret Detention Facilities and Inter-State Transport of Persons (18 March 2006) adopted by the Venice Commission at its 66th Plenary Session (Venice, 17-18 March 2006) Op no 363/2005, CDL-AD(2006)009 <http://www.venice.coe.int/docs/2006/CDL-AD%282006%29009-e.asp>; International Summit on Democracy, Terrorism, and Security, Towards a New Consensus (8-11 March 2005) Club of Madrid, 9-10 <http://www.clubmadrid.org/img/secciones/new_consensus.pdf>.
37 Joel Greenberg, ‘Israel Affirms Policy of Assassinating Militants’, New York Times (online), 5 July 2001, A5 <http://www.nytimes.com/2001/07/05/world/israel-affirms-policy-of-assassinating-militants.html>.
38 The US made an exception to this position when it discovered that Libyan agents had bombed a Berlin disco where American service personnel often went. The US view was that such attacks and indications of future attacks led to a right to use force in self-defence under Article 51 of the UN Charter. For the facts, see Christopher Greenwood, ‘International Law and the United States’ Air Operation Against Libya’ (1987) 89 West Virginia Law Review 933. Such a claim has always been controversial because of the low-level nature of the terrorist attack. Even when an attack is sponsored by a state, it is unclear after the decision in the Nicaragua case whether bombing or other significant military responses are lawful against more minor attacks. The ICJ indicated in Nicaragua that countermeasures are the appropriate response. Military and Paramilitary Activities in and Against Nicaragua (Nicaragua v United States of America) (Judgment of 27 June) [1986] ICJ Rep 14, 103-104 [195], 119 [230] (‘Nicaragua’).
39 After attacks on the US embassies in Kenya and Tanzania in 1998, the US used law enforcement techniques but also bombed sites in Sudan and Afghanistan. These bombings, like the Libya bombing discussed in note 38 above, were controversial. See, eg, Jules Lobel and George Loewenstein, ‘Emote Control: The Substitution of


The British, Germans, Italians, Kenyans, Spanish, Indonesians, Indians, and others have all faced terrorist challenges that they dealt with using law enforcement methods.40 When becoming a party to the 1977 Additional Protocols to the 1949 Geneva Conventions,41 the British appended the following understanding to their acceptance:

It is the understanding of the United Kingdom that the term “armed conflict” of itself and in its context denotes a situation of a kind which is not constituted by the commission of ordinary crimes including acts of terrorism whether concerted or in isolation.42

France made a similar statement on becoming a party to the Protocol.43 In a dramatic policy shift that has not yet been fully explained, the US responded to the 9/11 terrorist attacks with a war in Afghanistan but also with the use of military force against, and detention without trial of, persons with no link to any armed conflict hostilities.44 The United States and the UK justified their initial use of force in Afghanistan on the classic international law doctrines of self-defence and state responsibility: the Taliban government of Afghanistan was linked to al

39 (cont’d) Symbol for Substance in Foreign Policy and International Law’ (2005) 80 Chicago-Kent Law Review 1045, 1071.
40 For a detailed account of the British struggle against the IRA and other counterterrorism efforts, see Louise Richardson, What Terrorists Want: Understanding the Enemy, Containing the Threat (Random House, 2006).
41 International Committee of the Red Cross (ICRC), Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), opened for signature 8 June 1977, 1125 UNTS 3 (entered into force 7 December 1978) (‘AP I’) <http://www.unhcr.org/refworld/docid/3ae6b36b4.html>; ICRC, Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of Non-International Armed Conflicts (Protocol II), opened for signature 8 June 1977, 1125 UNTS 609 (entered into force 7 December 1978) (‘AP II’) <http://www.unhcr.org/refworld/docid/3ae6b37f40.html>.
42 See AP I, Reservation/Declaration (2 July 2002) ICRC <http://www.icrc.org/ihl.nsf/NORM/0A9E03F0F2EE757CC1256402003FB6D2?OpenDocument>.
43 See AP I, Reservation/Declaration (11 April 2001) ICRC (in French) <http://www.icrc.org/ihl.nsf/NORM/D8041036B40EBC44C1256A34004897B2?OpenDocument>.
44 For a general discussion of the US position on suppressing terrorism before and after 9/11, see Mary Ellen O’Connell, ‘The Choice of Law Against Terrorism’ (2010) 4 Journal of National Security Law & Policy 343 <http://ssrn.com/abstract=1654049>. Many assumed the expansive claims to belligerent rights would end with the presidency of George W Bush. See Christopher Coker, Ethics and War in the 21st Century (Routledge, 2008) x. With respect to killing with drones, the Obama administration has gone further than its predecessor. See O’Connell, above n 44, 343-344.


Qaeda and to al Qaeda’s attacks in the US on 9/11. Weaknesses in this argument have emerged subsequently, but as a matter of the international law governing lethal force it is generally sound. On the evidence available on 7 October 2001, the resort to major military force by the United States and the United Kingdom in Afghanistan was based on the right of self-defence under Article 51 of the United Nations Charter.45 Article 51 permits the use of major military force on the territory of another state if that state is responsible for a significant armed attack.46 In Resolution 1368, the UN Security Council found the 9/11 attacks significant enough to trigger a right to respond under Article 51, but the Council did not specify against what state force in self-defence could be used. After some days, the US and UK determined that Afghanistan was responsible for the 9/11 attacks because of its support of and cooperation with al Qaeda.47 The war of self-defence in Afghanistan began on 7 October 2001 and ended in 2002 when Hamid Karzai became Afghanistan’s leader following his selection at a loya jirga of prominent Afghans.48 Today, the US, UK, Australia, and other international forces are in Afghanistan at the invitation of President Karzai in an attempt to repress an insurrection. The fighting in Afghanistan, whether initially as an international armed conflict or today as a civil war, has remained significant enough to justify the use of battlefield weapons by all sides in the parts of Afghanistan where that fighting has taken place.49 At least in Afghanistan, therefore, the US use of drones has been justifiable so long as the rules governing battlefield conduct have been observed.50 The radical departure from accepted law occurred a year later in Yemen. There, in 2002, the US killed six persons through the use of a drone to launch

45 For a detailed discussion of treaties, customary rules, general principles, as well as International Court of Justice decisions relevant to this law, see Mary Ellen O’Connell, ‘Preserving the Peace: The Continuing Ban on War Between States’ (2007) 38 California Western International Law Journal 41; and Mary Ellen O’Connell, ‘Lawful Self-Defense to Terrorism’ (2002) 63 University of Pittsburgh Law Review 889, 889-904.
46 See, eg, Nicaragua [1986] ICJ Rep 14, 102 [193].
47 O’Connell, ‘Lawful Self-Defense to Terrorism’, above n 45, 901-902.
48 See President Hamid Karzai (2006) The Embassy of Afghanistan, Washington DC <http://www.embassyofafghanistan.org/president.html>.
49 This has certainly been the case through mid-2011, but the situation may be changing as the media report at least preliminary peace talks. See, eg, Quil Lawrence, Talk of Peace in Afghanistan is a Matter of Trust (17 April 2011) NPR <http://www.npr.org/2011/04/17/135486374/negotiating-afghan-peace-with-the-taliban-quietly>.
50 See the investigation into a drone strike in February 2010 that resulted in the deaths of a number of civilians: Christopher Drew, ‘Study Cites Drone Crew in Attack on Afghans’, New York Times (online), 10 September 2010 <http://www.nytimes.com/2010/09/11/world/asia/11drone.html> (the Pentagon report found the Predator crew exercised ‘poor judgment’).


Hellfire missiles, despite the fact that the US was not engaged in armed conflict hostilities in that country. Prior to the strike, the FBI had been working with Yemeni authorities to apprehend and prosecute the al Qaeda members in Yemen suspected of attacking the USS Cole in 2000. The US had employed law enforcement techniques cooperatively with Yemeni authorities with considerable success.51 The drone attack signalled a shift to the use of military force. The US Air Force reportedly had legal concerns with the attack, which was carried out by the CIA.52 In January 2003, the United Nations Commission on Human Rights received a report on the Yemen strike from its special rapporteur on extrajudicial, summary, or arbitrary killing. The rapporteur concluded that the strike constituted ‘a clear case of extrajudicial killing.’53 US State Department lawyer Michael Dennis published an article in the American Journal of International Law taking issue with the UN finding and defending the ‘global war on terror.’ Dennis wrote: ‘The United States’ response to the … Yemen allegations has been that its actions were appropriate under the international law of armed conflict and that the Commission and its special procedures have no mandate to address the matter.’54 Dennis’s position was based on his argument that the United States could treat persons far from any battlefield as combatants if they could be tied in some way to al Qaeda.
US National Security Adviser Condoleezza Rice explained in a television interview that the Yemen attack was justified because the United States was in a ‘new kind of war’ to be fought on ‘different battlefields.’55 The Deputy General Counsel of the Department of Defense for International Affairs at the time, Charles Allen, also explained that the US was in a global war against certain persons, wherever found, and may target ‘Al Qaeda and other international terrorists around the world and those who support such terrorists without warning.’56 He emphasised that the existence of the ‘war’ depends on the person targeted, not the existence of armed hostilities. Thus, for Allen the US has the legal

51 Ali H Soufan, ‘Scenes from the War on Terrorism in Yemen’, New York Times (online), 2 January 2010 <http://www.nytimes.com/2010/01/03/opinion/03soufan.html>.
52 McManus, above n 21; see also Jeremy Scahill, The Dangerous US Game in Yemen (18 April 2011) The Nation <http://www.thenation.com/article/159578/dangerous-us-game-yemen>.
53 Asma Jahangir, Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Civil and Political Rights, Including the Questions of Disappearances and Summary Executions, UN Doc E/CN.4/2003/3 (13 January 2003) [39].
54 Michael J Dennis, ‘Human Rights in 2002: The Annual Sessions of the UN Commission on Human Rights and the Economic and Social Council’ (2003) 97 American Journal of International Law 364, 367, n 17.
55 Fox Broadcasting Company, Fox News Sunday, 10 November 2002 (Condoleezza Rice) <http://www.foxnews.com/story/0,2933,69783,00.html>.
56 Anthony Dworkin, Official Examines Legal Aspects of Terror War (on file with author).


right to target and kill an al Qaeda suspect on the streets of Hamburg, Germany, or in any place where such suspects are located.57 Just a few months after the UN report on Yemen, the US, UK, Australia, and Poland invaded Iraq. This was a conventional use of military force. The countries involved did not try to establish a formal link between their perceived right to use force and the ‘global war on terror.’ US Vice President Cheney did speak frequently of a link between Saddam Hussein and al Qaeda, but the US, UK, and Australia all sent letters to the UN Security Council explaining that they were enforcing Security Council resolutions on Iraqi weapons of mass destruction that had been adopted at the end of the Gulf War in 1991.58 International law scholars generally agree that the coalition required fresh authority to use force in 2003 to enforce the Council’s resolutions. Without that authorisation, the use of force was unlawful. Once in an armed conflict, however, resort to weapons is governed by the jus in bello. Even states that have resorted to force unlawfully are bound to fight such conflicts according to in bello rules. Because the invading coalition faced a well-armed and organised Iraqi military, it was permitted to resort to battlefield weapons, such as drones carrying Hellfire missiles. As armed conflict raged in Iraq and Afghanistan during 2004, the Bush administration began a covert program of drone strikes in Pakistan. When President Obama came to office in 2009 and dramatically increased the drone attacks in Pakistan, the pressure to provide a legal justification became intense. People such as counterterrorism expert David Kilcullen were openly discussing the drone campaign in Pakistan and strongly criticising it, pointing to the number of persons being killed who were not on the CIA’s target list.
President Obama had campaigned against the war on terrorism, so it remained to be seen throughout 2009 how the new administration would justify the use of military force in a country that had not attacked the US and had only occasionally requested US assistance in military operations against militant groups. The legal justification given by the US for its escalating drone attacks came in March 2010, in a major speech at the American Society of International Law (ASIL) by the Legal Adviser to the State Department, Harold Koh. Koh explained that while the Obama administration had renamed the war on terror, the policy would basically continue as under the Bush administration with respect to detention without trial. Covert targeted killing would increase, however.59 The new label for the effort became the

57 Anthony Dworkin, Law and the Campaign against Terrorism: The View from the Pentagon (16 December 2002), 6 (on file with author).
58 See, eg, Letter dated 20 March 2003 from the Permanent Representative of the United States of America to the United Nations addressed to the President of the Security Council, UN Doc S/2003/351 (21 March 2003).
59 Scahill, above n 52; see also Charlie Savage, ‘Obama’s War on Terror May Resemble Bush’s in Some Areas’, New York Times (online), 18 February 2009 <http://www.nytimes.com/2009/02/18/us/politics/18policy.html?r=2&hp=&pagewanted=print>.


‘armed conflict with al-Qaeda, as well as the Taliban and associated forces…’60 Koh cited only one case to defend the killing of named individuals far from armed conflict hostilities: the killing of Japan’s Admiral Yamamoto during World War II.61 Koh went on to indicate, however, that targeted killing would not be carried out everywhere that the US might find members of these groups. Rather, the US would only carry out attacks in weak states, not places like the United States or Germany where, presumably, law enforcement efforts could be used.62 Koh cited no authority for the right to use military force on the territory of a state because it is ‘weak’.63 The UN Charter does not have an exception to the Article 2(4) prohibition on the use of force for ‘weak states.’ The International Court of Justice has never found such an exception in its many cases dealing with the use of force. The only support comes from authors, who typically look at a handful of examples that usually lack opinio juris, evidence that the practice is undertaken as a matter of legal right.64 By contrast, the ICJ has consistently ruled that force used in self-defence may only be carried out on the territory of a state responsible for a significant armed attack if that state ordered the attack or controls the group that carried it out. This is the clear conclusion from ICJ decisions in the 1949 Corfu Channel case,65 the 1986 Nicaragua case,66 the 1996 Nuclear Weapons case,67 the 2003 Oil Platforms case,68 the 2004 Wall case,69 the 2005 Congo case,70

60 Harold Hongju Koh, The Obama Administration and International Law (25 March 2010) Annual Meeting of the ASIL, US Department of State <http://www.state.gov/s/l/releases/remarks/139119.htm>. At the time of writing, July 2010, the hostilities in Iraq had so subsided that the US military were following peacetime rules of engagement. Dean Koh made clear that new terminology was being used in an answer to a question from the author at the ASIL Annual Meeting on 26 March 2010. It is not clear, however, that the new terms refer to a substantive change. The exchange was recorded and broadcast on NPR. See Ari Shapiro, US Drone Strikes Are Justified, Legal Adviser Says (26 March 2010) NPR <www.npr.org/templates/story/story.php?storyId=125206000>.
61 The Yamamoto case was not uncontroversial at the time. Diane Amann relates that at least one intelligence officer aware of the attack at the time it occurred, former US Supreme Court Justice John Paul Stevens, today has doubts as to whether it was lawful. See Diane Marie Amann, ‘John Paul Stevens, Human Rights Judge’ (2006) 74 Fordham Law Review 1569, 1582-1583. Today it would be in conflict with the basic treaties that form the present law on the use of force, namely the 1945 United Nations Charter and the 1949 Geneva Conventions.
62 Koh, above n 60.
63 For additional authority against the Koh position, see above n 36 and EAP 18-21.
64 See, eg, Theresa Reinold, ‘State Weakness, Irregular Warfare, and the Right to Self-Defense Post-9/11’ (2011) 105 American Journal of International Law 244, 284.
65 Corfu Channel (United Kingdom v Albania) (Merits) [1949] ICJ Rep 4.
66 Nicaragua [1986] ICJ Rep 14.
67 Legality of the Threat or Use of Nuclear Weapons (Advisory Opinion) [1996] ICJ Rep 226, 245 (‘Nuclear Weapons’).


and the 2007 Bosnia v Serbia case.71 In the Congo case the ICJ ruled against Uganda, which had claimed the right to use armed force on the territory of the Congo against non-state actor armed groups after years of cross-border incursions. The ICJ found that the Congo was not legally responsible for the armed groups: it did not control them. Even the Congo’s failure to take action against the armed groups did not give rise to any right by Uganda to cross into the Congo to attack.72 The ICJ stated it was not deciding a case of ‘large scale attacks’ on Uganda. Such attacks would, indeed, have constituted a different case, one where, presumably, the militant group controlled territory as a de facto government or had ties of responsibility to the de facto government. Such factors would create a situation like the Taliban’s control of most of Afghanistan in 2001 or the Kurds’ control of northern Iraq. In late 2007, both Turkey and Iran justified incursions into northern Iraq because of attacks by Kurds for which the Kurdish de facto government was responsible.73 In contrast, Pakistan and Yemen exercise as much control of their territory as, or more than, the Congo did at the time of Uganda’s attacks. Somalia basically lacks a controlling group, meaning that no significant armed attacks are likely to be launched from that country, and none have been. The only type of permissible force that may be waged on the territory of Somalia would be law enforcement measures of the type the US is employing against pirates.74 One year after Koh’s speech, the American Journal of International Law again published an article supportive of the US’s targeted killing policy.75 Unlike the article by Dennis arguing in support of the ‘global war on terror’, the new article finds a right to use military force on the territory of ‘weak’ states. The

68 Oil Platforms (Islamic Republic of Iran v United States of America) (Judgment) [2003] ICJ Rep 161, 190-191 [61]-[64].
69 Legal Consequences of the Construction of a Wall in the Occupied Palestinian Territory (Advisory Opinion) [2004] ICJ Rep 207, 215 [33]-[34] (Separate Opinion of Judge Higgins).
70 Armed Activities on the Territory of the Congo (Democratic Republic of the Congo v Uganda) (Judgment) [2005] ICJ Rep 168, 222-223 [146], 268 [301] (‘Congo’).
71 Application of the Convention on the Prevention and Punishment of the Crime of Genocide (Bosnia and Herzegovina v Serbia and Montenegro) (Judgment) [2007] ICJ Rep 43, 204-205 [391] (‘Bosnia v Serbia’).
72 Congo [2005] ICJ Rep 168, 222-223 [146], 268 [301]. See also James Thuo Gathii, ‘Irregular Forces and Self-Defense Under the UN Charter’ in Mary Ellen O’Connell (ed), What is War? An Investigation in the Wake of 9/11 (Martinus Nijhoff/Brill, forthcoming, 2011).
73 See Mary Ellen O’Connell, The Power and Purpose of International Law: Insights from the Theory and Practice of Enforcement (Oxford University Press, 2008) 183-184.
74 Cratty, above n 32.
75 See Theresa Reinold, ‘State Weakness, Irregular Warfare, and the Right to Self-Defense Post-9/11’ (2011) 105 American Journal of International Law 244, 284.


author, Theresa Reinold, asserts that there is a ‘consensus’ that the United Nations Charter rules on self-defence are ‘inadequate.’76 She provides no citation for this consensus but goes on to examine uses of force by the US, Russia, Turkey, Colombia, Israel, and Uganda against weak states. With respect to the Congo case, she seeks to minimise its clear holding against attacking states not responsible for low-level armed incursions by emphasising the one sentence of obiter dictum discussed above, namely that the decision does not reach ‘large-scale attacks’ by irregulars. She also emphasises the separate opinions of Judges Kooijmans and Simma for their criticism of the majority’s control test for attributing the acts of militant groups to the territorial state.77 As a matter of legal authority in international law, however, the separate opinions of judges do not outweigh the majority. This is especially true respecting the control test for attributing acts to sovereign states, given that the test was reconfirmed two years later in the ICJ’s 2007 Bosnia v Serbia case.78 In addition to the lack of affirmative authority for her position, Reinold omits consideration of the principles of necessity and proportionality. The general principle of necessity requires some showing that the use of military force is a last resort and can accomplish a defensive purpose. The counterterrorism literature casts considerable doubt on the effectiveness of military force to suppress terrorism.79 Even President Obama apparently knows the limits of

76 Reinold, ibid, 246, 275. Reinold does not consider that in 2005, after the US declared a ‘global war on terror’, the international community reiterated that the rules of the UN Charter remain binding as written. The World Summit Outcome document includes these paragraphs, never mentioned by Reinold:
77. We reiterate the obligation of all Member States to refrain in their international relations from the threat or use of force in any manner inconsistent with the Charter. … [W]e are determined to take effective collective measures for the prevention and removal of threats to the peace and for the suppression of acts of aggression or other breaches of the peace, and to bring about by peaceful means, in conformity with the principles of justice and international law, the adjustment or settlement of international disputes or situations that might lead to a breach of the peace.
78. We reiterate the importance of promoting and strengthening the multilateral process and of addressing international challenges and problems by strictly abiding by the Charter and the principles of international law, and further stress our commitment to multilateralism.
See 2005 World Summit Outcome, GA Res 60/1, UN GAOR, 60th sess, Agenda Items 46 and 120, UN Doc A/60/L.1 (15 September 2005) 21-22.
77 Ibid, 260-262.
78 In other words, the ICJ stated clearly that before a state could be held legally responsible for a violation of international law committed by a non-state actor, the state would have to control the actor. [2007] ICJ Rep 43, 204-205 [391]-[393].
79 See, eg, John Mueller, How Dangerous are the Taliban? (15 April 2009) Foreign Affairs <http://www.foreignaffairs.com/print/64932>; Seth G Jones and Martin C Libicki, How Terrorist Groups End: Lessons for Countering al Qa’ida (2008) Rand

!

EAP$15!

Seductive$Drones:$Learning$from$a$Decade$of$Lethal$Operations$


drone attacks: ‘Despite the CIA’s love affair with unmanned aerial vehicles such as Predators, Obama understood with increasing clarity that the United States would not get a lasting, durable effect with drone attacks.’

Reinold’s effort raises the question of why she and other authors search for exceptions to the prohibition on force. Why, in the words of Louis Henkin, search for loopholes in the Charter and a blurring of its bright lines? Why is it in the interest of the international community to dilute the UN Charter prohibition on the use of force in our violent age? Henkin demanded leaving the Charter as invulnerable as can be

to self-serving interpretations and to temptations to conceal, distort, or mischaracterize events. Extending the meaning of “armed attack” and of “self-defense,” multiplying exceptions to the prohibition on the use of force and the occasions that would permit military intervention, would undermine the law of the Charter and the international order established in the wake of world war.80

The authors who search for exceptions may be motivated to do so by a different philosophical position than the one upon which the Charter was based. The drafters of the Charter inherited an understanding that lawful and moral resort to armed force is restricted to true situations of necessity, where the use of military force will accomplish a lawful military objective.81 The alternative view asserts a right to use force against non-state actors regardless of the territorial state’s responsibility or evidence that military force is necessary. This view may be based on the Realist philosophy of power projection. Realists advocate resorting to force to send a message of strength.82 For many Realists, state-sponsored killing beyond the nation’s

Corporation <http://www.rand.org/pubs/monographs/2008/RAND_MG7411.pdf>; Richardson, above n 40; Mary Ellen O’Connell, ‘Enhancing the Status of Non-State Actors Through a Global War on Terror’ (2005) 43 Columbia Journal of Transnational Law 435. See also, Dennis C Blair, ‘Drones Alone Are Not the Answer’, New York Times (online), 14 August 2011 <http://www.nytimes.com/2011/08/15/opinion/drones-alone-are-not-the-answer.html?_r=2&ref=opinionv>.
80 Louis Henkin, ‘Use of Force: Law and US Policy’ in Louis Henkin et al (eds), Might v. Right, International Law and the Use of Force (Council on Foreign Relations Press, 1989) 37, 69.

81 O’Connell, above n 73, 215-216.

82 See, eg, Charles Krauthammer, ‘Democratic Realism: An American Foreign Policy for a Unipolar World’ (Paper presented at the 2004 Irving Kristol Lecture, Washington, DC, 12 February 2004) <http://www.aei.org/docLib/20040227_book755text.pdf> (Krauthammer asserts his view that American military power is what keeps the ‘international system from degenerating into total anarchy,’ at 10). The resilience of the Realist philosophy can be partially explained — or at least demonstrated — by the fact that two of the most widely read scholars in the field of international relations in the United States are Hans Morgenthau and Kenneth


JLIS Special Edition: The Law of Unmanned Vehicles

Vol 21(2) 2011/2012

borders may be justified to signal national power, which is, of course, defined as military power.83 If a terrorist group attacks, proponents of Realism argue that a demonstration of military power must be made to counter any perception of weakness by the victim. In the days after 9/11, before it was confirmed that al Qaeda was indeed responsible for the attacks, Deputy Secretary of Defense Paul Wolfowitz argued for ‘ending’ states that support terrorism.84 Former Secretary of State Lawrence Eagleburger said, ‘You have to kill some of these people; even if they were not directly involved, they need to be hit.’85 American leaders and academics imbued with this way of thinking have, especially since the end of the Cold War, chafed against international law limits on the use of force.86 Realist ideology may be clouding understanding of the limited utility of military force87 and the

Waltz. See, generally, Kenneth Waltz, Man, the State, and War (Columbia University Press, 1959); and Hans Morgenthau, Politics Among Nations: The Struggle for Power and Peace (Knopf, 1948). See also, O’Connell, ibid, 62-78.
83 But see John Mearsheimer, a leading Realist scholar at the University of Chicago: Now, the final issue that you raised is the question of what I think of about how the Bush administration is waging the war on terrorism. My basic view, which may sound somewhat odd coming from a Realist, is that the Bush administration's policy is wrong-headed because it places too much emphasis on using military force to deal with the problem, and not enough emphasis on diplomacy. I think that if we hope to win the war on terrorism, or to put it in more modest terms, to ameliorate the problem, what we have to do is win hearts and minds in the Arab and Islamic world. Harry Kreisler, Through the Realist Lens: Conversation with John Mearsheimer, Professor of Political Science, University of Chicago (8 April 2002) <http://globetrotter.berkeley.edu/people2/Mearsheimer/mearsheimercon5.html>.

84 Roy Eccleston, ‘Iraq the Next Target for US Hawks – War on Terror’, Australian Financial Review (Melbourne), 25 September 2001, 9.

85 Ziauddin Sardar, ‘Where is the Hand of my God in This Horror?’ New Statesman (online), 17 September 2001 <http://www.newstatesman.com/200109170006>.

86 The Israeli scholar David Kretzmer, for example, calls for ‘realistic standards of conduct for states involved in armed conflicts with terrorist groups’ in David Kretzmer, ‘Targeted Killing of Suspected Terrorists: Extra-Judicial Executions or Legitimate Means of Defence?’ (2005) 16(2) European Journal of International Law 171, 212. For him ‘realistic standards’ are the legal right to kill in situations beyond a strict application of the international law rules. Robert Chesney divides the scholars writing on the use of force in the terrorism context into two groups: ‘strict’ and ‘broad’ constructionists of the rules. See Robert Chesney, ‘Who May Be Killed? Anwar Al-Awlaki as a Case Study in the International Legal Regulation of Lethal Force’ (forthcoming, 2011) Yearbook of International Humanitarian Law <http://ssrn.com/abstract=1754223>. Chesney, like Kretzmer, clearly favours ‘broad’ or relaxed standards for killing but provides a detailed discussion of the work of those who support a strict reading of the law on killing. See citations to the work of both categories in Chesney.

87 See on this topic generally, General Rupert Smith, The Utility of Force: The Art of War in the Modern World (Penguin, 2007).


wisdom of rules based on the moral principle of necessity. Further, Realist power politics has become so robust in foreign policy thinking in the United States that Americans seem to pay little attention to the moral arguments against killing. Indeed, the arguments favouring the right to attack non-state actor groups rarely, if ever, address the necessity issue. To that extent, the argument for killing with drones is not even put to the test that many demanded for torture:

John Rizzo, who served as the CIA’s top lawyer during the Bush administration, said he found it odd that while Bush-era interrogation methods like waterboarding came under sharp scrutiny, “all the while, of course, there were lethal operations going on, and think about it, there was never, as far as I could discern, ever, any debate, discussion, questioning…[of] the United States targeting and killing terrorists.”88

As just indicated, apparently President Obama knows what counter-terrorism experts have been saying consistently since 9/11: military force such as drone attacks does not suppress terrorism. But the use of drones may not be intended for that exact purpose. They may be intended for retribution or intimidation, not suppression. Regardless, ten years after 9/11 and the constant use of drones, the challenge of terrorism for the US appears to have grown stronger, and no one speaks any longer of the US as the ‘sole superpower’.

3 The Ease of Killing with Drones

What might explain the growing use of UCVs beyond armed conflict hostilities despite the law and limited results against terrorism? Is the very possession of the technology leading to decisions to kill in situations where, without it, a non-lethal approach would be taken? It was suggested by an audience member at the US Air Force’s Air University during a discussion of drones in September 2010 that Presidents Bush and Obama might not consider the use of drones to actually amount to the use of military force. Certainly we have numerous indications after a decade of lethal operations with UCVs that many in the United States do not view killing with drones to be as serious a matter as killing carried out by ground troops, piloted planes, manned naval vessels, or a CIA agent using a knife. One former CIA lawyer has observed: ‘People are a lot more comfortable with a Predator strike that kills many people than with a throat-slitting that kills one.’89 This section will consider various indications that the possession of UCV technology is lowering the political and psychological barriers to killing, making it easier to overlook the legal, policy and ethical limits as well.

88 Adam Entous, ‘How the White House Learned to Love the Drone’, Reuters (online), 18 May 2010 <http://www.reuters.com/article/2010/05/18/us-pakistan-drones-idUSTRE64H5SL20100518>.

89 Mayer, above n 27.


In the UK report on UAVs, mentioned above, the authors raise the concern that UAV technology may be weakening the barriers to killing:

[O]ne of the contributory factors in controlling and limiting aggressive policy is the risk to one’s own forces. … For example, the recent extensive use of unmanned aircraft over Pakistan and Yemen may already herald a new era. That these activities are exclusively carried out by unmanned aircraft, even though very capable manned aircraft are available, and that the use of ground troops in harm’s way has been avoided, suggests that the use of force is totally a function of the existence of an unmanned capability—it is unlikely a similar scale of force would be used if this capability were not available.90

A national leader knows he can deploy drones without his own citizens coming home in body bags. This fact plainly makes the decision to kill easier for political reasons, especially in the United States, where the body bag count that went on for years during the Vietnam War still haunts politicians and citizens alike. One proof of this was the decision in the Bush Administration to ban photographs at Dover Air Force Base, where service personnel killed overseas arrive back in the US.

A recent, dramatic example of political leaders seeing drones as different from other forms of military force is the fact that President Obama moved away from manned aircraft to drones in Libya. President Obama had promised that the US would only be using military force in Libya for a few days. As those few days stretched into weeks, he shifted from manned aircraft to drones. With drones he could still assure US allies that the US was making a major military commitment while at the same time assuring the American people that the US was not really involved in another armed conflict.
In a debate about whether Mr Obama was exceeding his legal authority in Libya by not consulting with Congress after 60 days of military involvement, one Congressman asked: ‘“Could one argue that periodic drone strikes do not constitute introducing forces into hostilities since the strikes are infrequent” and “there are no boots on the ground?”’91 This last point, no boots on the ground, has been a significant political factor in the use of drones in Pakistan. Pakistan’s government has restricted the US

90 United Kingdom Ministry of Defence, above n 6, 5-9.

91 Charlie Savage, ‘Libya Effort is Called a Violation of War Act’, New York Times (online), 25 May 2011 <http://www.nytimes.com/2011/05/26/world/middleeast/26powers.html?_r=1&emc=eta1>. US State Department Legal Adviser Harold Koh testified before Congress that the US was not involved in ‘hostilities’ in Libya for purposes of the War Powers Act because the US was not using ground forces but rather primarily unmanned drones in its attacks. See Jennifer Steinhauer, ‘Obama Adviser Defends Libya Policy to Senate’, New York Times (online), 28 June 2011 <http://www.nytimes.com/2011/06/29/us/politics/29powers.html?_r=1>.


military presence on its territory to a small number of military trainers.92 To comply with this mandate, the US’s drone attacks in Pakistan have been the responsibility of the CIA, operating from CIA headquarters in Langley, Virginia, and in Pakistan itself. Until the Obama administration radically increased the number of attacks in 2009, the CIA attempted to keep the attacks quiet, something it could do more effectively than the military. Keeping the drone program covert has also allowed the US to hint that Pakistani officials have secretly given consent to the strikes.93 As relations with Pakistan have deteriorated, the US has seemed less interested in honouring Pakistan’s wishes and has conducted strikes in Pakistan with helicopter gunships.94 In the Abbottabad operation, President Obama authorised the Navy SEALs to fight against any attempt by Pakistan to prevent their helicopters from leaving.95

Ironically, in Yemen, the political calculation has been different. In reporting on the 2002 CIA drone strike that killed six persons, the media related that Yemen’s President Saleh had acquiesced in the strike. We now know from WikiLeaks that Saleh banned further drone attacks but permitted manned vehicle strikes. He would then claim that Yemen itself had carried out the attacks.96 The choice of launch vehicle seems tied to the fact that Yemen does not possess armed drones. In early 2011, Saleh faced serious challenges to his presidency from both peaceful demonstrators and armed militants. In the midst of this chaos, the US launched the first drone strike since 2002.97 There

92 ‘Pakistan Trims US Military Training Mission’, Reuters (online), 26 May 2011 <http://in.reuters.com/article/2011/05/25/idINIndia-57285220110525> (the number of US personnel involved with training has ranged between 200 and 300 individuals).

93 The reality is more complicated as Pakistan’s military, intelligence services, and elected officials have likely taken different positions at different times. Of the hundreds of strikes, very few, if any, would have had the express permission of Pakistan’s elected officials for use in armed conflict hostilities in Pakistan and carried out by uniformed members of the United States’ armed forces. Scott Shane, ‘C.I.A. to Expand Use of Drones in Pakistan’, New York Times (online), 3 December 2009 <http://www.nytimes.com/2009/12/04/world/asia/04drones.html?ref=unmannedaerialvehicles>.

94 Issam Ahmed, ‘NATO Helicopter Strike on Pakistan Shows New Strategy of “Hot Pursuit”’, Christian Science Monitor (online), 30 September 2010 <http://www.csmonitor.com/World/Asia-South-Central/2010/0930/NATO-helicopter-strike-on-Pakistan-shows-new-strategy-of-hot-pursuit>.

95 Eric Schmitt, Thom Shanker and David E Sanger, ‘US was Braced for Fight with Pakistanis in bin Laden Raid’, New York Times (online), 9 May 2011 <http://www.nytimes.com/2011/05/10/world/asia/10intel.html>.

96 Scahill, above n 52.

97 Jeb Boone, ‘US Drone Strike in Yemen is First Since 2002’, Washington Post (online), 5 May 2011 <http://www.washingtonpost.com/world/middle-east/yemeni-official-us-drone-strike-kills-2-al-qaeda-operatives/2011/05/05/AF7HrzxF_story.html>.


was no point in trying to help Saleh save face any longer, and the rapidly changing situation made it safer for US personnel to use a drone. By May 2011, following the killing of bin Laden, US officials may also have come to believe that they had made their political, legal, and moral case for killing terrorism suspects with drone attacks in weak states.

Related to the political reasons for killing with drones are the psychological factors. We know that technological distance from a victim makes the decision to kill easier for the person actually controlling the weapon. It may make the decision to kill easier for those in the operator’s chain of command, as well, if they know they are not risking their own nationals’ lives along with the enemies’. Proponents of combat drones also argue that drones are far more precise than alternatives such as ship-launched cruise missiles or bomber aircraft attacking from high altitude.98 John Arquilla, Executive Director of the Information Operations Center at the Naval Postgraduate School, has said, ‘I will stand my artificial intelligence against your human any day of the week and tell you that my A.I. will pay more attention to the rules of engagement and create fewer ethical lapses than a human force.’99 Kenneth Anderson has also argued that robots will make better, more accurate decisions in life and death matters whether the context is health care or war fighting.100 The message is that it would be immoral not to use UCVs. In the simple world of politics that message quickly morphs into the idea that UCVs must be used.

CIA Director Leon Panetta demonstrates the slippage in thinking. He has emphasised that using drones is lawful because he asserts they are ‘precise’.101 He has not apparently spoken of why it is lawful to resort to them in the first place, but his thinking was revealed in May 2009, when he said drone attacks are ‘precise’ and cause only ‘limited collateral damage’.102

98 Kenneth Anderson, Am I Arguing a Strawman about Drones, Civilian Collateral Damage, and Discrimination? (27 April 2011) Opinio Juris <http://opiniojuris.org/2011/04/27/am-i-arguing-a-strawman-about-drones-civilian-collateral-damage-and-discrimination/>. Anderson asserts that the greater precision of drones is now so widely accepted that ‘civilian’ casualties are not the big issue. However, cf Scott Shane, ‘C.I.A. Is Disputed on Civilian Toll in Drone Strikes’, New York Times (online), 11 August 2011 <http://www.nytimes.com/2011/08/12/world/asia/12drones.html?ref=unmannedaerialvehicles>.

99 John Markoff, ‘War Machines: Recruiting Robots for Combat’, New York Times (online), 27 November 2010 <http://www.nytimes.com/2010/11/28/science/28robot.html>.

100 Remarks of Kenneth Anderson, Texas International Law Journal Symposium (10-11 February 2011, University of Texas School of Law).

101 Mary Louise Kelly, Officials: Bin Laden Running Out of Space to Hide (5 June 2009) NPR <http://www.npr.org/templates/story/story.php?storyId=104938490>. See also Shane, above n 98.

102 Ibid; Noah Shachtman, CIA Chief: Drones ‘Only Game in Town’ for Stopping Al Qaeda (19 May 2009) Wired


‘“And very frankly,” he said, “it’s the only game in town in terms of confronting and trying to disrupt the al-Qaida leadership.”’103 The US administration apparently measures the game’s success by the number of ‘militants’ killed with each drone strike. This feature of the drone wars is reminiscent of America’s experience in Vietnam. Year after year, US officials provided statistics of the number of enemy killed. The war, however, was never won.

The wide acceptance of using drones to kill in far off countries tracks the findings of Lieutenant Colonel Dave Grossman in his 1996 book, On Killing, that distance from a victim makes the decision to kill easier or more acceptable.104 For Grossman, ‘distance from the victim’ includes various concepts of distance, including physical, emotional, social, cultural, moral, and mechanical.105 These factors seemed to be at play in a tragic incident that occurred in Afghanistan in February 2010, in which 23 Afghan civilians were killed and another 12 injured. The casualties were the result of an attack from a helicopter, but the vital information leading to the decision to attack came from drone operators controlling a surveillance drone in Afghanistan from their base in Nevada. A New York Times article explained the incident:

A Predator drone pilot played down two warnings about the presence of children before military commanders ordered a helicopter attack that killed 23 Afghan civilians traveling down a road in February [2010], an Air Force investigation has found. … The Predator’s video cameras were trained on the vehicles — a pickup truck and two sport-utility vehicles — to try to determine if they were carrying insurgents seeking to outflank the American forces. … [A]n officer on the ground had told the Predator crew, which was based in Nevada, that his commander intended to attack the vehicles if their passengers were carrying weapons.
… [I]n his desire to support the ground forces, General Otto wrote, the [drone] pilot “had a strong desire to find weapons,” and this “colored —

<http://www.wired.com/dangerroom/2009/05/cia-chief-drones-only-game-in-town-for-stopping-al-qaeda/>.
103 Shachtman, above n 102.

104 Lt Col Dave Grossman, On Killing, The Psychological Cost of Learning to Kill in War and Society (Back Bay Books, 1996) 187. See also Noel Sharkey, ‘Automating Warfare: Lessons Learned from the Drones’ (2011) 21(1) Journal of Law, Information and Science, EAP 6, DOI: 10.5778/JLIS.2011.21.Sharkey.1.

105 Ibid 188-189. See also, CBS, 60 Minutes, episode 29, 10 May 2009 <http://www.metacafe.com/watch/cbz3fr4WgBt40PaRwY6oPAmp6CkZVBlNbE/60_minutes_05_10_09_season_41_episode_29/>.


both consciously and unconsciously — his reporting of weapons and children.”106

Grossman focuses on the person who actually pulls the trigger or uses the joystick to fire the missile. Yet, the distance factor could impact everyone involved in the kill decision, including a whole society that supports such killing. In the United States there are many indications that killing with drones is at a high level of acceptance. Indeed, the acceptance is so high that Americans joke about killing with drones in a way they would not, presumably, about killing with a bayonet or a cruise missile. In May 2010, at an annual dinner for journalists, politicians, and celebrities in which invited guests are expected to tell jokes, President Obama quipped about his two young daughters being fans of a band called The Jonas Brothers. Members of the group were in the audience and the President said, ‘“Boys, don’t get any ideas. I have two words for you – Predator drones…You will never see [them] coming.”’107

The Bush and Obama administrations present killing with drones as precise and imperative, and, therefore, so morally unproblematic we can joke about it. Yet, moral philosophers teach that the taking of human life may only be justified to protect human life.108 In other words, the exceptional right to resort to lethal force rests squarely on a moral justification of necessity. In armed conflict hostilities, the necessity to kill is presumed. Away from such hostilities the necessity to kill must be related to an immediate threat to life. International law on killing remains closely tied to these fundamental moral insights, as discussed above.109 Given the political and psychological lures to killing with drones, international law specialists should be alert to whether the current law on lethal operations is adequate.
Rather than loosening the rules as appears to be the trend today, the argument here is that the rules should be strictly applied to drone use to counter-balance their seductive attraction.

106 Drew, above n 50.

107 Entous, above n 88.

108 See Germain G Grisez, ‘Toward a Consistent Natural Law Ethics of Killing’ (1970) 15 American Journal of Jurisprudence 64, 76 cited in David Hollenbach, S J, Nuclear Ethics, A Christian Moral Argument (Paulist Press, 1983) 18-19. Hollenbach describes how the Just War Tradition evolved from Aquinas’s position presuming that war is sinful to one presuming war is just so long as it is waged by legitimate authorities. Hollenbach argues in favour of returning to the presumption that violent warfare is presumed to be morally wrong and that resort to war is justifiable only in exceptional situations, at 14-16. Hollenbach’s position is consistent with current international law on the use of force, as reviewed above.

109 See O’Connell, above n 73, chapter 1; See also Geoffrey Best, Law and War Since 1945 (Oxford University Press, 1994): ‘[I]t must never be forgotten that the law of war, wherever it began at all, began mainly as a matter of religion and ethics… It began in ethics and it has kept one foot in ethics ever since.’ At 289.


Returning to close compliance with the rules will likely require a rejection of the Realist motive of killing to project an image.110 Killing to send a message of strength or for retribution is neither moral nor lawful.

Conclusion
The post-9/11 decade has shown general acceptance among American officials when it comes to killing persons with drones far from armed conflict hostilities. Such killing risks no American lives and the price tag is considerably lower than when using manned systems.111 The embrace of the drone does not follow simply from cost calculations, however. Since 2002, Americans have heard that it is legal to kill persons far from hostilities. Yet, this conclusion respecting legality may have more to do with the projection of power or campaign politics than with the sources of international law or morality. The restrictive rules on killing derived from the actual sources of international law are related to the moral principle that killing is only justifiable in actual armed conflict hostilities or when necessary to save a human life immediately. And in the case of killing with drones far from battlefields it turns out, just as with the use of torture, that the effective means of dealing with the challenge of non-state actor terrorist or militant groups is also the lawful and moral approach.

110 We have some indications that torture was used after 9/11 for the same reason — to indicate America’s willingness to be ruthless toward a brutal foe. Trained interrogators spoke out consistently against the use of torture as a reliable technique for information gathering. The persistent use of torture in the face of such expert views began to seem like something other than information gathering. See, Mary Ellen O’Connell, ‘Affirming the Ban on Harsh Interrogation’ (2005) 66 Ohio State Law Journal 1231.

111 P W Singer, Robots at War: The New Battlefield (Winter 2009) The Wilson Quarterly <http://www.wilsoncenter.org/index.cfm?fuseaction=wq.essay&essay_id=496613>.


Automating Warfare: Lessons Learned from the Drones
COMMENT BY NOEL SHARKEY*

Abstract
War fighting is currently undergoing a revolution. The use of robotics platforms for carrying weapons is coming on track at an increasing rate. US plans from all of the armed forces indicate a massive build-up of military robots, and at least 50 other countries have either bought them or have military robotics programmes.1 Currently all armed robots in the theatre of war are remotely controlled by humans; so-called man-in-the-loop systems. Humans are responsible for both target selection and decisions about lethal force. But this is set to change. The role of the person in the loop will shrink and eventually vanish. But are we ready for this step? Do we understand the limits of the technology and how massive increases in the pace of battle will leave human responsibility in the dark? Before moving to autonomous operation we need to consider the lessons learned from the application of the current remotely piloted armed robots. Four areas are considered here: (i) moral disengagement; (ii) targeted killings in covert operations; (iii) expansion of the battle space; (iv) the illusion of accuracy.

Introduction
Since 2004, all of the Roadmaps and plans of the US forces have discussed the requirements for the development and deployment of autonomous battlefield robots.2 The UK Ministry of Defence Joint Doctrine Note3 follows suit.

* University of Sheffield, UK.

1 I have personally read valid robotics reports for each of the following countries and there may be several more: Australia, Austria, Brazil, Bulgaria, Canada, Chile, China, Colombia, Croatia, Czech Republic, Ecuador, Finland, France, Germany, Greece, Hungary, India, Indonesia, Iran, Israel, Italy, Japan, Jordan, Lebanon, Malaysia, Mexico, Netherlands, New Zealand, Norway, Pakistan, Peru, Philippines, Poland, Romania, Russia, Serbia, Singapore, South Africa, South Korea, Spain, Sweden, Switzerland, Thailand, Taiwan, Tunisia, Turkey, United Arab Emirates, United Kingdom, USA, Vietnam.

2 US Department of the Navy, The Navy Unmanned Undersea Vehicle (UUV) Master Plan (9 November 2004); US Department of Defense, Office of the Secretary of Defense, Unmanned Aircraft Systems Roadmap 2005-2030 (2005); US Office of the Undersecretary of Defense, Joint Robotics Program Master Plan FY 2005, LSD (AT&L) Defense Systems/Land Warfare and Munitions; US Department of Defense, Office of the Secretary of Defense, Unmanned Systems Roadmap 2007-2032 (2007); United States Air Force, Unmanned Aircraft Systems Flight Plan 2009-2047 (18 May 2009).

3 United Kingdom Ministry of Defence (MoD), Joint Doctrine Note 2/11, The UK Approach to Unmanned Aircraft Systems (30 March 2011) (‘Joint Doctrine Note’).


Fulfilment of these plans to take humans out of the loop is well underway. There will be a staged progression towards autonomous operation; first for flight (take-off, navigation, obstacle avoidance etc), then for target selection. The end goal is that robots will operate autonomously to locate their own targets and destroy them without human intervention.4

The term autonomy can be very confusing for those not working in robotics. It has the flavour of robots thinking for themselves. But this is just part of the cultural myth of robotics created by science fiction. Autonomy in robotics is more related to the term automatic than it is to individual freedom. An automatic robot carries out a pre-programmed sequence of operations or moves in a structured environment. A good example is a robot arm painting a car. An autonomous robot is similar to an automatic machine except that it operates in open or unstructured environments. The robot is still controlled by a program but now receives information from its sensors that enables it to adjust the speed and direction of its motors (and actuators) as specified by the program. For example, an autonomous robot may be programmed to avoid obstacles in its path. When the sensors detect an object, the program simply adjusts the motors so that the robot moves to avoid it; if the left hand sensors detect the object the robot would move right, and if the right hand sensors detect the object, the robot would move left.

Even those who should know better can confuse the issue. For example, the UK Ministry of Defence (MoD) Joint Doctrine Note begins its definition of autonomy as follows: ‘An autonomous system is capable of understanding higher level intent and direction.’5 The problem with this statement is that, apart from metaphorical use, no system is capable of ‘understanding’, never mind ‘understanding higher level intent’. On the MoD’s definition, then, autonomous robots would not be possible in the foreseeable future.
This is not just pickiness about language on my part. Correctly defining what is meant by ‘autonomous’ has very important consequences for the way that the military, policy makers and manufacturers think about the development of military robots. This confusion also shows up later in the MoD document in a discussion about artificial intelligence (AI): ‘Estimates of when artificial intelligence will be achieved (as opposed to complex and clever automated systems) vary, but the consensus seems to lie between more than 5 years and less than 15 years, with some outliers far later than this.’ But it is ludicrous to say that ‘artificial intelligence will be achieved’. AI is a field of inquiry that began in the 1950s and is a term used to describe work in that field, so an AI program is a program that uses artificial intelligence methods. In that sense, it was achieved more than 50 years ago. Perhaps what the MoD is trying to suggest

4 Noel Sharkey, ‘Cassandra or the False Prophet of Doom: AI Robots and War’ (2008) 23(4) IEEE Intelligent Systems 14.
5 MoD, above n 3, 2-3.


JLIS Special Edition: The Law of Unmanned Vehicles

Vol 21(2) 2011/2012

by this statement is that AI programs will become as intelligent as humans, or more so, within this timeframe. If that is the case, then where does the consensus of 5 to 15 years come from, unless it is the consensus of a few outlier scientists? In Chapter 6 of the Joint Doctrine Note, it states that, ‘True artificial intelligence, whereby a machine has a similar or greater capacity to think like a human will undoubtedly be a complete game changer, not only in the military environment, but in all aspects of modern life’.6 The Note continues, ‘The development of artificial intelligence is uncertain and unlikely before the next 2 epochs.’ However, there is no way of knowing how long an epoch is, and one cannot help but wonder how this relates to the 5 to 15 years mentioned earlier.

It is worth repeating here that autonomy is not about thinking robots. This is particularly important when it comes to discussions about robots making life and death decisions. The often-misunderstood robot decision process should not be confused with human decision making except by weak analogy. A computer decision process can be as simple as: IF object on left, THEN turn right; IF object on right, THEN turn left; ELSE continue. Alternatively, the activity on a sensor may activate a different sub-program to help with the decision. For example, to get smoother passage through a field laden with objects, a sub-program could be called in to calculate whether a turn to the left would result in having to negotiate more obstacles than a turn to the right. Programs can become complex through the management of several sub-programs by processes set up to make decisions about which sub-program should be initiated in particular circumstances. But the bottom line for decision making by machine, whether it is using mathematical decision spaces or AI reasoning programs, is the humble IF/THEN statement.

Another misunderstanding about autonomy is that it is all or nothing.
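The IF/THEN core and the dispatching of sub-programs described above can be caricatured as follows (a minimal sketch under invented assumptions: positions are one-dimensional numbers and the ‘cost’ of a turn is simply a count of nearby obstacles):

```python
# Minimal sketch of machine 'decision making' as sub-program selection.
# A helper routine counts the obstacles each candidate path would have to
# negotiate, and the top-level rule is still just an IF/THEN statement.
# The one-dimensional world model here is invented for illustration.

def obstacles_on(path: list[float], obstacles: list[float]) -> int:
    """Count obstacles lying within 1.0 unit of any point on the path."""
    return sum(1 for ob in obstacles
               if any(abs(p - ob) < 1.0 for p in path))

def choose_turn(left_path: list[float],
                right_path: list[float],
                obstacles: list[float]) -> str:
    # Sub-program dispatch: IF the left turn meets no more obstacles
    # than the right turn, THEN turn left, ELSE turn right.
    if obstacles_on(left_path, obstacles) <= obstacles_on(right_path, obstacles):
        return "turn_left"
    return "turn_right"
```

However elaborate the cost calculation becomes, the final choice still reduces to the same conditional branch.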
A system does not have to be exclusively autonomous or exclusively remotely operated. There is a continuum from fully controlled to fully autonomous, and different groups slice the pie differently. The US Army, Navy and Air Force all discuss the classification of military robots on a continuum from totally human operated to fully autonomous.7 Each has separate development programmes and each has its own operational definitions of the different levels of robot autonomy. The Army has ten levels while the Air Force has four. The Navy characterises autonomy in terms of mission complexity but points to three different classes of autonomous robot vehicle: (i) scripted; (ii) supervised; and (iii) intelligent. The US National Institute of Standards and

6 Ibid 6-12.
7 US Office of the Undersecretary of Defense, above n 2.


Technology (NIST) has been attempting to develop a generic framework for describing levels of autonomy for some time.8 Despite the gloss and discussion about what exactly constitutes each level of autonomy, there is an inexorable move toward the development of autonomous systems that carry weapons. It is perhaps said too often that, for the time being, there will be a person somewhere in the loop. But the role of that person is seen as shrinking to vanishingly small: ‘humans will no longer be “in the loop” but rather “on the loop” – monitoring the execution of certain decisions. Simultaneously, advances in AI will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input.’9 So essentially a person will be on the loop to send in the swarm, and possibly to call it off if there is radio or satellite contact. What type of autonomy this is, what it is called and what level is ascribed to it seems largely irrelevant to the overarching goal of automating warfare.

Autonomous systems that can select targets and possibly kill them are likely to pose a number of ethical and legal problems, as I have pointed out elsewhere.10 In brief, the main ethical problem is that no autonomous robots or artificial intelligence systems are able to discriminate between combatants and non-combatants. Allowing them to make decisions about whom to kill would fall foul of the fundamental ethical precepts of the laws of war under jus in bello and the various protocols set up to protect civilians, wounded soldiers, the sick, the mentally ill, and captives. There are no visual or sensing systems up to the challenge of competently making such decisions. A computer can compute any given procedure that can be written down in a programming language. We could, for example, give the computer on a robot an instruction such as, ‘if civilian, do not shoot’.
This would be fine if and only if there were some way to give the computer a clear definition of what a civilian is. The laws of war certainly do not offer a definition which could be used to provide a machine with the necessary information: the 1949 Geneva Conventions require the use of common sense, while the 1977 Protocol I essentially defines a civilian in the negative sense, as someone who is not a combatant. However, even if there were a clear computational definition of a civilian, robots do not have the sensing capabilities to differentiate between civilians and combatants. Current sensing apparatus and processing can just about tell

8 H Huang, J Albus, E Messina, R Wade and W English, ‘Specifying Autonomy Levels for Unmanned Systems: Interim Report’ (SPIE Defense and Security Symposium, Orlando, Florida, 2004).
9 United States Air Force, above n 2, 41.
10 Noel Sharkey, ‘Automated Killers and the Computer Profession’ (2007) 40(11) IEEE Computer 124; Noel Sharkey, ‘Grounds for Discrimination: Autonomous Robot Weapons’ (2008) 11(2) RUSI Defence Systems 86; Noel Sharkey, ‘The Ethical Frontiers of Robotics’ (2008) 322(5909) Science 1800; Noel Sharkey, ‘Weapons of Indiscriminate Lethality’ (2009) 1/09 FIfF Kommunikation 26.


us that something resembles a human, but little else. Moreover, it is not always appropriate to kill all enemy combatants. Both discrimination and appropriateness require reasoning. There are no AI systems that could be used for such real world inferences.

There is also the principle of proportionality, and again there is no sensing or computational capability that would allow a robot to make such a determination, nor is there any known metric to objectively measure needless, superfluous or disproportionate suffering.11 This requires human judgment. Yes, humans do make errors and can behave unethically, but they can be held accountable. Who is to be held accountable for the lethal mishaps of a robot? Certainly not the machine itself. There is a long causal chain associated with robots: the manufacturer, the programmer, the designer, the department of defence, the generals or admirals in charge of the operation and the operator.

Despite these ethical and legal problems there is an inexorable drive towards the development of autonomous systems. As early as 2005, the Committee on Autonomous Vehicles in Support of Naval Operations wrote:

The Navy and Marine Corps should aggressively exploit the considerable warfighting benefits offered by autonomous vehicles (AVs) by acquiring operational experience with current systems and using lessons learned from that experience to develop future AV technologies, operational requirements, and systems concepts.12

1 Lessons Learned from the Drone Wars

Unfortunately, the lessons alluded to in the paragraph above are really about the weaknesses of remote piloted robots and how military advantage can be increased by getting closer to autonomy. These include:

(i) remote operated systems are more expensive to manufacture and require many support personnel to run them;
(ii) it is possible to jam either the satellite or radio link or take control of the system;
(iii) one of the military goals is to use robots as force multipliers so that one human can be a nexus for initiating a large scale robot attack from the ground and the air; and
(iv) the delay time in remote piloting a craft via satellite (approximately 1.5 seconds) means that it could not be used for interactive combat with another aircraft.

At a press briefing in December 2007, Dyke Weatherington, deputy director of the US DoD’s Unmanned Aerial Systems Task Force, said:

Certainly the roadmap projects an increasing level of autonomy … to fulfill many of the most stressing requirements. Let me just pick one for example. Air-to-air combat — there’s really no way that a system that’s

11 Noel Sharkey, ‘Death Strikes from the Sky: The Calculus of Proportionality’ (2009) 28 IEEE Science and Society 16.
12 Committee on Autonomous Vehicles in Support of Naval Operations, National Research Council, Autonomous Vehicles in Support of Naval Operations (National Academies Press, 2005).


remotely controlled can effectively operate in an offensive or defensive air combat environment. That has to be — the requirement of that is a fully autonomous system.13

We can only hope that the ‘lessons learned’ will include ethical and international humanitarian law (IHL) issues. Otherwise the use of autonomous robots will amplify and extend the ethical problems already being encountered. In the following sections we examine four areas where ethical and legal lessons should be learned before there are moves to autonomous operation.

1.1 Moral disengagement
Remote pilots of armed attack planes like the Reaper MQ-9 and the Predator MQ-1 have no need to worry about their personal safety. Sitting in cubicles thousands of miles away from the action, they can give a mission their full attention without worrying about being shot at. They are on highly secure home ground where no pilot has ever been safer. It can be argued that this alleviates two of the fundamental obstacles that warfighters must face:14 fear of being killed15 and resistance to killing.16 This does not create legal problems for the use of remotely piloted aircraft (RPA) any more than the use of any other distance weapons such as artillery or missiles. But what about the moral problems?

Royakkers and van Est argue that sitting in cubicles controlling planes from several thousand miles away from the battlefield encourages a ‘Playstation’ mentality. The so-called ‘cubicle warriors’ are both emotionally and morally disengaged from the consequences of their actions. Royakkers and van Est suggest that new recruits may have been playing videogames for many years and may not see a huge contrast between playing video games and being a cubicle warrior.17 They provide examples from Peter Singer’s book Wired for War18 in which young cubicle warriors are reported as saying how easy it is to kill.

The counter argument is that because remote pilots often get to see the aftermath of their actions on high-resolution monitors, they are less morally disengaged than the crews of high altitude bombers or fighter pilots. It is also

13 US Department of Defense, ‘DoD Press Briefing with Mr Weatherington from the Pentagon Briefing Room’ (News Transcript, 18 December 2007) <http://www.defense.gov/transcripts/transcript.aspx?transcriptid=4108>.
14 Noel Sharkey, ‘Saying No! to Lethal Autonomous Targeting’ (2010) 9(4) Journal of Military Ethics 369.
15 A Daddis, ‘Understanding Fear’s Effect on Unit Effectiveness’ (July-August 2004) Military Review 22.
16 D Grossman, On Killing: The Psychological Cost of Learning to Kill in War and Society (Little, Brown and Co, 1995).
17 L Royakkers and R van Est, ‘The Cubicle Warrior: The Marionette of Digitalized Warfare’ (2010) 12 Ethics and Information Technology 289.
18 P W Singer, Wired for War: The Robotics Revolution and Conflict in the 21st Century (Penguin Press, 2009).


argued that remote pilots undergo a new kind of stress caused by going home to their families in the evening after a day on the battlefields of Afghanistan. There is currently no scientific research on this issue and no way to resolve the arguments. In an interview for the Air Force Times,19 Col Chris Chambliss, a commander of the 432nd Wing at Creech, said that on only four or five occasions had sensor operators gone to see the chaplain or their supervisors, and that this was a very small proportion of the total number of remote operators. Other pilots interviewed said that they had not been particularly troubled by their missions, although they could sometimes make for a strange existence.

But the legal issue here is not whether there is a new kind of stress for remote combatants or whether they are morally buffered by distance. The legal issues revolve around whether or not the new stresses or moral disengagement impact on targeting decisions. Are remote pilots more careless about taking the lives of those who should be immune from attack than other military forces? Currently, targeting decisions for conventional forces are not the sole responsibility of the pilots themselves; there is a chain of command where decisions about lethal targeting involve others, such as a commander and a legal representative from the Judge Advocate General’s office. Matt Martin, a veteran drone pilot who served in both Iraq and Afghanistan, tells of many of the frustrations of dealing with commanders and lawyers taking time over decisions while he watched legitimate targets escape.20

The big worry is that even if this chain of command for the appropriate application of lethal force works now, it is difficult to see how the same control could be carried forward as the number of armed robots dramatically increases. There are barely enough pilots and sensor operators now, never mind commanders and lawyers.
The lesson is that the number of remote piloted operations should not exceed a chain of command capable of supporting a tightly controlled ethical and legal decision structure. For fully autonomous armed systems, by definition, all of the checking for the legality of targets and intelligence would have to be carried out prior to launching the systems. There is a strong lesson to be learned from current RPA use. Since so many mishaps already occur with humans firmly in the loop, it is unlikely that autonomous lethal targeting, with its lack of discriminative apparatus and human decision-making (as discussed in the introduction), will meet legal requirements.

19 Scott Lindlaw, ‘UAV Operators Suffer War Stress’, Air Force Times (online), 8 August 2008 <http://www.airforcetimes.com/news/2008/08/ap_remote_stress_080708>.
20 M Martin and C W Sasser, Predator: The Remote-Control Air War over Iraq and Afghanistan (Zenith Press, 2010).


1.2 Targeted killings in covert operations
The second lesson to be learned concerns the covert use of RPA by the intelligence services. The CIA effectively now has an armed remote piloted ‘Air Force’ controlled, possibly by civilian contractors, from Langley in Virginia, USA. The CIA were the first in the US to use armed drones when, in 2002, they killed five men travelling in a Sport Utility Vehicle in Yemen.21 Department of Defense lawyers considered this to be a legitimate defensive pre-emptive strike against al-Qaeda. Since then, the use of drones for targeted killings or ‘decapitation strikes’ in states that are not at war with the US has become commonplace. The Asia Times has called the CIA drone strikes ‘the most public “secret” war of modern times’.22

Estimates of the number of drone strikes in Pakistan have been published on the websites of both the New America Foundation23 and the Brookings Institution24 and are shown in Table 1. The number of civilian deaths has been very difficult to estimate and has ranged from as few as 20 to more than a thousand.

Table 1: High and low estimates of drone strike deaths in Pakistan 2004-2011
Years     Drone strikes    Kills (high est)   Kills (low est)   Leaders killed
          NMA    BI        NMA     BI         NMA     BI        NMA
2004-07   9      9         109     112        86      89        3
2008      34     35        296     313        263     273       11
2009      53     53        709     724        413     368       7
2010      118    117       993     993        607     607       12
2011*     31     21        199     177        138     122       1

* Up until 27 May 2011. NMA = New America Foundation; BI = Brookings Institution.
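The proportion discussed in the text can be checked directly against the Table 1 figures (a quick arithmetic sketch using the New America Foundation columns only, since only the NMA reports leader estimates):

```python
# Arithmetic check on Table 1 (New America Foundation figures, 2004 to
# 27 May 2011): what share of those killed were identified as leaders?
low_kills  = [86, 263, 413, 607, 138]   # NMA low estimates per period
high_kills = [109, 296, 709, 993, 199]  # NMA high estimates per period
leaders    = [3, 11, 7, 12, 1]          # NMA leader kills per period

total_leaders = sum(leaders)                   # 34 leaders in total
share_low  = total_leaders / sum(low_kills)    # share of the low total
share_high = total_leaders / sum(high_kills)   # share of the high total
print(round(share_low, 3), round(share_high, 3))
```

On these figures leaders account for roughly 1.5 to 2.3 per cent of those killed, that is, on the order of one in every 40 to 70.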

Are these targeted killings legal under international humanitarian law? Their legality is at best questionable. ‘Decapitation’ is used to mean cutting off the leaders of an organisation or nation fighting a war from the body of their warfighters. The stated goal of the aerial decapitation strikes was to target al-Qaeda and Taliban leaders without risk to US military personnel. Eventually, so the story goes, this would leave only replacement leaders from the shallowest end of the talent pool and so render the insurgents ineffective and

21 Israel may have been using armed drones for longer but they denied this for several years despite eyewitness testimony. It cannot be verified here.
22 Nick Turse, ‘Drone Surge: Today, Tomorrow and 2047’, Asia Times (online), 26 January 2010 <http://www.atimes.com/atimes/South_Asia/LA26Df01.html>.
23 The Year of the Drones (31 May 2011) New America Foundation <http://counterterrorism.newamerica.net/drones>.
24 Ian S Livingston and Michael O’Hanlon, Pakistan Index (31 May 2011) Brookings <http://www.brookings.edu/~/media/Files/Programs/FP/pakistan%20index/index.pdf>.


easy to defeat. However, if this was the genuine goal, it is not working well. Table 1 shows clearly that the proportion of leaders killed (estimates of which are provided by the New America Foundation only) to others is extremely low, even on the low estimates of both the New America Foundation and the Brookings Institution, with the figures showing that fewer than one in 40 of those killed were leaders.

These individually targeted killings are taking place despite the banning in the US of all politically motivated killing of individuals since the famous Church Committee report on CIA political assassinations in 1975. In 1976, President Ford issued a presidential executive order that ‘no person employed by or acting on behalf of the United States Government shall engage in, or conspire to engage in, assassination.’ This became Executive Order (EO) 12333 under the Reagan administration and all subsequent presidents have kept it on the books.

The pro-decapitation argument is that EO 12333 does not limit lawful self-defence options against legitimate threats to the national security of US citizens.25 During wartime, a combatant is considered to be a legitimate target at all times. If a selected individual is sought out and killed, it is not termed an assassination. According to a Memorandum on EO 12333, which is said to be consistent with Article 51 of the Charter of the United Nations (the ‘Charter’), a decision by the President to employ clandestine, low-visibility, or overt military force would not constitute assassination if US military forces were employed against the combatant forces of another nation, a guerrilla force, or a terrorist or other organization whose actions pose a threat to the security of the United States.26

But an insurgent war with no state actors involved complicates the picture. The legal question is now: do the US intelligence services have a right to assassinate alleged insurgent combatants without due process?
Seymour Hersh, whose writings were one of the main motivations for the Church Committee, complained that ‘the targeting and killing of individual al-Qaeda members without juridical process has come to be seen within the Bush Administration as justifiable military action in a new kind of war, involving international terrorist organizations and unstable states’.27 The insurgents have been redefined as combatants, but without receiving the rights of prisoners of war (because they do not wear uniforms) and without being given the chance to surrender or to face trial. This move, in combination

25 H W Parks, Memorandum on Executive Order 12333 (Reproduction, Department of the Army, Office of the Judge Advocate General of the Army, 1989).
26 Ibid.
27 S M Hersh, ‘Manhunt: The Bush Administration’s New Strategy in the War Against Terrorism’, New Yorker, 23 December 2002, 66.


with an appeal to Article 51,28 has been used to provide legal cover for the right to assassinate insurgent combatants.

Philip Alston, UN Special Rapporteur on extrajudicial killings, challenged the legality of the targeted killings at a UN General Assembly meeting in October 2009. A request was issued for the US to provide legal justification for the CIA’s targeting and killing of suspects, and further asked who was accountable. The US refused to comment on what it said were covert operations and a matter of national security. US Department of State legal advisor Harold Koh rebutted Alston indirectly, stating that ‘US targeting practices including lethal operations conducted by UAVs comply with all applicable law including the laws of war.’29 However, there are no independent means of determining how the targeting decisions are being made. It remains unclear what type and level of evidence is being used to reach conclusions that effectively amount to death sentences by Hellfire for non-state actors without right to appeal or right to surrender. It is also unclear what other methods, if any, were exhausted or attempted to bring the suspects to justice. The whole process is taking place behind a convenient cloak of national secrecy.

US law professor Kenneth Anderson also questioned the CIA’s use of drones in a prepared statement to a US House of Representatives subcommittee hearing:

[Koh] nowhere mentions the CIA by name in his defense of drone operations. It is, of course, what is plainly intended when speaking of self-defense separate from armed conflict. One understands the hesitation of senior lawyers to name the CIA’s use of drones as lawful when the official position of the US government, despite everything, is still not to confirm or deny the CIA’s operations.30

However, the former Director of the CIA, Leon Panetta, has been more vocal about the operations.
In 2009, he told the Pacific Council on International Policy that ‘it’s the only game in town in terms of confronting and trying to

28 Article 51 of the United Nations (UN) Charter reads: ‘Nothing in the present Charter shall impair the inherent right of individual or collective self-defence if an armed attack occurs against a Member of the United Nations, until the Security Council has taken measures necessary to maintain international peace and security. Measures taken by Members in the exercise of this right of self-defence shall be immediately reported to the Security Council and shall not in any way affect the authority and responsibility of the Security Council under the present Charter to take at any time such action as it deems necessary in order to maintain or restore international peace and security.’
29 Harold Koh, ‘The Obama Administration and International Law’ (Speech delivered at the Annual Meeting of the American Society of International Law, Washington DC, 25 March 2010).
30 Kenneth Anderson, Submission to US House of Representatives Committee on Oversight and Government Reform Subcommittee on National Security and Foreign Affairs, Subcommittee Hearing, Drones II, 28 April 2010, [20].


disrupt the al-Qaeda leadership.’31 Revealing the CIA’s intentions on the expansion of targeted drone kills, Panetta went on to say of al-Qaeda that, ‘If they’re going to go to Somalia, if they’re going to go to Yemen, if they’re going to go to other countries in the Middle East, we’ve got to be there and be ready to confront them there as well. We can’t let them escape. We can’t let them find hiding places.’32 This proposed expansion of targeted killing is precisely what concerned the UN Special Rapporteur on extrajudicial killings. A subsequent report by Alston in 2010 to the UN General Assembly33 discusses drone strikes as violating international and human rights law because both require transparency about the procedures and safeguards in place to ensure that killings are lawful and justified: ‘a lack of disclosure gives States a virtual and impermissible license to kill.’ Some of Alston’s arguments also revolve around the notion of ‘the right to self-defence’ and whether the drone strikes are legal under Article 51.

Given the CIA’s enthusiasm for armed drones, it is likely that they will be among the first to use autonomous drones to kill. Deep questions need to be answered about any covert and unaccountable use of such indiscriminate weapons. As it is impossible for those not directly involved to determine whether a fully autonomous drone or a remotely controlled aircraft is being used to carry out a particular mission, the onus rests with the CIA to honestly identify the nature of the weapon being used. Thinking further ahead to the use of autonomous drones for targeted killings, imagine for a moment that autonomous drones could be even more discriminate than any living human. Given the current hit record of the CIA and the dubious legality and secrecy surrounding the accountability of targeted killings, would we really want to automate the assassination of those alleged to be working against US interests without recourse to any other legal process?

1.3 Expansion of the battle space
Attacking with remote piloted vehicles (RPV) is not much different under the laws of war from attacking with a manned helicopter gunship or even with artillery. The worry is that the nature of an unmanned vehicle, with no risk to military personnel, an ability to hover over an area for many hours, and its perceived accuracy, is leading to considerable expansion of potential targets. RPVs are seen as the best weapons system for fighting combatants in

31 Leon Panetta, Director’s Remarks at the Pacific Council on International Policy (18 May 2009) Central Intelligence Agency <https://www.cia.gov/news-information/speeches-testimony/directors-remarks-at-pacific-council.html>.
32 Ibid.
33 Philip Alston, Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Addendum, Study on Targeted Killings, UN Doc A/HRC/14/24/Add.6 (28 May 2010).


a city environment where it is either too risky, inappropriate or unacceptable to employ ground forces. The Libyan uprising in 2011 provides an example of the latter situation, where Predators were deployed to protect civilians. It is too early to tell whether this was simply PR or was part of a much larger agenda.

Another example of expansion of the battle space is the use of RPA to conduct covert ‘targeted killings’ by the CIA in countries that are not at war with the US, as mentioned in the previous section — eg Yemen, Somalia and Pakistan. It would not be acceptable to bomb these countries from high altitude or to attack them with helicopter gunships. For example, Pakistan’s reaction to the killing of bin Laden by Navy Seals contrasts markedly with its less strident reaction to targeted killings carried out by drones on its territory. This reveals that it is somehow more palatable to use unmanned systems (that are touted as having a high degree of accuracy) in built-up areas. Panetta, amongst others, has argued that armed UAVs are more accurate and will kill fewer civilians than a B-52 bomber when attacking the tribal regions in Pakistan. But as a former CIA operative told me, there is no way that Pakistan or other state actors not at war with the US could ‘turn a blind eye’ to the bomber strikes as they do now for drones. It can be argued that it is their perceived precision and accuracy that allows them to penetrate areas and kill people in ways that would not previously have been available without major political and legal obstacles.

The battlefield is also being expanded by the persistence of drones. The Predator and Reaper can fly for up to approximately 26 hours, while the unarmed Global Hawk holds the flight endurance record of 33.1 hours. More recently, QinetiQ developed the Zephyr, a much smaller and less powerful solar drone, which has stayed aloft for 336 hours, 22 minutes and 8 seconds.
QinetiQ has now teamed up with Boeing as part of the Defense Advanced Research Projects Agency’s (DARPA) Vulture project, which is aiming to achieve uninterrupted flight for a period of five years using a heavier-than-air platform. What this means is that autonomous drones could be left over any area that might possibly have military or defensive interest for very extended periods of time. This makes it possible to maintain armed vigilance with little cost and possibly little evidence of risk to the country employing the drones. Autonomous robots will only lead to greater expansion of the battle space.

1.4 The illusion of accuracy
Both the US Predator and Reaper RPA are equipped with high-resolution cameras that provide visualisation for remote pilots, their commanders, the legal team and other commanders on the ground near the action. However, if the estimated numbers of civilian deaths resulting from drone attacks are to be believed, accuracy is an illusion. It is easy to mistake targets from the air. For example, on 23 June 2009, as many as 60 people attending the funeral of a


Taliban fighter were killed in South Waziristan when CIA drones struck.34 In February 2010, a US military drone was involved in an attack in Oruzgan (Afghanistan) in which 23 innocents, including women and children, were killed. The civilians, travelling by convoy, had been misidentified as insurgents.35

Another reason why the accuracy is not as good as it says on the tin is that many of the strikes are conducted on buildings or at night, where the inhabitants are not visible except as temperature signatures picked up by infrared sensors.36 In these instances, unreliable ground intelligence is often responsible for mishaps. In a recent example (April 2011) where infrared imaging was used, two images were seen moving towards coalition troops. A drone strike was initiated that took the lives of what turned out to be a US Marine staff sergeant and a Navy seaman on their way to reinforce the troops.

One of the oft-cited targeting methods of the CIA is to locate people through their cell phones: switch on your phone and you receive a Hellfire missile delivered from a drone. But a recent copyright lawsuit between two companies casts doubt on the accuracy of this targeting method.37 A small company called Intelligent Integration Systems alleges that one of its client companies, Netezza, reverse engineered its software, Geospatial, on a tight deadline for the CIA. The court heard that the illegal version of the software could produce locations that were out by as much as 40 feet and that the CIA had knowingly accepted the software.

But even if targeting were 100% accurate, how could we be sure that alleged insurgents are ‘guilty as charged’? Information about a target’s identity, role and position is heavily dependent on the reliability of the intelligence on which it is based. There are lessons that should have been learned from the Vietnam War investigations of Operation Phoenix, in which thousands were assassinated.
It turned out that many of those on the assassination list had been put there by South Vietnamese officials for personal reasons such as erasing gambling debts or resolving family quarrels. This was one of the main reasons why the findings of the Church report resulted in Presidential Executive Order 12333.

34 Pir Zubair Shah and Salman Masood, New York Times (online), 23 June 2009 <http://www.nytimes.com/2009/06/24/world/asia/24pstan.html>.

35 Karin Brulliard, The Washington Post (online), 30 May 2010 <http://www.washingtonpost.com/wpdyn/content/article/2010/05/29/AR2010052901390.html>.

36 Noel Sharkey, ‘A matter of precision’ (December 2009) Defence Management Journal 126.

37 Jeff Stein, ‘CIA drones could be grounded by software suit’, Washington Post (online), SpyTalk, 11 October 2010 <http://voices.washingtonpost.com/spytalk/2010/10/cia_drones_could_be_grounded_b.html>.

Automating Warfare: Lessons Learned from the Drones

Things do not seem to have changed greatly since then. Philip Alston reports that during a mission to Afghanistan he found out how hard it is for forces on the ground to obtain accurate information. ‘Testimony from witnesses and victims’ family members showed that international forces were often too uninformed of local practices, or too credulous in interpreting information, to be able to arrive at a reliable understanding of a situation.’38 He suggests that ‘States must, therefore, ensure that they have in place the procedural safeguards necessary to ensure that intelligence on which targeting decisions are made is accurate and verifiable.’39 It could be argued that if the precision and visualisation afforded by RPA is so much greater than that afforded to conventional fighter pilots or high altitude bombers, then remote pilots or their commanders should be more accountable for civilian casualties. In fact, Human Rights Watch made that case in its 2009 report, Precisely Wrong, about six Israeli drone strikes in Gaza that resulted in 26 civilian deaths, including eight children.40 Whichever way you look at it, there is absolutely no reason to believe that there will be greater accuracy if we make targeting autonomous, and every reason to believe that it will be considerably worse. Given the number of civilian deaths that occur on a regular basis from drone strikes with humans watching high resolution monitors, why would we believe that machines could do it as well or better without humans? To do so is to exhibit an unrealistic blind faith in automation.

Conclusion
We started out by discussing some of the limitations of autonomous technology in terms of sensing, discrimination, reasoning and calculating proportionality. These suggest that it is premature to initiate the deployment of autonomous target selection and to automate the application of lethal force. The suggested military advantages of autonomous over remotely controlled robots make them appear considerably superior. However, the four lessons (and there are many more) from the current use of drones in terms of moral disengagement, targeted killings, the expansion of the battle space and the illusion of accuracy suggest that this could be at the cost of sacrificing or stretching international humanitarian law. Moreover, the military advantages may be very short term. The technology is proliferating rapidly with more than 50 countries already having access to military robotics and it may not be long before we see fast paced warfare with autonomous craft loaded with multiple weapons systems.

In such circumstances of military necessity, will countries disadvantage the pace of battle by keeping a human in the loop to make decisions about whom to kill? There has been no international discussion about these issues and no discussion about arms control. Yet no one knows how all the complex algorithms, working at faster than human speed, will interact, or what devastation they may cause.

38 Alston, above n 33.

39 Ibid.

40 Human Rights Watch, Precisely Wrong: Gaza Civilians Killed by Israeli Drone-Launched Missiles (June 2009) <http://www.hrw.org/en/reports/2009/06/30/precisely-wrong-0>.

Taking Humans Out of the Loop: Implications for International Humanitarian Law

COMMENT BY MARKUS WAGNER*

1 Introduction

In their 2008 article entitled The Laws of Man over Vehicles Unmanned: The Legal Response to Robotic Revolution on Sea, Land and Air, Brendan Gogarty and Meredith Hagger pointed to a wide range of challenges posed by the introduction of unmanned vehicles (UVs).1 Their multi-faceted analysis of these challenges provides a blueprint for the future debate in this area. Technological advances have allowed for armed conflicts to take place over ever greater distances. While early combat took place face to face, inventions made it possible to develop weapon systems that allowed for increased separation from the actual location of combat. Early inventions only allowed for relatively small distances — think bow and arrow — while the introduction of, eg, black powder increased that distance considerably. Inventions such as airplanes as well as rockets and missiles have driven this development even further. Technological advances have continuously increased that distance, yet the very large majority of today’s weapon systems continue to be characterised by one common element: human input is still — by and large — a common denominator in some form or another. In the case of rockets the firing decision is still in the hands of humans, whereas pilots not only make decisions over where to fly and what route to take, but also whether, and if so, what weapons to deploy. Keeping humans in the loop has not yet changed even with the onset of UVs. The current generation of UVs is remotely operated, sometimes from a close distance, sometimes over long distances. And while the use of fully autonomous weapons is still a decade or more away,2 there has been considerable discussion as to when this goal is to be reached.

Until a few years ago, it was commonplace for defense officials to consider retaining humans in the loop an essential component of warfare even in the future.3 However, a US Department of Defense (DoD) report in 2009 predicted that the technological challenges regarding fully autonomous systems will be overcome by the middle of the century.4 Technological development has been particularly rapid regarding unmanned aerial vehicles, followed by a vigorous and concomitant public debate.5 Focusing largely on the legality of targeted killing,6 this debate has also brought to light the increasing extent to which UAVs have been used in prosecuting armed conflict in Afghanistan and Pakistan, as well as Iraq. And while the numbers are — as such — inconclusive, the trend is rather unambiguous. Figures released by the Air Force indicate that Predators and Reapers deployed 219 missiles and bombs in Afghanistan in 2009, compared to 183 in 2008 and 74 in 2007.7 Congressional hearings confirm the large number of missions carried out by remotely controlled aircraft, citing that at any given moment ‘40 Predator-series aircraft are airborne worldwide, while the hours that various UAVs by the Air Force are in operation has more than tripled between 2006 and 2009, then standing at 295,000 hours per year’.8 Fiscally, there has been a similarly marked increase. In 2010, the US Department of Defense allocated approximately US$5.4 billion to the development, procurement and operation of UAVs.9 This number has risen markedly from 1990, when the figure stood at US$165 million, to 2001, when this investment totaled US$667 million, after which it rose considerably.10 This comment first addresses the difference between the current weapon systems and the next generation of truly autonomous weaponry (Part 2), followed by an overview of the applicable rules of armed conflict (Part 3), before offering some concluding remarks (Part 4).

* Associate Professor of Law, University of Miami School of Law. I would like to thank the editors for inviting me as a commentator in this special issue of the Journal of Law, Information and Science.

1 Brendan Gogarty and Meredith Hagger, ‘The Laws of Man over Vehicles Unmanned: The Legal Response to Robotic Revolution on Sea, Land and Air’ (2008) 19 Journal of Law, Information and Science 73.

2 Elizabeth Quintana, The Ethics and Legal Implications of Military Unmanned Vehicles (2008) Royal United Services Institute for Defence and Security Studies, 5 <http://www.rusi.org/downloads/assets/RUSI_ethics.pdf>. A recent White Paper designated only guard, support and medical duties to UMS as being feasible at this point and precluded any tasks that are combat-related. See Army Capabilities Integration Center – Tank-Automotive Research and Development Engineering Center, Robotics Strategy White Paper (2009), 28-33, <http://futurefastforward.com/images/stories/military/RoboticsStrategyWhitePaper_19Mar09.pdf>.

3 See Department of Defense, Unmanned Systems Safety Guide for DoD Acquisition (1st ed (Version .96), 27 June 2007) <https://acc.dau.mil/adl/enUS/269574/file/41532/Unmanned_Guide_DOD_Acq_2007.pdf>; P W Singer, Wired for War: The Robotics Revolution and Conflict in the Twenty-first Century (Penguin Press, 2009) 123-124.

4 United States Air Force, United States Air Force Unmanned Aircraft Systems Flight Plan 2009-2047 (18 May 2009) <http://www.aviationweek.com/media/pdf/UnmannedHorizons/17312080United-States-Air-Force-Unmanned-Aircraft-Systems-Flight-Plan-20092047Unclassified.pdf>.

5 See John Markoff, ‘War Machines: Recruiting Robots for Combat’, New York Times (New York), 28 November 2010, A1.

6 Nils Melzer, Targeted Killing in International Law (Oxford University Press, 2008); David Kretzmer, ‘Targeted Killing of Suspected Terrorists: Extra-Judicial Executions or Legitimate Means of Defence?’ (2005) 16 European Journal of International Law 171; Orna Ben-Naftali and Keren R Michaeli, ‘”We Must Not Make a Scarecrow of the Law”: A Legal Analysis of the Israeli Policy of Targeted Killings’ (2003) 36 Cornell International Law Journal 233. For a philosophical inquiry arguing that targeted killing cannot be objected to if one accepts large-scale killing in war, see Daniel Statman, ‘Targeted Killing’ (2004) 5 Theoretical Inquiries in Law 179. For a similar view from a legal and policy perspective, see Kenneth Anderson, ‘Targeted Killing in US Counterterrorism Strategy and Law’ (Working Paper of the Series on Counterterrorism and American Statutory Law) Brookings Institution, Georgetown University Law Center and the Hoover Institution (11 May 2009) <http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1415070>. For an analysis under US constitutional law, see Richard Murphy and Afsheen John Radsan, ‘Due Process and Targeted Killing of Terrorists’ (2009-2010) 31 Cardozo Law Review 405.

7 Christopher Drew, ‘Drones Are Playing a Growing Role in Afghanistan’, The New York Times (New York), 19 February 2010, A6. See also Gogarty and Hagger, above n 1, 85 et seq.

2 The Next Generation of Weapon Systems: True Robotics

As is the case in many areas of the law, technological advances generally outpace the generation of rules pertaining to particular social phenomena. International humanitarian law is no exception in this regard. From the very beginning, weapons and methods of warfare have created challenges to fundamental assumptions, eg to what extent certain weapons that created excessive injuries may be banned.11 Similar arguments can be made over the issue of targeted killing by UAVs that are not part of the traditional military arsenal, but that are commandeered by non-military governmental institutions.12

The prospective introduction of autonomous weapon systems poses a similar challenge in this regard. One of the central questions is whether the technological advances in the area of robotics are such that they threaten to leave the existing framework of international humanitarian law inadequate.13 Before conducting this analysis, however, it is important to briefly outline what distinguishes the current generation of machinery such as aerial drones from future generations of autonomous weapon systems. Attempts to produce remotely operated weapons have been undertaken since the end of the 19th century, when Nikola Tesla tested a remote-controlled weapon.14 Further attempts were made in WW I and WW II, some operated by wire and some by radio.15 Development continued throughout the 20th century, but UAVs did not gain prominence until shortly before the millennium.16 While it is not possible to describe the debate about the use of UAVs in great detail, their usage appears uncontroversial as long as a person remains in the loop. Notwithstanding the debate over whether or not the amount of information that is relayed by way of remotely operated drones leads to better targeting decisions,17 the use of such weapon systems appears generally unproblematic under international humanitarian law. This remains presumptively the case not only in scenarios where an operator no longer actively manages detection and targeting, but also in cases of more advanced autonomy. This is the case, for example, where an operator has to actively intervene in order to stop an attack. Situations like this are not characterised by full autonomy, as an operator remains in the loop. Arguably, however, the control that an operator exercises in these situations is far less detailed than is the case today. Instead of actively operating a UV, the situation is characterised by managing UVs through oversight, intervening only when necessary.

Future systems are predicted to be able to function fully autonomously. This differs considerably from the current generation of remotely operated vehicles. Autonomy in this context can thus be understood as an unmanned system that prosecutes an attack based on code that enables an independent (ie not pre-determined) decision-making process without direct human input. This includes the detection and targeting as well as the firing decision,18 wholly independent from immediate human intervention. These characteristics differentiate autonomous systems not only from their current predecessors, which are commanded from a distance, but also from weapons which have been pre-programmed to follow a certain flight path and attack one or more targets without making independent decisions.

8 See eg the report during a Congressional hearing by Michael S Fagan, Rise of the Drones: Unmanned Systems and the Future of War, US House of Representatives, Committee on Oversight and Government Reform, Subcommittee on National Security and Foreign Affairs, 111th Congress, 2nd Sess, 49 (23 March 2010).

9 See John Keller, Unmanned vehicle spending in the 2010 DOD budget to reach $5.4 billion (28 May 2009) Military and Aerospace Electronics <http://www.militaryaerospace.com/index/display/articledisplay/363553/articles/military-aerospace-electronics/executivewatch/unmanned-vehicle-spending-in-the-2010-dod-budget-to-reach-54billion.html>. The number is a combination of various line items from the fiscal year 2010 budget. See National Defense Authorization Act for Fiscal Year 2010, PL 111-84, 123 Stat 2190 (2009). More recent numbers are available at Department of Defense, Office of the Under Secretary of Defense (Comptroller) <http://comptroller.defense.gov/Budget2012.html>.

10 Harlan Geer and Christopher Bolkcom, Unmanned Aerial Vehicles: Background and Issues for Congress (21 November 2005) Congressional Research Service, 11 <http://www.fas.org/irp/crs/RL31872.pdf>.

11 Declaration Renouncing the Use, in Time of War, of Explosive Projectiles Under 400 Grammes Weight, Saint Petersburg, 29 November (11 December) 1868.

12 See Philip Alston, Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Addendum – Study on Targeted Killings, UN Doc A/HRC/14/24/Add.6; Mary Ellen O’Connell, ‘Unlawful Killing with Combat Drones: A Case Study of Pakistan, 2004-2009’ (Research Paper No 09-43, Notre Dame Law School Legal Studies, 2009) in Simon Bronitt (ed), Shooting to Kill: The Law Governing Lethal Force in Context (forthcoming) 1 <http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1501144>.

13 For a closer analysis, see Part 3, Autonomy and International Humanitarian Law, below.

14 Mark Edward Peterson, ‘The UAV and the Current and Future Regulatory Construct for Integration into the National Airspace System’ (2006) 71 Journal of Air Law and Commerce 521, 537.

15 Nick T Spark, ‘Unmanned Precision Weapons Aren’t New’ (2005) 131 US Naval Institute Proceedings 66.

16 For a brief overview, see Peterson, above n 14, 535 et seq.

17 Jack M Beard, ‘Law and War in the Virtual Era’ (2009) 103 American Journal of International Law 409.

3 Autonomy and International Humanitarian Law: The Need to Ensure Correct Quantitative and Qualitative Assessments

Based on the foregoing, this section will sketch some of the legal problems that a move towards a higher degree of autonomy will bring about. This brief comment will deal only cursorily with two cornerstones of international humanitarian law: distinction and proportionality.19 Other legal challenges that autonomous weapon systems pose, such as individual criminal responsibility, state responsibility and compliance with the testing requirement in Article 36 of Additional Protocol I, quite apart from ethical and political challenges, will have to be dealt with in other forums. At the outset, it should be borne in mind that the requirements for autonomous weapon systems are considerable. Such systems must not only have the ability to quantitatively assess whether a particular human being or object is a military target; once the decision has been made whether to engage a target, UVs must also be able to qualitatively determine whether the requirements of precaution have been met. Good arguments can be made that this is possible for the former; there are, however, strong indications that qualitative assessments (or a combination of qualitative and quantitative assessments) are — at least as of now and technically speaking — difficult if not impossible for computers to perform.20

International humanitarian law is characterised by a constant tension between two competing elements: military necessity on one hand, and the requirement to carry out combat in a humane fashion on the other.21 Being essential elements influencing this tension, the interpretation of both the principle of distinction and the principle of proportionality shapes the outcome of any legal analysis. Needless to say, there is considerable disagreement over the degree to which humane behavior in combat may trump military necessity or vice versa.22

18 If available, an autonomous system may also be programmed to choose among different weapons at its disposal.

19 An overwhelming majority of countries have ratified Additional Protocol I to the Geneva Conventions (Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), opened for signature 8 June 1977, 1125 UNTS 3 (entered into force 7 December 1979) (‘Additional Protocol I’)). Both principles form part of customary international law. Prosecutor v Kupreskic et al (Trial Judgement) (International Criminal Tribunal for the former Yugoslavia (ICTY), Case No IT-95-16-T, 14 January 2000) [524]. See also William H Taft, ‘The Law of Armed Conflict after 9/11: Some Salient Features’ (2003) 28 Yale Journal of International Law 319, 323; Michael J Matheson, ‘Session One: The United States Position on the Relation of Customary International Law to the 1977 Protocols Additional to the 1949 Geneva Conventions’ (1987) 2 American University Journal of International Law and Policy 419, 426.

3.1 Principle of distinction

In a simplified form, the principle of distinction — laid down in general terms in Article 48 of Additional Protocol I — requires that an attacker distinguish between combatants and civilians, as well as between objects that possess a military quality and those that are of a civilian nature.23 Difficulties arise for obvious reasons in cases where, for example, a target can be classified as both being civilian in nature and possessing a military purpose. Classic examples include infrastructure such as bridges by which an army could be supplied. More problematic is the targeting of objects that are useful not only for military purposes, but also have fundamental value for the civilian population. Subsequent rules in Additional Protocol I refine this general concept, namely by prohibiting the targeting of individual civilians24 (unless they take a direct part in hostilities25), historic monuments, works of art or places of worship.26 Furthermore, Additional Protocol I contains prohibitions against certain types of attacks that have an indirect effect on the civilian population. Specifically, Additional Protocol I forbids attacks that target objects that are considered to be ‘indispensable to the survival of the civilian population’, the natural environment and ‘installations containing dangerous forces’.27 Moreover, certain methods of attack are also prohibited, namely those that are indiscriminate.28 This means that in addition to being able to properly distinguish between legitimate targets and those that are not, the principle of distinction requires that an attack be carried out with weapons that are capable of prosecuting the attack in a discriminatory manner. This could mean that a pilot, left only with large ordnance the kill radius of which would not allow for a distinction between combatants and civilians, could not attack a target because the use of this particular weapon would not satisfy the rule of distinction.

UVs will have to be able to distinguish between civilian and military targets. As pointed out above, while the textual basis for the distinction appears clear, realities on the ground oftentimes leave it ambiguous whether a target is legitimate or not. In the case of UVs this means that the underlying software would have to be able to determine whether a particular target is civilian or military in nature.29 Moreover, the UV would have to be programmed so that it also takes account of the requirement that in cases of uncertainty it would abort the attack.30 A number of weapons today are capable of determining — based on pre-programmed characteristics, such as shape and dimensions — a target’s military nature.31 Once a sufficient number of characteristics of the target have been reconciled with the pre-programmed version, the weapon system can initiate an attack. This type of matching is mechanical, based on quantitative data, and even if one were to argue that there is still an unacceptable amount of ambiguity, it appears that the recent advances regarding this technology will enable such systems to function with the required accuracy in the near future.32

20 Tony Gillespie and Robin West, ‘Requirements for Autonomous Unmanned Air Systems set by Legal Issues’ (2010) 4(2) The International C2 Journal 1, 4. Whether, as the authors claim, there ‘will always need to be human intervention’ is far from clear (at 23).

21 Michael N Schmitt, ‘Military Necessity and Humanity in International Humanitarian Law: Preserving the Delicate Balance’ (2010) 50 Virginia Journal of International Law 795, 795.

22 Ibid; Theodor Meron, ‘The Martens Clause, Principles of Humanity, and Dictates of Public Conscience’ (2000) 94 American Journal of International Law 78.

23 Additional Protocol I, Article 48 states: ‘In order to ensure respect for and protection of the civilian population and civilian objects, the Parties to the conflict shall at all times distinguish between the civilian population and combatants and between civilian objects and military objectives and accordingly shall direct their operations only against military objectives.’

24 Ibid Article 51(2).

25 Ibid Article 52(3).

26 Ibid Article 53.

27 See ibid Articles 54, 55 and 56, respectively.

28 Ibid Article 51(4) states: ‘Indiscriminate attacks are prohibited. Indiscriminate attacks are: (a) those which are not directed at a specific military objective; (b) those which employ a method or means of combat which cannot be directed at a specific military objective; or (c) those which employ a method or means of combat the effects of which cannot be limited as required by this Protocol.’

29 One interesting proposal is mandating that UVs would not target humans, but only weapon systems. See John S Canning, A Concept for the Operation of Armed Autonomous Systems on the Battlefield (2006) 3rd Annual Disruptive Technology Conference <http://www.dtic.mil/ndia/2006disruptive_tech/canning.pdf>. While this may minimise the danger somewhat, it is unclear how this would alleviate the problem of, for example, someone carrying a rifle for safety reasons or for hunting purposes.

30 With respect to civilians, see Additional Protocol I Article 50(1); with respect to civilian objects, see Additional Protocol I Article 52(3).

31 Robert Sparrow, ‘Killer Robots’ (2007) 24 Journal of Applied Philosophy 62, 63. More recently, see Michael Lewis et al, ‘Scaling Up Wide-Area-Search Munition Teams’ (May-June 2009) 24 IEEE Intelligent Systems 10.
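The mechanical, quantitative matching described in section 3.1 can be illustrated with a deliberately simplified sketch. Everything here is hypothetical: the characteristic names, values and thresholds are invented for illustration and are not drawn from the sources cited above or from any real weapon system.

```python
# Illustrative sketch only: a toy version of quantitative,
# characteristic-based target matching. All signature fields and
# thresholds below are hypothetical.

# A pre-programmed signature of a military object (eg shape, dimensions).
TARGET_SIGNATURE = {"length_m": 7.0, "width_m": 3.5, "turret": True}

def characteristics_matched(observed, signature, tolerance=0.1):
    """Count how many pre-programmed characteristics the observation matches."""
    matched = 0
    for key, expected in signature.items():
        value = observed.get(key)
        if isinstance(expected, bool):
            # Boolean characteristics must match exactly.
            matched += value is expected
        elif value is not None and abs(value - expected) <= tolerance * expected:
            # Numeric characteristics match within a relative tolerance.
            matched += 1
    return matched

def is_military_target(observed, signature=TARGET_SIGNATURE, required=3):
    """Quantitative test: enough characteristics match the signature."""
    return characteristics_matched(observed, signature) >= required
```

The point of the sketch is the contrast drawn in the text: a distinction test of this kind reduces to counting matches against a pre-programmed signature, whereas the proportionality assessment discussed in section 3.2 offers no pre-programmed value against which ‘excessive’ civilian harm could be mechanically compared.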

3.2 Principle of proportionality
More problematic for present purposes is the principle of proportionality. Laid out in various provisions throughout Additional Protocol I, proportionality requires that damage to civilian objects may not be ‘excessive in relation to the concrete and direct military advantage anticipated’.33 These provisions — which do not use the term proportionality — are designed to protect the civilian population, yet their application is made more difficult through the use of the term ‘excessive’. This choice of wording is a result of the tension mentioned above, between the competing interests during armed conflict: gaining military advantage, while protecting the civilian population.34 This tension was pointed out in a 2000 report to the International Criminal Tribunal for the Former Yugoslavia (ICTY) Prosecutor, which addressed the difficulty in applying the principle of proportionality and professed that ‘[o]ne cannot easily assess the value of innocent human lives as opposed to capturing a particular military objective’.35 It is thus impossible to find bright-line rules that can determine a priori what is permissible and what is prohibited.36 In order to minimise the legal

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
32

Note however that specifically with respect to Additional Protocol I Article 51(4)(c) there has been considerable controversy since it arguably contains elements of proportionality and thus may not be merely a quantitative assessment. See generally Stefan Oeter, ‘Methods and Means of Combat’, in Dieter Fleck (ed) The Handbook of International Humanitarian Law (Oxford University Press, 2nd ed, 2008) 119, 201 et seq. See eg Additional Protocol I Articles 51.5(b) and 57(2)(iii). This has led some authors to claim that the principle of proportionality is too vague a concept and proportionality would only be implicated when ‘acts have occurred that are tantamount to the direct attack of the civilian population’. W Hays Parks, ‘Air War and the Law of War’ (1990) 32 Air Force Law Review 1, 173; Michael N Schmitt, ‘Targeting and International Humanitarian Law in Afghanistan’ (2009) 39 Israel Yearbook on Human Rights 307, 312. For an opposing view, see Yoram Dinstein, The Conduct of Hostilities Under the Law of International Armed Conflict (Cambridge University Press, 2nd ed, 2010) 120-121. Problems relating to proportionality assessments in the context of targeted killings have been pointed out by Noel Sharkey, ‘Death Strikes From the Sky: The Calculus of Proportionality’ (Spring 2009) IEEE Technology and Society Magazine 17, 19. The idea that the principle of proportionality applies in armed conflict has been affirmed strongly by the Supreme Court of Israel. See HCJ 769/02 Public Committee against Torture in Israel et al v Government of Israel et al, [2006] especially 30-33, <http://elyon1.court.gov.il/Files_ENG/02/690/007/a34/02007690.a34.pdf>. International Criminal Tribunal for the Former Yugoslavia (ICTY), Final Report to the Prosecutor by the Committee Established to Review the NATO Bombing Campaign Against the Federal Republic of Yugoslavia, (June 8, 2000) 39 International Legal Materials 1257, [48]. 
Gary D Solis, The Law of Armed Conflict: International Humanitarian Law in War (Cambridge University Press, 2010) 273.

33 34

35

36

EAP 8

Taking Humans Out of the Loop: Implications for International Humanitarian Law

163!

exposure of commanding officers, Additional Protocol I Article 57(2) refers to certain precautions that must be taken, again using the same terminology, ie ‘excessive’. Thus, a legal evaluation will have to take place on a case-by-case basis.37 Given these difficulties, another question would have to be answered, namely whether a singular set of proportionality assessments actually exists which could thus be programmed. The answer to this question is obviously negative and it is clear that a military commander may arrive at different conclusions in different situations and would most certainly differ in that assessment from a human rights lawyer. It is readily apparent that this type of analysis is no longer open to quantitative assessments, but rather that such analysis requires the evaluation of a variety of data in addition to weighing the relative weight of each aspect in the specific circumstances that exist when an attack is about to be launched. In the context of designing UVs, this would require at the very least addressing the following areas: With respect to target selection, the program would have to be designed to anticipate all potential decisions in an abstract manner. It would have to be able to determine how many civilian casualties would be acceptable under the circumstances at the time.38 The program would also have to be able to determine which type of weapon could be used under which circumstances, eg whether it is permissible to fire a high-impact weapon despite the presence of civilians because of the military advantage that could be gained by doing so. Since UVs would be fully autonomous, these systems would have to be able to react to changing circumstances. 
While the use of a particularly large weapon may have been permissible at one point, circumstances may change so as to make the use of that weapon illegal.39 The inability of software to confront these challenges — leaving aside the ability to prosecute humans for transgressions, which may act as an additional deterrent — may render the use of UVs almost useless except in the narrowest of circumstances.40 Even if one were to accept that certain elements of the principle of distinction are amenable to quantitative

37 ICTY, above n 35, [50].

38 Sharkey, above n 34, 18-19. For a different view, arguing that the increased precision inherent in the rising UV technology makes attacks more proportional by reducing the likelihood of collateral casualties, see Andy Myers, ‘The Legal and Moral Challenges Facing the 21st Century Air Commander’ (2007) 10 Royal Air Force Air Power Review 76, 89.

39 These circumstances, though imaginable, can be predicted to occur only infrequently, such as a lone military installation far from civilian objects or a military target similarly far removed from objects that could conflict with the requirements of international humanitarian law.

40 Additional arguments that militate against the use of UVs and that could not be addressed here are based on ethical and political considerations. For an opposing view regarding the former, see Ronald C Arkin, Governing Lethal Behavior in Autonomous Robots (CRC Press, 2009). For a view from an operational perspective, see Brian Burridge, ‘Post-Modern Warfighting with Unmanned Vehicle Systems: Esoteric Chimera or Essential Capability?’ (2005) 150 Royal United Services Institute Journal 20.

assessment, this is clearly not the case for the principle of proportionality with its requirement of making highly relational and context-dependent assessments.

One could argue that the move towards greater degrees of autonomy requires a new legal framework.41 Arguments to this effect have been made and are based on the view that current international humanitarian law is inadequate to deal with a decreased level of human participation in military operations.42 While it is true that international humanitarian law has been rooted in an anthropocentric paradigm, armed conflict has moved away considerably from its origins. Weapons have been designed to reach their targets over longer distances and new categories of weapons have been invented, yet most fundamental principles of international humanitarian law have remained in place. This is why others argue that the existing framework provides the necessary rules for autonomous weapon systems.43 There is a danger that by moving away from legal principles that have governed armed conflict to date, the legal principles underlying current international humanitarian law become more diffuse, or that the lack of consensus over the actual meaning of new legal rules will allow — at least for some time, but bearing in mind that early movers can have a disproportionate impact — for exploitation, possibly not only at the margins.

4

Conclusion

UVs will continue to proliferate in the future. Their technical utility is simply too great and there are many tasks in which there is little doubt that UVs can take on an important role which humans simply cannot carry out, such as rescue missions and fact-finding missions after natural or man-made disasters. The recent nuclear disaster in Fukushima is but one example.44 This being the case, it is all the more important that their introduction be accompanied by a legal structure that is designed to take on the challenges that this development inevitably poses. Gogarty and Hagger have posited

41 Note that there are no prohibitions under the current framework of international humanitarian law, as the approach has been not to prohibit a particular invention as such but rather its specific use. This point has also been made by Philip Alston, above n 12, [79].

42 See eg Armin Krishnan, Killer Robots: Legality and Ethicality of Autonomous Weapons (Ashgate, 2009).

43 See eg Sparrow, above n 31; Arkin, above n 40, 72.

44 Indeed, the lack of UVs in this regard is striking. See eg the debate after the nuclear meltdown in the Fukushima nuclear plant: John M Glionna and Yuriko Nagano, ‘Japan’s Technology Not Up to this Task – Despite Its Expertise, the Country Must Go Abroad for Robots to Work in Its Damaged Nuclear Plant’, Los Angeles Times (Los Angeles), 8 May 2011, 3.


that the proliferation of UVs in all areas of life will create pressure to form a regulatory framework into which UVs fall.45 This is true not only for the civilian realm, but equally — and potentially even more so — in the case of military use of genuine robotic technology. After all, it is in military use that the impact of autonomous systems on human life and liberty is — with the exception of only a handful of examples in the non-military world — at its strongest. This brief comment has addressed only two areas in which autonomous weapon systems confront potential legal limitations. Arguments can be made that these challenges can be overcome with regard to the principle of distinction, which requires that combat be carried out without directly targeting the civilian population. Advanced technologies already can, or will in the future be able to, distinguish between civilian and military targets because of the quantitative nature of this assessment, although, as has been mentioned above, not all such assessments are necessarily quantitative. The same cannot be said of the principle of proportionality. That principle is, by nature, not amenable to quantification; because of its relational nature, the assessment required is a qualitative one. And while great strides have been made in advancing the former, this is not the case in the development of the latter. This means that the use of autonomous weapon systems is legally indefensible. It is, moreover, highly problematic both ethically and politically.

45 Gogarty and Hagger, above n 1, 122.


The (Common) Laws of Man Over Vehicles Unmanned
COMMENT BY EMERITUS PROFESSOR JIM DAVIS

Abstract
My commentary is restricted to two issues raised in the précis article: (a) what response might the law make to autonomous or semi-autonomous unmanned ground vehicles (UGVs) on public roads; and (b) whether the use of unmanned vehicles, and especially the use of unmanned aerial vehicles (UAVs), will require the laws protecting privacy to be reviewed. A further limit on my commentary is that I can speak with any confidence only on the current law in Australia and New Zealand. I comment on the possible effect of the advent of unmanned vehicles on the law in England, Canada and the United States, but beyond those countries, I regret that my knowledge is insufficient to make any meaningful commentary.

1

UGVs on Public Roads

I agree entirely with the general comment in the précis article, at 124, that ‘the common law is not incapable of dealing with new technologies’, but I fear that I must disagree with the assumption which may be drawn from that comment, that the tort of negligence will be able to deal effectively with the advent of UGVs on public roads. In my view, the only circumstance in which a UGV might be allowed on a public road, at least in Australia or New Zealand, is on condition that (a) the operator of the vehicle is to be held strictly liable for any loss or damage caused by it, and (b) the operator has satisfied the regulatory authorities that it would be able to provide adequate compensation for anyone injured by that operation. Let me justify this conclusion by reference, first, to the limitations imposed by Parliaments around the English-speaking world on the field of operation of the law of negligence, in an effort to provide effective compensation to victims of road trauma, and secondly to analogies that may be drawn with common law principles.

1.1 Negligence and the effective compensation of victims of road trauma
Legal scholars have long been all too well aware that the law of negligence is far from perfect as a means of providing effective compensation to victims of road trauma. Among the defects of the law are: • the need to prove fault, sometimes in relation to a collision which occurred in a fraction of a second;


• the fact that compensation is paid as a lump sum to cover both past and future losses, leading Lord Scarman to comment: ‘Knowledge of the future being denied to mankind, so much of [an] award as is to be attributed to future loss and suffering … will almost surely be wrong.’1
• the fact that in most cases the amount of compensation is arrived at by settlement between the parties, which is likely to be to the disadvantage of the injured party because of the relatively weak bargaining position that any victim of road trauma suffers from;
• the delay in arriving at a conclusion of a dispute; and
• the costs of engaging in litigation.2

These defects have led legislatures in New Zealand and Australia to enact schemes for the automatic compensation of the victims of road (and other) trauma, in most instances providing a statutorily determined level of compensation, payment of which is determined by the fact of an injury (and its severity), and with no reference to the cause of that injury. Under the schemes in New Zealand and the Northern Territory, the injured party is denied the right to bring civil proceedings for compensation.

1.2 No-fault Compensation Schemes in New Zealand and Australia
The most comprehensive of such no-fault compensation schemes is the Accident Compensation Act 2001 (NZ). The legislation provides for the payment of medical, hospital and other similar treatment to anyone who suffers personal injury by accident in New Zealand, while residents in that country receive 80% of their pre-accident taxable income (subject to relatively generous upper and lower limits) for so long as they are incapacitated from employment. The legislation abolishes the right to sue at common law in respect of any personal injury for which it provides compensation.3 The scheme is administered by the Accident Compensation Corporation, and the benefits payable to the victims of road accidents are funded, in general terms, from levies payable by the owners of motor vehicles.4 I assume that if anyone were to seek permission to operate a UGV on the roads in New Zealand, the Accident Compensation Corporation would assess the chances of the vehicle

1 Lim v Camden Area Health Authority [1980] AC 174, 183.

2 All of these factors, and more, are fully discussed in Luntz, Hambly, Burns, Dietrich and Foster, Torts: Cases and Commentary (LexisNexis, 6th ed, 2009) [1.2.1]-[1.2.30]; see also Balkin and Davis, Law of Torts (Butterworths, 4th ed, 2009) [1.14]; Sappideen and Vines (eds), Fleming’s Law of Torts (Lawbook Co, 10th ed, 2011) [20.80].

3 Accident Compensation Act 2001 (NZ) s 317.

4 For a more detailed explanation of the scheme, see Balkin and Davis, above n 2, [12.42]-[12.49].


causing personal injury, and require the operator to pay a fee based on the likely costs involved.

The only jurisdiction in Australia to have a similar scheme, under which the statutory entitlements to compensation for road injuries are in substitution for any rights at common law, is the Northern Territory. Under the Motor Accidents (Compensation) Act 1979 (NT), anyone who is injured or dies as a result of a motor accident that occurred in the Territory, or a resident of the Territory whose injury or death (wherever in Australia it occurred) arose out of the use of a motor vehicle registered there, is entitled to recover the benefits provided for in the Act. The principal benefits are compensation for any loss of earning capacity, up to a maximum of 85% of the average weekly earnings of wage-earners in the Territory,5 plus the cost of medical and rehabilitation expenses incurred as a result of the accident.6 The scheme is administered by the Territory Insurance Office, and presumably if the operator of a UGV were to seek to register the vehicle under the Motor Vehicles Act 1949 (NT) that Office would have to determine a registration fee that would cover the risk of the vehicle causing injury or death.

Victoria and Tasmania also have statutory schemes for the compensation of the victims of road trauma.
The Transport Accidents Act 1986 (Vic) provides for the payment by the Transport Accident Commission of up to 80% of the pre-accident earnings of anyone injured in a road accident in Victoria, or of anyone injured elsewhere in Australia who is either a resident of that State or the driver of or passenger in a vehicle registered there.7 Such a victim is also entitled to recover benefits for any impairment of bodily function,8 and if such a victim dies, the dependents are entitled to a death benefit and weekly payments for a limited time.9 The scheme is subject to some exclusions, and is in addition to the victim’s rights at common law, so long as the Transport Accident Commission has assessed the injuries as serious.10 It is assumed that if a person sought to register a UGV for use on the roads in Victoria, the Transport Accident Commission would need to assess the risk of injury likely to arise from use of the vehicle, and charge a registration fee accordingly.

The Motor Accidents (Liabilities and Compensation) Act 1973 (Tas) provides rather more limited benefits, generally in respect of road accidents occurring in Tasmania, or involving a motor vehicle registered there.11 The benefits

5 Motor Accidents (Compensation) Act 1979 (NT) s 13.

6 Ibid s 18. For further details of the legislation, see Balkin and Davis, above n 2, [12.34]-[12.37].

7 Transport Accidents Act 1986 (Vic) s 35.

8 Ibid s 47.

9 Ibid ss 57-59.

10 Ibid s 93(2). For further details of the legislation, see Balkin and Davis, above n 2, [12.21]-[12.26].

11 Motor Accidents (Liabilities and Compensation) Act 1973 (Tas) s 23.


include a disability allowance, death and funeral benefits and the payment of hospital and medical expenses incurred as a result of the accident.12 The scheme is in addition to any claim an injured person may have at common law, subject, of course, to the need to bring the statutory benefits into account in the assessment of damages.13 The scheme is administered by the Motor Accidents Insurance Board. As with the schemes already mentioned, it is presumed that if one sought to register a UGV for use on the roads in Tasmania that Board would assess a registration fee that was intended to cover the Board’s likely exposure to risk.

The only other statutory scheme in Australia for the compensation of victims of road trauma is that in New South Wales, the benefits of which are available only to those suffering severe injuries14 as a result of a motor accident which occurred in that State.15 Those benefits are available solely by reason of the fact of the injury having been suffered, and regardless of the fault of anyone involved in the accident.16 The benefits include such treatment, rehabilitation and care as is necessary, the need for which is assessed on a regular basis by the Lifetime Care and Support Authority. It is presumed that if one sought to register a UGV for use on the roads in New South Wales, that Authority would assess the risk of injury likely to arise therefrom, and charge accordingly.

A final comment on legislation in Australia providing no-fault compensation for road (and other) trauma is that in February 2011 the Productivity Commission handed down a Draft Report on Disability Care and Support, one aspect of which is a proposal for no-fault compensation for catastrophic injuries suffered as a result of accidents. Draft Recommendation 16.1 is:

State and territory governments should establish a national framework in which state and territory schemes would operate – the National Injury Insurance Scheme.
The NIIS would provide fully-funded care and support for all catastrophic injuries on a no-fault basis. The scheme would cover catastrophic injuries from motor vehicle, medical, criminal and general accidents. Common law rights to sue for long-term care and support should be removed.17

12 For details of the benefits, see Balkin and Davis, above n 2, [12.28]-[12.32].

13 Motor Accidents (Liabilities and Compensation) Act 1973 (Tas) s 27.

14 See Motor Accidents (Lifetime Care and Support) Act 2006 (NSW) s 7, under which eligibility to be a participant in the Scheme set up by that Act is to be determined by the Lifetime Care and Support Authority, pursuant to Guidelines issued by it.

15 Ibid s 4(2).

16 Ibid s 4(4).

17 Australian Government Productivity Commission, Disability Care and Support – Productivity Commission Draft Report Volume 2 (Commonwealth of Australia, 2011) 16.38.


The Report goes on to propose (in draft Recommendation 16.5) that a scheme covering catastrophic injuries arising from motor vehicle and medical accidents should be in place by 2013, and other forms of catastrophic injury should be covered by at least 2015.

1.3 No-fault compensation schemes in North America
Starting with the Canadian Province of Saskatchewan, many of the Canadian Provinces, and at least eight of the States in the United States, have legislative schemes similar to that currently operating in Tasmania, in that they provide for compulsory first-party insurance for the victims of road trauma in the various jurisdictions, but only as a source of compensation supplementary to common law rights.18

1.4 Common law analogies
For those jurisdictions in Australia or North America in which common law rights to sue for injuries suffered on the road have not been abolished, it is suggested that, in the case of an accident resulting from the use of a UGV, a court would be likely to impose strict liability, rather than merely a duty to take care, on the operator of the UGV. It is further suggested that such an imposition of strict liability follows a clear common law tradition. A UGV, whether autonomous or semi-autonomous, may properly be regarded as dangerous, as robotics have not yet advanced to such a state that one could confidently accept that an autonomous UGV would act in the same way as the reasonable person of the law of torts. And if it is accepted that a UGV is dangerous, it is but a short step to draw an analogy with the liability at common law of the owner or keeper of an animal that is either known or presumed to be dangerous to mankind. Such liability is strict, in that the care taken (or not taken) by the owner to keep the animal from doing harm is irrelevant. Once the harm is shown to have been caused by the animal, the only defence open to the owner or keeper is that the plaintiff voluntarily assumed the risk of that injury by his or her actions.19 It may be accepted that this strict liability has been abolished by statute in New South Wales, South Australia and the Australian Capital Territory,20 where the owners and keepers of all animals other than dogs are liable only if they are negligent in the control of their animal, but even in those jurisdictions, strict liability is statutorily imposed on the keeper of a dog, for the very reason that dogs ‘not only have the size and strength to inflict serious bodily injury, but also are generally privileged to roam freely.’21 A UGV, it may be argued, is very much

18 Fleming, above n 2, [20.95].

19 See Balkin and Davis, above n 2, [15.11]-[15.18].

20 Animals Act 1977 (NSW); Civil Liability Act 1936 (SA) Pt 3; Civil Liability (Wrongs) Act 2002 (ACT) ss 212-215.

21 Balkin and Davis, above n 2, [15.20], referring to the New South Wales Law Reform Commission, Report on Civil Liability for Dogs, Report No 8 (1970).


more capable of inflicting serious bodily injury than a dog, and hence its use on public roads must render its operator subject to a like strict duty. A further analogy between the use of UGVs and the approach of the common law to dangerous activities relates to an aspect of vicarious liability. It is generally the case that an employer will be vicariously liable for another’s negligence only when the person who has caused the harm for which the plaintiff seeks recompense is an employee, and has acted in the course of employment. However, there is a heterogeneous collection of cases in which an employer will be vicariously liable for the negligence of one who is not an employee, but an independent contractor. One such circumstance is that in which an employer is engaged in the dangerous use of land which is likely to detrimentally affect people or property in the near vicinity. In such a case, the High Court of Australia held, in Burnie Port Authority v General Jones Pty Ltd,22 the employer will be vicariously liable, despite having taken all reasonable care, if an independent contractor, over whose actions the employer has no direct control, is negligent in causing harm to those nearby. An autonomous UGV may be likened to an independent contractor in that the operator of the UGV has no direct control over its actions while it is in operation. Once the operator sets the UGV in motion, it is not unreasonable to regard it as being engaged in the dangerous use of the roadway and, by analogy with the Burnie Port Authority case, strictly liable for any injury caused to others.

2

Privacy

The précis article assumes, at 127, that the law will protect an individual’s privacy only in those circumstances in which he or she has a reasonable expectation of privacy – that is, when the person assumes that they are not being subjected to surveillance or the recording of their activities by other members of the community. The précis article goes on to point out, at 128-9, that UAVs are becoming more and more sophisticated, with an increasing power to capture images of the conduct of members of the public wherever they may be found, subject only to the limitation that a person is not surrounded by an opaque structure. This latter view leads Gogarty and Hagger to assert that, as surveillance technology ‘becomes more available and less expensive, the reasonable expectation argument will become even harder to maintain.’23 While I agree with the initial premise, that privacy is protected only when the subject of the intrusion has a reasonable expectation of privacy, I differ from the conclusion drawn therefrom, that such an expectation of privacy is becoming harder to maintain. It is my contention that the reasonable expectation of privacy arises not from the fact that the subject of the intrusion had no reason to suspect that he or she was being covertly watched, but from the fact that the conduct of the subject of the intrusion is such that a

22 (1994) 170 CLR 520; see further Balkin and Davis, above n 2, [26.31].

23 Précis article, 129.
(1994) 170 CLR 520; see further Balkin and Davis, above n 2, [26.31]. Précis article, 129.


reasonable person would be highly offended if that conduct were published to the world at large. I seek to support this contention from a review of the current and proposed forms of protection provided around the English-speaking world.

2.1 The tort of invasion of privacy in the United States
In the English-speaking world, only the United States has developed a tort of invasion of privacy solely by judicial decision. The courts started their development of the tort in the latter years of the nineteenth century, following the famous article by Warren and Brandeis24 which synthesised mainly English decisions on the torts of defamation, nuisance and trespass. The tort now finds expression in §§ 652A – 652E of the Second Restatement,25 although for present purposes only §§ 652A, 652B and 652D are relevant to invasions of privacy by UAVs. They provide:

652A. General Principle
(1) One who invades the right of privacy of another is subject to liability for the resulting harm to the interests of the other.
(2) The right of privacy is invaded by:
(a) unreasonable intrusion upon the seclusion of another, as stated in 652B; or
(b) appropriation of the other’s name or likeness, as stated in 652C; or
(c) unreasonable publicity given to the other’s private life, as stated in 652D; or
(d) publicity that unreasonably places the other in a false light before the public, as stated in 652E.

652B. Intrusion upon Seclusion
One who intentionally intrudes, physically or otherwise, upon the solitude or seclusion of another or his private affairs or concerns, is subject to liability to the other for invasion of his privacy, if the intrusion would be highly offensive to a reasonable person.

652D. Publicity Given to Private Life
One who gives publicity to a matter concerning the private life of another is subject to liability to the other for invasion of his privacy, if the matter publicized is of a kind that
(a) would be highly offensive to a reasonable person, and

24 ‘Right to Privacy’ (1890) 4 Harvard Law Review 194.

25 American Law Institute, Restatement (Second) of Torts (1977).
‘Right to Privacy’ (1890) 4 Harvard Law Review 194. American Law Institute, Restatement (Second) of Torts (1997).


(b) is not of legitimate concern to the public.

It is suggested that these principles would provide protection to a person who has been subject to surveillance by UAVs if that surveillance included the recording and publishing of conduct which, while perfectly lawful, was of a private nature such that it would be highly offensive to a reasonable person for that information to be widely dispersed.

2.2 The statutory tort of violation of privacy in Canada
British Columbia,26 Manitoba,27 Newfoundland and Labrador28 and Saskatchewan29 have created tort liability by statute for the violation of a person’s privacy. Liability is imposed in very general terms, in that, in three of the Provinces, the relevant statute states: ‘It is a tort, actionable without proof of damage, for a person, wilfully and without a claim of right, to violate the privacy of another.’30 In Manitoba, the difference in wording indicates no intention to differ in meaning. The relevant provision is:

2(1) A person who substantially, unreasonably and without claim of right, violates the privacy of another person commits a tort against that person.
2(2) An action for violation of privacy may be brought without proof of damage.

The legislation in three of the Provinces goes on to provide some particularity to the tort by giving examples of its commission. It is noteworthy that each statute lists as an example of a violation of privacy:

surveillance, auditory or visual, whether or not accomplished by trespass, of that person, his home or other place of residence, or of any vehicle, by any means including eavesdropping, watching, spying, besetting or following.31

British Columbia has the same idea in slightly different words, s 1(4) of its Privacy Act providing that ‘privacy may be violated by eavesdropping or surveillance, whether or not accomplished by trespass.’

26 Privacy Act, RSBC 1996, Chap 373.

27 Privacy Act, CCSM, c P125.

28 Privacy Act, RSNL 1990, Chap P-22.

29 Privacy Act, RSS 1978, c P-24.

30 BC s 1(1); NL s 3(1); Sask s 2.

31 Manitoba s 3(a); Newfoundland s 4(a); Saskatchewan s 3(a).


Although this legislation was first introduced into Canada by the British Columbia legislature in 1968,32 there has been remarkably little litigation on it in any of the four Provinces. One of the matters to emerge from an analysis of that litigation is that the courts take a wide range of factors into account in determining whether there has been a violation of the plaintiff’s privacy, and that the plaintiff’s possible expectation of privacy is not an absolute, but is weighed against the defendant’s conduct.

Thus, in Silber v BCTV Ltd,33 video footage of the plaintiff, who was engaged in an industrial dispute, that was taken while the plaintiff was in a public car-park, was held not to be a violation of his privacy. Although the car-park was privately owned, it was open to the public, the plaintiff was well aware that a TV crew were filming, and the conduct of the TV crew was regarded as fulfilling a public duty of informing the public.

On the other hand, in Watts v Klaemt,34 the parties, who were neighbours, had been engaged in an acrimonious dispute for some time. In reprisal for some imagined wrong of the plaintiff, the defendant used a scanner to eavesdrop on, and record, the plaintiff’s telephone calls which she made, in her home, on a cordless telephone. The Judge accepted that, in light of the age of the telephone that the plaintiff used, she ought to have realised that calls could be intercepted by relatively inexpensive equipment, but that, as she had made the calls in the privacy of her own home rather than in a public call box, she had a reasonable expectation of privacy. Furthermore, the defendant had no claim of right to intercept and record the calls, and his conduct was regarded as sufficiently reprehensible to warrant a finding that he had violated the plaintiff’s privacy.
Similarly, in Milner v Manufacturers Life Insurance Co,35 the first plaintiff, Mrs Milner, had claimed a disability pension from the defendant, which arranged for a private investigator to film her from the street, while she was in her home, or in her front garden. The particular episode of filming of which Mrs Milner complained had taken place after dark, when she was inside her house, but had the blinds up and lights on inside. In those circumstances her expectation of privacy was held to be low, and the private investigator had done nothing more than he was employed to do. However, on the same occasion, Mrs Milner’s daughter (the second plaintiff) was also inside the house, trying on a costume. The private investigator’s continued filming caught the daughter without a shirt on. Malnick J considered that the daughter’s privacy had been violated thereby. He considered that, as the daughter was not the subject of any investigation by the defendant, and as she had no reason to know of the possibility of the investigator outside, her expectation of privacy was relatively high. Furthermore, the investigator had no claim of right with respect to the daughter, and must have known that his conduct would be offensive to her.

32 See Peter Burns, ‘The Law and Privacy: The Canadian Experience’ (1976) 54 Canadian Bar Review 1, 31.

33 [1986] 2 WWR 609 (BC SC).

34 2007 BCSC 662.

35 2005 BCSC 1661.


2.3 The judicial development of a right of privacy in New Zealand
Starting in the 1990s with Bradley v Wingnut Films Ltd36 and P v D,37 the courts in New Zealand are in the process of developing an independent tort based on the wrongful publication of private information. That development has continued over the last decade.

In Hosking v Runting,38 a television personality complained about the publication in a newspaper of a photograph of his child taken in a public street (and with the knowledge of the plaintiff). The Court of Appeal rejected the claim, pointing out that the two fundamental requirements for the tort were the existence of facts in respect of which there was a reasonable expectation of privacy, and publicity given to those private facts which would be considered highly offensive to an objective reasonable observer. Since the photograph in question had been taken on a public street, the first of those requirements had not been met.

More recently, in Television New Zealand Ltd v Rogers,39 the Supreme Court of New Zealand did not necessarily agree with all of the comments made in earlier cases, but nevertheless rejected an appeal by Rogers, who had failed to persuade the lower courts that the publication of the video recording of his being interviewed by the Police after he had been charged with murder was within the bounds of this tort. The members of the Supreme Court were of the view that such a video recording was not one in which the subject could have a reasonable expectation of privacy.

With regard to the possible commission of this tort by surveillance by UAVs, one might suppose that, so long as the conduct which has been recorded and published is such as could be regarded by a reasonable person as private, the intrusion by the UAV would fall within the parameters of this tort.

2.4 The judicial development of protection of privacy in England
Although the House of Lords held, in Wainwright v Home Office,40 that there was, at that time, ‘no over-arching, all-embracing cause of action for “invasion of privacy”’,41 the same tribunal held, only a few months later, in Campbell v MGN Ltd,42 that the well-established action for breach of confidence could be

36 [1993] 1 NZLR 415.
37 [2000] 2 NZLR 591.
38 [2005] 1 NZLR 1.
39 [2008] 2 NZLR 277.
40 [2004] 1 AC 406.
41 See Campbell v MGN Ltd [2004] 2 AC 457, [11] (Lord Nicholls).
42 [2004] 2 AC 457.


JLIS Special Edition: The Law of Unmanned Vehicles

Vol 21(2) 2011/2012

developed to protect against the disclosure of private information.43 In general terms, the House of Lords held that:
• the duty of confidence would arise whenever the person who came under the duty was in a situation where he or she knew or ought to know that the other person could reasonably expect his or her privacy to be protected; and
• that reasonable expectation of the subject was to be assessed by asking whether the publication of the information would be highly offensive to a reasonable person of ordinary sensibilities placed in the same position as the subject of the disclosure.44

On the facts of the Campbell case, it was held that the plaintiff was entitled to an award of damages for breach of confidentiality. The defendant, the publisher of the Daily Mirror newspaper, had published both an article and a photograph of Naomi Campbell, an internationally known fashion model, emerging from a meeting of Narcotics Anonymous. A majority of the House of Lords considered that the details of the plaintiff’s treatment for drug addiction were akin to private medical records, and therefore imposed a duty of confidence on the newspaper, and that, in view of the assurance of anonymity and confidentiality given by Narcotics Anonymous, a reasonable person in Miss Campbell’s position would find the publication highly offensive. In terms of possible infringements of privacy by UARs, two points may be made about the Campbell case. On the one hand, the fact that the photograph was taken covertly was not, by itself, determinative. If, as Baroness Hale commented, the photograph had been taken ‘when she pops out to the shops for a bottle of milk’,45 no one could have complained. It is the activity photographed which must be private. Hence, on the other hand, even though Miss Campbell could presumably expect freelance photographers to dog her every step in the hope of catching a newsworthy shot, she could still reasonably expect her privacy to be protected in the circumstances before the Court. In Hellewell v Chief Constable of Derbyshire,46 Laws J had earlier made the same point, when he said:

If someone with a telephoto lens were to take from a distance and with no authority a picture of another engaged in some private act his subsequent disclosure of the photograph would, in my judgment, as surely amount to a breach of confidence as if he had found or stolen a

43 Part of the impetus for this development was the enactment of the Human Rights Act 1998 (UK), especially Art 8, which protects the right to respect for private life, and Art 10, which protects the right to freedom of expression.
44 See generally [2004] 2 AC 457, [21]-[22] (Lord Nicholls).
45 Ibid [154].
46 [1995] 1 WLR 804, 807 (emphasis supplied).


letter or diary in which the act was recounted and proceeded to publish it.47

Similarly, in Mosley v News Group Newspapers Ltd,48 Eady J awarded Max Mosley, head of the Formula 1 international racing car federation and son of fascist and Nazi-sympathiser Sir Oswald Mosley, £60,000 damages for mental distress arising from the publication of video footage of private sexual orgies in which he had indulged. Although there were at least five other people involved in the activities which had been filmed, his family and associates had no idea that they had taken place, and the information was consequently regarded as private for these purposes.

2.5 Protection of privacy in Australia
In Australian Broadcasting Corp v Lenah Game Meats Pty Ltd,49 the High Court was prepared to accept that, in due course, the common law of Australia might develop the means for protecting privacy,50 but on the facts of that case, the Court refused to extend any such protection to a company. Since then, the only superior court to consider the issue has been the Court of Appeal in Victoria, in Giller v Procopets,51 in which the court upheld the appellant’s claim for damages for breach of confidence based on the respondent’s showing (or threatening to show) to others videotapes of sexual activity between the parties, many of which had been taken with her consent. However, in 2008 the Australian Law Reform Commission produced a Report, For Your Information: Australian Privacy Law and Practice,52 which, while concerned largely with the operation of the Privacy Act 1988 (Cth), also proposed the introduction of a statutory cause of action for a serious invasion of privacy. The terms of its recommendations are:

Recommendation 74–1 Federal legislation should provide for a statutory cause of action for a serious invasion of privacy. The Act should contain a non-exhaustive list of the types of invasion that fall within the cause of action. For example, a serious invasion of privacy may occur where:
(a) there has been an interference with an individual’s home or family life;

47 This comment was subsequently quoted with approval by Gleeson CJ in Australian Broadcasting Corp v Lenah Game Meats Pty Ltd (2001) 208 CLR 199, [34].
48 [2008] EWHC 1777 (QB).
49 (2001) 208 CLR 199.
50 Ibid [40]-[43] (Gleeson CJ), [107]-[110] (Gummow and Hayne JJ), [335] (Callinan J).
51 (2008) 24 VR 1.
52 ALRC 108.


(b) an individual has been subjected to unauthorised surveillance;
(c) an individual’s correspondence or private written, oral or electronic communication has been interfered with, misused or disclosed; or

(d) sensitive facts relating to an individual’s private life have been disclosed.

Recommendation 74–2 Federal legislation should provide that, for the purpose of establishing liability under the statutory cause of action for invasion of privacy, a claimant must show that in the circumstances:
(a) there is a reasonable expectation of privacy; and
(b) the act or conduct complained of is highly offensive to a reasonable person of ordinary sensibilities.
In determining whether an individual’s privacy has been invaded for the purpose of establishing the cause of action, the court must take into account whether the public interest in maintaining the claimant’s privacy outweighs other matters of public interest (including the interest of the public to be informed about matters of public concern and the public interest in allowing freedom of expression).

Recommendation 74–3 Federal legislation should provide that an action for a serious invasion of privacy:
(a) may only be brought by natural persons;
(b) is actionable without proof of damage; and
(c) is restricted to intentional or reckless acts on the part of the respondent.

Recommendation 74–4 The range of defences to the statutory cause of action for a serious invasion of privacy provided for in federal legislation should be listed exhaustively. The defences should include that the:
(a) act or conduct was incidental to the exercise of a lawful right of defence of person or property;
(b) act or conduct was required or authorised by or under law; or
(c) publication of the information was, under the law of defamation, privileged.

Recommendation 74–5 To address a serious invasion of privacy, the court should be empowered to choose the remedy that is most appropriate in the circumstances, free from the jurisdictional constraints that may apply to that remedy in the general law. For example, the court should be empowered to grant any one or more of the following:
(a) damages, including aggravated damages, but not exemplary damages;


(b) an account of profits;
(c) an injunction;
(d) an order requiring the respondent to apologise to the claimant;
(e) a correction order;
(f) an order for the delivery up and destruction of material; and

(g) a declaration.

The recommendations have not yet been accepted by the federal government. It is suggested that if they are accepted, anyone who is subject to surveillance by UAVs and whose private conduct is subsequently made known, would have good grounds for claiming damages under such a statutory right of action.

Conclusion
I suggest that the means by which privacy is protected around the English-speaking world, both presently and prospectively, are sufficiently flexible to be able to continue to provide the same protection, whatever new means might be devised that would enable a greater number of people to carry out covert surveillance on their fellows.


Unmanned Vehicles, Surveillance Saturation and Prisons of the Mind
COMMENT BY BRENDAN GOGARTY

Abstract
In this commentary I expand upon the discussion of privacy that I set out with my colleague in the précis to this edition. In particular, I consider what the impact of military technologies, designed to achieve persistent and saturation-capacity surveillance over war zones, might be on civil space and civil society.

1 Introduction

Unmanned Vehicles (UVs) are lauded as ‘force multipliers’, but so too can they be seen as ‘problem magnifiers’, particularly for the law. That is, in very large part, because they are specifically designed to overcome traditional anthropocentric limitations, extending the reach and influence of their controllers into areas and arenas with which the law previously did not need to concern itself. In the précis we argued this was particularly apparent in respect of the increasing use of surveillance drones in the civilian space. The recent success of unmanned vehicles (UVs), particularly aerial UVs (UAVs), is very much the result of their capacity to undertake ‘high-powered and constant surveillance over vast tracts of land’ in conflict zones.1 Given that the majority of current civilian UV technology — especially that employed by state entities — consists merely of rebadged military adaptations, we argued that their ‘adoption into the civilian world will provide the same surveillance capacities to those controlling them; capacities far beyond those envisioned by the Courts of both those countries that recognise a right to privacy, and those that do not’.2 In this commentary I wish to examine the socio-legal implications of so-called ‘global, persistent, surveillance’3 by UVs employed by the state over its own, rather than enemy, territory. In particular, I will consider the potential impact on privacy, and how the erosion of personal privacy will ultimately impact on other freedoms important to civil democratic societies, such as freedom of expression and freedom of association. This commentary will start with a basic overview of privacy and surveillance. Following this I will discuss how surveillance may impact on certain important privacy rights and consider how UV technologies threaten to erode

1 Brendan Gogarty and Meredith Hagger, ‘The Laws of Man over Vehicles Unmanned: The Legal Response to Robotic Revolution on Sea, Land and Air’ (2008) 19(1) Journal of Law, Information and Science 73, 130 (‘précis’).
2 Ibid 130.
3 Ibid 80.


those rights much further. I contend that current law is insufficient to act as a check on the overuse or misuse of UV surveillance, and argue that some form of regulatory debate is required to address current regulatory shortcomings. This commentary is not intended to recommend or frame possible regulatory responses to that attrition of civil rights. Rather, I argue that, should the requisite public and legal debate not happen soon, it will not only be relatively futile by the time it does occur but, ironically, the spread of surveillance may itself erode people’s willingness to participate in democratic and participatory activities in the first place.

2 Privacy and Surveillance: Definitions

Before examining the impact of UVs on privacy it is important to discuss what privacy is. Unfortunately this is not a particularly easy task. Indeed, it is almost impossible to write about privacy without noting its definitional, conceptual and legal problems.

2.1 Privacy
Privacy is ‘somewhat of an esoteric concept, without precise objectively discernable boundaries’.4 It covers a wide range of forms of behaviour, and can be context-dependent and subjectively variable.5 The term can describe everything from interpersonal infringement of body space, to eavesdropping, computer hacking or surveillance by the state. In the précis we covered a larger range of these sub-categories6 than I plan to discuss here. What I intend to focus on is the notion of privacy as a ‘right to be left alone’,7 particularly from interference and monitoring by the state and its institutions. Specifically I wish to consider the far-reaching consequences of the temporal and physical extension of state surveillance that UV technology now makes possible. I believe this is the most worrisome immediate problem presented by civilian UV technology, at least in the near future.

2.2 Surveillance
Unlike the more nebulous concept of privacy, surveillance is a somewhat more defined construct. Surveillance, according to James Rule, entails ‘any form of systematic attention to whether rules are obeyed, to who obeys and who does not, and to how those who deviate can be located and sanctioned.’8 Anthony Giddens described surveillance as ‘the supervision of the

4 Précis, above n 1, 126.
5 Daniel J Solove, ‘Conceptualizing Privacy’ (2002) 90 California Law Review 1087, 1092.
6 Précis, above n 1, 124-132.
7 Samuel Warren and Louis D Brandeis, ‘The Right to Privacy’ (1890) 4(5) Harvard Law Review 193.
8 James Rule, Private Lives and Public Surveillance (Allen Lane, 1973) 40.


activities of subject populations’, especially in the ‘political sphere’.9 He divides surveillance into direct (prisons/schools/workplaces) and indirect, insofar as it relates to the authoritarian ‘control of information’ and the ordering and deployment of that knowledge.10 Hence, in this paper the term is taken to mean the observation and recording of individuals’ behaviour with the ultimate aim of ensuring rule compliance or meting out sanctions for rule breaches.

2.3 Surveillance and privacy – the interface
Surveillance has seemingly direct and obvious implications for privacy, insofar as it results in the viewing and recording of individuals’ behaviour and movement. It is often undertaken without the surveillance subject’s consent and sometimes without their knowledge. Equally, once recorded, personal information may be re-used in ways to which the subject has not consented, or cannot be assumed to have consented. This would innately appear to be a fundamental breach of privacy. Yet, that innate sense does not always rationally translate into a clear form of actual harm. That is particularly the case where the surveillance is undertaken openly and in the public domain. Yet, sometimes it can even be hard to explain why covert surveillance causes harm or offence in the private domain, especially where the subject of the surveillance is unaware of it. Much surveillance, particularly audio-visual surveillance, is undertaken in places where the subject would not or could not have a reasonable expectation of privacy.11 In places like prisons, schools or workplaces, direct surveillance occurs with either direct or implied knowledge of, or consent to, being observed by the data subject. Equally, indirect surveillance of public places often does no more than observe and/or record what is open to the general public to view anyway. In a free and open civil society it is neither practical nor appropriate to limit who may watch another, or the manner by which they may do so. Even if surveillance is surreptitious and not in a public place there may be, Posner points out, ‘no rational basis’ for a person to claim they are harmed by it.12 There is clearly no physical harm done to a person if a photo is taken of them in their home, even if it is without their knowledge or consent. Moreover, Posner argues that if nothing is done with the photograph, and the person never finds out it was taken, then there is little cause to claim there

9 Anthony Giddens, The Consequences of Modernity (Stanford University Press, 1990) 59.
10 Ibid.
11 Inasmuch as that phrase relates to the concealment of information from others. See Richard A Posner, Economic Analysis of Law (Aspen Law & Business, 5th ed, 1998) 46.
12 Richard A Posner, ‘Privacy, Surveillance, and Law’ (2008) 75 University of Chicago Law Review 245.


was emotional harm from its creation.13 Similarly, if a telephone is tapped, but only a computer system listening for key words relating to criminal activity actually monitors it — assuming no such words are used during the conversation — then one might ask what the harm is, or indeed whether anyone’s privacy is actually breached.14 To adopt Posner’s reasoning, if you are not an antisocial or dangerous person, then there is ‘no rational basis’ to claim harm from being surveilled, when all that is being monitored for is dangerous antisocial behaviour. Proponents of state surveillance often defend that position on the grounds that no harm is done, unless those being observed are doing something wrong to begin with. In other words, ‘if you’ve nothing to hide then you’ve nothing to fear.’ Of course the problem with that position is that it treats all of those being watched as potential rule breakers, whether they are or not. Assuming the surveillance is unidirectional, it places the watchers in a position of perpetual oversight and power over those under their gaze, whether or not those people ostensibly had a reason to fear in the first place. Finally, it amplifies the power of the watchers to determine what should be feared. Privacy and surveillance scholars such as Goold therefore argue that ‘we should resist the spread of surveillance not because we have something to hide, but because it is indicative of an expansion of state power’.15 It is perhaps in this sense — that is, the use and abuse of surveillance information by the state — that a compelling case can be made against unfettered and unconstrained surveillance as an abuse of the right to privacy.

2.4 Surveillance as an extension of state power
Whilst some civil libertarians deride any surveillance as a breach of a fundamental right to be ‘left alone’,16 the reality is that there has never really been an absolute right in any society for citizens to keep all information about themselves secret and away from the prying eyes of others.17 Indeed, the idea

13 Ibid.
14 Indeed, one might argue, there is actually little difference if it was a human rather than a computer listening in to that conversation, inasmuch as that human would be better trained to discount innocuous references to, say, terrorism, and allow the remainder of the conversation to go unrecorded.
15 Benjamin Goold, ‘How Much Surveillance is Too Much? Some Thoughts on Surveillance, Democracy, and the Political Value of Privacy’ in D W Schartum (ed), Overvaaking i en Rettstat (Surveillance in a Constitutional Government) (Fagbokforlaget, 2010) <http://ssrn.com/abstract=1876069>, 44.
16 For a good overview of the normative status of privacy as a right see Waldo et al, Engaging Privacy and Information Technology in a Digital Age (National Academies Press, 2001) 66-69.
17 Indeed privacy as a legal concept only really arose in the nineteenth century, and then as a standalone ‘right’ in some countries, but not others. That said, the right has become more doctrinally accepted at an international, and multinational, level. Indeed in Britain and most other common law countries, courts have been rather inimical to an enforceable common law right to privacy, even against the state, in


of privacy as a right, particularly a human right, is a relatively recent legal concept and one which is intertwined with the development of surveillance technologies. One of the major, if not the primary, catalysts for the development of domestic and international privacy law has been as a response to monitoring and recording technologies. The invention of the instamatic camera drove the development of the US tort of privacy.18 Later developments in privacy law at the international level can similarly be seen to be a reaction to the adoption of increasingly powerful and invasive surveillance technologies during the Cold War, when spying on foreigners and one’s own citizens became a central apparatus of state intelligence and defence.19 More recently, transnational data protection laws have been developed as a consequence of the introduction of international telecommunications networks, the Internet and now portable digital communications.20 The exception to this general trend has been open public surveillance, particularly of the audio-visual variety. Public surveillance has not received a great deal of regulatory attention or intervention, despite the rapid and near exponential growth of closed-circuit television (CCTV) — especially by state organs — in public spaces over the last four decades.21 The preponderance of this public surveillance technology, particularly by state institutions, and the seeming complacency about it amongst a large proportion of the public, has worried scholars and civil libertarians concerned about its potential impact on civil rights.22

2.5 Surveillance and civil rights
The potential impacts of surveillance on civil rights have been subject to analysis, discussion and debate by scholars, philosophers and lawyers for a significant period. Perhaps the most seminal early work was that of Jeremy Bentham in 1787 as part of his Panopticon Letters,23 a treatise on the design of

the absence of legislative protection. That position is different in other jurisdictions which recognise a right to privacy, and in international and multilateral agreements such as the ICCPR and ECHR. See Dorothy J Glancy, ‘The Invention of the Right to Privacy’ (1979) 21(1) Arizona Law Review 1.
18 See n 63. See also Robert E Mensel, ‘“Kodakers Lying in Wait”: Amateur Photography and the Right of Privacy in New York, 1885-1915’ (1991) 43(1) American Quarterly 24.
19 See generally Deborah Nelson, Pursuing Privacy in Cold War America (Columbia University Press, 2002).
20 Michael Kirby, ‘The History, Achievement and Future of the 1980 OECD Guidelines on Privacy’ (2010) 20(2) Journal of Law, Information & Science 1.
21 Caoilfhionn Gallagher, ‘CCTV and Human Rights: the Fish and the Bicycle?’ (2004) 2(2/3) Surveillance & Society 270.
22 Ibid.
23 Jeremy Bentham, The Panopticon Writings (Verso, 1995), e-version available from <http://cartome.org/panopticon2.htm>.


an efficient prison system. That system was designed around the (then) novel idea that prisoners would be placed in cells where they might always be observed by prison officers, but could never know whether they actually were; the prison cells were permanently lit whilst officers were to be placed in an obscured and darkened guard tower. Bentham argued this system would be effective because,

the more constantly the persons to be inspected are under the eyes of the persons who should inspect them, the more perfectly will the purpose [of social/behavioural control] … have been attained. Ideal perfection, if that were the object, would require that each person should actually be in that predicament, during every instant of time. This being impossible, the next thing to be wished for is, that, at every instant, seeing reason to believe as much, and not being able to satisfy himself to the contrary, he should conceive himself to be so.24

In other words, people will generally modify their behaviour to comply with rules when they are being watched by those with the power to sanction or punish rule breaking. However, they are also likely to modify their behaviour if there is a possibility of being watched by those authorities. That is, the uncertainty of whether someone is being observed can create the same effect on someone as actually observing him or her. Bentham’s system greatly increases the administrative efficiency of monitoring and controlling subject populations, by reducing the locus of that control from a one-to-one ratio to a one-to-many ratio.
It achieves this power differential by placing a larger cohort on notice that they may be being observed by one or more watchers at any one time, whilst simultaneously denying them the capacity to confirm they actually are.25 Foucault, who built his work upon Bentham’s — and who is equally a standard reference in most surveillance literature — described the uncertainty control principle of surveillance as a ‘diagram of a mechanism of power reduced to its ideal form… it is in fact a figure of political technology that may and must be detached from any specific use’.26

3 Towards a ‘Surveillance Society’

Bentham’s ideas were both lauded and criticised, but gained little traction in practice, either in respect of prison populations or social and population control more generally. That was until the advent of modern audio-visual recording technology, which allowed recording devices to be installed for the efficient monitoring of both public and private

24 Ibid.
25 Indeed it is possible that, sometimes at least, no one may actually be watching at any one time; but as long as the subject population does not know that, the effect should be the same.
26 Michel Foucault, Discipline and Punish: The Birth of the Prison (Pantheon, 1977) 205.


spaces. CCTV cameras in particular have resulted in vast areas of public and private space being monitored and surveilled by a range of entities, but particularly state ones.27 Added to this is the fact that much human interaction now occurs via technological means and conduits, from telephones to the internet, all of which may be monitored and surveilled, with or without the participants’ knowledge. This has turned many western countries into what some scholars describe as a ‘surveillance society’, given that so much of people’s lives in these countries is actively monitored, or at least capable of being monitored.28 Given the rise of the so-called surveillance society, it might be expected that the early theories of Bentham and others would finally be proven or disproven. Ultimately, however, there is a lack of solid evidence as to whether panoptic surveillance is an effective mechanism of social control.29 On the one hand, studies of small groups show that the panoptic effect of uncertainty does result in self-regulation in controlled situations.30 Panoptic designs have also been integrated into workplaces, and some studies indicate they are successful in increasing productivity, safety and efficiency, especially where the work is in a controlled environment or centres upon electronic communications (for instance, call centres).31 Other studies are less conclusive, or argue that the negative effects of constant monitoring undermine rather than promote worker morale and satisfaction and thereby reduce efficiency.32 Outside of controlled studies of small groups the evidence is even more controversial. For instance, some statistics seem to indicate that the introduction of CCTV cameras may reduce crime and anti-social behaviour,

27 Gallagher, above n 21, 23.
28 David Lyon, The Electronic Eye: The Rise of Surveillance Society (University of Minnesota Press, 1994) 57-80.
29 As Vorvoreanu and Botan note, ‘The paradox of electronic surveillance is that it is much used and little understood.’ Mihaela Vorvoreanu and Carl H Botan, ‘Examining Electronic Surveillance in the Workplace: A Review of Theoretical Perspectives and Research Findings’ (Paper presented at the Conference of the International Communication Association, Acapulco, June 2000) 3.
30 For instance, students will avoid prohibited websites when they know their Internet browsing history may be reviewed. S Dawson, ‘The impact of institutional surveillance technologies on student behavior’ (2006) 4(1/2) Surveillance and Society 69; see also Stuart Moran, Isaac Wiafe and Keiichi Nakata, ‘Ubiquitous Monitoring and User Perceptions as a Persuasive Strategy’ (2011) 3 Web Intelligence and Intelligent Agent Technology 41, doi: 10.1109/WI-IAT.2011.112.
31 Shoshana Zuboff, In the Age of the Smart Machine (Basic Books, 1988) 322; Jengchung Chen and William Ross, ‘Individual differences and electronic monitoring at work’ (2007) 10(4) Information, Communication & Society 488, doi: 10.1080/13691180701560002.
32 John R Aiello and Carol M Svec, ‘Computer Monitoring of Work Performance: Extending the Social Facilitation Framework to Electronic Presence’ (1993) 23(7) Journal of Applied Social Psychology 537, doi: 10.1111/j.1559-1816.1993.tb01102.x; Marylène Gagné and Devasheesh Bhave, ‘Autonomy in the Workplace’ (2011) 1(2) Human Autonomy in Cross-Cultural Context 163, doi: 10.1007/978-90-481-9667-8_8.


whilst other statistics seem to indicate the opposite, or merely show that the locus, nature and form of the activity shifts without reducing its quantum per se.33 Indeed, some of the critics who argue that CCTV limits fundamental freedoms simultaneously cite its lack of impact on crime as a reason for its abolition. Perhaps the most that can be said is that, ultimately, it is impossible to truly measure the impact of open surveillance on the population as a whole. Nevertheless, there is evidence that at least some people will be concerned about the monitoring, and, on a small scale at least, will self-regulate. Whilst those involved in crime might find ways around the surveillance,34 or become nonchalant about it, those who are not involved in or intending to commit crime are still affected by it. In other words, surveillance treats all citizens as potential criminals and puts all on notice that they are being watched for possible non-compliance with state authority. One of the main attacks on unfettered state surveillance is that it may have a panoptic effect on those who challenge or dissent against state authority, but, probably more importantly, on those who might wish to hear, interact or agree with them.35 Governments have an interest in self-preservation, particularly from those who might undermine their authority, even in civil, democratic societies. Democracy, however, can only flourish in an environment in which people are free to say and think what they wish, without fear of retribution or sanction for disagreeing with state policy or practice.36 Democracy can also only flourish where people are free and unafraid to listen to such ideas and judge their veracity for themselves. As Emerson writes, ‘[a]n individual is capable of [democratic participation] only if he can at some points separate himself from the pressure and conformities of collective life.’37 If there is nowhere for citizens to have such interactions without being fearful of the

33 A good meta-analysis of the competing statistics is provided by Brandon Welsh and David Farrington, ‘Public Area CCTV and Crime Prevention: An Updated Systematic Review and Meta-Analysis’ (2009) 26(4) Justice Quarterly 716, doi: 10.1080/07418820802506206; see also William Webster, ‘CCTV policy in the UK: reconsidering the evidence base’ (2007) 6(1) Surveillance & Society 10; Sam Waples, Martin Gill and Peter Fisher, ‘Does CCTV displace crime?’ (2009) 9 Criminology and Criminal Justice 207, doi: 10.1177/1748895809102554.
34 Simon argues of open surveillance that ‘the post 9/11 security situation is that the individuals one hopes to detect are the very individuals that have the best chance of evading detection’: Bart Simon, ‘The Return of Panopticism: Supervision, Subjection and the New Surveillance’ (2005) 3(1) Surveillance & Society 9.
35 Such views are not new. US Justice Felix Frankfurter stated in Wolf v Colorado, 338 US 25 (1949) that the ‘security of one’s privacy against intrusion by the … [state] – is basic to a free society’.
36 As Keith Boone puts it, privacy is ‘vital to a democratic society [because] it underwrites the freedom to vote, to hold political discussions, and to associate freely away from the glare of the public eye and without fear of reprisal’: C K Boone, ‘Privacy and Community’ (1983) 9(1) Social Theory and Practice 8.
37 Thomas I Emerson, The System of Freedom of Expression (Random House, 1970) 546.


gaze of the state, then there is likely to be an impact on the exchange of political ideas. Hence, surveillance scholars like Goold argue that:

one of the greatest dangers of unfettered mass surveillance is the potential chilling effect on political discourse, and on the ability of groups to express their views through protest and other forms of peaceful civil action … [making it] harder for dissent to flourish or for democracy to remain healthy and robust … [and] the individual is always at the mercy of the state, forced to explain why the government should not know something rather than being in the position to demand why questions are being asked in the first place.38

Solove goes further and argues that:

Surveillance is a different kind of privacy problem than disclosure, imposing a different type of injury to a different set of practices. Surveillance differs from disclosure because it can impinge upon practices without revealing any secrets. Being watched can destroy a person’s peace of mind, increase her self consciousness and uneasiness to a debilitating degree, and can inhibit her daily activities.39

One must, of course, be cautious about overstating the impact of surveillance on political discourse, just as one must be cautious about overstating its impact on crime. Nevertheless, there is at least some evidence to suggest the panopticon effect operates to deter people from engaging in behaviour that might result in sanction. As major or minor as that impact might be, it is an impact all the same; an impact which means that we cannot ever describe our speech or association as completely free. The question is just how much of an impact we are willing to accept, and, once the boundary line is drawn, how we will limit further incursions and encroachment.
UV technology may just be the tipping point beyond which we can safely say there will be a real ‘chilling effect’ on political discourse, insofar as such technology promises to greatly increase the surveillance capacity of state organs. Surveillance capacity, according to James Rule, is determined by examining the:

1. size and scope of files in relation to the subjected population;
2. centralization of those files;
3. speed of information flow; and
4. number of points of contact between the system and its subject population.40

38 Goold, above n 15, 43.
39 Solove, above n 5, 1130.
40 Rule, above n 8, 38.


As was discussed in the précis to this edition, UV technology dramatically increases the ‘degree and scale’ of all of these things:

[The] concern about drones is how they may facilitate increasingly broad ranging, invasive and covert monitoring by the state, and possibly private companies and individuals … Unlike current surveillance systems, which tend to involve fixed, visible camera systems in public spaces, UVs will provide highly mobile and generally undetectable surveillance of any area within the relevant jurisdiction. Current UV applications could easily permit a person to be watched as they travel from home to work without their knowledge. Without some constraint, it is possible that covert surveillance will be ubiquitous in the not too distant future.41

UVs, particularly UAVs, permit an almost infinite number of points of contact with the population, because of the large and unmarked zones which they may surveil. Indeed, the fact that they are designed to operate without detection and from roving locations increases their panoptic effect, because, unlike modern CCTV cameras, a person can never know if a camera is actually watching them. Furthermore, much contemporary UV technology has been developed for intelligence, surveillance and reconnaissance (ISR) missions in war zones, specifically to collect vast amounts of audio-visual data over massive geographic areas. The resulting ISR data requires complex hardware and software systems to process and refine.42 It can be stored on conventional data systems, permitting review of suspect sites and persons at a later date,43 and is held in highly centralised and interconnected state data servers. When considered against Rule’s criteria, this is a level of surveillance capacity nearly reaching saturation point.

3.1 Towards surveillance saturation
Although states are currently capable of employing UVs in a manner through which they might potentially achieve surveillance saturation, that is not yet entirely a reality; but in the absence of timely law and debate, it is a fast-approaching possibility. Already we are seeing a push by state agencies to adopt UV technology as an efficient and convenient solution to civilian policing and security.44 Indeed, the civilian transition of the technology is almost as rapid and exponential as its uptake in the military sphere post-9/11.

41 Précis, above n 1, 126.
42 Eli Lake, ‘Drone footage overwhelms analysts’, The Washington Times (online), 9 November 2010 <http://www.washingtontimes.com/news/2010/nov/9/dronefootage-overwhelming-analysts>. In fact the ISR data collected by military UVs is so vast that it is practically impossible for human controllers to process it all.
43 As noted in the précis, it is so wide ranging that nearly every part of Afghanistan may be under observation at any one time. Précis, above n 1, 137.
44 Ibid 106-108.


If that is the case, then UV technology will quickly become as ubiquitous (albeit in a less obvious or transparent way) as earlier surveillance technologies such as CCTV. That means, without proper debate, we may very well experience the same privacy creep in the use of UV technology that we saw previously with CCTV.

The real effect of the move towards persistent, saturation-level surveillance of civilian areas is, of course, also speculative. Nevertheless, there is good cause to assume it will have some effect on people’s feeling of freedom to associate and to participate in democratic forms of activity which may be unfavourable to, or sanctioned by, the government of the day. For instance, police have increasingly turned to videoing protesters with handheld cameras, even at peaceful demonstrations.45 The response by protesters has been to obscure their faces to avoid identification; absent arrest, they can therefore assume their privacy is restored once they quit the protest. The difference in a world of UV surveillance is that those protesters cannot expect to return to anonymity once they leave the protest march and return to their homes and lives. Instead there is a very real chance they may be singled out and followed, silently and unknowingly, from the scene of the protest all the way to their home. This is not a dystopian prediction, but rather a very real-world scenario exemplified by the killing of Tariq Aziz in Pakistan in late 2011, which is discussed in the commentary by Hagger and McCormack in this edition.46

3.2 Tariq’s legacy
Tariq Aziz was killed after attending a meeting, called a ‘Waziristan Grand Jirga’ (best explained as a hybrid parliament/courtroom), in Islamabad, Pakistan.47 He had been invited to attend that meeting, along with a large group of villagers from rural Pakistan, to commemorate drone strike victims and discuss the ongoing impact of such strikes on their own lives with western journalists.48 Pakistan prevents journalists from entering tribal areas to interview or document drone strikes themselves. At the Grand Jirga, village elders disputed US Government claims that drone strikes were targeted, discrete and did not result in civilian casualties. Because

45 Goold, above n 15, 39.
46 Meredith Hagger and Tim McCormack, ‘Regulating the Use of Unmanned Combat Vehicles: Are General Principles of International Humanitarian Law Sufficient?’ (2011) 21(2) Journal of Law, Information & Science EAP 23, 10.5778/JLIS.2011.21.McCormack.1.
47 Clive S Smith, ‘For Our Allies, Death From Above’, The New York Times (online), 3 November 2011 <http://www.nytimes.com/2011/11/04/opinion/in-pakistandrones-kill-our-innocent-allies.html>.
48 Justin Randle, ‘US Steps Outside the Law as the War on Terror Drones On’, Sydney Morning Herald (online), 24 January 2012 <http://www.smh.com.au/opinion/politics/us-steps-outside-the-law-as-the-waron-terror-drones-on-20120123-1qdsu.html>.


of the media blackout in that region, those claims could not be substantiated in a manner sufficient for journalists to publish them to the rest of the world.49 Consequently, the western journalists and charity workers present promised to provide training, equipment and support to volunteer villagers, to permit them to collect ‘physical proof that civilians had been killed’.50 According to reporters present at the meeting, only three people were actually willing to volunteer for such a role, given the serious risks such work entailed; Tariq Aziz was one of those volunteers.51

Approximately 72 hours after the meeting in Islamabad, Tariq and his 12-year-old cousin were killed as they drove their car to collect an aunt from a wedding in the rural city of Miran Shah in North Waziristan. It is alleged that two Hellfire missiles (ironically, fired from a drone) struck the car, killing both occupants within a few hundred metres of their house.52 The CIA, which is responsible for such operations, neither confirms nor denies these strikes, so the basis for the claims cannot be substantiated; nor can speculation about if, or how, Tariq was tracked from the Grand Jirga in the capital back to his home town in the provinces. One British human rights lawyer who attended the Grand Jirga claimed, ‘a homing device may have been placed in Tariq’s car, possibly as a “warning” to others not to raise objections to the drone killings.’53 As Hagger and McCormack state, ‘the accuracy of these reports is almost impossible to determine, as are the reasons why these boys were targeted; herein lies the source of controversy.’54 Regardless of whether the claims that Tariq Aziz was killed because of his participation at the Jirga are true, they have been accepted by much of the world’s press and, importantly, by many of his tribespeople and countrymen.
According to the journalists present at the Jirga, attendees had already felt apprehensive about being identified as participants.55 Indeed, the small number of volunteers to document drone strikes must also be taken as indicative of the fear those participants felt about the proposed data gathering

49 Pratap Chatterjee, ‘Bureau reporter meets 16-year-old three days before US drone kills him’, The Bureau of Investigative Journalism (online), 4 November 2011 <http://www.thebureauinvestigates.com/2011/11/04/bureau-reporter-meets-16year-old-just-three-days-before-he-is-killed-by-a-us-drone/>.
50 Smith, above n 47.
51 Ibid. Tariq was said to be one of the few people with computer skills and was also excited about the possibility of being provided with, and trained to use, a digital camera. Pratap Chatterjee, ‘The CIA’s unaccountable drone war claims another casualty’, The Guardian (online), 7 November 2011 <http://www.guardian.co.uk/commentisfree/cifamerica/2011/nov/07/ciaunaccountable-drone-war>.
52 Smith, above n 47.
53 Anon, ‘Boys’ killing belies US claim on drone strikes’, The Australian (online), 7 November 2011 <http://www.theaustralian.com.au/news/world/boys-killingbelies-us-claim-on-drone-strikes/story-e6frg6so-1226187021609>.
54 Hagger and McCormack, above n 46, EAP 23.
55 Smith, above n 47.


activities. Yet those were entirely peaceful measures, designed to create awareness about, and transparency around, local and foreign government activities and claims. Deterring them would seem to contradict the values of democracy, popular involvement and accountability that western countries such as the US ostensibly stand for. Regardless of whether all the details of Tariq’s story are true (indeed, perhaps the uncertainty and speculation about its veracity makes it all the more effective), it sends a compelling message of warning to those who might consider participating in such accountability activities in the future.

Whilst one might hope the consequences in the civilian sphere would be much less dire, Tariq’s story indicates the potential consequences of a panoptic society in which there is near, if not complete, surveillance saturation. Given the covert nature of much UV technology, a person living in a place where UVs are regularly employed as surveillance devices can never be sure when, where or why they are being watched. It is hard to imagine how such a situation would not create some reluctance amongst at least part of the population to participate in activities, or interact with people, that are unfavourable to the government of the day.

3.3 Finding a balance
As Goold argues, ‘we need [privacy] in order to live rich, fulfilling lives, lives where we can simultaneously play the role of friend, colleague, parent and citizen without having the boundaries between these different and often conflicting identities breached without our consent.’56 Permitting states to increase their surveillance capacity to near saturation point threatens citizens’ autonomy to balance and control such boundaries.

That, of course, does not axiomatically mean we must prohibit states from employing such technology. Indeed, the horse has already bolted, so to speak, on restraining governments from undertaking mass surveillance. Moreover, there are real and genuine security, economic, social and public interest reasons for utilising public surveillance systems. UV technology will, no doubt, add to those benefits, for instance by making sure criminals cannot escape the law by undertaking criminal activity just outside the sphere of an obviously placed CCTV camera. Hence, we should not assume that it is only governments that will be attracted to the increased surveillance capacities provided by UV systems. The expansion in state surveillance capacity has not been received as critically or with as much widespread resistance as some may have originally predicted, so it cannot be expected that the additional reach provided by UVs will create a sudden public outcry. As McBride observes, ‘some people may welcome the introduction of additional technology that may catch or decrease criminal activity’, whilst ‘others are significantly more apprehensive about the

56 Benjamin Goold, ‘Surveillance and the Political Value of Privacy’ (2009) 1(4) Amsterdam Law Forum 4 <http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1509393>.


widespread use of such technology’.57 Ultimately we may need to find a balancing line between these competing interests.

4 The Limits of the Law

Justice Posner observes extra-judicially that:

People hide from government, and government hides from the people, and people and government have both good and bad reasons for hiding from the other. Complete transparency paralyzes planning and action; complete opacity endangers both liberty and security.58

Ultimately the role of the law is to both regulate and provide a socially acceptable balance between these two important competing interests. Yet, as was argued in the précis paper, the common law, at the very least, is relatively ill-equipped to deal with modern surveillance systems and the socio-political issues they present.59 Without reiterating the entirety of that argument, the main reasons for this are:

• Many common law countries still do not recognise a tort of privacy. In countries without a tort of privacy, laws that traditionally protect privacy, such as nuisance, trespass or confidentiality, have extremely limited applicability to any form of surveillance of a public space, and offer little in the way of ‘actionable rights against UVs that are used to survey their private property’.60

• In those countries (notably the US) that do recognise a tort of privacy, and to an extent where confidentiality is relied upon, it is based upon what a person might ‘reasonably expect’ to be safe from prying eyes. As technology becomes more accessible and ubiquitous, no person can reasonably expect not to be surveilled from one vantage point or another.

In his commentary, Jim Davis argues that some of these concerns are overstated, insofar as: the reasonable expectation of privacy arises not from the fact that the subject of the intrusion had no reason to suspect that he or she was being covertly watched, but from the fact that the conduct of the subject of the

57 P McBride, ‘Beyond Orwell: The Application of Unmanned Aircraft Systems in Domestic Surveillance Operations’ (2009) 74 Journal of Air Law and Commerce 629, 638.
58 Posner, above n 12, 246.
59 Précis, above n 1, 126-130.
60 Ibid 127.


intrusion is such that a reasonable person would be highly offended if that conduct were published to the world at large.61

Davis therefore contends ‘that such an expectation of privacy is [not] becoming harder to maintain’ even in the face of technological advances which increase the scope and degree of surveillance capacity by both private and state actors.62 Davis’ legal analysis in this respect is, of course, correct. However, courts have looked to a variety of factors to determine when someone may have a ‘reasonable expectation’ of privacy, or be ‘highly offended’ when their privacy is breached. In some instances the technological ubiquity of the device or measure utilised to surveil a person is relevant to establishing those objective standards; in other cases it is not. Furthermore, I would also argue that there is a very fine, if not largely artificial, line between having ‘no reason to suspect’ one is being watched and whether ‘a reasonable person would be highly offended’ at the watching and/or subsequent publication of data collected during it; particularly insofar as that distinction is used as a basis to contend that social expectations of privacy do not change as technology advances.

The highly offensive test is an objective one, ascertained by virtue of what a reasonable person might expect to keep private in the social and temporal conditions in which they find themselves. Such expectations must naturally change as society does, and technology is a predominant driver of behavioural and structural change in society. Our movements, communications and interactions may be captured and recorded in ways that were simply unimaginable even a few decades before, let alone the centuries ago when much of our common law developed.63 Two centuries ago a person

61 Jim Davis, ‘The (Common) Laws of Man Over (Civilian) Vehicles Unmanned’ (2011) 21(2) Journal of Law, Information & Science EAP 6, 10.5778/JLIS.2011.21.Davis.1.
62 Ibid.
63 It is important to remember that Warren and Brandeis’ seminal article ‘The Right to Privacy’, which led to the adoption of a tort of privacy in the US, was largely written as a response to the invention of the Kodak hand camera two years before. The result of that invention was no small degree of moral panic and outrage, and the prohibition of cameras from tourist sites and beaches. That technology, Warren and Brandeis argued, ‘invaded the sacred precincts of private and domestic life.’ They referred, as support for that proposition, to an unreported case of Marion Manola, who in that same year brought an action in the New York Supreme Court for being ‘photographed surreptitiously and without her consent’, notwithstanding the fact that the photograph had been taken whilst she was performing on Broadway in public. Whilst photographing someone participating in a public spectacle could no longer be considered abnormal or offensive, the fact that Ms Manola was wearing stockings (tights) was enough for a court to consider the photograph sufficiently offensive to warrant an injunction preventing its sale or distribution, and for Warren and Brandeis to argue that ‘the law must afford some remedy for the unauthorised circulation of portraits of private persons’. See Warren and Brandeis, above n 7; Robert E Mensel, ‘“Kodakers Lying in Wait”: Amateur Photography and the Right of Privacy in New York, 1885-1915’ (1991)


would have had little reasonable expectation of having their image captured while transiting through a public place, and even less expectation of it being captured from above their house or property. Today digital technology is so ubiquitous that it is impossible to expect that one’s image will not be captured wherever there is another person, or whenever one is visible to the open sky.64 Technology changes our sense of self and of others’ place in the world, and how we interact with each other in it. It serves to modify our expectations, moral or otherwise, and it changes what we are offended about, highly or otherwise.

There are, of course, times when the technological state of play is not particularly relevant to establishing an objective standard of what is reasonable or what is highly offensive; as I noted above, courts have taken into account a variety of considerations in this respect. In the précis we discussed the case of United States v Knotts,65 in which the Court held that a tracking device installed in a car did not breach the occupant’s privacy.66 Key to that decision was the fact that a person travelling on a road could never reasonably expect not to be watched by others, or indeed monitored by authorities for compliance with road rules and the like. As such, the fact that an advanced technology had permitted a more efficient level of monitoring did not make the expectation of secrecy and privacy any more reasonable, or the fact that the occupant was being watched any more objectively offensive. Ultimately, the relevance of the novelty or ubiquity of a technological surveillance system will turn on whether it dramatically alters the surveillance capacities of the surveillor in a manner which an ordinary person cannot be expected to have predicted or understood.
It is also true that, in some circumstances, common surveillance technologies, such as cameras with telescopic lenses, may capture information which an ordinary person could not have expected to keep completely free from prying eyes, but about which that person may nonetheless have a reasonable expectation of privacy. As Davis notes, Campbell’s case67 was one of these situations.68 However, the crux of the issue in Campbell was not the viewing so much as the subsequent disclosure of a recognised category of confidential information (namely, medical information about Campbell’s rehabilitation) to third parties who were not privy to the original viewing.

(n 63 continued) 43(1) American Quarterly 24; Dorothy J Glancy, ‘Privacy and the Other Miss M’ (1990) 10 Northern Illinois University Law Review 401.
64 Hence, in US curtilage cases such as Florida v Riley, 488 US 445 (1989) and Dow Chemical Co v United States, 476 US 227 (1986), which consider whether aerial surveillance of property breaches the right to privacy and the right against unlawful search, the court has been particularly concerned as to whether the surveillance equipment used was commonly available. Indeed, in the latter case the fact that the police used ordinary aviation and photographic equipment was pivotal to the determination that the surveillance was legal.
65 460 US 276 (1983), 283.
66 Précis, above n 1, 129.
67 Campbell v MGN [2004] UKHL 22 (‘Campbell’).
68 Davis, above n 61, EAP 10.


That case has much less to do with surveillance than with the transfer of data collated and recorded as a result of it. Indeed, as an equitable doctrine, confidentiality law ordinarily only provides an injunction to restrain the use of the information collected, rather than punishing, or providing a remedy for, the damage caused in collecting it.

What these cases reveal, in toto, is that the common law can do very little to restrain state surveillance over public areas, and indeed over private ones that are open to plain view from either the ground or on high. As most state surveillance data is not published to the world at large, there is little chance for people to argue their common law rights have been violated, because there is no evidence of harm, either to the person or to their sensibilities. More to the point, the common law, particularly tort law, is remedial, not prospective; it operates ex post facto to sanction past behaviour. It is not particularly adapted to limiting or controlling future behaviour in the absence of ascertainable or substantive proof of harm. Given that surveillance may occur without the knowledge of those watched, and that in such situations no person can claim to be more harmed than any other member of the community, such law is a poor mechanism to balance the competing social interests of privacy and security.

As I set out at the beginning of this commentary, the real harm, or at least the prevalent social harm arising from surveillance capacity saturation, is the fact that people simply do not know when, or if, they are being watched, or for what purposes, or how that information might affect them now or in their future lives. In the panoptic world it is the uncertainty about whether data is being collected which is most harmful, not the disclosure of that data to third parties per se.
Moreover, the greatest harm is to society as a whole, through the undermining and erosion of the fundamental institutions upon which it is based, rather than to discrete individuals within it. Should we consider such harms detrimental to the fundamental democratic values of freedom of thought, freedom of expression and freedom of association, then pre-emptive laws are required to limit the causative factor that reduces citizens’ capacity or willingness to exercise those freedoms. In other words, what is required is proscriptive legislation restraining the state from expanding its surveillance capacity to, or close to, saturation point, not an expansion or modification of the civil law to remedy perceived harms once they occur.

4.1 Regulatory ‘disarray’
As was noted previously, however, despite long-standing academic debate and the derision of civil libertarians, the surveillance society has grown and expanded without a great deal of regulatory restraint. That is not to say no laws exist. Most countries do in fact have privacy and data protection laws, but their application to open, indirect surveillance is patchy at best. In the UK, for instance, CCTV surveillance has generally been held to fall outside the Data Protection Act;69 a rather strange oversight for the country in the world

69 Simon Chesterman, One Nation Under Surveillance (Oxford University Press, 2011) 150.


with the highest concentration of this form of surveillance device. Indeed, although Europe more generally is considered to have the most comprehensive privacy and data protection laws in the world (by virtue of the European Court of Human Rights and the Data Protection Directive), Privacy International reported at the conclusion of 2011 that:

Surveillance harmonisation [in Europe] that was once threatened is now in disarray. Yet there are so many loopholes and exemptions that it is increasingly challenging to get a full understanding of the privacy situations in European countries.70

Certainly the massive uptake of surveillance technologies by all forms of bureaucratic and security agencies makes it particularly hard to ascertain just how much surveillance is occurring, or where. Privacy International argues that the ‘cloak of “national security” enshrouds many practices, minimises authorisation safeguards and prevents oversight’.71 In the security-conscious United States, the situation is equally bad, if not worse. Chesterman points to ‘the many actors in the intelligence community’, not to mention domestic law enforcement and state agencies operating surveillance devices in the United States, who ‘may pose accountability difficulties through sheer complexity … [and] fragmentation of authority can pose practical problems in ensuring appropriate oversight.’72

Indeed, although accountability mechanisms do exist, including cross-institutional regulatory regimes to ‘watch the watchers’, the focus of legislative restraint on surveillance has centred upon the collection of surveillance data, especially in the audio-visual realm.73 As Solove argues, the problem with this situation is that:

Surveillance is a different kind of privacy problem than disclosure, imposing a different type of injury to a different set of practices. Surveillance differs from disclosure because it can impinge upon practices without revealing any secrets.
Being watched can destroy a person’s peace of mind, increase her self consciousness and uneasiness to a debilitating degree, and can inhibit her daily activities.74

There is certainly very little regulatory consideration of the collective impact of the process of mass surveillance — as opposed to individual surveillance for

70 Privacy International, European Privacy and Human Rights (EPHR) 2010 (Privacy International and the Electronic Privacy Information Center, 2011) 11 <https://www.privacyinternational.org/Ephr>.
71 Ibid.
72 Chesterman, above n 69, 212.
73 Ibid 151.
74 Solove, above n 5, 1130.

EAP!18!

198

JLIS Special Edition: The Law of Unmanned Vehicles

Vol 21(2) 2011/2012

the process of criminal investigation.75 That is, the law overlooks the act of monitoring itself and tends only to be concerned with what is done with the recorded data or how it is disclosed. Like the civil law, privacy legislation tends to be more concerned with individual rather than social harm. Equally problematic is the fact that legislation tends not to operate at the macro level, nor to evaluate the level of state surveillance capacity in a whole-of-government sense. The reality is that existing privacy and accountability legislative regimes are not, as yet, appropriate regulatory devices to stop the balance tipping from an appropriate level of surveillance capacity to saturation level (assuming that there is a line to be drawn). That is, not least, because they are concerned not so much with surveillance capacity as with post-surveillance data use. Whilst the latter issue is extremely important in respect of privacy, the former has serious and profound implications for civil and democratic rights.

5 Conclusion

As has been discussed at length in the précis and a number of other commentaries, UVs do not create new issues per se, so much as extend the influence, capacities and reach of their controllers and thereby expand and compound the social and legal problems relating to their intended use. In respect of surveillance, they greatly magnify the surveillance capacity of those controlling them, most worryingly state institutions. There are, of course, a range of benefits promised by UV technologies, not least for policing, law enforcement and public safety. But it is important not to forget that this is a technology developed in the theatre of war. We must also remember that it is a technology that promises to realise a panoptic vision originally designed around maintaining control over prison populations, albeit now on a much grander, society-wide scale. Of course, we already live in a surveillance society, but UVs are the technology which may close the remaining gaps in the open spaces where people could previously expect to be ‘left alone’. Unlike CCTV cameras, UVs are, more often than not, designed to be covert and undetectable. Even if CCTV is now so prolific that it is hard to avoid completely in a public place, UVs render void the theoretical idea that state surveillance can be avoided in public. Moreover, because this technology is unmanned, there will certainly be no time when one can hope not to fall under the gaze of unsleeping eyes. The world of UV surveillance is absolute, global and persistent, and it threatens to turn civilian spaces into the panoptic prison of Bentham’s imagination: if not a physical prison, then a prison of the mind. That is because, as Tariq’s story shows us, people living in a surveilled world must be constantly

75 This distinction is evident in Australia in the form of the Surveillance Devices Act 2004 (Cth), which limits the capacity of law enforcement agencies to undertake electronic surveillance of suspects or as part of investigations.


on their guard about whom they meet, what they talk about and whether those interactions might be with persons, or about subject matters, that draw the attention of a hostile state. It is, of course, easy to overstate the impact of new technologies. Once the moral panic subsides, we have, as a society, proven remarkably adept at subsuming technological advances into everyday life in a way that maximises their social utility and benefit. However, successfully integrating novel technologies in a manner which maximises their benefits and reduces their risks requires foresight, consideration and effective debate. Such debate and deliberation works most effectively in advance of technological change, and certainly in advance of the social change that it brings. That is a lesson from the nuclear proliferation debate, which is particularly relevant to UV technology and one highlighted in the précis paper.76 Even since that paper was written the world has proceeded further into a UV arms race, most recently with Asia increasing its research and production in the area. Unchecked, there will be equally wide proliferation of the technology in the civilian sphere, given the strength of support from proponents and governments for its — as Mary Ellen O’Connell describes them — ‘seductive’ qualities;77 in this case: scope, efficiency, cost savings and reach. The point of this commentary was not to suggest where the line should be drawn for the use of UV technology in civil society, nor the regulatory mechanism to achieve it. Rather, it was to point out some of the socio-legal risks of the unfettered proliferation of UV technology should we fail to take some form of action. I have argued that the law does very little to restrain the use or impacts of UVs by state authorities. Ultimately, at present, the only real brake on reaching near saturation-point state surveillance capacity is the speed of the transition from military to civilian spheres.
As Chesterman rightly notes, ‘[t]he notion that courts will have a leisurely opportunity to consider the implications of new surveillance technologies and their use now seems quaint.’78 The same is true of legislatures and society as a whole. That means we are running out of time for debate, and running out of time for effective regulatory responses should the debate determine that some limits are required. Ironically, the unfettered and unrestrained use of surveillance threatens the very democratic institutions which operate to ensure that debate is effective and truly representative. That, more than anything else, should be a motivating factor for real commitment to regulatory deliberation on the use of UVs in civil society.

76 Précis, above n 1, 142.
77 Mary Ellen O’Connell, ‘Seductive Drones: Learning from a Decade of Lethal Operations’ (2011) 21(1) Journal of Law, Information & Science, EAP 1, doi: 10.5778/JLIS.2011.21.OConnell.1.
78 Chesterman, above n 69, 154.

Unmanned Vehicles: Subordination to Criminal Law under the Modern Concept of Criminal Liability
COMMENT BY GABRIEL HALLEVY*

Introduction
This commentary is restricted to the question of criminal liability of the autonomous unmanned vehicle. Autonomous unmanned vehicles are based on artificial intelligence technology, and they function as artificial intelligence entities. If the use of these entities may result in tortious conduct,1 then their use may also be capable of resulting in criminal offenses. The ultimate legal question is, therefore, who is to be held responsible for offenses committed by unmanned vehicles based on artificial intelligence technology (hereinafter ‘AIUV’).2 Modern concepts of criminal liability suggest that criminal liability models are relevant to artificial intelligence entities.3 This commentary argues that these criminal liability models are relevant and available to AIUV.

1 Relevant Criminal Liability Models

In order to impose criminal liability upon a person, two main elements must exist. The first is the factual element (actus reus), while the other is the mental element (mens rea). The actus reus requirement is expressed mainly by acts or omissions.4 Sometimes, other external elements are required in addition to conduct, such as the specific results of that conduct and the specific circumstances underlying the conduct.5

* Ph.D., Associate Professor, Faculty of Law, Ono Academic College. I thank Dr Brendan Gogarty for inviting me to comment on the précis article, and Bruce Newey, the managing editor of the Journal of Law, Information and Science, for the excellent editing.
1 Brendan Gogarty and Meredith Hagger, ‘The Laws of Man over Vehicles Unmanned: The Legal Response to Robotic Revolution on Sea, Land and Air’ (2008) 19 Journal of Law, Information and Science 73, 122-124.
2 AIUV – Artificial Intelligence Unmanned Vehicles.
3 See eg Gabriel Hallevy, ‘Criminal Liability of Artificial Intelligence Entities – From Science Fiction to Legal Social Control’ (2010) 4 Akron Intellectual Property Journal 171; Gabriel Hallevy, ‘“I, Robot - I, Criminal” – When Science Fiction Becomes Reality: Legal Liability of AI Robots Committing Criminal Offenses’ (2010) 22 Syracuse Science and Technology Law Reporter 1.
4 W H Hitchler, ‘The Physical Element of Crime’ (1934) 39 Dickinson Law Review 95; Michael S Moore, Act and Crime: The Philosophy of Action and Its Implications for Criminal Law (Oxford University Press, 1993).
5 John William Salmond, Salmond on Jurisprudence (Glanville Williams ed, 11th ed, 1957) 505; Glanville L Williams, Criminal Law: The General Part (2nd ed, 1961) § 11;


The mens rea requirement has various levels of mental elements. The highest level is expressed by knowledge, and sometimes that knowledge is accompanied by a requirement of intent or specific intention.6 Lower levels are expressed by negligence7 (a reasonable person should have known) or by strict liability offenses.8 These are the only criteria or capabilities required in order to impose criminal liability, not only on humans, but on any other kind of entity, including corporations and AI entities. Any entity might possess further capabilities, such as creativity. However, in order to impose criminal liability, the existence of the actus reus and mens rea of the specific offense is adequate. No further capabilities are required for the imposition of criminal liability. These requirements may be fulfilled by AIUV through three possible models of liability: the ‘Perpetration-by-Another’ liability model; the ‘Natural-Probable-Consequence’ liability model; and the ‘Direct’ liability model. These models are discussed below.

1.1 Perpetration-by-Another liability
The Perpetration-by-Another liability model does not consider the AIUV as possessing any human attributes. The AIUV is considered an innocent agent. Accordingly, under this legal viewpoint, a machine is a machine, and it is never human. However, one cannot ignore an AIUV’s capabilities. Pursuant to this model, these capabilities are insufficient to deem the AIUV a perpetrator of an offense. These capabilities resemble the parallel capabilities of a mentally limited person, such as a child, or of a person who is mentally incompetent or who lacks a criminal state of mind. When an innocent agent physically commits an offense, the person who sent or activated the innocent agent is criminally responsible as a perpetrator-by-

Oliver W Holmes, Jr, The Common Law (1923) 54; Walter Wheeler Cook, ‘Act, Intention, and Motive in the Criminal Law’ (1917) 26 Yale Law Journal 645.
6 J Ll J Edwards, ‘The Criminal Degrees of Knowledge’ (1954) 17 Modern Law Review 294; Rollin M Perkins, ‘“Knowledge” as a Mens Rea Requirement’ (1978) 29 Hastings Law Journal 953; United States v Youts, 229 F.3d 1312 (10th Cir, 2000); United States v Spinney, 65 F.3d 231 (1st Cir, 1995); State v Sargent, 594 A.2d 401 (Vt, 1991); State v Wyatt, 482 S.E.2d 147 (W Va, 1996); People v Steinberg, 595 N.E.2d 845 (NY, 1992).
7 Jerome Hall, ‘Negligent Behavior Should Be Excluded from Penal Liability’ (1963) 63 Columbia Law Review 632; Robert P Fine and Gary M Cohen, Comment, ‘Is Criminal Negligence a Defensible Basis for Criminal Liability?’ (1966) 16 Buffalo Law Review 749.
8 Jeremy Horder, ‘Strict Liability, Statutory Construction and the Spirit of Liberty’ (2002) 118 Law Quarterly Review 458; Francis Bowes Sayre, ‘Public Welfare Offenses’ (1933) 33 Columbia Law Review 55; Stuart P Green, ‘Six Senses of Strict Liability: A Plea for Formalism’ in A P Simester (ed) Appraising Strict Liability (Oxford University Press, 2005) 1; A P Simester, ‘Is Strict Liability Always Wrong?’ in A P Simester (ed) Appraising Strict Liability (Oxford University Press, 2005).


another.9 In such cases, the innocent agent is regarded as a mere instrument, albeit a sophisticated instrument, while the party orchestrating the offense (the perpetrator-by-another) is the real perpetrator as a principal in the first degree and is held accountable for the conduct of the innocent agent. The perpetrator-by-another’s liability is determined on the basis of the innocent agent’s conduct10 and the perpetrator-by-another’s own mental state.11 In such situations, the derivative question relative to AIUV is: who is the perpetrator-by-another? There are two candidates: the first is the programmer of the AI software and the second is the user, or the end-user. A programmer of AI software might design a program in order to commit offenses via the AIUV. For example, a programmer designs software for an operating AIUV. The AIUV is intended to be placed on the road, and its software is designed to kill innocent people by running over them. The AIUV may commit the homicide, but the programmer is deemed the perpetrator. The other person who might be considered the perpetrator-by-another is the user of the AIUV. The user did not program the software, but he or she uses the AIUV, including its software, for his or her own benefit. For example, a user purchases an AIUV, which is designed to execute any order given by its master. The specific user is identified by the AIUV as that master, and the master orders the AIUV to run over any trespasser on his or her farm. The AIUV executes the order exactly as ordered. This is no different than a person who orders their dog to attack a trespasser. The AIUV committed the assault, but the user is deemed the perpetrator.12 In both scenarios, the actual offense was committed by the AIUV. The programmer or the user did not perform any action conforming to the definition of a specific offense; therefore, neither the programmer nor the user meets the actus reus requirement of the specific offense. 
The Perpetration-by-Another liability model considers the action committed by the AIUV as if it had been the programmer’s or the user’s action. The legal basis for that is the instrumental usage of the AIUV as an innocent agent. No mental attribute need be attributed to the AIUV for criminal liability to be imposed.13

9 Morrisey v State, 620 A.2d 207 (Del, 1993); Conyers v State, 790 A.2d 15 (Md, 2002); State v Fuller, 552 S.E.2d 282 (SC, 2001); Gallimore v Commonwealth, 436 S.E.2d 421 (Va, 1993).
10 Dusenbery v Commonwealth, 263 S.E.2d 392 (Va, 1980).
11 United States v Tobon-Builes, 706 F.2d 1092 (11th Cir, 1983); United States v Ruffin, 613 F.2d 408 (2d Cir, 1979); see more in Gabriel Hallevy, ‘Victim's Complicity in Criminal Law’ (2006) 2 International Journal of Punishment and Sentencing 72.
12 See, eg, Regina v Manley (1844) 1 Cox’s Criminal Cases 104; Regina v Cogan [1976] QB 217; Gabriel Hallevy, Theory of Criminal Law (vol 2, 2009) 700-06.
13 The AI is used as an instrument and not as a participant, although it uses its features of processing information. See, eg, Cary G Debessonet and George R Cross, ‘An Artificial Intelligence Application in the Law: CCLIPS, A Computer

When programmers or users use an AIUV instrumentally, the commission of an offense by the AIUV is attributed to them. The mental element required in the specific offense already exists in their minds. The programmer had criminal intent when he or she ordered the commission of the offense, and the user had criminal intent when he or she ordered the commission of the offense, even though these offenses were actually committed through an AIUV. When an end-user makes instrumental usage of an innocent agent to commit a crime, the end-user is deemed the perpetrator. This liability model does not attribute any mental capability, human or otherwise, to the AIUV. According to this model, there is no legal difference between an AIUV and a screwdriver or an animal. When a burglar uses a screwdriver in order to open up a window, he or she uses the screwdriver instrumentally, and the screwdriver is not criminally responsible. The screwdriver’s ‘action’ is, in fact, the burglar’s. This is the same legal situation when using an animal instrumentally. An assault committed by a dog by order of its master is, in fact, an assault committed by the master. This kind of legal model might be suitable for two types of scenarios. The first scenario is using an AIUV to commit an offense without using its advanced capabilities, which enable it to ‘think’ or to think (with no quotation marks, ie when an AIUV decides to commit an offense based on its own accumulated experience or knowledge). The second scenario is using a very old version of an AIUV, which lacks the advanced capabilities of the modern AIUV.14 In both scenarios, the use of the AIUV is instrumental usage. Still, it is usage of an AIUV, due to its ability to execute an order to commit an offense. A screwdriver cannot execute such an order; a dog can.
A dog cannot execute complicated orders; an AIUV can.15 The Perpetration-by-Another liability model is not suitable when an AIUV decides to commit an offense based on its own accumulated experience or knowledge.16 This model is not suitable when the software of the AIUV was not designed to commit the specific offense, but the offense was committed by

Program that Processes Legal Information’ (1986) 1 High Technology Law Journal 329.
14 For some of the modern advanced capabilities of modern AI entities see generally Donald Michie, ‘The Superarticulacy phenomenon in the context of software manufacture’ in Derek Partridge and Yorick Wilks (eds) The Foundations of Artificial Intelligence (Cambridge University Press, 2006) 411-439.
15 Cf Andrew J Wu, ‘From Video Games to Artificial Intelligence: Assigning Copyright Ownership to Works Generated by Increasingly Sophisticated Computer Programs’ (1997) 25 American Intellectual Property Law Association Quarterly Journal 131; with Timothy L Butler, ‘Can a Computer be an Author? Copyright Aspects of Artificial Intelligence’ (1982) 4 Comm/Ent Law Journal 707.
16 The programmer or user should not be held criminally responsible for the autonomous actions of the AI if he or she could not have predicted these actions.

the AIUV nonetheless. This model is also not suitable when the specific AIUV functions not as an innocent agent, but as a semi-innocent agent.17 However, the Perpetration-by-Another liability model might be suitable when a programmer or user makes instrumental usage of an AIUV, but without using the AIUV’s advanced capabilities. The legal result of applying this model is that the programmer and the user are fully criminally responsible for the specific offense committed, while the AIUV has no criminal liability whatsoever.

1.2 Natural-Probable-Consequence liability
The Natural-Probable-Consequence liability model assumes deep involvement of the programmers or users in the AIUV’s daily activities, but without any intention of committing any offense via the AIUV. For example, during the execution of its daily tasks, an AIUV commits an offense. The programmers or users had no knowledge of the offense until it had already been committed; they did not plan to commit any offense, and they did not participate in any part of the commission of that specific offense. One example of such a scenario is an AIUV which is designed to function as an automatic driver together with a human driver. The AIUV is programmed to protect the mission as part of its duties to drive the vehicle. During the driving, the human driver activates the automatic driver, which is the AIUV, and the program is initialised. At some point after activation of the automatic driver, the human driver sees an approaching traffic jam and tries to abort the mission and turn back. The AIUV deems the human driver’s action a threat to the mission and takes action in order to eliminate that threat. As a result, the human driver is killed by the AIUV’s actions. Obviously, the programmer had not intended to kill anyone, especially not the human driver, but nonetheless, the human driver was killed as a result of the AIUV’s actions. Moreover, these actions were done according to the program. In this example, the Perpetration-by-Another model is not legally suitable. The Perpetration-by-Another model assumes mens rea, the criminal intent of the programmers or users to commit an offense via the instrumental use of some of the AIUV’s capabilities. However, this is not the legal situation in this case. Rather, in this situation the programmers or users had no knowledge of the committed offense; they had not planned it, and had not intended to commit the offense using the AIUV. For such cases, the second model might create a suitable legal response.
This model is based upon the ability of the programmers or users to foresee the potential commission of offenses. According to the second model, a person might be held accountable for an offense, if that offense is a natural and probable consequence of that person’s conduct. Originally, the Natural-Probable-Consequence liability model was used to impose criminal liability upon accomplices, when one committed an

17 Nicola Lacey and Celia Wells, Reconstructing Criminal Law – Critical Perspectives on Crime and the Criminal Process (Butterworths, 2nd ed, 1998) 53.

offense, which had not been planned by all of them and which was not part of a conspiracy. The established rule prescribed by courts and commentators is that accomplice liability extends to acts of a perpetrator that were a ‘natural and probable consequence’18 of a criminal scheme that the accomplice encouraged or aided.19 The Natural-Probable-Consequence liability has been widely accepted in accomplice liability statutes and recodifications.20 The Natural-Probable-Consequence liability model seems to be legally suitable for situations in which an AIUV committed an offense, while the programmer or user had no knowledge of it, had not intended it, and had not participated in it. The Natural-Probable-Consequence liability model requires the programmer or user to be in nothing more than the mental state required for negligence. Programmers or users are not required to know about any forthcoming commission of an offense as a result of their activity, but are required to know that such an offense is a natural, probable consequence of their actions. In a criminal context, a negligent person has no knowledge of the offense; however, a reasonable person in the same situation would have known about the offense, because it is a natural and probable consequence of the situation.21 The programmers or users of an AIUV, who should have known about the probability of the forthcoming commission of the specific offense, are criminally responsible for the specific offense, even though they did not actually know about it. This is the fundamental legal basis for criminal liability in negligence cases. Negligence is, in fact, an omission of awareness or knowledge. The negligent person omitted knowledge, not acts. 
The Natural-Probable-Consequence liability model would permit liability to be predicated upon negligence, even when the specific offense requires a different state of mind.22 This has been accepted in the modern criminal law, and thus reduced significantly the mental element requirements in these

18 United States v Powell, 929 F.2d 724 (DC Cir, 1991).
19 William L Clark and William L Marshall, A Treatise on the Law of Crimes (7th ed, 1967) 529; Francis Bowes Sayre, ‘Criminal Liability for the Acts of Another’ (1930) 43 Harvard Law Review 689; People v Prettyman, 926 P.2d 1013 (Cal, 1996); Chance v State, 685 A.2d 351 (Del, 1996); Ingram v United States, 592 A.2d 992 (DC, 1991); Richardson v State, 697 N.E.2d 462 (Ind, 1998); Mitchell v State, 971 P.2d 813 (Nev, 1998); State v Carrasco, 928 P.2d 939 (NM Ct App, 1996); State v Jackson, 976 P.2d 1229 (Wash, 1999).
20 State v Kaiser, 918 P.2d 629 (Kan, 1996); United States v Andrews, 75 F.3d 552 (9th Cir, 1996).
21 Robert P Fine and Gary M Cohen, Comment, ‘Is Criminal Negligence a Defensible Basis for Penal Liability?’ (1966) 16 Buffalo Law Review 749; Herbert L A Hart, ‘Negligence, Mens Rea and Criminal Liability’ in Anthony Gordon Guest (ed) Oxford Essays in Jurisprudence (Clarendon Press, 1961) 29; Donald Stuart, ‘Mens Rea, Negligence and Attempts’ (1968) Criminal Law Review 647.
22 Model Penal Code – Official Draft and Explanatory Notes sec 2.06 (1985) 31-32 (hereinafter ‘Model Penal Code’); State v Linscott, 520 A.2d 1067 (Me, 1987).

situations, since the relevant accomplice did not really know about the offense, but a reasonable person could have predicted it. Negligence is suitable for this kind of situation. This is not valid in relation to the person who personally committed the offense, but rather, is considered valid in relation to the person who was not the actual perpetrator of the offense, but was one of its intellectual perpetrators. Reasonable programmers or users should have foreseen the offense, and prevented it from being committed by the AIUV. However, the legal results of applying the Natural-Probable-Consequence liability model to the programmer or user differ in two different types of factual scenarios. The first type of scenario is when the programmers or users were negligent while programming or using the AIUV and had no criminal intent to commit any offense. The second type of scenario is when the programmers or users programmed or used the AIUV knowingly and willfully in order to commit one offense via the AIUV, but the AIUV deviated from the plan and committed some other offense, in addition to or instead of the planned offense. The first scenario is a pure case of negligence. The programmers or users acted or omitted negligently; therefore, they should be held accountable for the offense of negligence, if there is such an offense in the relevant legal system. Thus, as in the above example, where a programmer of an automatic driver negligently programmed it to defend its mission with no restrictions on the taking of human life, the programmer is negligent and responsible for the homicide of the human driver. Consequently, if negligent homicide exists as a specific offense in the relevant legal system, negligent homicide is the most severe offense for which the programmer may be held accountable, as opposed to manslaughter or murder which requires at least knowledge or intent. 
The second scenario resembles the basic idea of the Natural-Probable-Consequence liability model in accomplice liability cases. The dangerousness of the very association or conspiracy whose aim is to commit an offense is the legal reason for more severe accountability to be imposed upon the co-conspirators. In such cases, criminal negligence liability alone is insufficient: the social danger posed by such a situation far exceeds that of the situations in which negligence was accepted as a sufficient basis for liability, having regard to the purposes of punishment (retribution, deterrence, rehabilitation and incapacitation). As a result, according to the Natural-Probable-Consequence liability model, if the programmers or users knowingly and willfully used the AIUV to commit an offense, and the AIUV deviated from the plan by committing another offense in addition to or instead of the planned offense, the programmers or users should be held accountable for the additional offense as if it had been committed knowingly and willfully.23

23 Regina v Cunningham (1957) 2 QB 398; Regina v Faulkner (1876) 13 Cox’s Criminal Cases 550; United States v Greer, 467 F.2d 1064 (7th Cir, 1972); People v Cooper, 743

However, the question still remains: what is the criminal liability of the AIUV itself when the Natural-Probable-Consequence liability model is applied? In fact, there are two possible outcomes. If the AIUV acted as an innocent agent, without knowing anything about the criminal prohibition, it is not held criminally accountable for the offense it committed. Under such circumstances, the actions of the AIUV were not different from the actions of the AIUV under the first model (the Perpetration-by-Another liability model). However, if the AIUV did not act merely as an innocent agent, then, in addition to the criminal liability of the programmer or user pursuant to the Natural-Probable-Consequence liability model, the AIUV itself should be held criminally responsible for the specific offense directly. The direct liability model of AIUV is the third model, as described below.
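The allocation of liability across the first two models described above can be sketched, purely for illustration, as a small decision procedure. This is my own paraphrase of the commentary's reasoning, not a legal rule or test; all names in the sketch are invented for exposition:

```python
# Illustrative sketch only: the outcomes of Hallevy's first two liability models,
# as summarised in the text, expressed as a decision procedure. Field and function
# names are my own invention; this is not drawn from any statute or case.
from dataclasses import dataclass


@dataclass
class Scenario:
    human_intended_offense: bool  # programmer/user meant to commit an offense via the AIUV
    instrumental_use: bool        # AIUV used as a mere instrument (innocent agent)
    offense_foreseeable: bool     # a reasonable programmer/user should have foreseen it
    aiuv_innocent_agent: bool     # AIUV acted without awareness of the prohibition


def classify(s: Scenario) -> dict:
    """Map a scenario to the liability outcome sketched in the commentary."""
    if s.human_intended_offense and s.instrumental_use:
        # Model 1: Perpetration-by-Another - the human is fully liable as
        # perpetrator; the AIUV, as an innocent agent, bears no liability.
        return {"model": "Perpetration-by-Another",
                "human": "full liability as perpetrator",
                "aiuv": "none"}
    if not s.human_intended_offense and s.offense_foreseeable:
        # Model 2: Natural-Probable-Consequence - negligence-level liability for
        # the human; the AIUV is itself liable only if not an innocent agent.
        return {"model": "Natural-Probable-Consequence",
                "human": "negligence liability",
                "aiuv": "none" if s.aiuv_innocent_agent else "direct liability (model 3)"}
    return {"model": "unresolved on these facts",
            "human": "undetermined",
            "aiuv": "undetermined"}
```

The sketch deliberately leaves a residual "unresolved" branch: as the commentary notes, cases falling outside the first two models are the province of the direct liability model discussed next.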

1.3 Direct liability
The third model, the direct liability model, does not assume any dependence of the AIUV on a specific programmer or user; rather, it focuses on the AIUV itself.24 As discussed above, criminal liability for a specific offense is mainly comprised of the factual element (actus reus) and the mental element (mens rea) of that offense. Any person attributed with both elements of the specific offense is held criminally accountable for that specific offense. No other criteria are required in order to impose criminal liability. A person might possess further capabilities, but, in order to impose criminal liability, the existence of the factual element and the mental element required to impose liability for the specific offense is quite enough. In order to impose criminal liability on any kind of entity, the existence of these elements in the specific entity must be proven. Generally, when it has been proven that a person committed the offense in question with knowledge or intent, that person is held criminally responsible for that offense. The criminal liability of AIUV depends upon the following questions: How can these entities fulfill the requirements of criminal liability? Do AIUVs differ from humans in this context? An AI algorithm might have numerous features and qualifications far exceeding those of an average human, such as higher velocity of data processing (thinking), ability to take into consideration many more factors, etc. Nevertheless, such features or qualifications are not required in order to impose criminal liability. They do not negate criminal liability, but they are

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
N.E.2d 32 (Ill, 2000); People v Weiss, 9 N.Y.S.2d 1 (NY App Div, 1939); People v Little, 107 P.2d 634 (Cal Dist Ct App, 1941); People v. Cabaltero, 87 P.2d 364 (Cal Dist Ct App, 1939); People v Michalow, 128 N.E. 228 (NY, 1920).
24

Cf, eg, Steven J Frank, ‘Tort Adjudication and the Emergence of Artificial Intelligence Software’ (1987) 21 Suffolk University Law Review 623; Sam N LehmanWilzig, ‘Frankenstein Unbound: Towards a Legal Definition of Artificial Intelligence’ (1981) 13 Futures 442; Marguerite E Gerstner, ‘Liability Issues with Artificial Intelligence Software’ (1993) 33 Santa Clara Law Review 239; Richard E Susskind, ‘Expert Systems in Law: A Jurisprudential Approach to Artificial Intelligence and Legal Reasoning’ (1986) 49 Modern Law Review 168.


JLIS Special Edition: The Law of Unmanned Vehicles

Vol 21(2) 2011/2012

not required for the imposition of criminal liability. When a human or a corporation fulfills the requirements of both the factual element and the mental element, criminal liability is imposed. If an AIUV is capable of fulfilling the requirements of both elements, and in fact does fulfill them, then there is nothing to prevent criminal liability from being imposed on that AIUV.

Generally, the fulfillment of the factual element requirement of an offense is easily attributed to an AIUV. As long as an AIUV controls a mechanical or other mechanism that moves its parts, any act might be considered as performed by the AIUV. Thus, when an AIUV activates its electric or other mechanical system and moves it, this might be considered an act, if the specific offense involves such an act. For example, in the specific offense of assault, an electric or mechanical movement of an AIUV that hits a person standing nearby fulfills the actus reus requirement of the offense.

The attribution of the mental element of offenses to an AIUV is the real legal challenge in most cases. The attribution of the mental element differs from one AI technology to another. Most cognitive capabilities developed in modern AI technology, such as creativity, are immaterial to the question of the imposition of criminal liability. The only cognitive capability required for the imposition of criminal liability is embodied within the mental element requirement (mens rea). Creativity is a human feature that some animals also have, but creativity is not a requirement for imposing criminal liability; even the most uncreative persons may be held criminally responsible. The sole mental requirements needed in order to impose criminal liability are knowledge, intent, negligence, etc, as required by the specific offense and under the general theory of criminal law.
As a result, AIUVs do not have to create the idea of committing the specific offense; in order to be criminally responsible they have only to commit the specific offense with knowledge as to its factual elements. Knowledge is defined as sensory reception of factual data and the understanding of that data.25 Most AI systems are well equipped for such reception. Sensory receptors of sights, sounds, physical contact, etc, are common in most AI systems. These receptors transfer the factual data received to central processing units that analyze the data. The process of analysis in AI systems parallels that of human understanding.26 The human

25 William James, The Principles of Psychology (1890); Hermann von Helmholtz, The Facts of Perception (1878). In this context knowledge and awareness are identical. See, eg, United States v Youts, 229 F.3d 1312 (10th Cir, 2000); State v Sargent, 594 A.2d 401 (Vt, 1991); United States v Spinney, 65 F.3d 231 (1st Cir, 1995); State v Wyatt, 482 S.E.2d 147 (W Va, 1996); United States v Wert-Ruiz, 228 F.3d 250 (3d Cir, 2000); United States v Jewell, 532 F.2d 697 (9th Cir, 1976); United States v Ladish Malting Co, 135 F.3d 484 (7th Cir, 1998); Model Penal Code, above n 22, § 2.02(2)(b).
26 Margaret A Boden, ‘Has AI Helped Psychology?’ in Derek Partridge and Yorick Wilks (eds) The Foundations of Artificial Intelligence (Cambridge University Press, 2006) 108; Partridge, above n 14, 112; David Marr, ‘AI: A Personal View’ in Derek Partridge and Yorick Wilks (eds) The Foundations of Artificial Intelligence (Cambridge University Press, 2006) 97.


Unmanned Vehicles: Subordination to Criminal Law


brain understands the data received by eyes, ears, hands, etc, by analyzing that data. Advanced AI algorithms try to imitate human cognitive processes, and these processes are not so different.27

Specific intent is the strongest of the mental element requirements.28 Specific intent is the existence of a purpose or an aim that a factual event will occur.29 The specific intent required to establish liability for murder is a purpose or an aim that a certain person will die.30 As a result of the existence of such intent, the perpetrator commits the offense; ie, he or she performs the factual element of the specific offense. This situation is not unique to humans. Some AIUVs might be programmed with a purpose or an aim and take actions in order to achieve it, and some advanced AIUVs may formulate a purpose by themselves and take the relevant actions in order to achieve it. In either case this might be considered specific intent, since the AIUV identified the purpose and the relevant actions needed to achieve it.

One might assert that many crimes are committed as a result of strong emotions or feelings that cannot be imitated by AI software, not even by the most advanced software: love, affection, hatred, jealousy, etc. This might be correct in relation to AI technology of the beginning of the twenty-first century. Even so, such feelings are rarely required elements of specific offenses. Most specific offenses are satisfied by knowledge of the existence of the external element; few offenses require specific intent in addition to knowledge, and almost all others are satisfied by much less than that (negligence, recklessness, strict liability).
Perhaps in the very few specific offenses that do require certain feelings (eg, crimes of racism or hate),31 criminal liability cannot be imposed upon an AIUV, which has no such feelings; but in any other specific offense, lack of certain feelings is not a barrier to imposing criminal liability.
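The two-element structure described above — liability follows from the factual element plus whichever mental element the specific offense requires, regardless of the kind of entity — can be pictured as a simple predicate. The following sketch is purely illustrative: the class names and the offenses are invented for the example, and it is a conceptual model of the argument, not a legal test rendered in code.

```python
from dataclasses import dataclass

# Hypothetical sketch of the direct liability model: liability depends only
# on the proven elements, not on whether the entity is a human, a
# corporation, or an AIUV.

@dataclass
class Offense:
    name: str
    requires_specific_intent: bool  # eg murder; most offenses need knowledge only

@dataclass
class ProvenElements:
    actus_reus: bool       # the factual element was performed
    knowledge: bool        # awareness of the factual elements (mens rea)
    specific_intent: bool  # a purpose or aim that the event occur

def criminally_liable(offense: Offense, proven: ProvenElements) -> bool:
    """An entity is liable when both the factual element and the required
    mental element of the specific offense are proven."""
    if not proven.actus_reus:
        return False
    if offense.requires_specific_intent:
        return proven.knowledge and proven.specific_intent
    return proven.knowledge

assault = Offense("assault", requires_specific_intent=False)
# An AIUV whose movement strikes a bystander, with knowledge of the facts:
print(criminally_liable(assault, ProvenElements(True, True, False)))  # True
```

The sketch also captures the article's point about creativity and feelings: nothing outside the two elements appears in the predicate, so extra capabilities neither ground nor negate liability.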

27 Daniel C Dennett, ‘Evolution, Error, and Intentionality’ in Derek Partridge and Yorick Wilks (eds) The Foundations of Artificial Intelligence (Cambridge University Press, 2006) 190; B Chandrasekaran, ‘What Kind of Information Processing is Intelligence?’ in Derek Partridge and Yorick Wilks (eds) The Foundations of Artificial Intelligence (Cambridge University Press, 2006) 14.
28 Robert Batey, ‘Judicial Exploration of Mens Rea Confusion, at Common Law and Under the Model Penal Code’ (2001) 18 Georgia State University Law Review 341; State v Daniels, 109 So. 2d 896 (La, 1958); Carter v United States, 530 U.S. 255 (2000); United States v Randolph, 93 F.3d 656 (9th Cir, 1996); United States v Torres, 977 F.2d 321 (7th Cir, 1992); Frey v State, 708 So. 2d 918 (Fla, 1998); State v Neuzil, 589 N.W.2d 708 (Iowa, 1999); People v Disimone, 650 N.W.2d 436 (Mich Ct App, 2002); People v Henry, 607 N.W.2d 767 (Mich Ct App, 1999).
29 Wayne R LaFave, Criminal Law (Thomson/West, 4th ed, 2003) 244-249.
30 For the intent-to-kill murder see LaFave, ibid, 733-34.
31 See, eg, Elizabeth A Boyd et al, ‘“Motivated by Hatred or Prejudice”: Categorization of Hate-Motivated Crimes in Two Police Divisions’ (1996) 30 Law and Society Review 819; ‘Crimes Motivated by Hatred: The Constitutionality and Impact of Hate Crimes Legislation in the United States’ (1995) 1 Syracuse Journal of Legislation and Policy 29.


If a person fulfills the requirements of both the factual element and the mental element of a specific offense, then the person is held criminally responsible. When an AIUV fulfills all elements of a specific offense, both factual and mental, there is likewise no reason to prevent the imposition of criminal liability upon it for that offense. The criminal liability of an AIUV does not replace the criminal liability of the programmers or the users, if criminal liability is imposed on the programmers and/or users by any other legal path. Criminal liability is not to be divided, but rather added: the criminal liability of the AIUV is imposed in addition to that of the human programmer or user.

Conclusion
The criminal liability of an AIUV is not dependent upon the criminal liability of the programmer or user of that AIUV. As a result, if the specific AIUV was programmed or used by another AIUV, the criminal liability of the programmed or used AIUV is not influenced by that fact. The programmed or used AIUV shall be held criminally accountable for the specific offense pursuant to the direct liability model, unless it was an innocent agent. In addition, the programmer or user of the AIUV shall be held criminally accountable for that very offense pursuant to one of the three liability models, according to its specific role in the offense. The chain of criminal liability might continue if more parties are involved, whether human or AIUV.

Not only may positive factual and mental elements be attributed to an AIUV; all relevant negative fault elements are attributable to an AIUV as well. Most of these elements are expressed by the general defenses in criminal law; eg, self-defense, necessity, duress, intoxication, etc. For some of these defenses (justifications),32 there is no material difference between humans and AIUVs, since they relate to a specific situation (in rem), regardless of the identity of the offender. For example, suppose an AIUV serving under the local police force is given an order to block a suspect's car, and, unbeknownst to the AIUV, this order is illegal. If the executor is unaware, and could not reasonably have become aware, that an otherwise legal action is illegal in this specific instance, the executor of the

32 John C Smith, Justification and Excuse in the Criminal Law (Stevens, 1989); Anthony M Dillof, ‘Unraveling Unknowing Justification’ (2002) 77 Notre Dame Law Review 1547; Kent Greenawalt, ‘Distinguishing Justifications from Excuses’ (1986) 49 Law and Contemporary Problems 89; Kent Greenawalt, ‘The Perplexing Borders of Justification and Excuse’ (1984) 84 Columbia Law Review 1897; Thomas Morawetz, ‘Reconstructing the Criminal Defenses: The Significance of Justification’ (1986) 77 Journal of Criminal Law & Criminology 277; Paul H Robinson, ‘A Theory of Justification: Societal Harm as a Prerequisite for Criminal Liability’ (1975) 23 UCLA Law Review 266; Paul H Robinson and John M Darley, ‘Testing Competing Theories of Justification’ (1998) 76 North Carolina Law Review 1095.


order is not criminally responsible.33 In that case, there is no difference whether the executor is human or an AIUV.

For other defenses (excuses and exemptions),34 some applications should be adjusted. For example, the intoxication defense applies when the offender is under the physical influence of an intoxicating substance, eg, alcohol or drugs. The influence of alcohol on an AIUV is minor, at most, but the influence of a computer virus infecting the operating system of the AIUV might be considered parallel to the influence of intoxicating substances on humans. Some other factors might be considered parallel to insanity or loss of control.

In sum, the criminal liability of an AIUV under the direct liability model is not different from the corresponding criminal liability of a human. In some cases some adjustments are necessary, but substantively it is the very same criminal liability, based upon the same elements and examined in the same ways.

33 Michael A Musmanno, ‘Are Subordinate Officials Penally Responsible for Obeying Superior Orders which Direct Commission of Crime?’ (1963) 67 Dickinson Law Review 221.
34 Peter Arenella, ‘Convicting the Morally Blameless: Reassessing the Relationship Between Legal and Moral Accountability’ (1992) 39 UCLA Law Review 1511; Sanford H Kadish, ‘Excusing Crime’ (1987) 75 California Law Review 257; Andrew E Lelling, ‘A Psychological Critique of Character-Based Theories of Criminal Excuse’ (1998) 49 Syracuse Law Review 35.

The Regulation of Unmanned Air System Use in Common Airspace
Comment by Anna Masutti*

1 Introduction

While in the military domain unmanned aerial systems (UASs) are already widely used under specific conditions and in segregated airspace, UASs for civil use are still at an early stage of development. However, recent studies have shown that UAS applications for civil use have been developed, and their deployment, especially for security purposes, is increasingly considered a necessity.

The Single European Sky (SES) will be implemented by the Single European Sky ATM Research (SESAR) project, the European air traffic control infrastructure modernisation programme. SESAR aims to develop a new-generation air traffic management system capable of ensuring the safety and fluidity of European air transport over the next 20 years. SESAR brings a new dimension to European air traffic management (ATM), which affects all airspace users, including UASs. The SESAR Concept of Operations (CONOPS) for 2020 fully recognises UASs as potential users of the common airspace. It expects increasing numbers of UASs, starting with military missions and extending to many types of civilian tasks, with machines ranging from very light to heavy. The basic assumption is that, when a UAS enters non-segregated airspace, the provision of an Air Traffic Service (ATS) to the UAS must be transparent to air traffic control (ATC) and other airspace users.

However, the full potential of UASs cannot be realised until they can fly in non-segregated areas and appropriate legislation and regulatory measures are developed. A full set of common European rules on UAS airworthiness and integration within non-segregated airspace has become a matter of urgency. The lack of such a regulatory framework prevents the industry from building pertinent business plans and from launching the developments required to answer civil customers' needs.
The present contribution to the debate raised by the European Space Policy Institute (ESPI) on UASs examines the existing legislation applicable to UASs at the international and European levels, the basic principles that should be taken into consideration when designing a regulatory framework permitting UASs to fly in common airspace, and the contribution made by

* Professor of Air Law, University of Bologna. This paper was presented at the European Space Policy Institute seminar on 4 November 2010.


international organisations, the International Civil Aviation Organization (ICAO), and the European Organisation for Civil Aviation Equipment (EUROCAE), to this project. The key role of the European Aviation Safety Agency (EASA) is outlined with regard to the certification of UASs (aircraft and ground stations) and pilot licensing, including consideration of a regime of responsibility and accountability to identify the liable party in case of damage to persons or property caused by a UAS accident.

European legislation has divided UASs into two major groups, each regulated by different authorities:

• UASs with a maximum take-off mass of more than 150 kg; and

• UASs with a maximum take-off mass of less than 150 kg, commonly designated as light UASs.

In this paper, reference is made to UASs with a maximum take-off mass of more than 150 kg, which will be regulated by EASA, while the regulation of UASs with a maximum take-off mass of less than 150 kg is left to the civil aviation authority of each Member State.
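The two-group division above amounts to a simple mass threshold. As a purely illustrative sketch (the function name is invented, and the treatment of exactly 150 kg follows the '150 kg or more' formulation in EASA's proposed certification policy quoted later in this paper):

```python
def competent_authority(max_takeoff_mass_kg: float) -> str:
    """Return the regulator for a civil UAS under the two-group division.

    Illustrative only: heavier machines fall to EASA; lighter ('light UAS')
    machines fall to the civil aviation authority of the Member State.
    """
    if max_takeoff_mass_kg >= 150:
        return "EASA"
    return "national civil aviation authority"

print(competent_authority(450))  # EASA
print(competent_authority(25))   # national civil aviation authority
```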

2

The Reference Legal Framework

The present legal framework contains a limited number of references. The most important is certainly the 1944 Chicago Convention,1 whose Article 8 imposes what amounts to an over-flight prohibition on unmanned air vehicles without specific authorisation.2 Indeed, this Article states that:

No aircraft capable of being flown without a pilot shall be flown over the territory of a contracting State without special authorisation by that State and in accordance with the terms of such authorisation. Each contracting State undertakes to insure that the flight of such aircraft without a pilot in regions open to civil aircraft shall be so controlled as to obviate danger to civil aircraft.

Accordingly, it clearly appears that consent to over-fly contracting States is granted only when various conditions are satisfied, such as authorisation from the over-flown State, respect of the over-flight terms, and the engagement of the over-flown State to take all necessary measures to ensure

1 Convention on International Civil Aviation, opened for signature 7 December 1944, 15 UNTS 295 (entered into force 4 April 1947) (‘Chicago Convention’).
2 We should remember that the Chicago Convention is applicable, as stated by Article 3, only to civil aircraft; it is not applicable to aircraft used for State flights, military flights, or customs and police flights. For such aircraft, excluded from the international ruling system, over-flight of or landing in other States is permitted only under prior special authorisation and conditions (see Article 3(c)).


that the over-flight in airspace that is open to civil aircraft does not affect their safety. Therefore, the pre-conditions for authorising the over-flight of UASs are various and involve the adoption of complex initiatives aimed at guaranteeing the safety of the related operations, as prescribed by the Chicago Convention. Consequently, international regulations require the relevant authorisations demonstrating that the UAS complies with the airworthiness rules as stated, for example, by Articles 20 and 29 of the Convention, as well as with the ICAO technical annexes issued to give effect to those rules.3 These articles require, for example, the possession of an airworthiness certificate, an ordinary licence for the crew, boarding documents, etc, as well as the acknowledgment of their validity by the contracting States.

For ordinary aircraft, this process has been followed since the adoption of the international regulation, and has received a strong impetus thanks to the recent intervention of the European Union. However, for unmanned air systems no such technical rules have been drawn up to date. This deficiency has been discussed on various occasions at both the international and EU levels, which has forced the relevant authorities to attempt to overcome this situation.

3

Application to UASs of Principles and Airworthiness Rules Introduced by European Regulations

The absence of a legal framework able to offer solutions to the various legal problems created by the use of the aircraft in question, together with the interest shown in their use for civil purposes, has initiated a process involving many authorities at the EU and international levels, thanks to the initiative of EASA, which was established by Regulation (EC) No 1592/2002, as amended by Regulation (EC) No 216/2008 and Regulation (EC) No 1108/2009.4 These regulations, while

3 Article 20 (Nationality mark) of the Chicago Convention requires all aircraft to display their nationality and registration marks. Article 29 lists the documents that all aircraft of the contracting States must carry on board.
4 Regulation (EC) No 216/2008 of 20 February 2008 on common rules in the field of civil aviation and establishing a European Aviation Safety Agency, and repealing Council Directive 91/670/EEC [2008] OJ L 79; Regulation (EC) No 1592/2002 and Directive 2004/36/EC have been amended by Regulation (EC) No 1108/2009 of 21 October 2009. As a consequence of the formation of EASA, ICAO Annex 8, containing airworthiness rules for aircraft, has received a more harmonious and exhaustive application in EU countries and has also made it possible to draw up the guidelines for the future technical-legal regulation of the use of UAVs. See S Sciacchitano, ‘La nascita dell’European Aviation Safety Agency (EASA)’ (July-September) News Letter, Bologna University.


stating the obligation for aircraft5 to comply with the essential airworthiness requirements established in the relevant annexes, do not extend this obligation to UASs. The absence of a specific mention of UASs could suggest that they are excluded from the new airworthiness rules; indeed the definitions of aircraft and product contained in Article 3 of Regulation 1592/2002 appear insufficient to include UASs.6 However, a correct interpretation of these rules leads one to conclude that UASs are subject to EU rules and to the harmonisation action of EASA for manned aircraft.

Support for this broader interpretation can be found in Annex II of the Regulation mentioned above, issued for the application of Article 4. This article, while stating the obligation for aircraft and related products to comply with the technical rules of the Regulation, leaves to a specific annex the identification of the exempted categories and, among them, lists ‘(i) unmanned air vehicles having an operating mass below 150 kg’, which leads to the conclusion that those weighing 150 kg or more must comply with the essential airworthiness rules that the EU Agency will have to establish.

The necessity of pursuing the elaboration of the essential requirements for UAVs has also been recognised by the European Economic and Social Committee, which recently intervened on the matter of air safety. This committee has reiterated the need for EASA to define ‘the necessary protocols before considering the hypothesis to authorise UAV flights out of the reserved airspace’.7 In order to ease this process, and to authorise the use of UASs in general airspace, the European body has reiterated that these aircraft are subject to the existing set of rules for conventional aircraft, confirming the interpretation noted above. Article 11.2 of the document under examination states, in fact, that ‘all rules relating to conventional aircrafts must also be considered compulsory for the UAV’.
This conclusion, ie the application to UASs of the same technical norms applicable to conventional aircraft, and the necessity (if it is to work) of

5 The Regulation is applicable to aircraft, including installed products, parts and appurtenances, that have been designed or produced by an organisation for which the Agency or a Member State ensures safety oversight, or that have been registered in a Member State, or registered in a third country provided that they are managed by operators for which a Member State ensures the surveillance of operations.
6 These considerations have also been anticipated by EASA which, describing the policy for UAV systems certification (Airworthiness and Environmental Protection), has observed that ‘The proposed policy is applicable to UAV systems with a maximum take off mass of 150 kg or more; which are not excluded by article 1(2) or Article 4(2) and Annex II of EC Regulation 1592/2002’.
7 Opinion expressed by the European Economic and Social Committee on the matter of safety, 2006/C, 309/11, in GUUE of 16 December 2006, C.309/51.


issuing the protocols required by Regulation (EC) No 1592/2002, have led the EU authorities to identify criteria that are useful for understanding the future legal framework.

4

UAS Certification

The initial work of the bodies charged with studying the essential airworthiness requirements for UASs has made clear that the technical and operating features of this particular category of aircraft make certification of the aircraft alone insufficient to guarantee the safety of its flight operations. This position has been supported, for example, by the Joint Aviation Authorities (JAA) in a study made in conjunction with Eurocontrol, anticipating an opinion later expressed by EASA.8 It is based on the principle that the flight of an unmanned air vehicle is operated by complex equipment from a control station and a link system between the station and the aircraft. On this basis, and in order to guarantee the safety of flight operations, it has been deemed that certification must cover the entire equipment used for that purpose: the control station9 and any other element necessary to conduct flight operations, such as the communication link10 and the ‘launch and recovery element’.11 The equipment may allow the use of more than one vehicle, various control stations and ‘launch and recovery elements’.

Such a configuration of UASs raises many delicate questions that must be examined in order to identify the essential requirements necessary to guarantee flight safety. Particular attention should be paid to the possible communication between the number of control stations and the number of flying aircraft. When the configuration of the system foresees one or more stations controlling the same aircraft, no problem should arise, as the

8 See, eg, on this point, JAA/Eurocontrol Initiative on UAVs, Task Force Final Report – A Concept for European Regulation for Civil Unmanned Air Vehicles, 11 May 2004; European Aviation Safety Agency, Advance – Notice of proposed amendment (NPA) No 16/2005 – Policy for Unmanned Aerial Vehicle (UAV) certification.
9 EASA defines the control station (CS) as ‘A facility or device(s) from which a UAV is controlled for all phases of flight. There may be more than one control station as part of a UAV system.’ See European Aviation Safety Agency, above n 8.
10 The communication link has been defined, in turn, as ‘The means to transfer command and control information between the elements of a UAV System, or between the system and any external location. (Eg, transfer of command and response data between control stations and vehicles and between the UAV System and Air Traffic Control)’. Ibid.
11 EASA states that a ‘UAV Launch and recovery element’ is a ‘facility or device(s) from which a UAV is controlled during launch and/or recovery. There may be more than one launch and recovery element as part of a UAV System’. Ibid.


airworthiness certificate stating the conformity of the vehicle with the safety regulations could be issued so as to foresee the use of a range of control stations for one aircraft. The matter becomes more complex, however, in the case of one station controlling more than one aircraft of different models. In such a case, it should be decided whether to issue the control station with two or more airworthiness certificates (according to the number of guided aircraft) or a single certificate specifically created for control stations having this particular feature. In addition to these considerations, which remain unresolved, it has been decided to follow an approach similar to that adopted for conventional aircraft as far as the pilot in command is concerned, ie such persons need to hold the same licences accepted at the European level.

The tendency that has emerged at the EU level to proceed towards certification ‘of the system’, not limited to the aircraft alone, appears justified in light of the intrinsic working mechanisms of UASs. The complexity of the system, and the necessity of reaching a shared solution on the criteria and principles to be adopted in drawing up a technical ruling system for the use of unmanned air vehicles, have persuaded EU authorities to involve other competent bodies in this sector. EUROCAE has been requested to produce a study on airworthiness certification and the operative authorisation of UASs.12 This working group, after underlining that the lack of a clear legal framework is limiting the use of unmanned air vehicles in Europe, has prepared a programme to produce a proposal for a set of technical rules governing all UASs.
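The 'whole system' view of certification described in this section — one certificate covering the vehicles, the control stations, the communication links and the launch/recovery elements, possibly in many-to-many configurations — can be pictured as a data model. The following sketch is hypothetical: the type names are invented for illustration, and EASA prescribes no such schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical data model of a UAS as a system of elements, reflecting the
# view that certification must cover the whole system, not the vehicle alone.

@dataclass
class AirVehicle:
    model: str

@dataclass
class ControlStation:
    identifier: str

@dataclass
class UASystem:
    vehicles: List[AirVehicle] = field(default_factory=list)
    control_stations: List[ControlStation] = field(default_factory=list)
    communication_links: List[str] = field(default_factory=list)
    launch_recovery_elements: List[str] = field(default_factory=list)

    def certification_scope(self) -> list:
        """Every element a 'system' certificate would have to cover."""
        return (self.vehicles + self.control_stations
                + self.communication_links + self.launch_recovery_elements)

# One station controlling two aircraft of different models — the case the
# text identifies as the hard one for certificate issuance.
system = UASystem(
    vehicles=[AirVehicle("model A"), AirVehicle("model B")],
    control_stations=[ControlStation("CS-1")],
    communication_links=["uplink/downlink"],
)
print(len(system.certification_scope()))  # 4
```

The open question in the text maps directly onto this model: whether such a configuration should receive one certificate per vehicle in `vehicles` or a single certificate spanning the whole `certification_scope()`.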
Such a proposal would apply not only to the aircraft, but also to the personnel employed in the control stations (despite not being on board the aircraft), to the structure organised by the operator for this purpose, to the airports and to the air traffic controllers.

Finally, an important working group has been set up by ICAO,13 under pressure from member States and in particular from EU countries, which have pressed the international organisation to define its role in the creation of a set of rules for this sector in order to guarantee harmonisation of terminology, principles and strategies for its future regulation. Consequently, it has been suggested that the ICAO Annexes also need to be reviewed to introduce ‘Standards and Recommended Practices’ for this kind of aircraft.

12 European Organization for Civil Aviation Equipment WG-73, Unmanned Aerial Vehicle – Working Paper, 25 October 2006.
13 ICAO Exploratory Meeting on Unmanned Aerial Vehicles, Montreal, 23-24 May 2006, ICAO-UAV WP/2.


Within ICAO, the Air Navigation Commission14 has examined the results of the above-mentioned working group on UAVs, stressing the importance of guidelines to give proper answers to the many questions that have been raised. In particular, it has been proposed to change the terminology from Unmanned Air Vehicle to Unmanned Aerial System and, more recently, in Circular 328 of October 2010, to adopt the name ‘Remotely Piloted Aircraft System’ (RPAS), as, in reality, these are not unmanned vehicles but remotely piloted aircraft. The necessity of guaranteeing the safety of the system requires the certification of the entire apparatus. However, this consideration raises many legal questions that need appropriate answers.

5

The Criteria for Identifying the Essential Airworthiness Requirements for UASs: The Objective of Avoiding Excessive Burden

The option of certifying the entire system could lead to a complex legal framework that is not conducive to the goal, at both the EU and international levels, of adopting a coherent set of rules for the development of this economic sector. In order to avoid such an outcome, it has been pointed out since the first EU-level documents15 that the requirements necessary for the certification of UASs, and the relevant technical principles for assuring the safety of these flights, should be as similar as possible to those existing for conventional aircraft, thereby avoiding the introduction of additional obligations and excessive burden. For this purpose, some fundamental principles and disciplinary approaches have been outlined for the concrete implementation of the technical rules in this sector.

EASA, in producing the criteria to be followed for the description of the essential requirements, has first of all deemed it appropriate to specify which UASs will be affected by this regulation16 and which will be subject only to rules established by their national authorities. This last category comprises unmanned air vehicles with a maximum take-off mass below 150 kg, those designed for scientific or research purposes or produced in limited numbers and, finally, UASs used for military, customs or police activities. However, national authorities should take into consideration,

14 Air Navigation Commission, Results of unmanned aerial vehicle (UAV) questionnaire – Progress report on unmanned aerial vehicle work and proposal for establishment of a study group, AN-WP/8221, 17 April 2007.
15 European Aviation Safety Agency, above n 8.
16 Ibid.


as far as possible, the principles and regulations suggested by EASA when regulating these activities.17

Beyond this distinction, importance has been given to the objective of avoiding the introduction into the certification requirements of elements considerably different from those required of conventional aircraft, whose regulation can be applied to UAVs, albeit amended in consideration of the particular nature of these aircraft. To this end, it has been necessary to stress the importance of the impartiality (or fairness) principle and consequently to use, as far as possible, the existing legal framework for conventional aircraft, excluding a regulation tailored to UASs only. UASs will therefore have to comply with the airworthiness rules in force for conventional aircraft and acknowledged by ATC service providers, avoiding, as far as possible, the application of different rules (transparency). The same importance has been attributed to the equivalence principle (equivalent risk, equivalent operation), which refers to the necessity of maintaining a safety standard at least equivalent to the one required of conventional aircraft.18 Finally, on various occasions it has been necessary to stress the importance of establishing rules on responsibility (responsibility/accountability), again subject to the same rules applicable to conventional aircraft. By contrast, it is recommended that specific rules be adopted regarding the transfer of command and, consequently, the distribution of responsibilities among the operators when command operations are distributed among various control stations.19

6 Legal Problems Deriving from the Use of UASs in Common Air Space: The Identification of the Civil Liability Regime for Damages to Third Parties and for the Liable Party

From an examination of the recent initiatives adopted by the EU to permit the use of unmanned air vehicles in non-segregated areas (and to overcome the prohibition established by Article 8 of the Chicago Convention), it appears the EU wishes to create a reference regulatory framework that guarantees the safe use of UASs without imposing onerous measures that would prevent their use.

17 A working group at JAA had already suggested this route. See JAA/Eurocontrol, above n 8.
18 The matter of the rules regulating the use of UASs in non-segregated areas was also discussed within ICAO and appears in a working document which recalls the above-mentioned principles worked out by the EU bodies.
19 In particular, the distribution of tasks should be clearly defined between the operator that guarantees the operations of the system and the pilot in command entrusted with conducting the flight operations.


Consequently, the efforts of the relevant authorities are mainly dedicated to designing this specific legal framework, while modest attention has been given to the further legal implications of the use of such aircraft. A theme of great importance concerns the regulation of civil liability resulting from the use of UASs. Liability for damage to persons or property that can occur as a result of an incident involving a UAS requires the resolution of various questions, such as the applicable law, the identification of the liable party, and so on.

To this end, it should be decided whether the rules contained in the Rome Convention20 could be considered applicable in such cases. Naturally, this Convention does not contain any reference to UASs, but in some cases its rules have been considered applicable to all kinds of vehicles, including spacecraft, provided they are ‘usable for transport’. Wherever an extensive interpretation of the notion of aircraft is adopted (as already in the 1944 Chicago Convention and in Regulation 1592/2002), the set of rules contained in the Rome Convention can be considered applicable. The Italian legislature, for example, has recently come to the same conclusion: the reformed Air Navigation Code does not exclude the application of the rules in question to UASs. These regulations, based on the aircraft operator’s strict liability,21 allow the operator to benefit from the system of liability limitation for each incident, with the amount calculated in proportion to the weight of the aircraft that caused the damage.22

20 Convention on Damage Caused by Foreign Aircraft to Third Parties on the Surface, opened for signature 7 October 1952, 310 UNTS 182 (entered into force 4 February 1958) (‘Rome Convention’).
21 In determining the operator’s liability, the subjective state of the party (fraud or serious fault) is left out of consideration. It is therefore an objective liability based on the risk of a lawful activity. The rules on liability for damage to third parties on the surface apply whenever a falling aircraft causes, even through force majeure, damage to persons or property. In these cases the operator is liable under a strict liability regime (tempered, however, by some exclusions listed in the same Convention).
22 Article 11 of the Rome Convention states:
1. Subject to the provisions of Article 12, the liability for damage giving a right to compensation under Article 1, for each aircraft and incident, in respect of all persons liable under this Convention, shall not exceed:
(a) 500 000 francs for aircraft weighing 1000 kilogrammes or less;
(b) 500 000 francs plus 400 francs per kilogramme over 1000 kilogrammes for aircraft weighing more than 1000 but not exceeding 6000 kilogrammes;
(c) 2 500 000 francs plus 250 francs per kilogramme over 6000 kilogrammes for aircraft weighing more than 6000 but not exceeding 20 000 kilogrammes;
(d) 6 000 000 francs plus 150 francs per kilogramme over 20 000 kilogrammes for aircraft weighing more than 20 000 but not exceeding 50 000 kilogrammes;
(e) 10 500 000 francs plus 100 francs per kilogramme over 50 000 kilogrammes for aircraft weighing more than 50 000 kilogrammes.


The application to UASs of the same civil liability regime for damage caused to third parties raises another question, related to the identification of the liable parties. The traditionally adopted framework distributes liability between the pilot in command of the aircraft and the operator. In the first case, liability normally lies with the pilot in command as the head of the expedition,23 as he or she is personally responsible for the observance of the relevant obligations. In contrast, liability for any other obligations, contractual or extra-contractual, is attributed to the operator, and in such cases the international regulation mentioned above places liability for damage to third parties or damage from collision on the operator.

It is therefore vitally important, considering the complexity of UASs, to make a clear distinction between the pilot in command of the vehicle and the operator, ie between the person appointed as crew chief and sole director of manoeuvres and navigation, and the person who sets up an organisation for obtaining financial benefit. As anticipated in the previous paragraphs, the relevant EU bodies have clearly suggested considering the entire UAS (aircraft, control stations, etc) when creating a set of rules for this kind of system. As a consequence, the operator of a UAS must likewise be identified, as must the pilot in command. In such a scenario, liability for damage caused by the fall of a UAS to the ground should be attributed to the operator, ie to the person or entity that, on the basis of Article 2 of the Rome Convention,24 sets up the system, assures its

2. The liability in respect of loss of life or personal injury shall not exceed 500 000 francs per person killed or injured (…).
The limited value of the amounts indicated and the use of the gold franc as a reference currency, replaced in almost all uniform regulations by Special Drawing Rights (SDRs), have prompted revision of the Convention. ICAO is working toward the modernisation of the Rome Convention. Eg see B Izzi, ‘Prospettive di riforma della disciplina internazionale sulla responsabilità per i danni a terzi sulla superficie’ in Diritto dei trasporti 2004, 400-401.
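As a purely arithmetical illustration, the tiered limits of Article 11(1) quoted above can be expressed as a simple piecewise calculation. This is a hypothetical sketch in Python; the function name is ours, and the amounts are the gold-franc figures stated in the Convention:

```python
def article11_limit(weight_kg: int) -> int:
    """Maximum liability (gold francs) per aircraft and incident under
    Article 11(1) of the 1952 Rome Convention, tiered by aircraft weight."""
    if weight_kg <= 1000:
        return 500_000
    if weight_kg <= 6000:
        return 500_000 + 400 * (weight_kg - 1000)
    if weight_kg <= 20_000:
        return 2_500_000 + 250 * (weight_kg - 6000)
    if weight_kg <= 50_000:
        return 6_000_000 + 150 * (weight_kg - 20_000)
    return 10_500_000 + 100 * (weight_kg - 50_000)

# A 150 kg UAS (the EASA threshold mentioned in the text) sits in the lowest tier:
print(article11_limit(150))     # 500000
print(article11_limit(25_000))  # 6750000
```

Note that the tiers join continuously at each weight boundary (eg, at 6000 kg both adjacent formulas give 2 500 000 francs), so the cap grows smoothly with aircraft weight.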
23 So provides Article 878 of the Italian Air Navigation Code.
24 In this regard it is appropriate to point out that the notion of operator in the Rome Convention is partially different from that consolidated in the Italian Air Navigation Code. The notion of operator in the Rome Convention is connected to criteria referring to the navigation activity: the Convention attributes liability to the operator, ie the person who was making use of the aircraft at the time the damage was caused (‘operator shall mean the person who was making use of the aircraft at the time the damage was caused’: Article 2.2). The Convention distinguishes between use and navigation control, with the consequence, for example, that in the case of unauthorised use of the aircraft, without the permission of the person entitled, the temporary or unauthorised user will be the liable party, to whom a joint liability of the operator is added, but only by way of guarantee. This difference disappears in the case of leasing, where, as under the Italian regulation, liability falls on the person who maintains navigation control (the lessor).


functioning and publishes its position as operator, so as to avoid the presumption that the owner of the aircraft is also the operator. The pilot can be identified as the person entrusted with the command of one or more aircraft owned by, or at the disposal of, the operator.25

7 Other International Regulations Applicable to UASs

The principle, already accepted at the EU and international level, of applying to UASs the international rules adopted for conventional aircraft, especially those relating to safety, encourages the application of the same international instruments, such as the Convention for the Suppression of Unlawful Acts Against the Safety of Civil Aviation26 and the more recent Cape Town Convention.27 Neither Convention is applicable to military, customs or police aircraft. The Cape Town Convention aims at creating a specific international guarantee, fully applicable in all member States, concerning assets that normally, for business purposes, move from one State to another, such as aircraft and spacecraft; its detailed regulation is contained in special protocols necessary to adapt it to specific needs. In our case, on the occasion of the approval of the Convention, an aeronautical protocol was signed, and the regulations created for conventional aircraft are considered applicable to UASs as well.28

8 Initiatives Taken by the EU and some Non-EU Countries

In response to the increasing demand to use UASs for many civil applications, some countries have taken initiatives permitting the deployment of UASs under certain conditions. In some cases the first step has been to update the existing ATC regulations; in other cases a separate set of rules has been designed. Europe is very active in this field and is making progress, notwithstanding the extreme prudence of ICAO.

25 In this respect the document produced by EASA (Advance Notice of Proposed Amendment (NPA) No 16/2005 – Policy for Unmanned Aerial Vehicle (UAV) certification, 25, cit) has defined the UAV commander as ‘A suitably qualified person responsible for the safe and environmentally compatible operation of a UAV System during a particular flight and who has the authority to direct a flight under her/his command’.
26 Convention for the Suppression of Unlawful Acts Against the Safety of Civil Aviation, opened for signature 23 September 1971, 974 UNTS 177 (entered into force 26 January 1973).
27 Convention on International Interests in Mobile Equipment, opened for signature 16 November 2001 (‘Cape Town Convention’).
28 Eg see Il protocollo aeronautico annesso alla convenzione relativa alle garanzie internazionali su beni mobili strumentali (Città del Capo, 16 novembre 2001), L Tullio (ed), Padova, 2005.


The most relevant experiences of a few countries are briefly described below.

Canada has established a working group to amend existing Canadian aviation regulations to incorporate UAS operations into Canadian airspace with minimal changes.

At present in the United States a civil UAS operator may have access to the National Airspace System (NAS) if they hold a special airworthiness certificate. The Federal Aviation Administration (FAA) is making efforts to enable small UASs to operate in certain portions of the NAS.

In Australia, the Civil Aviation Safety Regulations consolidate the rules governing all unmanned aeronautical activities into one body of legislation. Guidelines are published for manufacturers and controllers.

At the European level, there are the following initiatives and projects:

EASA published the Advance Notice of Proposed Amendment (A-NPA) in 2005, followed by its Comment Response Document (CRD) on 6 December 2007. The main findings are now explained in detail in a ‘policy document’ published in 2009 and discussed with stakeholders, including EUROCAE WG-73. The objectives of this ‘policy’ are to facilitate UAS applications and to ensure a level of safety and environmental protection at least equivalent to that of comparable manned aircraft.

JARUS, comprising European national authorities under the leadership of the Netherlands and EASA, is developing operational and technical regulations for UASs.

EUROCAE WG-73 is developing a requirements framework that would enable unmanned aircraft to operate within the constraints of the existing Air Traffic Management (ATM) environment, in airspace without segregation from other airspace users.

The INOUI project (Innovative Operational UAS Integration), funded by the 6th Framework Programme of the European Commission, focuses on the integration of UASs in non-restricted airspace in the context of the Single European Sky (SES).

9 Conclusions

The market for civil UAS use is emerging and offers a wide range of applications, including security services. The existing regulatory framework is limited and permits UASs to operate in segregated airspace only. To unlock this market it is necessary to design a new regulatory framework allowing UASs to operate in common airspace. The basic principles for airworthiness, certification and licensing have already been identified, and today’s technology is sufficiently advanced to allow UASs to offer the same safety standards as manned aircraft. The path to designing the new regulatory framework is long, and we should start now on the necessary initiatives. ICAO, with its Circular 328 of October 2010, has expressed the intention to proceed towards the insertion of UASs into common airspace. The EU has taken the lead in this process, setting up the High Level Group announced during the UAS International Conference of 1 July 2010 in Brussels. This group is working on the introduction of important new technical and legal regulations for the use of UASs in common airspace.


Guns, Ships, and Chauffeurs: The Civilian Use of UV Technology and its Impact on Legal Systems
COMMENT BY UGO PAGALLO*

Abstract
This article focuses on the civilian use of unmanned vehicle (UV) technology and its impact on legal systems. By distinguishing a panoply of UV applications such as unmanned aerial vehicles for policing and border security, unmanned water-surface and underwater vehicles for remote exploration works and repair, down to unmanned ground vehicles for urban transport, the aim is to pinpoint new legal cases affecting international humanitarian law, contractual obligations, strict-liability rules, etc. Special attention is paid to new models of limited responsibility and insurance for distributing risk in extra-contractual relations and in connection with tortious claims. All in all, we should strike a fair balance between people’s claim not to be ruined by the civilian use (rather than, say, the production and design) of UV technology, and the interest of UV counter-parties to safely interact with a new generation of autonomous ‘guns,’ ‘smart ships,’ and AI ‘chauffeurs.’

1 Introduction

In their work The Laws of Man over Vehicles Unmanned,1 Brendan Gogarty and Meredith Hagger examine how the civilian use of Unmanned Vehicle (UV) technology may affect legal frameworks.2 Although ‘the technology is currently less prominent in the civilian’ than in the military sector, two exceptions already exist: agricultural UVs and unmanned vehicles employed in undersea operations (UUVs). Moreover, a number of factors, such as inter-agency transfers, increasing international demand, public R&D support and growing access to powerful software and hardware, explain why ‘drone technology’ is rapidly and progressively falling ‘within the reach of public bodies, private companies and even individuals’.3 This is the case with several UV applications for border security, policing, patrolling and inspection, emergency and hazard management, remote exploration works and repair, urban transport and more. The ‘relative cost savings’ promised by UV

* Law School, University of Torino, via s. Ottavio 54, 10124 Turin, Italy; ugo.pagallo@unito.it.
1 Brendan Gogarty and Meredith Hagger, ‘The Laws of Man over Vehicles Unmanned: The Legal Response to Robotic Revolution on Sea, Land and Air’ (2008) 19 Journal of Law, Information and Science 73.
2 Ibid 103-124.
3 Ibid 105.


technology have indeed ‘excited many commercial operators’,4 so that it is crucial that lawyers assess the regulatory constraints on the ever-growing production and use of this new generation of UVs. Gogarty and Hagger properly pay attention to current principles and rules of international civil aviation and maritime law, as well as norms on civilian motor traffic. On the one hand, civil authorities around the world have been reluctant to allow UVs to legally share the air, water or ground with commercial traffic. On the other hand, it is likely that advances in UV technology will increasingly compel law-makers to amend relevant provisions of the civilian traffic safety regime. Even if ‘one would expect that the right body to make such value judgements would be a sovereign legislative body, not a software engineer’,5 lawyers should be ready to tackle the pressure of an ‘avalanche of demand’ for regulatory review. Moreover, the intersection of UVs with legal matters of tort (in particular negligence) and the question of fault, issues of privacy, and whether UVs may use force against humans will require careful consideration. Specifically, Gogarty and Hagger argue that ‘drone technology is still likely to create some real challenges to those charged with determining liability in tortious claims’.6

In order to cast further light on current legal loopholes brought on by the civilian use of UV technology, this paper focuses on three cases. In Part 2, I consider the use of unmanned aerial vehicles (UAVs) for border security, policing, patrolling and inspection, with a focus on UAV ‘guns.’ In Part 3, I examine cases of UUVs and unmanned water-surface vehicles for emergency and hazard management, remote exploration works and repair at sea, ie ‘ships’.
Finally, in Part 4, I take into account legal problems concerning the use of unmanned ground vehicles (UGVs) such as ‘chauffeurs.’ The unifying theme relevant to all UVs under consideration in this comment is the way they are affecting (or will affect) aspects of the civilian traffic safety regime at national and international levels. Furthermore, this new generation of UVs will also impact on key legal notions of international humanitarian law, contractual obligations and matters of strict liability. At the end of the day, I think Gogarty and Hagger are right when claiming that the civilian use of UV technology ‘challenges the boundaries and efficacy of existing legal frameworks and raises a range of social and ethical concerns’.7 Let me examine these challenges in the light of the aforementioned metaphors: those of guns, ships, and chauffeurs.

2 Guns

The first example of the use of UV technology under examination in this comment is provided by UAV applications for border security, policing,

4 Ibid 110.
5 Ibid 121.
6 Ibid 124.
7 Ibid 73.


patrolling and inspection. These applications raise a number of concerns about whether UVs may use force against humans, eg, fighting illegal trans-border activities via ‘drone technology.’ Significantly, in their 2010 reports to the UN General Assembly, Philip Alston and Christof Heyns have stressed that current legal provisions are silent on two critical issues: (i) whether certain types of UAVs such as ‘autonomous weapons’ should be considered unlawful, and (ii) the set of parameters and conditions that should regulate the use of these machines. Whilst the analysis of the UN special rapporteurs is centred on how military robotics technology affects current rules of international humanitarian law (IHL), the civilian use of UAV ‘guns’ concerns rights and safeguards granted by both constitutional and human rights law. In the case of the European human rights legal framework, which is based on the 1950 Convention for the Protection of Human Rights8 and the decisions of the European Court of Human Rights (ECHR), it is more than likely that the civilian use of UAV ‘guns’ would be severely restricted by clauses such as Article 2 of the ECHR Convention on the legitimate ‘use of force’.

As Philip Alston affirms in his 2010 Report to the UN General Assembly on extrajudicial, summary or arbitrary executions, ‘a missile fired from a drone is no different from any other commonly used weapon, including a gun fired by a soldier or a helicopter or gunship that fires missiles. The critical legal question is the same for each weapon: whether its specific use complies with IHL’9 and, moreover, whether the civilian use of UV technology abides by the principles and provisions of the ECHR legal framework.10 Interestingly, some claim that it is feasible to program UAVs to respect these principles and provisions.
According to Ronald Arkin, the aim of R&D in UAVs is to create ‘guns’ that comply with principles of conduct such as strict necessity and humanity, while avoiding psychological problems such as ‘scenario fulfillment’.11 However, crucial problems persist when embedding legal rules in ‘intelligent machines.’ The formalisation of the set of rules does not only have to do with top normative concepts such as notions of validity, obligation, prohibition, or permission. These rules present highly context-dependent normative concepts such as proportionality and discrimination in the use of force, which are ‘much more complex than Asimov’s laws,’ because

8 Convention for the Protection of Human Rights and Fundamental Freedoms, opened for signature 4 November 1950, 213 UNTS 221 (entered into force 3 September 1953) (‘ECHR Convention’).
9 Philip Alston, Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, United Nations General Assembly, Human Rights Council, A/HRC/14/24/Add.6 (28 May 2010).
10 For the restrictions set up by the ECHR jurisprudence see Gillow v UK (1986) 11 EHRR 335; Leander v Sweden (1987) 9 EHRR 433; Klass et al v Germany (1978) 28 Eur Court HR (ser A) 17, etc.
11 R C Arkin, ‘Governing Lethal Behaviour: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture’ (Report No GIT-GVU-07-11, Georgia Institute of Technology’s GVU Centre, 2007).


provisions in human rights law and constitutional safeguards ‘leave much room for contradictory or vague imperatives, which may result in undesired and unexpected behavior in robots’.12 In the field of military autonomous machines, it is acknowledged that ‘whether or not robotic weaponry will soon be able to surmount the technical challenge of this moral imperative (at least as well as human soldiers)’, ie, not to harm civilians, ‘remains unknown’.13 In research sponsored by the US Navy, Lin, Bekey and Abney even admit that ‘we may paradoxically need to use the first deaths to determine the level of risk’.14

On this basis, today’s state-of-the-art technology and current legal frameworks suggest we should discipline the civilian use of autonomous UAV ‘guns’ according to the principle of precaution.15 In the foreseeable future, UV technology will not be sufficiently developed to program machines to grasp what ‘is necessary in a democratic society in the interests of national security’.16 A more urgent threat is presented by the use of such machines regardless of previous testing: in this context, the focus should be on the civil (rather than political and military) restraints on the legal use of UAV ‘guns.’ Although the deployment of drone technology would be severely restricted by constitutional and human rights law (eg, the aforementioned case law of the ECHR), it does not follow that a number of drone applications for border security, policing, patrolling, etc, are illegitimate. Consider, for example, the non-lethal engagement of suspects, arrests by drones, monitoring operations and UAVs specifically designed for non-military surveillance, eg, a tiny aerial device for patrolling atomic plants, banks, or even private villas.
In most of these cases, the civilian use of UV technology raises problems of human responsibility concerning, say, contractual obligations, rather than the protection of human rights, constitutional principles or matters of military and criminal accountability. As a result, legal issues concerning UV technology will remain focused on the technical accuracy of the means rather than the legitimacy of the goal to be achieved through such devices. Let me clarify the difference with the second example of UVs as ‘ships,’ that is, unmanned water-surface vehicles.

3 Ships

The second case related to the (legal) use of UV technology is offered by water-surface and UUV applications such as in remote exploration work and

12 P Lin, G Bekey and K Abney, ‘Autonomous Military Robotics: Risk, Ethics, and Design’ (Report, Ethics and Emerging Sciences Group, California Polytechnic State University, 20 December 2008).
13 Ibid.
14 Ibid.
15 G Veruggio, ‘Euron Roboethics Roadmap’ (Proceedings of the Euron Roboethics Atelier, Genoa, Italy, 27 February – 3 March 2006).
16 ECHR Convention, opened for signature 4 November 1950, 213 UNTS 221 (entered into force 3 September 1953) art 8 (on people’s privacy).


repairs of pipelines, oil rigs, and so on.17 Among UV devices, this is one of the most developed fields: some have even spoken of a ‘golden age’ of UUV technology that ‘occurred more than a decade before the UAV revolution’.18 Whilst development in UUVs and the increase in their civil use is likely to force us to rethink many aspects of today’s legal framework in maritime law (eg, the 1972 IMO COLREGs Convention),19 a new generation of cases involving contractual obligations will emerge as well.

The focus should be on whether, from a legal viewpoint, UUVs work within a given set of instructions so as to achieve specific goals, eg, to undertake repairs to oil rigs in the Caribbean Sea. There are UUVs that ‘autonomously’ undertake such work by preventing damage, alerting controllers, and so on. In cases concerning the production and use of such ‘ships’, legal issues will mostly be entwined with conditions, terms and clauses that depend both on the voluntary agreement between private individuals that a court will enforce, and on the commercial nature of the agreement. Hence, lawyers will have to reflect on whether software and hardware developers, manufacturers or system engineers should be held responsible for the design and production of UUVs: the first step is to ascertain the quality of the vehicle, that is, whether it has been properly programmed and constructed so as to attain a given class of results. As the field of military robotics technology shows, problems of contractual responsibility are at stake any time producers deny liability.20 In the case of UUVs, it is thus crucial to discern matters of contractual responsibility by distinguishing the goal a specific UUV is ‘autonomously’ pursuing.
Although ‘determining fault in complex software and hardware is already difficult’,21 complex software and hardware applications for some artificial agents, eg, the da Vinci surgical robot, may raise engineering problems that scholars routinely address as part of their everyday work and development research. By restricting the range of possible uses of a given artifact (eg, controlling the settings of operating rooms, as in the case of da Vinci surgical robots), we can determine the reliability of a new generation of autonomous machines such as today’s UV ‘ships’. On the basis of how likely it is for an event to occur, what consequences it entails and its costs, work on da Vinci robots22 shows that

17 Gogarty and Hagger, above n 1, 108-109.
18 Ibid 104.
19 Ibid 114.
20 For instance, it was alleged (but ultimately shown to be unsubstantiated) that Foster-Miller Special Weapons Observation Reconnaissance Detection System (SWORDS) units employed by the US Army experienced unintended movements: The Inside Story of the SWORDS Armed Robot ‘Pullout’ in Iraq: Update (1 October 2009) Popular Mechanics <http://www.popularmechanics.com/technology/gadgets/4258963>.
21 Gogarty and Hagger, above n 1, 123.
22 L S Borden, P K Kozlowski, C R Porter and J M Corman, ‘Mechanical failure rate of da Vinci robot system’ (2007) 14(2) Canadian Journal of Urology 3499.


only 9 out of 350 interventions (2.6%) could not be completed due to device malfunctions. Likewise, Andonian et al23 claim that only 4.8% of the malfunctions that occurred in a New York urology institute from 2000 to 2007 were related to patient injury. These statistics suggest a spectrum of possible civilian uses of UV technology.

In contrast to contractual obligations for the design and production of UAV ‘guns’, it seems that UUV ‘ships’ that autonomously repair oil rigs in the ocean do not really affect today’s conceptual frameworks and legal systems. While the use of autonomous lethal weapons is currently challenging international humanitarian law,24 the legitimacy of other automated devices such as UV ‘ships’ can be grasped by lawyers using the same concepts developed for previous technological innovations, that is, in terms of the probability of events, their consequences and costs.

At the other end of the spectrum, however, all UV applications involve ‘third parties.’ As with the civilian use of UAV ‘guns’, matters of extra-contractual responsibility and issues of faultless or strict liability concern the use of autonomous UUVs as well, eg, fatalities on oil rigs due to malfunctions of ‘intelligent ships.’ From this perspective, even reliable applications of UV technology such as some types of UUVs may trigger a number of novel and tricky questions when defining the obligations between private persons imposed by the government so as to compensate damage done by wrongdoing. Besides the panoply of strict contractual obligations, there are questions of tortious liability: ‘how will fault be determined when a human and computer are sharing the reins of a vehicle under traffic legislation? Indeed, who will be at fault if the vehicle has an accident when it is clear only the computer AI was in control?’25 Following the analysis of The Laws of Man over Vehicles Unmanned, let me examine the third case of UV technology, that is, UGVs.
A new generation of artificial ‘chauffeurs’ shed further light on a novel aspect of legal responsibility for the production and use of UV technology.

4 Chauffeurs

From a legal point of view, unmanned ground vehicles offer some of the most challenging applications of UV technology. Whether or not future UGVs will need driving licenses, special licenses, and so on, UV ‘cars’ and ‘chauffeurs’ allow us to further understand the new legal issues raised by the civilian use of UV ‘guns’ and ‘ships.’ To start with, the complexity of the environment that designers and producers have to address increases the uncertainty and unpredictability of UGVs driving automatically on the freeways. As a matter of risk, these UGVs are more similar to UV ‘guns’ than ‘ships’ and may even require that we apply the principle of precaution.26 However, contrary to the use of UV ‘guns’, the risks of employing UV ‘cars’ mostly concern problems of extra-contractual responsibility and the field of strict liability, rather than human rights law and constitutional safeguards. Whilst proponents of UV technology call for ‘a major review and clarification of existing civilian traffic safety regimes, and even the creation of a specific regulatory system for UVs’,27 it is important to distinguish three different kinds of liability when designing, producing and employing UGV ‘chauffeurs’. Extra-contractual responsibility may in fact depend on intentional torts, negligence-related tortious liability, or faultless (strict) liability. There is liability for an intentional tort when a person has voluntarily performed a wrongful action. Liability for negligence arises from a lack of due care, when the ‘reasonable’ person fails to guard against ‘foreseeable’ harm. Strict liability is established (eg, liability for defective products) when there is no illicit or culpable behavior but, say, a lack of information about certain features of the artifact. In the case of UV ‘cars’, the difficult part of the legal framework consists in deciding where to cut the chain of responsibility and how to apportion liability in cases of contributory negligence. Significantly, some speak of a ‘failure of causation’ due to the impossibility of attributing responsibility on the grounds of ‘reasonable foreseeability’, since it would be hard to predict what types of harm may supervene.28 A traditional option is to make insurance compulsory for UGVs, as has been done in most legal systems with the former generation of ‘cars’.

23 S Andonian, Z Okeke, A Rastinehad, B A Vanderkrink and L Richstone, ‘Device failures associated with patient injuries during robot-assisted laparoscopic surgeries: a comprehensive review of FDA MAUDE database’ (2008) 15(1) Canadian Journal of Urology 3912.
24 Alston, above n 9.
25 Gogarty and Hagger, above n 1, 120-121.
Whether the insurance model illustrated by Curtis Karnow29 or the authentication model of Andrew Katz30 is adopted, the aim should be to avert the risk that people think twice before producing and using UGVs because they would be liable regardless of their intent or fault. Thus, in order to prevent unwise strict-liability policies in the field (eg, treating UGV ‘chauffeurs’ as a new kind of legal employee), we should aim to strike a fair balance between people’s desire not to be ruined by producing and using UV technology, and the interest of others in being protected when interacting or transacting with such UGV ‘cars.’ Work on multi-agent systems and distributed ethics, involving both human and artificial agents,31 suggests law-makers should endorse forms of proto-limited responsibility such as the ‘peculium’ in ancient Roman law.32 Consider a form of distributed responsibility (both legal and moral) such as in UGV ‘car sharing’. By adopting insurance mechanisms and determining that the liability of designers, producers, and even users of this technology should be limited to the very value of the artifact’s ‘peculium’ (that is, in the terms of the Digest of Justinian, ‘the sum of money or property granted by the head of the household to a slave or son-in-power’), the aim is to distribute risk and avert regulatory bases for strict liability, as occurs when employers are held responsible for any illegal action their employees engage in whilst at work. An effective way to tackle tricky notions of ‘due care’, ‘reasonable person’, and ‘foreseeable harm’ for a ‘human and computer sharing the reins of a vehicle under traffic legislation’33 is offered by the forms of limited responsibility and insurance set up by lawyers in ancient Roman law. As an alternative to strict-liability policies, granting a sum of money as the peculium of autonomous machines, eg, accountable AI car-sharing ‘chauffeurs’, would solve a number of practical issues when acting and transacting with them, since the peculium guarantees that obligations would be met. However, even such solutions have their limits.34 Imagine a smart UGV car as an expert system that gains knowledge and skills from its own ‘decisions’, while learning from the features of the environment and from the living beings who inhabit it. This ‘chauffeur’ will respond to stimuli by changing the values of its own properties and, what is more, it will modify these states without external stimuli, while improving the rules through which those very properties change.

26 Veruggio, above n 15.
27 Gogarty and Hagger, above n 1, 121.
28 C E A Karnow, ‘Liability for distributed artificial intelligence’ (1996) 11 Berkeley Technology Law Journal 147.
29 Ibid.
30 A Katz, Intelligent Agents and Internet Commerce in Ancient Rome (2008) Society for Computers and Law <http://www.scl.org/site.aspx?i=ed1107> accessed 15 August 2010.
The result is that the same model of ‘car’ will behave quite differently after a few days or weeks, depending on the ways we treat our UGV machine, that is, my ‘chauffeur’. In the event the ‘car’ causes harm to someone, who is responsible? Remarkably, in the field of robotics, some claim we should frame our (legal) relationship with a novel generation of autonomous machines such as UGV ‘chauffeurs’ as we do with animals, rather than with tin machines or smart fridges.35 Regardless of whether they are natural (eg, animals) or artificial (eg, robots), we would be dealing with agents that are interactive, autonomous, and adaptable,36 thereby forcing us to co-evolve. Although, from a legal viewpoint, parallelisms between animals and artificial agents may be misleading, I concede that some metaphorical convergences between animals, robots and (why not?) UGV ‘chauffeurs’ are fruitful. The parallelism between animals and artificial agents, on the one hand, has its limits because, most of the time, legal systems hold people strictly liable for the behavior of their animals. As noted, this policy seems unwise for the development of, and further research into, UGV technology. On the other hand, if we consider how legal systems provide limits to such faultless liability, as typically happens when owners of animals prove that a fortuitous event occurred, the parallelism casts further light on the event of a UGV ‘chauffeur’ causing harm to someone on the highway. Should owners evade responsibility when a fortuitous event happens? Or should liability be denied when the harm was inevitable?

31 L Floridi, The Ethics of Distributed Responsibility (forthcoming).
32 Katz, above n 30; U Pagallo, ‘Robotrust and legal responsibility’ (2010) 23 Knowledge, Technology & Policy 367.
33 Gogarty and Hagger, above n 1, 120.
34 U Pagallo, ‘Killers, Fridges, and Slaves: A Legal Journey in Robotics’ (2011) AI & Society, DOI: 10.1007/s00146-010-0316-0.
35 D McFarland, Guilty Robots, Happy Dogs: The Question of Alien Minds (Oxford University Press, 2008).
36 C Allen, G Varner and J Zinser, ‘Prolegomena to any future artificial moral agent’ (2000) 12 Journal of Experimental and Theoretical Artificial Intelligence 251.

5 Conclusion

The civilian use of UV technology is rapidly impacting on today’s legal framework. Three ideal-typical cases, namely UAV applications for national security and police functions (ie, ‘guns’), UUVs and surface vehicles for rescue operations (ie, ‘ships’), and UGVs for smart ground transport (ie, ‘chauffeurs’), raise a new generation of problems: the conditions for employing autonomous weapons under international humanitarian law, amendments to current provisions of civilian traffic safety regimes, different strict-liability policies and burdens of proof for establishing responsibility before the law, and so forth. In dealing with the civilian use of UV technology, we should be mindful of the spectrum of its daily applications. At one end of the spectrum, precaution should be required in accordance with today’s state-of-the-art technology and legal frameworks in both constitutional and human rights law.37 The principle especially applies to a number of UAVs used for national security and police functions, besides those UGVs used in the field of public transport. However, at the other end of the spectrum, there is a class of UV applications where the ‘principle of openness’ should prevail over the principle of precaution, eg, UUV ‘ships’ for security and rescue operations. The burden of proof should, in other words, fall on those who want to prevent scientists and producers from taking action, because the risks and unpredictability of these UV machines are already under reasonable control.

A 2007 report claimed that out of 135 Predator missions, 50 were lost and 34 had serious accidents.38 However, figures for several UV applications, such as rescue operations, ground transportation and so forth, reveal that the performance of UVs in these applications is closer to the performance statistics of the da Vinci surgical robot than to those of their Predator ‘cousin’ (otherwise, we would not have had the ‘golden age’ of UUVs in the 1990s).39 Hence, I think Gogarty and Hagger are right when they claim that UV technology will create an ‘avalanche of demand’ for regulatory review.40 More particularly, among the most challenging issues triggered by the civilian use of UV artifacts, I include the ‘anthropological principles’ of civil aviation and maritime law, as well as matters of extra-contractual responsibility and policies on strict-liability rules for ground regulation, eg, whether humans might evade responsibility when they prove that a fortuitous event occurred on the highway, or that they could not prevent the plan executed by the autonomous machine, and so on. As stated above, we should strike a fair balance between the desire of producers and owners of UV artifacts not to be ruined by the ‘decisions’ of these machines, and the expectations of UV counterparts (including other UVs) to interact safely with UAV police, UUV rescuers, and UGV smart cars. Although a number of relevant questions remain open, the time is ripe for ‘value judgments’.41 While strict-liability rules seem necessary in the field of UAV ‘guns’, models of limited responsibility and insurance policies demonstrate that there is a wiser way to approach the civilian use of today’s UUV ‘ships’ and the new generation of extremely sophisticated UGV ‘cars’. The more they become ‘intelligent machines’ that interact with humans on a daily basis, the more we need new ways of distributing risk through models of legal accountability that are, as such, irreducible to an aggregation of human beings as the only relevant source of UV action, eg, the ‘peculium’ of AI drivers in the car-sharing sector. By averting any legislation that might prevent the use of UVs due to their unpredictability and the consequent excessive burden on the owners of UVs (rather than, say, on the producers and designers), this twofold regime of liability shows a way to strike a fair balance between openness and precaution.

37 C E Foster, Science and the Precautionary Principle in International Courts and Tribunals (Cambridge University Press, 2011).
38 N Stafford, ‘Spy in the sky’ (2007) 445(7130) Nature 808.
39 Gogarty and Hagger, above n 1.

40 Ibid.
41 Ibid.

Unmanned Vehicles and US Product Liability Law
COMMENT BY STEPHEN S WU*

Introduction
Unmanned vehicles may someday become commonplace in transportation, in industry, and in a wide variety of other applications. Just as automobiles and trucks displaced the horse, unmanned vehicles may become the mainstay for transporting cargo, and autonomous vehicles may displace human-driven automobiles and trucks. When unmanned and autonomous vehicles become commonplace, the legal system will need to address liability arising from the accidents that will occur with their use. Given the United States’ sophisticated system of product liability statutes, doctrines, and guidance, US law may have worldwide influence on the scope and nature of unmanned vehicle liability. Commentators may produce interesting predictions of future legal developments regarding unmanned vehicles. Nonetheless, a look at past case law regarding automatic devices and robots also provides useful insights into the future of unmanned vehicle liability. Courts may face a growing number of novel cases as a result of rapid and accelerating technological development. They may need to weigh the risk of danger from particular vehicle designs against the benefits of such designs before legislatures can consider possible legislation or regulatory oversight to maintain the safety of unmanned vehicles. Moreover, the trajectory of case law to date reveals useful trends that help predict future legal developments. Finally, reviewing past decisions yields intriguing possibilities for future arguments that may someday break new ground in litigation regarding unmanned vehicles. I have already published two earlier versions of this paper.
First, I created an initial version to support the discussion of robotics liability during the 2010 American Bar Association Annual Meeting program entitled ‘When Good Robots Do Bad Things: Responsibility and Liability in an Era of Personal & Service Robotics.’ The American Bar Association Section of Science & Technology Law’s Artificial Intelligence and Robotics Committee presented the program on 6 August 2010 in San Francisco. Next, I revised the paper for a program entitled ‘Sudden Acceleration into the Future: Liability from Autonomous Driving and Robotics’ presented at the 51st Annual Meeting of the Association of Defense Counsel of Northern California and Nevada on 9 December 2010. This version of the paper incorporates additional cases and analysis regarding unmanned vehicles.

* Stephen S Wu is a partner in the Silicon Valley, California law firm Cooke Kobrick & Wu LLP, where he practices business, technology, and intellectual property law. He served as the 2010-2011 Chair of the American Bar Association Section of Science & Technology Law. In addition, he is a lecturer in law teaching technology law at Santa Clara Law School. He can be reached at swu@ckwlaw.com.


The premise of the paper is to provide a summary and overview of the kinds of cases seen to date concerning liability arising from the use of unmanned vehicles and robots generally. It is not intended as an exhaustive listing of all cases involving the actual or possible liability of the manufacturers of unmanned vehicles or robots. Instead, it provides a sample of typical categories of cases against manufacturers and owners of autonomous devices. I have also included other cases that raise interesting points about liability for failing to use robots. In addition, I discuss cases against parties other than manufacturers as a means of exploring the liability manufacturers might have faced had they been sued. I conducted the research for this paper on a legal research service, using search terms related to various theories of liability, variations of the word ‘robot’, and variations of the word ‘autopilot.’ Reviewing autopilot cases is useful because vehicles with autopilots use a form of automation. Autopilot automation is a less advanced form of the kind of automation we see, and will continue to see, in unmanned vehicles, especially autonomous vehicles. Courts may decide that precedents regarding vehicles with autopilots apply by analogy to unmanned vehicles. Part I highlights robotics and autopilot cases concerning issues of liability under traditional theories of product liability, such as strict liability and negligence. Part II discusses cases shedding light on defences used in robotics and autopilot cases. Part III discusses some cases deciding procedural matters in suits relating to robotics and autopilot liability. The facts of these cases pose interesting scenarios for possible liability, despite the fact that the courts rendering these decisions never reached the substance of the disputes. Part IV serves as a discussion of liability generally, noting patterns in the cases and drawing conclusions.

1 Cases Raising Product Liability Issues

Unmanned vehicle and robot manufacturers may face product liability lawsuits raising a number of theories, including strict liability, negligent design, negligent failure to warn, and breach of warranty. Research did not reveal a large number of decided cases involving liability issues under these theories. Nonetheless, the cases revealed by the search raise product liability issues typical of machinery used in industrial manufacturing settings. Moreover, some cases raise product liability issues involving autopilot systems on planes and boats.

1.1 Strict liability and negligent design
A plaintiff asserting a strict liability claim against an unmanned vehicle or robot manufacturer must plead and prove, under a typical state’s law, that the defendant sold a product that was defective and unreasonably dangerous at the time it left the defendant’s hands, that the product reached the plaintiff without substantial change, and that the defect was the proximate cause of the plaintiff’s injuries. Under a negligent design theory, a plaintiff would seek to show that a robotic or unmanned vehicle manufacturer had a duty to exercise reasonable care in manufacturing the machine, that the manufacturer failed to exercise reasonable care in making the machine, and that the defendant’s conduct proximately caused the plaintiff’s damages.

Jones v W + M Automation, Inc,1 is in many ways a typical product liability case. A piece of equipment struck the plaintiff when he entered an area behind a safety fence, and the main issue in the case concerned whether there was a genuine issue of material fact, on summary judgment, as to whether the system was defective when the defendant sold it. The equipment at issue, however, was a robotic gantry loading system used in a General Motors auto plant. GM had purchased the system and then installed it without an interlock system intended to stop the machine when people are present, thereby allowing employees to work on the system within the danger zone behind the safety fence while the system was operating. The system’s gripper arms hit the plaintiff in the head while he was standing behind the safety fence, pinning him against a pedestal and injuring his head. The Occupational Safety and Health Administration fined GM for not installing an interlock system and for allowing employees to work behind the safety fence while the system was operating. The plaintiff sued the manufacturers under, among other theories, strict liability, negligence, failure to warn, and breach of warranty. The court held that summary judgment was appropriate for the component manufacturer defendants under the ‘component part’ doctrine, which states that a manufacturer of a non-defective component part of a product is not liable if its part is incorporated into another product that might be defective. The defendants were also entitled to summary judgment because the plaintiff failed to introduce evidence in opposition to summary judgment showing that the system was defective. GM’s modifications to the system were the apparent cause of the accident.
The plaintiff’s expert said that the system failed to meet the robotics standards of the American National Standards Institute (ANSI), but the court found that the system comported with voluntary industry standards and that the ANSI standards did not apply to the system at issue. Similarly, other cases have found no liability when plaintiffs failed to adduce evidence of a defect. See, eg, Payne v ABB Flexible Automation, Inc2 (decedent crushed by a robot’s gripper arm in an auto wheel plant while working within the ‘cell’ in which the robot operated, with some evidence of user error). In Payne, the court said that the failure to meet ANSI standards, and the problems with unexpected movements admitted by the defendant manufacturer, were irrelevant, because these problems did not cause the accident. Accordingly, it affirmed summary judgment in favour of the manufacturer.

Likewise, in an airplane autopilot case, Ferguson v Bombardier Services Corp,3 the plaintiffs could not succeed in their claims against an autopilot manufacturer because they had no evidence at trial to prove defects in the autopilot. The plaintiffs brought suit, following a catastrophic crash of an Army National Guard plane, against the manufacturers of the plane and the manufacturers of the autopilot. Before trial, the district court excluded the testimony of the plaintiffs’ two experts concerning defects in the autopilot, and thus, during trial, the plaintiffs had no evidence of a defect in the autopilot to support claims against the autopilot manufacturer. Consequently, the district court entered judgment for the autopilot manufacturer as a matter of law, and a jury later found for the remaining defendants.4 The appellate court affirmed the exclusion of the testimony and the judgment in favour of the defendants. The case turned on the cause of the accident. The plaintiffs pointed to various defects in the autopilot system.

1 818 N.Y.S.2d 396 (App. Div. 2006), appeal denied, 862 N.E.2d 790 (N.Y. 2007) (‘Jones’).
2 No. 96-2248, 1997 WL 311586 (8th Cir. Jun. 9, 1997) (per curiam unpublished opinion) (‘Payne’).
The defendants claimed that the aircraft was improperly loaded, and that when a pilot left the cockpit to walk to the rear of the aircraft, the aircraft became unstable and a gust of wind caused the plane to lose control.5 The Court of Appeals held that it was proper to exclude an expert the plaintiffs called to testify as to the defects in the autopilot because, on cross-examination at a pre-trial hearing, the expert admitted that the information from the plane’s flight data recorder was equally consistent with the defendants’ theory of improper loading as with the plaintiffs’ theory of autopilot defects, and thus his testimony would not assist the jury in understanding the evidence.6 The plaintiffs’ other expert was to testify about Federal Aviation Regulations that supposedly required the manufacturer to include a warning annunciator with the autopilot system, but the expert could not show that the regulations actually contained such a requirement.7

In another airplane autopilot case, a federal appeals court affirmed a judgment for the defendants in a negligence and strict liability case against the manufacturer of a small airplane, the airplane’s seller, and the engine manufacturer. In Moe v Avions Marcel Dassault-Breguet Aviation,8 the plaintiffs pointed to a defect in the autopilot as one cause of the crash, among others, saying that if the pilots failed to disengage the autopilot, no warning would sound when the pilots attempted to control the plane manually. In fact, the plaintiffs claimed the autopilot would cause the plane to act against the pilots’ manual control in this situation. The plaintiffs pointed to this factor as one of the causes of the accident.9 The jury, however, found none of the defendants liable where the plaintiffs could not prove a cause of the accident involving the small airplane. The Court of Appeals held that the jury’s answers to special interrogatories on the verdict form were internally consistent, since neither party convinced the jury of the cause of the accident, and thus the plaintiffs failed to establish negligence or causation by a preponderance of the evidence.10 The Court of Appeals also held that the trial court’s instructions were proper.11 Accordingly, the Court of Appeals affirmed the verdicts for the defendants.

Similarly, in Glorvigen v Cirrus Design Corp,12 a fatal airplane crash case, the court found that the plaintiffs (trustees for the deceased pilot and passenger) could not support a strict liability claim against an airplane manufacturer, because they presented no evidence of a manufacturing defect and could not proceed under strict liability on the basis of inadequate instructions. The plaintiffs’ main claim was that Cirrus voluntarily took on the duty to train the decedent pilot on the use of the autopilot system and failed to deliver the proper training to the pilot, including a failure to provide certain follow-up training. Their theory was that the crash occurred because the pilot did not know how to use the autopilot system when weather required flying by instruments. Thus, the plaintiffs did not allege a manufacturing defect.

3 244 Fed. App’x 944 (11th Cir. 2007) (unpublished opinion).
4 Ibid 948.
5 Ibid 947.
6 Ibid 948, 949.
7 Ibid.
8 727 F.2d 917 (10th Cir. 1984).
Since the court did not permit the plaintiffs to proceed in strict liability under an inadequate instructions theory, the court entered summary judgment for Cirrus on the strict liability claim.13 Moreover, the plaintiffs’ implied warranty claim was barred by a disclaimer of implied warranties in the purchase agreement for the plane.14 In addition, the plaintiffs could not support an express warranty theory based on the failure to provide follow-up avionics training.15 The court therefore granted summary judgment on both the implied warranty claim and the express warranty claim, holding that negligence was the only means of proceeding for the plaintiffs.16 As discussed in Section 1.2 below, however, the Court of Appeals of Minnesota later reversed a verdict for the plaintiffs under the negligence theory, holding that Cirrus owed the pilot no duty to provide follow-up training.17

Boucvalt v Sea-Trac Offshore Services, Inc,18 involved claims of gross negligence against an autopilot manufacturer whose system allegedly caused a yacht accident. The owner and passengers of the yacht brought suit after it struck a well jacket owned by Chevron. The suit included claims against Raymarine, which manufactured the yacht’s autopilot system.19 Raymarine moved for summary judgment seeking dismissal of the plaintiffs’ claim for punitive damages under general maritime law. In opposition, the plaintiffs pointed to deposition testimony of witnesses who claimed that the autopilot’s fluxgate compass acted erratically in the presence of large metal objects, that Raymarine refused to test the compass separately from the rest of the autopilot system, and that Raymarine knew about the problems with the compass.20 The Court of Appeal, however, held that Raymarine’s alleged conduct did not rise to the level of ‘reckless or callous disregard for the rights of others, or gross negligence’ and therefore did not support a punitive damages claim.21 The decision did not reach the merits of the negligence claim against Raymarine, but instead simply cut off punitive damages as one possible remedy.

Provenzano v Pearlman, Apat & Futterman, LLP,22 provides an interesting ‘case within a case’ in that it concerned a malpractice claim against a law firm that had represented the plaintiff in a previous case against the manufacturer of a robotic television camera. The camera had hit her in the head in a television studio, where she worked as a hair stylist. Following a defence verdict in the underlying case, the plaintiff sued her attorneys for malpractice. The court held that the plaintiff’s expert report did not raise a genuine issue of material fact showing that she would have prevailed in the underlying suit, and granted summary judgment.

9 Ibid 921-22.
10 Ibid 929-30.
11 Ibid 923-28.
12 No. 06-2661 (PAM/JSM), 2008 WL 398814 (D. Minn. Feb. 11, 2008) (‘Glorvigen’).
13 Ibid *5.
14 Ibid.
15 Ibid *5-6.
16 Ibid *6.
The report failed to explain why the accident stemmed from a design defect, as opposed to recent repairs on the camera by the studio.

Three other cases pose interesting issues regarding the standard of care for robotics manufacturers as compared to the standard of care for human operators of machines. For instance, Arnold v Reuther,23 involved a driver who hit a pedestrian while making a left turn in his car. The court, affirming dismissal of the suit, held that the defendant driver did not have the ‘last clear chance’ to avoid the accident, because he could not have prevented the accident after the plaintiff darted out onto the street. In so holding, the court stated:

    A human being, no matter how efficient, is not a mechanical robot and does not possess the ability of a radar machine to discover danger before it becomes manifest. Some allowance, however slight, must be made for human frailties and for reaction, and if any allowance whatever is made for the fact that a human being must require a fraction of a second for reaction and then cannot respond with the mechanical speed and accuracy such as is found in modern mechanical devices, it must be realized that there was nothing that Reuther, a human being, could have done to have avoided the unfortunate result which the negligence of Mrs. Arnold brought upon herself.24

This decision raises the possibility that, once we have autonomous vehicles, the courts will raise the standard of care for manufacturers to avoid collisions, since robots can act faster and more accurately than humans. Moreover, if humans’ driving records someday prove worse than those of typical autonomous vehicles, it may even be negligent for humans to drive themselves. The more general question is whether failing to use a robot may someday be negligence. Indeed, in one case, a court found sufficient evidence of a defect in a conventional ventilator used to supply oxygen to patients, because of testimony that the device used outdated technology and could have included a redundant backup system and a robotic monitoring system. That is, the lack of such systems made the machine unreasonably dangerous.25 Likewise, in Mracek v Bryn Mawr Hosp,26 a patient sued a hospital following manual surgery that allegedly resulted in damages; the human doctors performed the surgery after the hospital’s ‘da Vinci robot’ malfunctioned and could not perform it.

17 See Glorvigen v Cirrus Design Corp, 796 N.W.2d 541 (Minn. Ct. App. 2011).
18 943 So. 2d 1204 (La. Ct. App. 2006).
19 Ibid 1205.
20 Ibid 1208-09.
21 Ibid 1209.
22 04-CV-5394 (SLT)(RLM), 2008 WL 4724581 (E.D.N.Y. Oct. 24, 2008) (‘Provenzano’).
23 92 So. 2d 593 (La. Ct. App. 1957).
Although it is not clear from the reported decision, the plaintiff apparently alleged that the hospital’s doctors negligently performed the surgery. The plaintiff, however, also sued the manufacturer for providing a malfunctioning device that could not perform the surgery with the precision he needed. In other words, the plaintiff apparently believed that a functioning robot would have performed the surgery better than the human doctors. The decision concerned the manufacturer’s liability after the voluntary dismissal of the hospital.

24 Ibid 596.
25 Redfield v Beverly Health & Rehabilitation Servs, Inc, 42 S.W.3d 703, 710 (Mo. Ct. App. 2001) (affirming denial of defendant’s motion for new trial) (the plaintiff’s decedent died after his ventilator was unplugged and his oxygen tube was disconnected).
26 363 Fed. App’x 925 (3d Cir. 2010) (unpublished opinion), cert. denied, 131 S. Ct. 82 (2010).


Unmanned Vehicles and US Product Liability Law


Although the failure to use a robot may someday create liability, the flip side is that humans may still have a duty of care to avoid accidents, even after delegating some of the operation to a robot. This liability may arise in the use of semi-autonomous machines, like autopilots. In Brouse v United States,27 the court held that a pilot had a duty to be on the lookout to prevent air-to-air collisions while flying under ‘robot control’ (evidently, autopilot). The Army ‘Black Widow’ fighter struck the plaintiff’s Aeronca Cub after the pilot failed to notice it. Likewise, one admiralty case, Shaun Fisheries, Inc v Lanegan,28 found no defect in an autopilot, but apportioned liability to the operator for, among other things, failure to keep a lookout.

In Shaun Fisheries, the fishing boat F/V Shaun had the right of way and proceeded straight, while the tug Mary Catherine and her tow, the barge Bandon, also proceeded straight, and the vessels headed toward an eventual crossing and collision. Mary Catherine’s owner claimed that the autopilot on the Shaun malfunctioned, causing a sudden swerve, although the suit did not include the autopilot manufacturer. The court found no defect in the autopilot, especially in light of expert testimony that the autopilot was engaged and the rudder was amidships and only slightly canted to port, just as would be expected for a vessel holding a straight course under a functioning autopilot.29

Despite the lack of any defect in the autopilot, the court found each vessel 50% at fault, because the Shaun’s captain did not keep a proper lookout or avoid an accident once it became apparent that the Mary Catherine would not yield.30 One possible cause was that the Shaun was on autopilot and the operators failed to remain alert enough to disengage it once the craft were apparently on a collision course.
As a result, the owner of the Mary Catherine was able to shift half the liability to the Shaun’s owner and captain. In other words, the claim against the operators for failing to disengage the autopilot was more successful than the attempt to blame the autopilot itself. Moreover, the human lookouts bore some of the liability for the crash.

1.2 Failure to warn
An unmanned vehicle or robot manufacturer may be held liable for a negligent failure to warn if the manufacturer knows or has reason to know that the product is likely to be dangerous for its intended use, has no reason to believe that users will realise its dangerous condition, and fails to exercise reasonable care to inform users of that condition.

27 83 F. Supp. 373 (N.D. Ohio 1949) (post-trial opinion).
28 Shaun Fisheries, Inc v Lanegan, Nos. 82-529, 82-687, 82-6148-C, 1983 WL 699 (D. Ore. Sept. 21, 1983) (‘Shaun Fisheries’).
29 Ibid *2.
30 Ibid *3-5.


Nonetheless, if the danger is open and obvious, the manufacturer has no duty to warn users. For instance, Jones,31 discussed above in Section 1.1, addressed the plaintiff’s failure to warn claim. The court held that the danger to the plaintiff of going behind the safety fence while the system was in operation was open and obvious. Accordingly, the court held that summary judgment should have been granted to the defendants on the failure to warn theory.

Glorvigen,32 the plane autopilot case discussed in the previous section, also involved a failure to warn claim that continued after the federal court’s summary judgment decision. The federal court later remanded the case to a Minnesota state court for lack of jurisdiction. The state trial court then held a jury trial resulting in a verdict in favour of the plaintiffs. The plaintiffs’ theory was that Cirrus had a duty to provide adequate instructions concerning the autopilot system on the plane, and that Cirrus’ failure to provide sufficient autopilot instructions caused the crash. The Court of Appeals of Minnesota reversed the verdict in favour of the plaintiffs.33 The court held that Cirrus had no duty to provide follow-up training to the pilot regarding the autopilot system. Accordingly, the plaintiffs’ negligent failure to warn theory failed.34 The court also held that the plaintiffs’ negligence claims against Cirrus sounded in educational malpractice, and that such claims are barred as a matter of law.35 The court cited public policy reasons to bar educational malpractice claims, including the lack of a standard of care, uncertainties about causation because students’ conduct may be an intervening cause of damages, the possible flood of litigation against educational institutions, and the desire to avoid court oversight of the daily operations of educational institutions.36

1.3 Claims against operators of vehicles using autopilots
Research revealed cases involving claims against the operators of vehicles using autopilot equipment. A passenger seeking damages against an airline successfully obtained reversal of a trial court judgment against her in Nelson v American Airlines, Inc.37 The plaintiff, Mary Nelson, was a passenger on an American flight to Los Angeles. Shortly after the plane reached its cruising

31 818 N.Y.S.2d 396, 399.
32 No. 06-2661 (PAM/JSM), 2008 WL 398814 (D. Minn. Feb. 11, 2008).
33 796 N.W.2d 541 (Minn. Ct. App. 2011).
34 Ibid 549-52.
35 Ibid 552-58.
36 Ibid 554 (quoting Alsides v Brown Inst, Ltd, 592 N.W.2d 468, 472 (Minn. Ct. App. 1999)).
37 263 Cal. App. 2d 742 (1968).


altitude, the pilots turned on the autopilot to hold the plane’s altitude, but the autopilot caused the plane to suddenly dive. Nelson claimed damages for injuries allegedly sustained during the sudden dive.38 In the absence of any objection, the trial court applied the doctrine of res ipsa loquitur, which raised an inference of negligence by the airline. The Court of Appeal held that since American was a public carrier, it could rebut the inference of negligence by proving that it exercised the utmost care and diligence, through proof that it did not cause the accident or that an unknown and unpreventable cause resulted in the accident.39 The Court of Appeal said that a defect in the autopilot could have caused the accident.40 Nonetheless, the Court of Appeal found no evidence to eliminate negligent maintenance as a cause of the accident and therefore reversed the trial court’s judgment in American’s favour.41 Interestingly, Nelson had not sued the manufacturer of the autopilot or the airplane, but nonetheless obtained reversal of the judgment against her.

Similarly, in the Shaun Fisheries42 barge collision case discussed in the previous section, the barge owner had mixed results in blaming a fishing boat’s autopilot for a collision. As discussed above, the court found no defect in the autopilot but found liability against the operator of the fishing boat, apparently for failing to keep a lookout, disengage the autopilot, and avoid the collision.

1.4 Intentional tort claims against employers
One of the common types of case I found in my research concerned the limitations placed on the ability of an injured employee to sue his or her employer, due to the exclusivity of recovery under the workers’ compensation system. Employees generally cannot sue their employers for workplace injuries if state law makes workers’ compensation the exclusive means of recovering compensation for such injuries. Nonetheless, an exception exists if the employer’s conduct rises to the level of intentional conduct, in which case an employee can seek compensation in excess of workers’ compensation by suing his or her employer. I found three workers’ compensation cases involving the use of robots: Miller v Rubbermaid Inc;43 State ex rel Scott Fetzer Co v Industrial Comm’n of Ohio;44 and

38 Ibid 744-45.
39 Ibid 746.
40 Ibid.
41 Ibid 747.
42 Nos. 82-529, 82-687, 82-6148-C, 1983 WL 699 (D. Ore. Sept. 21, 1983).
43 No. 23466, 2007 WL 1695109 (Jun. 13, 2007) (unpublished opinion) (‘Miller’).
44 692 N.E.2d 195 (Ohio 1998) (per curiam) (‘Scott Fetzer’).


Edens v Loris Bellini, Spa.45

In Miller, the court affirmed the trial court’s entry of summary judgment in favour of the employer where the record showed that the decedent employee placed himself in danger of being crushed by a robot used in plastic injection moulding, the employer did not know of any dangerous condition of the machine, there was no evidence of safety violations, and the employer had not inadequately trained the decedent. Under these circumstances, there was no genuine issue of fact regarding intentional conduct by the employer.

In Scott Fetzer, the Supreme Court of Ohio affirmed an order denying a writ of mandamus to an employer after the Industrial Commission of Ohio approved additional workers’ compensation for an employee injured by a robotic die cast machine that closed and severely injured the worker’s upper body. The employee’s job was to remove bad parts stuck in the die from the danger zone of the machine. The employer had removed safety controls on the machine and when the die unexpectedly closed, the worker sustained injury. The robotic device that normally removed good parts from the die did not always work, requiring workers to manually remove bad parts from the danger zone. The court affirmed a finding of safety violations in light of these circumstances.

In Edens, the plaintiff’s decedent was struck and killed by a robotic shuttle used to transport wool to and from dye vats. The decedent’s co-workers had disconnected safety mats that stopped the shuttle if someone stepped on them. They disconnected the mats because the repeated stopping of the shuttle ‘aggravated’ the shuttle operator. When the decedent was checking for leakage from the vats, a co-worker activated the shuttle, which hit and killed him.
The court affirmed a dismissal of the employer and co-worker defendants for lack of subject matter jurisdiction, in light of the court’s finding that the decedent was a ‘statutory employee’ subject to workers’ compensation laws, the exclusivity of workers’ compensation, and the lack of evidence that the employer or co-workers intended to harm the decedent.

1.5 Causation issues
A plaintiff suing an unmanned vehicle or robot manufacturer must prove that the defect or failure to warn was the proximate cause of the plaintiff’s damages. Sometimes, the intervening act of a third party breaks the causal chain between the defendant’s act or omission on one hand, and the plaintiff’s damages on the other. For instance, misuse or modification46 of the product may constitute an intervening cause.

45 597 S.E.2d 863 (S.C. Ct. App. 2004) (‘Edens’).
46 For instance, if, in Scott Fetzer, the employee had sued the manufacturer of the robotic die machine, the court may have found that the employer’s modification of the machine to remove safety guards was a superseding cause of the employee’s injuries.


Three of the cases discussed above involved causation issues. In the da Vinci medical robot malfunction case, the court found no evidence in the record that the robot’s malfunction and failure to operate caused the injuries resulting from surgery conducted by human doctors.47 Payne, which involved a robot crushing a worker in its ‘cell’, held that the plaintiff failed to provide evidence that a programming error or lack of a safety feature was a proximate cause of the worker’s injuries.48 In the television camera case, Provenzano, the court found that the plaintiff failed to prove that defects within the television camera caused it to strike the plaintiff, as opposed to negligent repairs.49

The question of causation may also arise if a plaintiff tries to sue a manufacturer where, but for an automated machine, the plaintiff would not have been injured, but it is difficult to say that the machine was a substantial factor in the injury. In the following cases, the plaintiff did not sue the manufacturer, but if he had, questions of causation would have arisen. For instance, in Leister v Schwans Sales Enters, Inc,50 the plaintiff pickup truck driver was injured while bringing fencing materials to the defendant’s pizza facility for the purpose of building a fence around a palletizing robot. An employee of the defendant hit the pickup with his delivery truck. The court held that the plaintiff was a statutory employee of the defendant and his claims were barred because of the exclusivity of workers’ compensation.51

Imagine, though, if the plaintiff had sued the manufacturer of the robot. He could claim that, but for the robot, he would not have been injured: he was in his pickup truck to build a fence around the robot, and had the defendant not purchased the robot, he would not have been on site to build the fence and therefore would not have been injured. In that sense, the robot ‘caused’ the accident.
Nonetheless, such a claim would almost certainly fail, because the robot did not meaningfully contribute to the accident. Moreover, the conduct of the defendant’s employee driving the truck was a superseding cause of the plaintiff’s damages.

Likewise, in Romano v Browne,52 the court reversed an order denying a landlord summary judgment for premises liability after a woman tripped on a power cord used to recharge a mail robot in the mail room. The court blamed the tenant for leaving the cord exposed and found no defect in the premises. If the plaintiff had sued the manufacturer of the robot for having a power cord that could trip people, she would also have found it difficult to prove that a design defect caused the injury. Machines require power cords, and the danger of

47 Mracek v Bryn Mawr Hosp, 363 Fed. App’x 925, 927 (3d Cir. 2010) (unpublished opinion), cert. denied, 131 S. Ct. 82 (2010).
48 1997 WL 311586, *2, *3.
49 2008 WL 4724581, *5, *6.
50 No. 92-2227-JWL, 1993 WL 105132 (D. Kan. Mar. 4, 1993).
51 Ibid *2-*5.
52 579 N.Y.S.2d 400 (App. Div. 1992).


tripping is obvious. The owner of the device, the tenant in this case, controls the device and where to place it while recharging, so as to avoid hazards to those walking by. Thus, the tenant’s conduct would be a superseding cause of the injuries.53

2 Defences to Product Liability Cases

The robotics and autopilot cases I found raise five defences to liability claims that will affect liability arising from the manufacture and use of unmanned vehicles and robots. These defences, of course, are only a subset of the possible defences to a products liability claim. The most common defence was the workers’ compensation exclusivity defence, which bars an employee’s claim against an employer for workplace injuries, including those sustained during the use of a robot. In the workers’ compensation cases discussed above, the plaintiff was unable to overcome this defence.54 Nonetheless, in Scott Fetzer, the employer could not avoid enhanced compensation for the employee in light of the employer’s safety violations.

Likewise, Behurst v Crown Cork & Seal USA, Inc,55 did not foreclose an intentional tort claim against an employer. The case arose out of an accident involving a blank transfer robot (moving metal from die to die) used in a can-making plant. The plaintiff’s decedent was killed when, apparently, she was trapped in the danger zone when the machine restarted and the access door closed behind her. The court found a jury question as to an intentional tort claim against the employer in light of:
• The employer’s knowledge of the flawed performance of the machinery and a history of prior accidents.
• The employer’s alleged refusal to reprogram the machine.
• The employer’s tolerance or encouragement of unsafe maintenance practices.

53 See also Royal Ins Co of Am v Crowne Investments, Inc, 903 So.2d 802 (Al. 2004) (reversing denial of motion to set aside default judgment after Royal failed to receive service of process when its robotic mail system malfunctioned; malfunction of the system provided a reason why Royal’s default was excused). If the judgment against Royal had been upheld, and the robot’s malfunction led to a default judgment, it is difficult to see how a court could find that a defect in the robot proximately caused a default judgment. Such a result is not reasonably foreseeable for a general-purpose mail machine.
54 In addition, the Michigan Supreme Court held that summary judgment was appropriate on a manufacturer’s claim for indemnity against an employer after an accident caused by a malfunctioning robotic machine, Williams v Litton Sys, Inc, 449 N.W.2d 669 (Mich. 1989). Given the exclusivity of the workers’ compensation system, and the lack of an express indemnity in the agreement between the manufacturer and the employer, claims that the manufacturer was entitled to implied indemnity for the employer’s failure to train the worker were barred.
55 No. 04-1261-HA, 2007 WL 987452 (D. Ore. Mar. 30, 2007) (‘Behurst’).


• The employer’s insistence on understaffing its production line.
• The employer’s placement of the decedent alone in the production line without sufficient training.56

The second defence raised by these cases is the component parts doctrine. Under this doctrine, the manufacturer of a non-defective component part is not liable if it is incorporated into a defective product that causes injury to the plaintiff. In Davis v Komatsu America Industries Corp,57 the Tennessee Supreme Court answered a certified question from the US Court of Appeals for the Sixth Circuit and held that Tennessee law recognises the component parts doctrine. The case involved a Sharp plant that manufactures microwaves. The system stamps out metal parts, and robots transfer the parts from press to press. Following a stoppage in the line, the plaintiff was removing a piece of metal waste when a co-worker restarted the line and a press injured the plaintiff’s hand. The manufacturer of the equipment was not liable since the machinery left its hands in a non-defective condition, and the employer had disabled a safety sensor in order to allow the equipment to operate. The alleged defects in the equipment did not concern the robots, but rather the stamping equipment. In Jones,58 discussed above, some of the defendants were component manufacturers that provided non-defective products. The court affirmed summary judgment in favour of these manufacturers.

Jones also involved a third defence — subsequent modification of the equipment. The employer installed the robotic system without an interlock safety system. Such a system could have stopped the machine while the plaintiff was in harm’s way. Accordingly, the manufacturers were not liable.

The fourth defence appearing in the cases is the government contractor defence.
The US Supreme Court recognised the government contractor defence in Boyle v United Technologies Corp.59 The defence immunises suppliers of equipment to the government from state law liability for design defects in the equipment where the federal government approved reasonably precise specifications for the equipment, the equipment conformed to those specifications, and the supplier warned the federal government about dangers in the use of the equipment known to the supplier but not to the government.60 The purpose of the defence is to protect the exercise of the federal government’s discretion in designing equipment it needs and to

56 Ibid *6.
57 42 S.W.3d 34 (Tenn. 2001).
58 818 N.Y.S.2d 396, 398 (App. Div. 2006), appeal denied, 862 N.E.2d 790 (N.Y. 2007).
59 487 U.S. 500 (1988) (‘Boyle’).
60 See ibid 512.


prevent circumvention of the limitations on federal government liability in the Federal Tort Claims Act.61 A federal district court applied Boyle to immunise an aircraft manufacturer and an autopilot manufacturer following a catastrophic crash of an Air Force plane.62 The plane crashed after a sudden loss of control and loss of electrical power. The plaintiffs’ theory was that the autopilot malfunctioned, causing a loss of control, and that other defects prevented the pilots from recovering. The plaintiffs sued the manufacturers of the aircraft and of the autopilot (among other defendants) under theories of strict liability, negligence, and breach of warranty.63 The court held that the airplane manufacturer and autopilot manufacturer were immune from suit under the government contractor defence. The Air Force, in an exercise of its discretion, created specifications for the autopilot and other parts at issue, there was no dispute that the parts in question conformed to Air Force specifications, and the Air Force was warned about safety issues arising from the specifications.64 Accordingly, the case was ‘precisely the kind of case to which the government contractor defence was intended to apply.’65 Manufacturers of unmanned vehicles and robots may be able to take advantage of the government contractor defence when they build them to government specifications and warn the government of design deficiencies.

The final defence appears in Housand v Bra-Con Industries, Inc.66 In that case, a mechanical arm on an assembly line for GM minivans struck the plaintiff while he was cleaning an oil spill around the machines during a work break. Someone restarted the line, and the accident occurred. The court granted summary judgment because of, among other things, the ‘sophisticated user’ defence. Under this defence, the manufacturer is not liable for supplying a product to a knowledgeable user who has reason to know of any dangerous condition in the product.
The court held that GM was a sophisticated user, GM was closely overseeing the use of the machines, and therefore the manufacturer had no duty to prevent or remedy any alleged defect.67

61 See ibid.
62 In re Aircraft Crash Litig. Frederick, Md., May 6, 1981, 752 F. Supp. 1326 (S.D. Ohio 1990), aff’d sub nom. Darling v Boeing Co, 935 F.2d 269 (6th Cir. 1991) (unpublished opinion affirming for the reasons set forth in the district court’s opinion).
63 See 752 F. Supp. at 1332-33.
64 Ibid 1350-58, 1369-71.
65 Ibid 1370.
66 751 F. Supp. 541 (D. Md. 1990).
67 Ibid 544-45.


3 Cases Involving Procedural Issues

Some unmanned vehicle and robotics cases involve claims of product liability, but the court decisions dealt largely with procedural issues. These cases are helpful in showing the factual scenarios that may give rise to liability for manufacturers, although some of them do not shed much light on substantive questions of liability.

The claims against the manufacturer in Behurst,68 discussed above, turned on a procedural issue. Even though the court denied summary judgment to the employer on the plaintiff’s intentional tort claim, the court granted summary judgment to the manufacturer because of a ten-year statute of repose that requires a product liability claim to be brought within ten years after the first purchase; the machine in question had been purchased 12 years before the suit was filed.

Adams v Gates Learjet Corp69 presented another case turning on a time limitation. The plaintiffs were owners of Learjet Model 24 planes who sued for the cost of retrofitting equipment on the Model 24 planes, which the Federal Aviation Administration (FAA) found to be deficient and in violation of applicable regulations. The FAA issued an airworthiness directive, and the plaintiffs incurred costs, including downtime, to fix the deficiencies. Moreover, some owners sold their aircraft at substantial losses, because they could not afford the cost of the modifications. The plaintiffs sought only economic damages since they found out about the deficiencies before any crash occurred. The court held that, assuming the plaintiffs had tort claims, those claims were barred by the two-year Texas tort statute of limitations. The last plane was delivered on 27 April 1979. On 13 June 1983, the FAA issued a proposed airworthiness directive on the Model 24’s deficiencies, and the final directive was published in the Federal Register on 6 September 1984, more than two years before the plaintiffs brought suit on 4 October 1986.
Thus, even if the statute did not start to run until the plaintiffs had constructive notice of the defect in September 1984, their October 1986 suit was barred by the two-year statute of limitations.70 Moreover, the court held that any warranty claim would have a limitation of four years, starting on the date of sale. Since the last delivery was in April 1979, any warranty claim would be barred as of April 1983, and the October 1986 complaint was thus time-barred.71

This case is interesting because the plaintiffs sought only economic losses associated with fixing the deficiencies in the autopilot system of the airplane. Such suits are possible for defects in unmanned vehicles in the future, before

68 No. 04-1261-HA, 2007 WL 987452 (D. Ore. Mar. 30, 2007).
69 711 F. Supp. 1377 (N.D. Tex. 1989) (‘Adams’).
70 Ibid 1379-83.
71 Ibid 1383.


any accident occurs, but the economic loss doctrine may limit plaintiffs to proceeding under contract theories, such as breach of warranty. The Adams court did not reach the merits of the plaintiffs’ claims, but regulatory directives and findings of violations may help plaintiffs proceed under a negligence per se theory (if a tort claim is viable) or under laws against unfair and deceptive trade practices.

In Bynum v ESAB Group, Inc,72 the Michigan Supreme Court reinstated a defence verdict in a case involving an injury to a plaintiff who was operating a robotic welding system. The court did not detail the nature of the plaintiff’s claims against the defendant manufacturer. Rather, the case involved the plaintiff’s claims of alleged racial biases of some of the jurors, which the court rejected.

Some cases involved questions of jurisdiction. For instance, Bou-Matic, LLC v Ollimac Dairy, Inc,73 involved robotic dairy milking machines that allegedly did not work properly and failed to work as represented. Unfortunately, the court did not describe the nature of the alleged defects. In any case, the California dairy company filed a state court complaint against the Wisconsin manufacturer and the California dealer. The Wisconsin manufacturer then filed a diversity action in federal court against the California dairy and California distributor seeking a declaration of rights. Diversity jurisdiction in US federal courts exists where the parties are citizens of different states of the US or the proceedings involve a citizen’s case against a non-resident alien. The question in Bou-Matic was whether the parties should be realigned to make the dairy the plaintiff and the manufacturer and dealer the defendants, which would destroy diversity jurisdiction. In light of the facts and circumstances, the court did, in fact, realign the parties and dismiss the case for lack of jurisdiction.
Rodriguez v Brooks Pari-Automation, Inc,74 concerned a robot the plaintiff was installing in an elevator shaft. While the plaintiff was in the shaft to communicate with co-workers, another co-worker activated the robot, which descended rapidly towards the plaintiff. The plaintiff moved his body out of the way of the robot, but it severed his thumb as it passed him. The question in Rodriguez was whether the Texas resident plaintiff fraudulently joined the Texas building owner, Texas Instruments (TI), in order to defeat diversity jurisdiction. After TI removed the case to federal court, the plaintiff moved to remand. The court held that the plaintiff might have viable claims against TI for premises liability and under a contractual duty to control a contractor and ensure that the contractor does the work in a safe manner. The court found that the plaintiff had not fraudulently joined TI,

72 651 N.W.2d 383 (Mich. 2002).
73 No. CV-F-05-203 OWW/SMS, 2007 WL 2898675 (E.D. Cal. Sept. 28, 2007) (‘Bou-Matic’).
74 No. 3:03CV00515–L, 2003 WL 21517851 (N.D. Tex. Jun. 30, 2003) (‘Rodriguez’).


granted the motion to remand, and did not reach the plaintiff’s strict liability and negligence claims.

Aynesworth v Beech Aircraft Corp75 is another removal case involving strict liability and negligence claims against a plane manufacturer following a crash of a Beech airplane. Beech brought third party claims against the companies involved with the autopilot system in the plane under the theory that a malfunction in the autopilot system caused the pilot to become disoriented and lose control of the plane. The plaintiffs’ theory, however, was that the plane crashed because of a malfunction in the right propeller, which would have placed liability entirely on Beech.76 The plaintiffs originally filed suit in a Texas state court and, after a trial that resulted in a jury deadlock and mistrial, Beech removed the action to the US District Court for the Western District of Texas on the basis of diversity of citizenship. The case turned on whether the Texas defendants defeated diversity, in light of the Texas residency of the plaintiffs. The court held that the Texas defendants were not nominal parties, and the plaintiffs did not abandon their claims against the Texas autopilot companies, even though in closing argument the plaintiffs’ counsel urged the jury to reject Beech’s contention that an autopilot malfunction caused the accident. The plaintiffs’ counsel said, as an alternative to Beech’s liability under the propeller theory, that if the jury were to find the autopilot companies liable, they expected a verdict in their favour.77 The court also held that Beech waived its right to remove, because it went all the way through trial in state court and then, only after a mistrial failed to produce the result Beech wanted, removed the case to federal court.78 Accordingly, the Texas defendants defeated diversity jurisdiction and the court remanded the case to the Texas state court.
Since the case turned on jurisdiction, the court did not deal with the merits of the claims against the autopilot companies. Consequently, the only result on liability reported by the court was that the state court jury could not decide whether liability existed, whether on the propeller theory or the autopilot theory.

Finally, some reported cases concerned the preclusive effect of prior judgments without reaching the merits of the dispute. For instance, in Casey v Palmer Johnson Inc,79 the plaintiff boat owner sued a company that had repaired his boat, claiming that the defendant failed to repair a defective autopilot. The plaintiff alleged that the day after he picked up his boat, he put it on autopilot, and its defect caused the boat to run aground.80 In a previous

75 604 F. Supp. 630 (W.D. Tex. 1985).
76 See ibid 634-35.
77 Ibid 633-37.
78 Ibid 637.
79 506 F. Supp. 1361 (E.D. Wis. 1981).
80 Ibid 1363.


JLIS Special Edition: The Law of Unmanned Vehicles

Vol 21(2) 2011/2012

collection action, Palmer Johnson sued Casey for the unpaid bill on the repairs. In that action, Casey claimed that the repair work was inadequate.81 The court held that the parties had not previously litigated the effectiveness of the repairs on the autopilot, and therefore Casey was not barred by res judicata or collateral estoppel from claiming defective repairs to the autopilot in a later action.82 The court never reached the merits of Casey’s claim of defective autopilot repairs.

Likewise, in United States v Athlone Industries, Inc,83 the court reviewed a summary judgment in favour of a defendant manufacturer of robotic pitching machines in a case brought by the federal government following an earlier suit brought by the US Consumer Product Safety Commission (CPSC). The court described the defects as follows:

    In our semi-robot age, as a substitute for the batting practice pitcher, inanimate machines have been manufactured which confront the player in the batter's box. In this case, some of the machines were defective and more wild than an erratic pitcher. In fact some of the machines were mysterious and unpredictable; even when disconnected from their power source, these machines retained such a high degree of tension in the spring and cable that with the slightest vibration, the pitching arm would unexpectedly swing forward and downward at great speed, striking any unsuspecting person within its range, allegedly causing injuries that were as serious as fractured skulls and loss of eye sight.84

Unfortunately, the court did not describe the alleged defects in any more detail and did not decide the merits of the government’s claims of defects. Instead, the case turned on whether the prior CPSC action barred the government’s action seeking civil money penalties under res judicata.
The court held that the district court’s summary judgment for the defendant based on res judicata was in error, because the two suits involved different conduct, different wrongs, and different evidence. As a side note, the court stated in passing that the government brought suit against the manufacturer ‘[s]ince robots cannot be sued’.85 The court’s assumption that robots cannot be sued is certainly true today. Nonetheless, it may not hold forever, as robots become more sophisticated, intelligent, and autonomous.

81 Ibid.
82 Ibid 1363-65.
83 746 F.2d 977 (3d Cir. 1984).
84 Ibid 978-79.
85 Ibid 979.


4 Patterns and Conclusions

The cases identified in the research show a number of noteworthy patterns. First, a substantial number of the reported cases involve robots in auto plants and robots used in die press or moulding equipment to stamp out parts. It makes sense that a large number of reported cases involve these kinds of robots: both auto/auto parts manufacturers and companies creating metal or plastic parts using die press/moulding equipment have been early adopters of robotics, the equipment used in these industries is highly dangerous to employees, and accidents involving this kind of equipment can be catastrophic. I expect that a substantial portion of robotics litigation will continue to arise from these industries in the short run. Nonetheless, over the long run, as personal and service robots enter the mainstream and go on sale in the mass market, the share of cases comprising industrial accidents in these industries will likely decline relative to cases involving robots purchased by consumers. Many more cases will arise from malfunctions or defects in personal and service robots as the consuming public begins to use them in large numbers.

Second, I found a substantial number of cases involving alleged defects in autopilot devices. Again, it makes sense for parties to litigate these cases following catastrophic crashes and the loss of life, given the stakes involved. If unmanned vehicle cases are similar to autopilot cases, we will likely see products liability litigation arising from catastrophic crashes. The autopilot decisions to date, however, seem to focus on classic issues in products liability cases, such as what the real cause of the accident was and whether the device had any defects. I expect the courts to analyse unmanned vehicle cases in the same way.

Third, a substantial number of cases in this research involved efforts by employees to obtain compensation from employers beyond that offered by workers’ compensation.
State workers’ compensation systems provide compensation without the need for litigation against the employer, but also limit compensation. It is not surprising, therefore, that employees injured in accidents involving industrial robots seek to obtain extra compensation. Again, however, the percentage of robotics cases involving workers’ compensation will decline as personal and service robots become mass-market products and consumers start bringing cases against manufacturers.

Finally, these cases reflect our current times, in which unmanned vehicles and personal and service robots are not commonplace household or consumer items. The cases raise some interesting possibilities that standards of care will change and that the failure to use a robot for certain tasks may give rise to liability. In the Mracek da Vinci surgical robot case, for instance, the bad surgical outcome allegedly occurred because the robot was not used to complete the surgery. As robots become better than humans at precision or dangerous work, we are likely to see more cases in which plaintiffs blame humans for not using robots. Likewise, as autonomous vehicles become better drivers than humans, we are likely to see cases in which plaintiffs blame humans for not using autopilot functions or fully autonomous vehicles.


Ultimately, the future is open as to the types of unmanned vehicle and robotics liability claims we will see in future decades. As unmanned vehicles and personal and service robots enter the mass market, and perform work formerly carried out by humans, we will see many kinds of claims we cannot even imagine today. We will then see courts applying and modifying old liability doctrines for use in cases involving new kinds of unmanned vehicles and robots.
