
Conclusion

Among the open questions that remain:

- If a military robot refuses an order, e.g., because it has better situational awareness, who would be responsible for its subsequent actions?
- How stringently should we apply the generally accepted ‘eyes on target’ requirement, i.e., under what circumstances might we allow robots to make attack decisions on their own?
- What precautions ought to be taken to prevent robots from running amok or turning against our own side, whether through malfunction, programming error, or capture and hacking?
- To the extent that military robots can help reduce instances of war crimes, what harm may arise if the robots also unintentionally erode squad cohesion, given their role as an ‘outside’ observer?
- Should robots be programmed to defend themselves (contrary to Arkin’s position), given that they represent costly assets?
- Would using robots be counterproductive to winning the hearts and minds of occupied populations, or result in more desperate terrorist tactics, given an increasing asymmetry in warfare?

1. Creating autonomous military robots that can act at least as ethically as human soldiers appears to be a sensible goal, at least for the foreseeable future and in contrast to the greater demand of a perfectly ethical robot. However, there are still daunting challenges in meeting even this relatively low standard, such as the key difficulty of programming a robot to reliably distinguish enemy combatants from non-combatants, as required by the Laws of War and most Rules of Engagement.
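To make the discrimination difficulty concrete, here is a minimal, purely illustrative sketch (not from this report): a decision gate that refuses engagement unless a hypothetical perception module’s combatant classification clears a very high confidence threshold. The `TargetAssessment` type, its probability field, and the threshold value are all assumptions introduced for illustration.

```python
from dataclasses import dataclass

@dataclass
class TargetAssessment:
    """Hypothetical output of a perception pipeline for one tracked object."""
    track_id: int
    p_combatant: float  # estimated probability that the target is a combatant

# Assumed policy parameter: how certain must the classifier be before
# engagement is even considered? The discrimination requirement arguably
# pushes this threshold very high.
ENGAGE_THRESHOLD = 0.99

def may_engage(assessment: TargetAssessment) -> bool:
    """Permit engagement only on a high-confidence combatant classification.

    Any residual uncertainty defaults to 'do not engage', which is the
    conservative failure mode the discrimination requirement implies.
    """
    return assessment.p_combatant >= ENGAGE_THRESHOLD

# Example: only the first track clears the threshold.
for track in (TargetAssessment(1, 0.995), TargetAssessment(2, 0.80)):
    print(track.track_id, "engageable" if may_engage(track) else "hold fire")
```

Note that a threshold gate like this only relocates the problem: the hard part is producing a `p_combatant` estimate that is actually reliable, which is precisely the challenge the point above describes.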

2. While a faster introduction of robots in military affairs may save more lives of human soldiers and reduce the number of war crimes committed, we must be careful not to unduly rush the process. Unlike rushing technology products to commercial markets, design and programming bugs in military robotics would likely have serious, fatal consequences. Therefore, a rigorous testing phase is critical, as are thorough studies of related policy issues, e.g., how the US Federal Aviation Administration (FAA) handles UAVs flying in the domestic National Airspace System (which we have not addressed here).

3. Understandably, much ongoing work in military robotics is shrouded in secrecy, but a balance between national security and public disclosure needs to be maintained in order to accurately anticipate and address issues of risk and other societal concerns. For instance, there is little information on US military plans to deploy robots in space, yet this seems to be a highly strategic area in which robots could lend tremendous value; at the same time, important environmental and political sensitivities would surround such a program.

4. Serious conceptual challenges exist with the two primary programming approaches today: top-down (e.g., rule-following) and bottom-up (e.g., machine learning). Thus a hybrid approach should be considered in creating a behavioural framework. To this end, we need a clear understanding of what a ‘warrior code of ethics’ might entail, if we take a virtue-ethics approach in programming.
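One way to picture such a hybrid is a learned, bottom-up policy whose proposed actions are screened by explicit, top-down rules before execution. The sketch below is an assumption-laden illustration, not a design from this report; `toy_policy`, the rule names, and the context fields are hypothetical placeholders.

```python
from typing import Callable

# A top-down rule inspects a proposed action plus context and returns
# True if the action is permissible under that rule.
Rule = Callable[[str, dict], bool]

def positively_identified(action: str, ctx: dict) -> bool:
    # Explicit rule: engagement requires a positive combatant identification.
    return action != "engage" or ctx.get("target_class") == "combatant"

def proportionality_cleared(action: str, ctx: dict) -> bool:
    # Explicit rule: a (hypothetical) upstream proportionality review must pass.
    return action != "engage" or ctx.get("proportionality_ok", False)

RULES: list[Rule] = [positively_identified, proportionality_cleared]

def hybrid_decide(learned_policy: Callable[[dict], str], ctx: dict) -> str:
    """Bottom-up proposal, top-down veto: any failed rule forces a safe default."""
    proposal = learned_policy(ctx)
    return proposal if all(rule(proposal, ctx) for rule in RULES) else "hold"

# Hypothetical stand-in for a trained (bottom-up) policy:
def toy_policy(ctx: dict) -> str:
    return "engage" if ctx.get("threat_level", 0.0) > 0.5 else "observe"

print(hybrid_decide(toy_policy, {"threat_level": 0.9,
                                 "target_class": "combatant",
                                 "proportionality_ok": True}))  # -> engage
print(hybrid_decide(toy_policy, {"threat_level": 0.9,
                                 "target_class": "unknown"}))   # -> hold
```

The design intuition here is that the top-down layer stays small, explicit, and auditable while the bottom-up layer is free to learn, and a veto by any rule always wins over the learned proposal.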

5. In the meantime, as we wait for technology to sufficiently advance in order to create a workable behavioural framework, it may be an acceptable proxy to program robots to comply with the Laws of War and appropriate Rules of Engagement. However, this too is much easier said than done, and at least the technical challenge of proper discrimination would persist and require resolution.

6. Specifically, the challenge of creating a robot that can properly discriminate among targets is one of the most urgent, particularly if one believes that the (increased) deployment of war robots is inevitable. While this is a technical challenge and resolvable depending on advances in programming and AI, there are some workaround policy solutions that can be anticipated and further explored, such as: limiting deployment of lethal robots to only inside a ‘kill box’ (see the sketch following these points); designing a robot to target only other machines or weapons; creating robots with only non-lethal or less-than-lethal strike capabilities; or not giving robots a self-defence mechanism, so that they may act more conservatively, at least initially until they are proven to be reliable.

7. Given technical limitations, such as programming a robot with the ability to sufficiently discriminate between valid and invalid targets, we expect that accidents will continue to occur, which raises the question of legal responsibility. More work needs to be done to clarify the chain of responsibility in both military and civilian contexts. Product liability laws are informative but untested as they relate to robotics with any significant degree of autonomy.

8. Assessing technological risks, whether through the basic framework we offer in section 6 or some other framework, depends on identifying potential issues in risk and ethics. These issues vary from foundational questions of whether autonomous robotics can be legally and morally deployed in the first place, to theoretical questions about adopting precautionary approaches, to forward-looking questions about giving rights to truly autonomous robots. These discussions need to be more fully developed and expanded.
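To illustrate the ‘kill box’ workaround mentioned in point 6, here is a minimal sketch of a geofence check, assuming a toy planar rectangle rather than the geodetic polygons a real system would need; `KillBox` and `lethal_action_permitted` are hypothetical names introduced for this example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class KillBox:
    """A pre-authorized engagement zone, simplified here to an
    axis-aligned rectangle in local planar coordinates."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def lethal_action_permitted(box: KillBox, target_xy: tuple[float, float]) -> bool:
    """Outside the authorized box, lethal action is never permitted;
    inside, it is merely not ruled out by this one check."""
    return box.contains(*target_xy)

zone = KillBox(0.0, 0.0, 100.0, 100.0)
print(lethal_action_permitted(zone, (50.0, 50.0)))   # True: inside the box
print(lethal_action_permitted(zone, (150.0, 50.0)))  # False: outside the box
```

As the comments note, a geofence of this kind is a necessary-but-not-sufficient constraint: it bounds where lethal action may occur, while the discrimination and responsibility questions in points 6 and 7 remain open inside the box.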
