
DAMODARAM SANJIVAYYA NATIONAL LAW UNIVERSITY

SABBAVARAM, VISAKHAPATNAM, A.P., INDIA

PROJECT TITLE
INTERNATIONAL HUMANITARIAN LAW & ARTIFICIAL INTELLIGENCE

SUBJECT- International Humanitarian Law

NAME OF THE FACULTY


Dr. C. H. Lakshmi

Name of the Candidate-PRIYANKA SINGH


Roll No.-2017068
Semester-8th

ABSTRACT

Technology is developing at a rapid pace, and every new technology brings with it the need for rules and regulations to govern it. One such technology is artificial intelligence, which can now be used in warfare. This has raised serious questions about potential violations of international humanitarian law, and scholars hold opposing opinions as to its use, regulation and impact. There are two broad and distinct areas of application of AI and machine learning in which the researcher takes a particular interest: its use in the conduct of warfare or in other situations of violence; and its use in humanitarian action to assist and protect the victims of armed conflict.

This paper sets out the researcher's perspective on the use of AI and machine learning in armed conflict, the potential humanitarian consequences, and the legal obligations and ethical considerations that should govern their development and use.

TABLE OF CONTENTS

1. Synopsis
   • Objectives of the Study
   • Significance of the Study
   • Scope of the Study
   • Research Methodology
   • Research Question
   • Review of Literature
2. Introduction
3. Legal review before employment of AI in Warfare
4. Precautions during employment
5. Accountability after employment
6. Ethical aspect
7. Problems of Attribution and of Accountability
8. A human-centred approach
   • Ensuring human control and judgment
   • Legal basis for human control in armed conflict
9. Conclusion
10. Bibliography

SYNOPSIS

OBJECTIVES OF THE STUDY:

To study the extent and scope of the application of International Humanitarian Law to artificial intelligence systems and automated weapons, and to understand the legal and ethical issues related to them.

SIGNIFICANCE OF THE STUDY:


This project examines the difficulties involved in applying International Humanitarian Law to artificial intelligence and the development of IHL principles in this field.

SCOPE OF THE STUDY:


The scope of the project is limited to the study of International Humanitarian Law as applied to artificial intelligence and the legal and ethical issues relating to it.

RESEARCH METHODOLOGY:
The researcher has adopted the doctrinal method of research; the paper analyses established rules and procedures, following an analytical research style. The sources are books, articles and web sources.

RESEARCH QUESTION:
1. Whether there are clearly laid-out rules of International Humanitarian Law governing Artificial Intelligence.
2. Whether the ethical concerns relating to Artificial Intelligence under International Humanitarian Law can be addressed through stricter rules and regulations.

REVIEW OF LITERATURE:
This research paper has been prepared by referring to books, articles from magazines, journals and newspapers, and internet sources.

Introduction
International humanitarian law (IHL) evolves with the development of emerging technologies. The history of IHL demonstrates that the adoption of any new technology presents challenges to this body of law. With the advent of artificial intelligence (AI), this tendency has become even more apparent as humans attempt to put such technology to military use. Where weapons are concerned, the combination of weapons and AI technology has increasingly drawn the attention of the international community. Following high-tech weapon systems such as cyber-attack software and armed drones, combat robots of various types have been developed and employed. Artificial intelligence will potentially not only increase significantly the efficiency and lethal effect of modern kinetic weapons, but also partially restrict or even completely eliminate human intervention in all aspects of strategy design, battle organization and tactical implementation.1
AI weapons—also known as autonomous weapon systems (AWS), which have been defined
by the ICRC as weapons that can independently select and attack targets, i.e., with autonomy
in the ‘critical functions’ of acquiring, tracking, selecting and attacking targets—have raised
a series of issues, both legal and ethical. It is debatable whether such weapons or weapon systems, with their functions of learning, reasoning and decision-making and their ability to act independently of human intervention, should be employed on future battlefields. In all circumstances, they must be employed in accordance with the principles and rules of IHL.

Legal review before employment of AI in Warfare

The First Additional Protocol to the Geneva Conventions (AP I) provides that, in the study, development, acquisition or adoption of a new weapon, means or method of warfare, States are under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by IHL or by any other relevant rule of international law (Art 36 AP I). More specifically, the legality of new weapons must be assessed against the following criteria2:

• First, are the new weapons prohibited by specific international conventions, such as the Chemical Weapons Convention, the Biological Weapons Convention or the Convention on Certain Conventional Weapons?
• Second, would such weapons cause superfluous injury or unnecessary suffering, or widespread, long-term and severe damage to the natural environment (Art 35 AP I)?
• Third, would such weapons be likely to have the effects of indiscriminate attacks (Art 51 AP I)?
• Lastly, would such weapons accord with the principles of humanity and the dictates of public conscience, i.e. the Martens Clause (Art 1(2) AP I)?

This means that AI weapons must be brought within the legal framework of IHL without exception. The principles and rules of IHL should and shall be applied to AI weapons.3

1 Tess Bridgeman, The viability of data-reliant predictive systems in armed conflict detention, 2019, https://blogs.icrc.org/law-and-policy/2019/04/08/viability-data-reliant-predictive-systems-armed-conflict-detention/.
2 Dustin Lewis, Legal reviews of weapons, means and methods of warfare involving artificial intelligence: 16 elements to consider, 2020.

Precautions during employment

Humans will make mistakes. The same is true for machines, however 'intelligent' they are. Since AI weapons are designed, manufactured, programmed and employed by humans, the consequences and legal responsibilities arising from their unlawful acts must be attributed to humans. Humans should not use the 'errors' of AI systems as an excuse to evade their own responsibilities; that would not be consistent with the spirit and values of the law. Accordingly, AI weapons or weapon systems should not be characterized as 'combatants' under IHL and thereby made to bear legal responsibility themselves. In any circumstance, wrongful targeting by an AI weapon system is not a problem of the weapon itself. Therefore, when employing AI weapon systems, programmers and end users are under a legal obligation to take all feasible precautionary measures to ensure that such employment accords with the fundamental rules of IHL (Art 57 AP I).4

Accountability after employment

If humans are responsible for the employment of AI weapons, which of these humans holds responsibility? Is it the designers, the manufacturers, the programmers or the operators (end users)? In the view of many Chinese researchers, the end users must take primary responsibility for the wrongful targeting of AI weapons. Such an argument derives from Article 35(1) of AP I, which provides that 'in any armed conflict, the right of the Parties to the conflict to choose methods or means of warfare is not unlimited'. In the case of full autonomy of AI weapon systems without any human control, those who decide to employ such systems (normally senior military commanders and civilian officials) bear individual criminal responsibility for any potential serious violations of IHL. Additionally, the States to which they belong incur State responsibility for such serious violations where these are attributable to them.

3 Hin-Yan Liu, Categorization and Legality of Autonomous and Remote Weapons Systems, 94 International Review of the Red Cross 627, 635–36 (2012).
4 https://blogs.icrc.org/law-and-policy/2019/05/02/ai-weapon-ihl-legal-regulation-perspective/.

Moreover, the targeting of AI weapon systems is closely tied to their design and
programming. The more autonomy they have, the higher the design and programming
standards must be in order to meet IHL requirements. For this purpose, the international community is encouraged to adopt a new convention specific to AI weapons, along the lines of the Convention on Certain Conventional Weapons and its Protocols, the Convention against Anti-Personnel Mines or the Convention on Cluster Munitions. At the very least, under the framework of such a new convention, the design standards of AI weapons shall be formulated; States shall be responsible for the design and programming of those weapons with high levels of autonomy; and those States that manufacture and transfer AI weapons in a manner inconsistent with relevant international law, including IHL and the Arms Trade Treaty, shall incur responsibility. Furthermore, States should also provide legal advisors to the
designers and programmers. In this regard, the existing IHL framework does not fully
respond to such new challenges. For this reason, in addition to the development of IHL rules,
States should also be responsible for developing their national laws and procedures, in
particular transparency mechanisms. On this matter, those States advanced in AI technology
should play an exemplary role.5 

Ethical aspect

AI weapons, especially lethal autonomous weapon systems, pose a significant challenge to human ethics. AI weapons do not have human feelings, and there is a higher chance that their use will result in violations of IHL rules on methods and means of warfare. For example, they can hardly identify a human being's willingness to fight, or understand the historical, cultural, religious and humanistic value of a specific object. Consequently, they cannot be expected to respect the principles of military necessity and proportionality. They may even significantly undermine the universal human values of equality, liberty and justice. In other words, no matter how much they look like humans, they are still machines. It is almost impossible for them to really understand the meaning of the right to life, because machines can be repaired and reprogrammed repeatedly, whereas life is given to humans only once. From this perspective, even though the employment of non-lethal AI weapons may remain acceptable, highly lethal AI weapons must be totally prohibited at both the international and national levels in view of their high degree of autonomy. It should be acknowledged, however, that this may not be persuasive reasoning, because it is essentially not a legal argument but an ethical one.

5 ICRC, "Expert views on the frontiers of artificial intelligence and conflict", ICRC Humanitarian Law & Policy Blog, 19 March 2019: https://blogs.icrc.org/law-and-policy/2019/03/19/expert-views-frontiers-artificial-intelligence-conflict.

Emerging applications of AI and machine learning have also brought ethical questions to the
forefront of public debate. A common aspect of general “AI Principles” developed and
agreed by governments, scientists, ethicists, research institutes and technology companies is
the importance of the human element to ensure legal compliance and ethical acceptability.6

For example, the 2017 Asilomar AI Principles emphasize alignment with human values,
compatibility with “human dignity, rights, freedoms and cultural diversity”, and human
control; “humans should choose how and whether to delegate decisions to AI systems, to
accomplish human-chosen objectives”.7 The European Commission’s High-Level Expert
Group on Artificial Intelligence stressed the importance of “human agency and oversight”,
such that AI systems should “support human autonomy and decision-making”, and ensure
human oversight through human-in-the-loop, human-on-the-loop, or human-in-command
approaches. The Organisation for Economic Co-operation and Development (OECD)
Principles on Artificial Intelligence – adopted in May 2019 by all 36 member States, together
with Argentina, Brazil, Colombia, Costa Rica, Peru and Romania – highlight the importance
of “human-centred values and fairness”, specifying that users of AI “should implement
mechanisms and safeguards, such as capacity for human determination, that are appropriate
to the context and consistent with the state of art”. The Beijing AI Principles, adopted in May
2019 by a group of leading Chinese research institutes and technology companies, state that
“continuous efforts should be made to improve the maturity, robustness, reliability, and
controllability of AI systems” and encourage “explorations on Human-AI coordination that
would give full play to human advantages and characteristics”. A number of individual
technology companies have also published AI Principles highlighting the importance of human control, especially for sensitive applications presenting the risk of harm, and emphasizing that the "purpose of AI is to augment – not replace – human intelligence".8

6 Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Report, ¶ 39, U.N. Doc. A/HRC/23/47 (Apr. 9, 2013) (by Christof Heyns).
7 Future of Life Institute, Asilomar AI Principles, 2017: https://futureoflife.org/ai-principles.

Problems of Attribution and of Accountability

Only human beings are subject to legal rules. In the case of autonomous weapons, IHL is
addressed to those human beings who devise, produce and program them, as well as those
who decide upon their use. I reject the idea that IHL is inadequate to regulate autonomous
weapons because they would be situated somewhere between weapon systems and
combatants, and further reject the suggestion that a new category with new rules should be
created to regulate them. The difference between a weapon system and a human being is not
quantitative but qualitative; the two are not situated on a sliding scale but on different levels: subjects and objects. A combatant is a human being; only he or she is an addressee of legal obligations. However far we go into the future, and however artificial intelligence develops, there will always be a human being at the starting point. In my understanding, an
autonomous weapon system will always operate within the limits of its software; software
designed by humans.9 It is the human being who will decide whether a machine will be
created and who will create it. Even if one day robots construct other robots, there will still be
the need for a human being to develop the first robot and instruct it as to how to construct
new robots. This human being is bound by the law; the machine is not bound by the law.

Human Rights Watch writes that it would be unclear who would be held accountable for
unlawful actions a robot commits: “Options include the military commander that deployed it,
the programmer, the manufacturer, and the robot itself, but all are unsatisfactory. It would be
difficult and arguably unfair to hold the first three actors liable and the actor that actually
committed the crime—the robot—would not be punishable.”10 I agree with the last part of
this statement and I find some suggestions that robots could be scrapped or disabled as a kind
of punishment absurd. As for the first option, it is as fair to hold a commander of a robot
accountable as it would be to hold accountable a commander who instructs a pilot to bomb a
target he describes as a military headquarters, but which turns out to be a kindergarten. It is
obvious that a commander deploying autonomous weapons must understand how they function, just as for any other means and method of warfare.11 In my view, the responsibility of such a commander is not a case of command responsibility, nor is it analogous to one; it is a case of direct responsibility, just as that of a soldier who fires a mortar believing that it can land only on the targeted tank, but which will kill civilians he knows are following the tank. This is a question of mens rea, of the intent and recklessness with which criminal lawyers are familiar, just as it is for a surgeon using a medical robot or, for that matter, prescribing a medicine. Based on their Protocol I, Article 36 assessment, States deploying robots must give military commanders and operators clear instructions as to when and under what circumstances the robots may actually be used.

8 IBM, "IBM's Principles for Trust and Transparency", 30 May 2018: https://www.ibm.com/blogs/policy/trust-principles.
9 Defense Science Board, U.S. Department of Defense, Task Force Report: The Role of Autonomy in DoD Systems 1, 21 (July 2012).
10 Human Rights Watch, Losing Humanity: The Case Against Killer Robots (2012), available at http://www.hrw.org/reports.

The further question of whether robots could distinguish lawful from unlawful orders is equivalent to that of whether they are able to apply rules to a complex situation without human intervention. If they cannot, they may not be used. If they can, it will be easy to program them not to follow unlawful orders. None of the reasons for which soldiers often obey unlawful orders applies to them.12

A HUMAN-CENTRED APPROACH

As a humanitarian organization working to protect and assist people affected by armed conflict and other situations of violence, deriving its mandate from international humanitarian law and guided by the Fundamental Principle of humanity,13 the ICRC believes it is critical to ensure a genuinely human-centred approach to the development and use of AI and machine learning. This starts with consideration of the obligations and responsibilities of humans and of what is required to ensure that the use of these technologies is compatible with international law, as well as with societal and ethical values.

11 ICRC, Artificial intelligence and machine learning in armed conflict: A human-centred approach, 6 June 2019.
12 Ronald C. Arkin, Ethical Robots in Warfare, Georgia Institute of Technology (Jan. 20, 2009), http://www.cc.gatech.edu/ai/robot-lab/online-publications/arkin-rev.pdf.
13 ICRC & IFRC, The Fundamental Principles of the International Red Cross and Red Crescent Movement: Ethics and Tools for Humanitarian Action, November 2015: https://shop.icrc.org/les-principes-fondamentaux-de-la-croix-rouge-et-du-croissant-rouge-2757.html.

Ensuring human control and judgment

The ICRC believes it is essential to preserve human control over tasks and human judgement
in decisions that may have serious consequences for people’s lives in armed conflict,
especially where they pose risks to life, and where the tasks or decisions are governed by
specific rules of international humanitarian law. AI and machine learning must be used to
serve human actors, and augment human decision-makers, not replace them. Given that these
technologies are being developed to perform tasks that would ordinarily be carried out by
humans, there is an inherent tension between the pursuit of AI and machine-learning
applications and the centrality of the human being in armed conflict, which will need
continued attention.14

Human control and judgment will be particularly important for tasks and decisions that can
lead to injury or loss of life, or damage to, or destruction of, civilian infrastructure. These will
likely raise the most serious legal and ethical questions, and may demand policy responses,
such as new rules and regulations. Most significant are decisions on the use of force,
determining who and what is targeted and attacked in armed conflict. However, a much wider
range of tasks and decisions to which AI might be applied could also have serious
consequences for those affected by armed conflict, such as decisions on arrest and detention.
In considering the use of AI for sensitive tasks and decisions there may be lessons from
broader discussions in the civil sector about the governance of “safety-critical” AI
applications – those whose failure can lead to injury or loss of life, or serious damage to
property or the environment.15

Another area of tension is the discrepancy between humans and machines in the speed at
which they carry out different tasks. Since humans are the legal – and moral – agents in
armed conflict, the technologies and tools they use to conduct warfare must be designed and
used in a way that enables combatants to fulfil their legal and ethical obligations and
responsibilities. This may have significant implications for AI and machine-learning systems
that are used in decision-making; in order to preserve human judgement, systems may need to
be designed and used to inform decision-making at “human speed”, rather than accelerating
decisions to “machine speed” and beyond human intervention.

14 United Nations, Report of the 2018 session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, CCW/GGE.1/2018/3, 23 October 2018, Section III.A.26(b) & III.C.28(f): http://undocs.org/en/CCW/GGE.1/2018/3.
15 The Partnership on AI, Safety-Critical AI: Charter, 2018: https://www.partnershiponai.org/working-group-charters-guiding-our-exploration-of-ais-hard-questions.
Legal basis for human control in armed conflict

For conflict parties, human control over AI and machine-learning applications employed as
means and methods of warfare is required to ensure compliance with the law. The rules of
international humanitarian law are addressed to humans. It is humans that comply with and
implement the law, and it is humans who will be held accountable for violations. In
particular, combatants have a unique obligation to make the judgements required of them by
the international humanitarian law rules governing the conduct of hostilities, and this
responsibility cannot be transferred to a machine, a piece of software or an algorithm.

These rules require context-specific judgements to be taken by those who plan, decide upon
and carry out attacks to ensure: distinction – between military objectives, which may lawfully
be attacked, and civilians or civilian objects, which must not be attacked; proportionality – in
terms of ensuring that the incidental civilian harm expected from an attack will not be
excessive in relation to the concrete and direct military advantage anticipated; and to enable
precautions in attack – so that risks to civilians can be further minimized.

Where AI systems are used in attacks – whether as part of physical or cyber-weapon systems,
or in decision-support systems – their design and use must enable combatants to make these
judgements. With respect to autonomous weapon systems, the States party to the Convention on Certain Conventional Weapons (CCW) have recognised that "human responsibility" for the use of weapon systems and the use of force "must be retained", and many States, international organisations – including the ICRC – and civil society organisations have stressed the requirement for human control to ensure compliance with international humanitarian law and compatibility with ethical values.16 Beyond the use of force and targeting, the potential use of AI systems for other decisions governed by specific rules of international humanitarian law, such as decisions on detention, will likely require careful consideration of the necessary human control and judgement.

16 Organisation for Economic Co-operation and Development (OECD), Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449, 22 May 2019: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.

CONCLUSION

We cannot predict whether AI will completely replace human combatants and so-called robotic wars will emerge. It must be observed, however, that there is a huge gap between nations in terms of AI technological capability. For most countries, procuring such capabilities and using them militarily remains an unreachable goal. In other words, some States may have the potential to employ AI weapons on the battlefield, while others may not. In such cases, the legality of AI weapons and of their employment will inevitably have to be assessed, and IHL will be resorted to. As a result, the imbalance in military technologies will probably cause divergence in the interpretation and application of existing IHL rules. Nevertheless, it is important to note that the applicability of IHL to AI weapon systems is beyond all doubt.

AI and machine-learning systems could have profound implications for the role of humans in
armed conflict, especially in relation to: increasing autonomy of weapon systems and other
unmanned systems; new forms of cyber and information warfare; and, more broadly, the
nature of decision-making.17 In the view of the ICRC, governments, militaries and other
relevant actors in armed conflict must pursue a genuinely human-centred approach to the use
of AI and machine-learning systems.

BIBLIOGRAPHY

1. www.heinonline.com
2. www.jstor.org
3. www.lexisnexis.com
4. www.manupatrafast.com
5. www.scconline.com

17 Bridgeman, T., "The viability of data-reliant predictive systems in armed conflict detention", ICRC Humanitarian Law & Policy Blog, 8 April 2019: https://blogs.icrc.org/law-and-policy/2019/04/08/viability-data-reliant-predictive-systems-armed-conflict-detention.
