
A Report on

Artificial Intelligence & International Humanitarian Law


As the macro impact of geopolitics becomes increasingly significant, setting boundaries on armed conflict at the international level grows more important with each passing day. Having sustained two major world wars and numerous other armed conflicts, discourse on peace and how to maintain it is paramount. With that view in mind, the Red Cross and other international organizations established conventions and laws regarding armed conflict, popularly referred to as the Geneva Conventions. Henri Dunant, the founder of the Red Cross, initiated the international negotiations that enabled the Convention for the Amelioration of the Wounded in Time of War1. The protection of those not involved in armed conflict lies at the heart of International Humanitarian Law.

International Humanitarian Law (IHL)


To address such concerns, IHL was created as a set of rules that seek to limit the effects of armed conflict. Specifically, it protects those who are not, or are no longer, participating in hostilities and restricts the means and methods of warfare, for humanitarian reasons. It is also referred to as the law of war or the law of armed conflict.

The notion of IHL has its roots deep in history, in the rules of ancient civilizations and cultures. IHL is now accepted as customary law, meaning all states are bound to comply with it. Its rules were written to strike a careful balance between military requirements and humanitarian concerns. It does not, however, regulate whether a state may use force; it governs only the means and methods by which force is used. As the international community grows and people become more aware of and concerned with the moral integrity of actions, it is paramount that unnecessary loss of life and health, and moral degradation, be addressed on an international platform. A major part of IHL is found in the 1949 Geneva Conventions, with which nearly all states have agreed to comply.

1
https://www.britannica.com/event/Geneva-Conventions#:~:text=The%20development%20of%20the%20Geneva,Time%20of%20War%20in%201864.
Other agreements prohibit the use of certain weapons and military tactics and protect certain
categories of people and goods. These agreements include:

1. the 1954 Convention for the Protection of Cultural Property in the Event of Armed
Conflict, plus its two protocols;
2. the 1972 Biological Weapons Convention;
3. the 1980 Conventional Weapons Convention and its five protocols;
4. the 1993 Chemical Weapons Convention;
5. the 1997 Ottawa Convention on anti-personnel mines;
6. the 2000 Optional Protocol to the Convention on the Rights of the Child on the
involvement of children in armed conflict.

The conventions distinguish between international and non-international armed conflict. IHL applies only to armed conflict, not to internal tensions, disturbances or isolated acts of violence within a state. The law applies only once a conflict exists, and it binds all sides equally, regardless of which side initiated the conflict.

It is important to note that International Humanitarian Law and Human Rights Law are distinct bodies of law. Although some of their rules are similar, they were developed separately and are contained in separate treaties. In particular, Human Rights Law applies in peacetime, unlike IHL, which applies in wartime. International Humanitarian Law entails the notions of:

1. the protection of those not, or no longer, taking part in the conflict;
2. restrictions placed on the means of warfare, such as certain weapons, and on methods of warfare, such as military tactics.

This body of law aims to protect those not involved in the conflict, such as civilians, medical personnel, aid workers and religious personnel. It also protects the wounded on the battlefield, shipwrecked soldiers, sick combatants and prisoners of war. It safeguards the physical, moral and mental integrity, and the legal guarantees, of people in such circumstances. Specifically, the body of law forbids killing or wounding enemies who surrender or are unable to fight. It also requires that the sick and wounded be cared for by the party that sent them to the battlefield. Medical personnel, hospitals, prisoners of war, civilians, and culturally or historically important sites must not be attacked under any circumstances. Prisoners of war must be provided with food and shelter and the right to exchange messages with their families. The law recognizes a number of symbols used to help identify protected personnel and areas; the main emblems are, inter alia, the red cross and the red crescent.

International humanitarian law prohibits all means and methods of warfare which:

1. fail to discriminate between those taking part in the fighting and those, such as civilians,
who are not, the purpose being to protect the civilian population, individual civilians and
civilian property;
2. cause superfluous injury or unnecessary suffering;
3. cause severe or long-term damage to the environment.

On these grounds, IHL has set restrictions on the use of exploding bullets, biological and chemical weapons, blinding laser weapons and anti-personnel mines2.

Artificial Intelligence (AI) and its Use for the IHL


In any discussion of contemporarily significant subjects, artificial intelligence occupies a special place. This ground-breaking technology, which enables machines to learn and evolve, has applications in almost every aspect of our lives. AI generally refers to the simulation of human intelligence in machines that are programmed to think and make decisions as humans do, equipped with learning and problem-solving capabilities. 3

Like many other organizations, the ICRC (International Committee of the Red Cross) is also moving towards using AI and machine learning in its work. Technological tools and algorithms could be applied across the ICRC's fields of work, although the intricacies of such tools are yet to be fully understood.

The ICRC is concerned with two distinct areas of application of AI:

1. its use in the conduct of warfare or in other violent situations;
2. its use in humanitarian efforts to assist and protect the victims of conflict.

2
https://www.icrc.org/en/doc/assets/files/other/what_is_ihl.pdf
3
https://www.investopedia.com/terms/a/artificial-intelligence-ai.asp#:~:text=Artificial%20intelligence%20(AI)%20refers%20to,as%20learning%20and%20problem%2Dsolving.

AI and machine learning could have a ground-breaking effect on the role of humans in warfare. The increasing autonomy of weapon systems and other unmanned devices, and the shifting nature of warfare towards information and cyber warfare, could carry immense implications for the use of AI. However useful or important the use of AI may be, the ICRC emphasizes the need for a human-centred approach to any use of technologies that are essentially capable of making their own decisions. The notion is that AI could be used to aid and expedite human decision-making, but never to replace human judgement. That human control and judgement remain intact in such matters of war is vital to preserving integrity and furthering the goals of International Humanitarian Law.

The evolution of technology entails its use in almost every aspect of our lives, including the development of IHL. History demonstrates that with the emergence of new technology, and the implementation and adoption of its uses, new challenges present themselves. With the advent of AI, humans have tended to incorporate the new technology into military tactics and operations. The significant increase in the lethal effectiveness of these new technology-based weapon systems, typically referred to as Autonomous Weapon Systems (AWS), could partially or entirely eliminate human intervention in decision-making processes.

In line with the advent of such new technology and its potential military application, IHL has developed a set of rules that apply to the employment of AWS, setting a number of reviews4 in place prior to the deployment of such systems. These include:

1. Legal Review: The First Additional Protocol to the Geneva Conventions (AP I) dictates that States must fulfil certain obligations before deploying a new weapon system or any new means or method of warfare. The legality of the weapon system in question must be assessed against the following criteria:
● It must not be prohibited by IHL;
● It must not cause superfluous injury or unnecessary suffering to people or the environment;
● It must not have the effects of indiscriminate attacks;
● It must accord with the principles of humanity and the dictates of public conscience.
These rules entail that the new weapon system must be incorporated into the legal framework of IHL without exception.

4
https://blogs.icrc.org/law-and-policy/2019/05/02/ai-weapon-ihl-legal-regulation-chinese-perspective/
2. Precautions During Employment: It is an absolute must that the errors of AI can be attributed to the party that employs it. IHL strives to ensure that no party can dodge its responsibilities by using ‘machine error’ as an excuse.
3. Accountability: Since humans are the ones who deploy AWS in warfare, IHL has set forth rules assigning responsibility for mistakes that occur in their use. However, the use of AWS involves many parties: manufacturers, designers, and operators (end users). It therefore has to be specifically determined who is responsible. Many researchers have concluded that the primary responsibility lies with the operators, in line with the principle that ‘in any armed conflict, the right of the Parties to the conflict to choose methods or means of warfare is not unlimited’. Nevertheless, it is not that simple: an AWS usually has a certain degree of autonomy, and the more autonomy an AWS is granted, the higher the design and programming standards must be to ensure effectiveness.
4. Ethical Review: Lethal AWS, as autonomous weapon systems, pose a serious risk to the integrity of their use. However intelligent these machines may be, it will never be completely possible for them to understand the right to life; lives will appear to a machine only as numbers in its computations. AI will never fully internalize universally accepted human values and principles, and can therefore never be expected to weigh military priority, necessity and proportionality. However human-like these systems may appear, this cannot negate the fact that they are not human.

The deployment of lethal AWS must be meticulously assessed in advance, since it is obvious that not all States have equal access to technology. The legal review must therefore be extremely thorough in each assessment; otherwise, moral integrity will be at stake.
