
Cyber warfare

One of the main issues with cyber attacks is that the only requirements for initiating one are a
computer and an internet connection. This makes anyone with the required skills a potential
"cyber terrorist", and, even more importantly, means a cyber attack can originate from anywhere
on the planet.

It is estimated that the worldwide damage caused by cyber attacks amounts to 445 billion
dollars, roughly 1% of the world's total income.

Besides attacks on a country's cyber infrastructure, its civilians, economy and government, the
dangers of cyber attacks increase most dramatically when it comes to the militaries of the world.
Modern military intelligence depends on the integration of state-of-the-art computers and
electronics, leaving militaries around the globe susceptible to cyber attacks. Going a step
further, the development of UAVs, or drones, has boomed over the last decade.

One of the major problems with guaranteeing cybersecurity is the sheer amount of data that makes up
cyberspace and, consequently, the difficulty of monitoring it all. The United States has been better
able to monitor cyberspace than many other nations, but this has created some difficulties within the
international system. Some nations view America as cyberspace's greatest protector, while others
view it as its greatest threat. US spying practices targeting foreign leaders have only increased this
worry. Also, since most of the servers that host the Internet reside within the United States, there
is concern that the US holds an unfair monopoly on cyberspace ownership.

Increasingly, it has been argued that the Internet needs to be governed by an international agency
answerable to the international system as a whole rather than to individual parties. The
Non-Aligned Movement has expressly stated the need for independent control of some parts of its
internet, both to protect defense secrets and to guarantee internet use for the growth of its
economy. However, the makeup of such a body is still being debated.

Another major problem with guaranteeing cybersecurity is the question of how to hold nations and
international actors accountable for their actions. Nations like Russia and China believe cyberspace
should be controlled locally by national governments and should respect cultural norms and
national policy agendas where a state determines the need for this. In much of the West, people
believe in a free Internet, but in less democratic countries leaders may feel threatened by a free
internet and wish to control it directly.

This, in turn, has sparked debate around the world about how much freedom individuals are
willing to give up in order to maintain security online. Originally, the Internet was a completely free
place where individuals could express themselves and devise applications never thought of
before. As the technology has become more widespread and available, dangers have arisen.
There is a large debate concerning how much freedom should be allowed in cyberspace. If governments
took more control over cyberspace, they could most assuredly be more effective in improving
cybersecurity, but there is a risk they would also decrease the level of freedom permitted on the
Internet. This debate is especially pertinent in the European Union, where individuals are asking
where to draw the line between security and freedom of expression.

There are many challenges to creating an international framework for cybersecurity. Though the
challenges are great, the potential danger of doing nothing is far greater. The problems posed by
cybercrime are serious, but they are solvable. It is hoped the international community can put aside
its differences and create a free and open Internet which is safe from cybercrime.

Possible Solutions

One possible solution is exchanging information on national policies, organizations and structures,
good practices, and relevant processes concerning cybersecurity. For instance, the United States
exchanged white papers on cyber defense with Russia in 2012, and Germany conducted a similar
exchange with Russia in 2013. Another is creating bilateral or multilateral agreements on
cybersecurity and the management of the cyber domain: for example, the US established the
first-ever bilateral agreement on cybersecurity with Russia in 2013.

Humanitarian aspect

AI weapons must be incorporated into the legal framework of IHL with no exceptions. The principles
and rules of IHL should and shall apply to AI weapons.

Precautions during employment

Humans will make mistakes. The same is true for machines, however 'intelligent' they are. Since AI
weapons are designed, manufactured, programmed and employed by humans, the consequences and legal
responsibilities arising from their illegal acts must be attributed to humans. Humans should not use
the 'error' of AI systems as an excuse to dodge their own responsibilities; that would not be
consistent with the spirit and value of the law. Accordingly, AI weapons, or weapon systems, should
not be characterized as 'combatants' under IHL and thereby take on legal responsibility. In any
circumstance, wrongful targeting by AI weapon systems is not a problem of the weapon itself.
Therefore, when employing AI weapon systems, programmers and end users are under a legal obligation
to take all feasible precautionary measures to ensure such employment accords with the fundamental
rules of IHL (Art 57 AP I).

Accountability after employment

If humans are responsible for the employment of AI weapons, who among them holds responsibility?
Is it the designers, the manufacturers, the programmers or the operators (end users)? In the view of
many Chinese researchers, the end users must bear primary responsibility for the wrongful targeting
of AI weapons. This argument derives from Article 35(1) of AP I, which provides that 'in any armed
conflict, the right of the Parties to the conflict to choose methods or means of warfare is not
unlimited'. In the case of fully autonomous AI weapon systems operating without any human control,
those who decide to employ them (normally senior military commanders and civilian officials) bear
individual criminal responsibility for any serious violations of IHL. Additionally, the States to
which they belong incur State responsibility for such serious violations attributable to them.
Moreover, the targeting behaviour of AI weapon systems is closely tied to their design and
programming. The more autonomy they have, the higher the design and programming standards must be
in order to meet IHL requirements. For this purpose, the international community is encouraged to
adopt a new convention specific to AI weapons, along the lines of the Convention on Certain
Conventional Weapons and its Protocols, the Anti-Personnel Mine Ban Convention or the Convention on
Cluster Munitions. At the very least, under the framework of such a new convention, the design
standards of AI weapons shall be formulated; States shall be responsible for the design and
programming of those weapons with high levels of autonomy; and those States that manufacture and
transfer AI weapons in a manner inconsistent with relevant international law, including IHL and the
Arms Trade Treaty, shall incur responsibility. Furthermore, States should also provide legal
advisors to the designers and programmers. In this regard, the existing IHL framework does not
fully respond to such new challenges. For this reason, in addition to the development of IHL rules,
States should also be responsible for developing their national laws and procedures, in particular
transparency mechanisms. On this matter, those States advanced in AI technology should play an
exemplary role.

Ethical aspect

AI weapons, especially lethal autonomous weapon systems, pose a significant challenge to human
ethics. AI weapons do not have human feelings, and there is a higher chance that their use will
result in violations of IHL rules on methods and means. For example, they can hardly identify a
human's willingness to fight, or understand the historical, cultural, religious and humanistic
values of a specific object. Consequently, they cannot be expected to respect the principles of
military necessity and proportionality. They even significantly impact the universal human values
of equality, liberty and justice. In other words, no matter how much they look like humans, they
are still machines. It is almost impossible for them to really understand the meaning of the right
to life, because machines can be repaired and reprogrammed repeatedly, but life is given to humans
only once. From this perspective, even though the employment of non-lethal AI weapons may still be
permissible, highly lethal AI weapons must be totally prohibited at both the international and
national levels in view of their high-level autonomy. However, it should be acknowledged that this
may not be persuasive reasoning, because it is essentially not a legal argument but an ethical one.

Where should responsibility for errors of design and use lie in the spectrum between
1) the software engineers writing the code that tells a weapons system when and
against whom to target an attack, 2) the operators in the field who carry out such
attacks, and 3) the commanders who supervise them?
