
Weaponisation of Artificial Intelligence

Technological development has become a rat race. In the competition to lead the emerging technology race, and against the backdrop of futuristic warfare, artificial intelligence (AI) is rapidly becoming the center of the global power play. Across many nations, autonomous weapon systems (AWS) are advancing rapidly, and this progress is proving highly destabilizing. It not only threatens the world community as a whole but also raises complex challenges for computer scientists and engineers, in addition to being a security threat to the future of humanity.

It is always a simple piece of code that needs to be inserted into the bots at work.

"AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies." - Sam Altman
Role of Programmers and Programming
Amidst complex security challenges and the sea of unknowns coming our way, what remains fundamental for the safety and security of the human race is the role of programmers and programming, along with the integrity of semiconductor chips. The reason is that programmers can define and determine the nature of AWS (at least in the beginning), until AI begins to program itself.

However, if a programmer intentionally or accidentally programs an autonomous weapon to operate in violation of current and future humanitarian laws, how will humans control the weaponisation of AI? Furthermore, as AWS are software-centered, where should the responsibility for errors in, and manipulation of, their design and use lie? This brings us to a very fundamental question: if and when an autonomous system kills, who is responsible for the killing, and is such a technology really worth it? Is that the world we want our children to be part of? So, as future engineers, we need to be cautious while dealing with ever-developing technologies such as artificial intelligence.
The Bigger Picture - The Future of AI in Warfare and Ethics
The future of AI in military systems is directly tied to the ability of engineers to design autonomous systems that demonstrate an independent capacity for knowledge- and expert-based reasoning. No such autonomous systems are currently in operation. Most ground robots are teleoperated, meaning that a human still directly controls the robot from some distance away, as though via a virtual extension cord. Most military UAVs are only slightly more sophisticated: they have some low-level autonomy that allows them to navigate, and in some cases land, without human intervention, but almost all require significant human intervention to execute their missions.

AI weapons, especially lethal autonomous weapon systems, pose a significant challenge to human ethics. AI weapons have no human feelings, and there is a higher chance that their use will result in violations of the rules of international humanitarian law (IHL) on methods and means of warfare. For example, they can hardly identify a human's willingness to fight, or understand the historical, cultural, religious and humanistic values of a specific object. Consequently, they cannot be expected to respect the principles of military necessity and proportionality. It is almost impossible for them to truly understand the meaning of the right to life: machines can be repaired and reprogrammed repeatedly, but life is given to humans only once. From this perspective, even though the use of non-lethal AI weapons may still be possible, highly lethal AI weapons must be totally prohibited at both the international and national levels in view of their high-level autonomy.

Atharva Kulkarni
COMP B 26
