NHSDLC Summer National 2022 Main Research Packet



Resolved, The United Nations ought to ban the use of lethal autonomous weapons


Table of Contents
Topic Analysis
What are lethal autonomous weapons?
What progress has the United Nations already made? And what are some possible arguments?
The question of quantified impacts
PRO Arguments
Don’t Let Robots Pull the Trigger
A.I. drones used in the Ukraine war raise fears of killer robots wreaking havoc across future battlefields
Killer robots are fast becoming a reality – we must stop this from happening if we want to stop a global AI arms race
Robots that Kill: The Case for Banning Lethal Autonomous Weapon Systems
Four Reasons To Ban Lethal Autonomous Weapon Systems (LAWS)
Autonomous weapons that kill must be banned, insists UN chief
Lethal autonomous weapons and World War III: it's not too late to stop the rise of 'killer robots'
Why we need to regulate non-state use of arms
CON Arguments
The world just blew a ‘historic opportunity’ to stop killer robots—and that might be a good thing
Banning autonomous weapons is not the answer
Banning Autonomous Weapons is not the Solution
Do Killer Robots Save Lives?


Topic Analysis
The Nationals resolution this year is: “Resolved: The United Nations should ban the use of lethal
autonomous weapons.” At first glance, this topic is an interesting one. It deals with warfare, a
theme becoming more common every year, and it directly engages with the United Nations, the
organization expected to solve world problems when they come up.
But it’s clear that we don’t have a ban in place right now, and that’s why we’re having this
debate.

What are lethal autonomous weapons?


According to the Future of Life Institute, lethal autonomous weapons (“LAWs”) are “weapons
systems that use artificial intelligence to identify, select, and kill human targets without human
intervention.”1 These weapons use computer algorithms to “calculate” the targets–the enemies–
and kill them in warfare.
Based on this definition, LAWs have both benefits and harms. On the one hand, scholars have
argued that “autonomous weapons systems would create military advantage [sic] because fewer
warfighters would be needed, the battlefield could be expanded into previously inaccessible
areas, and less casualties could occur by removing humans from dangerous missions.”2 And this
reasoning makes sense. If fewer people are fighting in wars – instead, just robots fighting each other – then fewer human lives will be lost, and that is a clear benefit.
But on the other hand, scholars also argue that:
[LAWs] do not require costly or hard-to-obtain raw materials, making them extremely
cheap to mass-produce. They’re also safe to transport and hard to detect. Once
significant military powers begin manufacturing, these weapons systems are bound to
proliferate. They will soon appear on the black market, and then in the hands of terrorists
wanting to destabilise nations, dictators oppressing their populace, and/or warlords
wishing to perpetrate ethnic cleansing.3

1 Future of Life Institute 2021 (Future of Life Institute, 11-30-2021, https://futureoflife.org/lethal-autonomous-weapons-systems/, Lethal Autonomous Weapons Systems)
2 Coley Felt 2020 (Coley Felt, 2-14-2020, https://jsis.washington.edu/news/autonomous-weaponry-are-killer-
robots-in-our-future/, Henry M. Jackson School of International Studies, Autonomous Weaponry: Are Killer Robots
in Our Future?)
3 Future of Life Institute 2021 (Future of Life Institute, 11/27/2021, https://futureoflife.org/2021/11/27/10-reasons-
why-autonomous-weapons-must-be-stopped/?cn-reloaded=1, Future of Life Institute, 10 Reasons Why Autonomous
Weapons Must be Stopped)

This line of reasoning also makes sense, though there’s a little bit more complexity. If countries
manufacture autonomous weapons, which is happening in the status quo, then parts used in these
weapons will appear on the black market, and terrorists or non-state actors might find a way to
purchase these illegally sold parts. They can then make their own lethal autonomous weapons.
The nuance here is that even if using lethal autonomous weapons gets banned by the UN, these
parts may still circulate on the black market, as banning the usage of LAWs doesn’t necessarily
entail banning the manufacturing of them.
And if terrorists are the only parties using lethal autonomous weapons, it should be quite clear
what the harm is: they gain more global power and influence and use violence to harm others.
Arguments both for and against LAWs go well beyond these two examples; researching them is your job as a debater. But even stacking just these two points against each other, this
resolution will be challenging, both to research and to debate. That’s what makes Nationals
difficult.

What progress has the United Nations already made? And what are some possible
arguments?
Here are two examples of international diplomatic progress.
First, the UN Convention on Certain Conventional Weapons (CCW) “regulates incendiary devices, blinding lasers and other armaments thought to be overly harmful.” The convention has debated the issue of LAWs since 2013. Second, the Campaign to Stop Killer Robots, a “coalition of 89 nongovernmental organizations from 50 countries that has pressed for a [...] prohibition [of LAWs].”4
These efforts would be integral to solving any harms inherent to LAWs. Human Rights Watch writes:
A legally binding instrument is the optimal framework for dealing with the many serious
challenges raised by fully autonomous weapons. A new international ban treaty could lay
down explicit rules to ensure appropriate constraints on autonomy in weapons systems
and resolve differing views on human control over the use of force. Most importantly, a
new treaty would show that states are serious about responding appropriately and with
urgency to this existential threat to humanity5

4 Scientific American 2019 (Scientific American, 3-1-2019, https://www.scientificamerican.com/article/dont-let-robots-pull-the-trigger/, Scientific American, Don’t Let Robots Pull the Trigger)
5 Human Rights Watch 2020 (Human Rights Watch, 8-10-2020, https://www.hrw.org/report/2020/08/10/stopping-
killer-robots/country-positions-banning-fully-autonomous-weapons-and, Stopping Killer Robots)

Both of these are directions for your research. When pursuing them, place specific emphasis on why progress is currently at a standstill – that is, on why the usage of LAWs still hasn’t been banned by the UN.
That’s where world powers like the United States and Russia come into play. Human Rights Watch writes: “A handful of military powers, most notably Russia and the United States, have firmly rejected proposals to negotiate a new CCW protocol or standalone international treaty.” 5

This is a direct piece of reasoning as to why progress cannot be made right now: these military powers DO NOT want a ban on lethal autonomous weapons, and given that they have veto power, nothing will happen.
On both sides, this leads to an important point to take note of: fiat. Given that this is a policy
resolution, shown by the word “should” and the presence of an actor (the UN), the PRO team has
the ability to “fiat” the resolution. That is, they can assume that as soon as the judge votes pro,
using lethal autonomous weapons is now banned. No gatekeeping from world powers. Yet, this
leads to a question. Given that world powers are opposing this ban right now, will a ban on using
LAWs really stop these countries from doing what they want to do? The UN can’t sanction them – they have veto power – so how will it enforce this ban?
You can use this line of reasoning to think deeper about the world created on your side. Is it
really better than con’s? You have to show the judge that it is.
As CON, a similar line of reasoning applies. If you, perhaps, argue that such a ban actually
harms developing countries because they aren’t able to get away with violating international law
in a way that more powerful countries can, you can show that voting PRO might actually sway
power more towards global powers, and you can argue that this is bad. Or, if you think critically,
you can also approach this resolution by asking the question: why the UN? If you can prove that
the UN is incapable of banning the usage of LAWs and a different actor can do a way better job
(and that the two are mutually exclusive), that can be another reason for the judge to vote CON.
Whichever side you’re on, there are countless ways to argue. Remember, this debate is not
happening in a vacuum. Countries have their own incentives, and United Nations action might,
or might not, change these incentives. With more people dying due to war, this resolution is a chance to explore deadly conflicts that we, fortunate enough to live in safety and comfort, don’t face on a daily basis.

The question of quantified impacts
Recent debates often come down to the idea of “quantified impacts.” For some resolutions, such as those that deal heavily with economic impacts, this is a valid strategy.
For this resolution especially, don’t fall into that trap. There’s no way for any researcher to answer
the question “how many lives will banning lethal autonomous weapons save?” or “how many
more people would have died if [insert a country] used lethal autonomous weapons against
[insert another country]?” because these can’t be answered without, well, simulating warfare.
War is far too nuanced for any impact to simply be a number – these are lives at stake.
If you can find an impact with numbers, great. You can explain to the judge why your impact is
large and why it outweighs your opponent’s. But if you can’t, you should not let your opponents
tell the judge that this is a reason you lose. To some, “a child dies every 10 minutes as a result of
destroyed health infrastructure” loses to “462,000 children are starving” because “we don’t know
the exact number of children who died”; to others, “some people undergo forced sterilization”
loses to “there are 5000 new cases of cholera per day” because the former doesn’t have numbers.
Tell the judge that these are human lives being ruined and dignity being trampled; just because your opponent’s number appears bigger doesn’t mean your impact is any less important. Your evidence not having “big fancy numbers” should not disqualify you from
winning any debate round. And especially in a debate over warfare, even one additional death is
a big impact because that’s a person with their own experiences, families, children – it’s a
community now in shambles, or one more already-painful funeral that has to be planned.
In previous seasons, such as the patents topic, it was easy to find evidence suggesting, say, an increase in crop yields caused by GMO patents. Gathering those numbers doesn’t kill people. For this resolution, you will likely struggle to find evidence suggesting that banning lethal autonomous weapons will save [insert number] lives. Such research is impossible unless the researcher, say, breaches ethical protocols and randomly assigns people to either be bombed or not be bombed.
Whichever direction your research takes you, do not be discouraged by a lack of numbers; rather,
think of it as another, arguably more engaging, way to debate.


PRO Arguments

Don’t Let Robots Pull the Trigger
Scientific American 2019 (Scientific American, 3-1-2019,
https://www.scientificamerican.com/article/dont-let-robots-pull-the-trigger/, Scientific American,
Don’t Let Robots Pull the Trigger)

The killer machines are coming. Robotic weapons that target and destroy without human
supervision are poised to start a revolution in warfare comparable to the invention of gunpowder
or the atomic bomb. The prospect poses a dire threat to civilians—and could lead to some of the
bleakest scenarios in which artificial intelligence runs amok. A prohibition on killer robots, akin
to bans on chemical and biological weapons, is badly needed. But some major military powers
oppose it.
The robots are no technophobic fantasy. In July 2017, for example, Russia's Kalashnikov Group
announced that it had begun development of a camera-equipped 7.62-millimeter machine gun
that uses a neural network to make “shoot/no-shoot” decisions. An entire generation of self-
controlled armaments, including drones, ships and tanks, is edging toward varying levels of
autonomous operation. The U.S. appears to hold a lead in R&D on autonomous systems—with
$18 billion slated for investment from 2016 to 2020. But other countries with substantial arms
industries are also making their own investments.
Military planners contend that “lethal autonomous weapons systems”—a more anodyne term—
could, in theory, bring a detached precision to war fighting. Such automatons could diminish the
need for troops and reduce casualties by leaving the machines to battle it out. Yet control by
algorithm can potentially morph into “out of control.” Existing AI cannot deduce the intentions
of others or make critical decisions by generalizing from past experience in the chaos of war.
The inability to read behavioral subtleties to distinguish civilian from combatant or friend versus
foe should call into question whether AIs should replace GIs in a foreseeable future mission. A
killer robot of any kind would be a trained assassin, not unlike Arnold Schwarzenegger in The
Terminator. After the battle is done, moreover, who would be held responsible when a machine
does the killing? The robot? Its owner? Its maker?
With all these drawbacks, a fully autonomous robot fashioned using near-term technology could
create a novel threat wielded by smaller nations or terrorists with scant expertise or financial
resources. Swarms of tiny, weaponized drones, perhaps even made using 3-D printers, could
wreak havoc in densely populated areas. Prototypes are already being tested: the U.S.
Department of Defense demonstrated a nonweaponized swarm of more than 100 micro drones in
2016. Stuart Russell of the University of California, Berkeley, a prominent figure in AI research,
has suggested that “antipersonnel micro robots” deployed by just a single individual could kill
many thousands and constitute a potential weapon of mass destruction.

Since 2013 the United Nations Convention on Certain Conventional Weapons (CCW), which
regulates incendiary devices, blinding lasers and other armaments thought to be overly harmful,
has debated what to do about lethal autonomous weapons systems. Because of opposition from
the U.S., Russia and a few others, the discussions have not advanced to the stage of drafting
formal language for a ban. The U.S., for one, has argued that its policy already stipulates that
military personnel retain control over autonomous weapons and that premature regulation
could put a damper on vital AI research.
A ban need not be overly restrictive. The Campaign to Stop Killer Robots, a coalition of 89
nongovernmental organizations from 50 countries that has pressed for such a prohibition,
emphasizes that it would be limited to offensive weaponry and not extend to antimissile and
other defensive systems that automatically fire in response to an incoming warhead.
The current impasse has prompted the campaign to consider rallying at least some nations to
agree to a ban outside the forum provided by the CCW, an option used before to kick-start
multinational agreements that prohibit land mines and cluster munitions. A preemptive ban on
autonomous killing machines, with clear requirements for compliance, would stigmatize the
technology and help keep killer robots out of military arsenals.
Since it was first presented at the International Joint Conference on Artificial Intelligence in
Stockholm in July, 244 organizations and 3,187 individuals have signed a pledge to “neither
participate in nor support the development, manufacture, trade, or use of lethal autonomous
weapons.” The rationale for making such a pledge was that laws had yet to be passed to bar killer
robots. Without such a legal framework, the day may soon come when an algorithm makes the
fateful decision to take a human life.

A.I. drones used in the Ukraine war raise fears of killer robots wreaking havoc across
future battlefields
Jeremy Kahn 2022 (Jeremy Kahn, 3-29-2022, https://fortune.com/2022/03/29/artificial-
intelligence-drones-autonomous-weapons-loitering-munitions-slaughterbots-ukraine-war/,
Fortune, A.I. drones used in the Ukraine war raise fears of killer robots wreaking havoc across
future battlefields)

The explosive-packed drone lay belly-up, like a dead fish, on a Kyiv street, its nose crushed and
its rear propeller twisted. It had crashed without its deadly payload detonating, perhaps owing to
a malfunction or because Ukrainian forces had shot it down.
Photos of the drone were quickly uploaded to social media, where weapons experts identified it
as a KUB-BLA “loitering munition” made by Zala Aero, the dronemaking arm of Russian
weapons maker Kalashnikov. Colloquially referred to as a “kamikaze drone,” it can fly
autonomously to a specific area and then circle for up to 30 minutes.
The drone’s operator, remotely monitoring a video feed from the craft, can wait for enemy
soldiers or a tank to appear below. In some cases, the drones are equipped with A.I. software that
lets them hunt for particular kinds of targets based on images that have been fed into their
onboard systems. In either case, once the enemy has been spotted and the operator has chosen to
attack it, the drone nose-dives into its quarry and explodes.
The war in Ukraine has become a critical proving ground for increasingly sophisticated loitering
munitions. That’s raised alarm bells among human rights campaigners and technologists who
fear they represent the leading edge of a trend toward “killer robots” on the battlefield—weapons
controlled by artificial intelligence that autonomously kill people without a human making the
decision.
Militaries worldwide are keeping a close eye on the technology as it rapidly improves and its
cost declines. The selling point is that small, semiautonomous drones are a fraction of the price
of, say, a much larger Predator drone, which can cost tens of millions of dollars, and don’t
require an experienced pilot to fly them by remote control. Infantry soldiers can, with just a little
bit of training, easily deploy these new autonomous weapons.
“Predator drones are superexpensive, so countries are thinking, ‘Can I accomplish 98% of what I
need with a much smaller, much less expensive drone?’ ” says Brandon Tseng, a former Navy
SEAL who is cofounder and chief growth officer of U.S.-based Shield AI, a maker of small
reconnaissance drones that use A.I. for navigation and image analysis.
But human rights groups and some computer scientists fear the technology could represent a
grave new threat to civilians in conflict zones, or maybe even the entire human race.

“Right now, with loitering munitions, there is still a human operator making the targeting
decision, but it is easy to remove that. And the big danger is that without clear regulation, there is
no clarity on where the red lines are,” says Verity Coyle, senior adviser to Amnesty
International, a participant in the Stop Killer Robots campaign.
The global market for A.I.-enabled lethal weapons of all kinds is growing quickly, from nearly
$12 billion this year to an expected $30 billion by the end of the decade, according to Allied
Market Research. In the U.S. alone, annual spending on loitering munitions, totaling about $580
million today, will rise to $1 billion by the end of the decade, Grand View Research said.
Dagan Lev Ari, the international sales and marketing director for UVision, an Israeli defense
company that makes loitering munitions, says demand had been inching up until 2020, when war
broke out between Armenia and Azerbaijan. In that conflict, Azerbaijan used advanced drones
and loitering munitions to decimate Armenia’s larger arsenal of tanks and artillery, helping it
achieve a decisive victory.
That got many countries interested, Lev Ari says. It also helps that the U.S. has begun major
purchases, including UVision’s Hero family of kamikaze drones, as well as the Switchblade,
made by rival U.S. firm AeroVironment. The Ukraine war has further accelerated demand, Lev
Ari adds. “Suddenly, people see that a war in Europe is possible, and defense budgets are
increasing,” he says.
Although less expensive than certain weapons, loitering munitions are not cheap. For example,
each Switchblade costs as much as $70,000, after the launch and control systems plus munitions
are factored in, according to some reports.
The U.S. is said to be sending 100 Switchblades to Ukraine. They would supplement that
country’s existing fleet of Turkish-made Bayraktar TB2 drones, which can take off, land, and
cruise autonomously, but need a human operator to find targets and give the order to drop the
missiles or bombs they carry.
Loitering munitions aren’t entirely new. More primitive versions have been around since the
1960s, starting with a winged missile designed to fly to a specific area and search for the radar
signature of an enemy antiaircraft system. What’s different today is that the technology is far
more sophisticated and accurate.
In theory, A.I.-enabled weapons systems may be able to reduce civilian war casualties.
Computers can process information faster than humans, and they are not affected by the
physiological and emotional stress of combat. They might also be better at determining, in the
heat of battle, whether the shape suddenly appearing from behind a house is an enemy soldier or
a child.

But in practice, human rights campaigners and many A.I. researchers warn, today’s machine-
learning–based algorithms can’t be trusted with the most consequential decision anyone will ever
face: whether to take a human life. Image recognition software, while equaling human abilities in
some tests, falls far short in many real-world situations—such as rainy or snowy conditions, or
dealing with stark contrasts between light and shadow.
It can often make strange mistakes that humans never would. In one experiment, researchers
managed to trick an A.I. system into thinking that a turtle was actually a rifle by subtly altering
the pattern of pixels in the image.
Even if target identification systems were completely accurate, an autonomous weapon would
still pose a serious danger unless it were coupled with a nuanced understanding of the entire
battlefield. For instance, the A.I. system may accurately identify an enemy tank, but not
understand that it’s parked next to a kindergarten, and so should not be attacked for fear of
killing civilians.
Some supporters of a ban on autonomous weapons have evoked the danger of “slaughterbots,”
swarms of small, relatively inexpensive drones, configured either to drop an antipersonnel
grenade or as loitering munitions. Such swarms could, in theory, be used to kill everyone in a
certain area, or to commit genocide, killing everyone with certain ethnic features, or even use
facial recognition to assassinate specific individuals.
Max Tegmark, a physics professor at MIT and cofounder of the Future of Life Institute, which
seeks to address “existential risks” to humanity, says swarms of slaughterbots would be a kind of
“poor man’s weapon of mass destruction.” Because such autonomous weapons could destabilize
the existing world order, he hopes that powerful nations—such as the U.S. and Russia—that
have been pursuing other kinds of A.I.-enabled weapons, from robotic submarines to
autonomous fighter jets, may at least agree to ban these slaughterbot drones and loitering
munitions.
But so far, efforts at the United Nations to enact a restriction on the development and sale of
lethal autonomous weapons have foundered. A UN committee has spent more than eight years
debating what, if anything, to do about such weapons and has yet to reach any agreement.
Although as many as 66 countries now favor a ban, the committee operates by consensus, and
the U.S., the U.K., Russia, Israel, and India oppose any restrictions. China, which is also
developing A.I.-enabled weapons, has said it supports a ban, but absent a treaty, will not
unilaterally forgo them.
As for the dystopian future of slaughterbots, companies building loitering munitions and other
A.I.-enabled weapons say they are meant to enhance human capabilities on the battlefield, not
replace them. “We don’t want the munition to attack by itself,” Lev Ari says, although he

acknowledges that his company is adding A.I.-based target recognition to its weapons that would
increase their autonomy. “That is to assist you in making the necessary decision,” he says.
Lev Ari points out that even if the munition is able to find a target, say, an enemy tank, it doesn’t
mean that it is the best target to strike. “That particular tank might be inoperable, while another
nearby may be more of a threat,” he says.
Noel Sharkey, emeritus professor of computer science at the University of Sheffield in the U.K.,
who is also a spokesperson for the Stop Killer Robots campaign, says automation is speeding the
pace of battle to the point that humans can’t respond effectively without A.I. helping them
identify targets. And inevitably one A.I. innovation is driving the demand for more, in a sort of
arms race. Says Sharkey, “Therein lies the path to a massive humanitarian disaster.”

Killer robots are fast becoming a reality – we must stop this from happening if we want to
stop a global AI arms race
Ariel Conn 2018 (Ariel Conn, 9-2-2018, https://metro.co.uk/2018/09/02/killer-robots-are-fast-
becoming-a-reality-we-must-stop-this-from-happening-if-we-want-to-stop-a-global-ai-arms-
race-7903717/, Metro, Killer robots are fast becoming reality)

Killer robots. It’s a phrase that’s terrifying and one that most people think of as still in the realm
of science fiction.
Yet weapons built with artificial intelligence (AI) – weapons that could identify, target, and kill a
person all on their own – are quickly moving from sci-fi to reality.
To date, no weapons exist that can specifically target people. But there are weapons that can
track incoming missiles or locate enemy radar signals, and these weapons can autonomously
strike these non-human threats without any person involved in the final decision.
Experts predict that in just a few years, if not sooner, this technology will be advanced enough to
use against people.
Over the last few years, delegates at the United Nations have debated whether to consider
banning killer robots, more formally known as lethal autonomous weapons systems (LAWS).
This week, delegates met again to consider whether more meetings next year could lead to
something more tangible – a political declaration or an outright ban.
Meanwhile, those who would actually be responsible for designing LAWS – the AI and robotics
researchers and developers – have spent these years calling on the UN to negotiate a treaty
banning LAWS.
More specifically, nearly 4,000 AI and robotics researchers called for a ban on LAWS in 2015;
in 2017, 116 CEOs of AI companies asked the UN to ban LAWS; and in 2018, more than 150
AI-related organisations and nearly 3,100 individuals took that call a step further and pledged not
to be involved in LAWS development.
And AI researchers have plenty of reasons for their consensus that the world should seek a ban
on lethal autonomous weapons. Principal among these is that AI experts tend to recognise how
dangerous and destabilising these weapons could be.
The weapons could be hacked. The weapons could fall into the hands of ‘bad actors’. The
weapons may not be as ‘smart’ as we think and could unwittingly target innocent civilians.
Because the materials necessary to build the weapons are cheap and easy to obtain, military
powers could mass-produce these weapons, increasing the likelihood of proliferation and mass

killings. The weapons could enable assassinations or, alternatively, they could become weapons
of oppression, allowing dictators and warlords to subdue their people.
But perhaps the greatest risk posed by LAWS is the potential to ignite a global AI arms race.
For now, governments insist they will ensure that testing, validation, and verification of these
weapons is mandatory. However, these weapons are not only technologically novel, but also
transformative; they have been described as the third revolution in warfare, following gun
powder and nuclear weapons. LAWS have the potential to become the most powerful types of
weapons the world has seen.
Varying degrees of autonomy already exist in weapon systems around the world, and levels of
autonomy and advanced AI capabilities in weapons are increasing rapidly.
If one country were to begin substantial development of a LAWS program – or even if the
program is simply perceived as substantial by other countries – an AI arms race would likely be
imminent.
During an arms race, countries and AI labs will feel increasing pressure to find shortcuts around
safety precautions. Once that happens, every threat mentioned above becomes even more likely,
if not inevitable.
As stated in the Open Letter Against Lethal Autonomous Weapons:
The key question for humanity today is whether to start a global AI arms race or to
prevent it from starting. If any major military power pushes ahead with AI weapon
development, a global arms race is virtually inevitable, and the endpoint of this
technological trajectory is obvious: autonomous weapons will become the Kalashnikovs
of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw
materials, so they will become ubiquitous and cheap for all significant military powers to
mass-produce. It will only be a matter of time until they appear on the black market and
in the hands of terrorists, dictators wishing to better control their populace, warlords
wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such
as assassinations, destabilising nations, subduing populations and selectively killing a
particular ethnic group.
Most countries have expressed their strong desire to move from talking about this topic to
reaching an outcome. There have been many calls from countries and groups of countries to
negotiate a new treaty to either prohibit LAWS and/or affirm meaningful human control over the
weapons. Some countries have suggested other measures such as a political declaration.
But a few countries – especially Russia, the United States, South Korea, Israel, and Australia –
are obfuscating the process, which could lead us closer to an arms race. This is a threat we must
prevent.

Robots that Kill: The Case for Banning Lethal Autonomous Weapon Systems
Matthew Anzarouth 2022 (Matthew Anzarouth, 6-20-2022, https://harvardpolitics.com/robots-
that-kill-the-case-for-banning-lethal-autonomous-weapon-systems/, Harvard Political Review,
Robots that Kill: The Case for Banning Lethal Autonomous Weapon Systems)

In the days leading up to its withdrawal from Afghanistan, the U.S. military conducted a drone
strike that killed 10 civilians in Kabul. The timing of this tragedy, in the midst of the mass
evacuation from Afghanistan, casts doubt on the U.S. military’s promise to stop serving as a
global policeman. The Biden administration has not ended the “forever wars” — it has simply
elected to fight them with robots in the sky rather than boots on the ground.
Pointing to the drone strike in Kabul as prime evidence, many experts warn of the dangers of
Biden’s ‘over-the-horizon’ counterterrorism strategy, which uses imprecise semi-autonomous
drones to replace human soldiers and combat terrorists from afar. Little attention, however, is
being paid to an even more threatening weapon that may define the coming decades of war.
Soon, guided missiles and semi-autonomous drones may be replaced by fully autonomous
weapons that have the ultimate say over who lives and who dies.

What Are Lethal Autonomous Weapon Systems?


Lethal autonomous weapon systems are being introduced into military arsenals, and the United
States, Russia, South Korea, Israel and the United Kingdom have shown a keen interest in their
development. Unlike semi-autonomous drones, LAWS can select targets and attack them without
any human intervention.
These weapons are still in their infancy and, over time, will likely develop greater autonomy and
more capabilities. One type of autonomous weapon would, after being activated by a human
operator, fly around the world, identify its targets and fire missiles at them. An existing
preliminary version of this weapon is the Israeli Harpy, which is programmed to roam around in
the air in a predetermined loitering area, detecting and attacking enemy radar emitters. Political
scientist Michael C. Horowitz posits in Dædalus that as technology progresses militaries may
even use LAWS that serve as operations planning systems, autonomous battle systems that
“could decide the probability of winning a war and whether to attack, plan an operation and then
direct other systems — whether human or robotic — to engage in particular attacks.”
The appeal of LAWS to countries like the U.S. and Russia is quite intuitive. If a country can
fight wars with ruthless efficiency, accurately pick out terrorists from hundreds of feet in the sky,
and spare the lives of thousands of soldiers, why wouldn’t it do so? A closer inspection reveals
that the costs of this technology vastly outweigh the benefits.

The Danger in Killer Robots
The use of LAWS would lower the threshold for states going to war, increasing the likelihood of
conflict. Many philosophers, political scientists and governments have expressed the concern
that militaries will resort to conflict more often if they do not need to rely on soldiers and can use
LAWS instead. Domestic populations will be less wary of conflict if it no longer means seeing
fellow citizens risk their lives on the battlefield.
The threshold-lowering effect of LAWS is particularly relevant in the context of a current
bipartisan trend in the U.S. against intervention. It is plausible that without LAWS, the era of
U.S. unilateral interventions and the war on terror would come to an end. Recognizing the
failures of wars in Vietnam, Iraq and Afghanistan, politicians on both sides of the political
spectrum are pushing not to send troops abroad to risk their lives. But the option of using LAWS
and sidestepping the costs to a country’s soldiers threatens to reverse this anti-war trend and
provide militaries with a politically palatable way of fighting wars. There could be catastrophic
consequences if we liberate militaries from political constraints preventing them from going to
war.
The first wave of the proliferation of LAWS may simply look like the natural progression of our
current drone capabilities. For instance, Russia may have already used autonomous drones to
attack targets in Syria, but these weapons are only different from current semi-autonomous
drones in the greater degree of risk assumed by eliminating human intervention. In other
instances, however, the use of LAWS will present substantial advantages that make them
different in kind from drones as we know them. Consider, for example, Azerbaijan’s use of
Israeli-supplied IAI Harop drones in the war with Armenia in 2020. The loitering munition
system used by the military allowed tiny and hardly-detectable autonomous drones to circle over
the enemy’s defense line, pick out targets and attack them, an ability that proved decisive in
Azerbaijan’s victory in the war.
To understand what a world with LAWS will look like in the long term requires a bit of
imagination. Perhaps a post-withdrawal Afghanistan will involve weapons like the Harop drones
constantly roaming the skies and diving into the ground to take out targets. Or maybe we will see
the chilling predictions of science fiction come true. In their book AI 2041, writers Chen Qiufan
and Kai-Fu Lee express their fear that LAWS will fall into the hands of armed groups and
terrorists. They describe a “Unabomber-like scenario in which a terrorist carries out the targeted
killing of business elites and high-profile individuals,” using autonomous drones that rely on
facial recognition to identify their targets. Leading expert in artificial intelligence Toby Walsh
warns of these weapons falling into the hands of dictators and being used as tools of ethnic
cleansing.

Even if we assume that LAWS are operated primarily by legitimate militaries, additional
complications arise when we consider what happens in the case of unjust killings. Philosopher
Robert Sparrow argues that the autonomy of LAWS makes it impossible to hold anyone
accountable for illegitimate killings they commit. If the robot acted autonomously, tracing
accountability back to another agent seems morally objectionable and legally infeasible. But it
would also be unjust to not punish illegitimate killings. This dilemma presents a so-called
‘responsibility gap’, where no one can be held responsible for illegitimate killings, and wrongful
acts of war go undeterred.

Preventing The Next Arms Race


Despite these grave concerns, countries are pushing ahead in the research and development of
LAWS. With large military powers leading the race, there are two potential outcomes if this
trend goes uninterrupted. One is that LAWS become tools with which powerful militaries
destabilize other regions, starting a new chapter of the ‘forever wars’ without boots on the
ground. The second potential outcome is that LAWS become front and centre in conflict between
the large military powers leading the race. They may drag us into a new war between
superpowers without the mutually assured destruction that prevents nuclear warfare since LAWS
can engage in a series of smaller, yet still extremely impactful, attacks that will not be deterred
by the threat of retaliation.
The movement against LAWS is small, but it is growing. More and more countries have
expressed concern about the destabilizing effects of these weapons and stressed the need for a
collective agreement to rule them out, much like existing treaties that limit chemical, biological
and intermediate-range nuclear weapons. However, military powers like the U.S. and Russia
have blocked regulations on LAWS at the Convention on Conventional Weapons and are quietly
leading what some are calling the third revolution in warfare. The challenge in regulating or
banning LAWS, as with many forms of international cooperation, is overcoming collective
action problems. The development of LAWS seems like a textbook example of a “security
dilemma,” wherein one country perceives heightened security measures by another as a threat
and decides to adopt similar measures in response. Together, these factors increase the risk of
escalation to an outcome neither party desires. Our best hope in confronting this dilemma is to
foster discussions in international negotiations that expose to military superpowers the great risks
that LAWS present. While many countries may fear falling behind if they make the first move to
disarm and de-escalate, it is possible that when the stakes are sufficiently high and it is clear that
nobody, including dominant powers, is immune to the dangers of LAWS, we may see sufficient
international will to address them.

While LAWS still appear to be in their infancy, we are running out of time to prevent their
uncontrolled proliferation. Once one country uses these weapons to significantly tilt the playing
field in its favor, others may have no choice but to follow suit. It is therefore imperative that we
switch off the robots before they take over the battlefield and the horrors of science fiction
become reality.

Four Reasons To Ban Lethal Autonomous Weapon Systems (LAWS)
Robotics Biz 2019 (Roboticsbiz, 11-4-2019, https://roboticsbiz.com/four-reasons-to-ban-lethal-
autonomous-weapon-systems-laws/, RoboticsBiz, Four reasons to ban lethal autonomous
weapon systems (LAWS)) // AL 6-20-2022

LAWS (lethal autonomous weapon systems), also called “killer robots”, are a special kind of weapon that uses sensors and algorithms to autonomously identify, engage, and destroy a target without manual human intervention. To date, no such weapons exist. However, some weapons can already track incoming missiles and autonomously strike those threats without any person’s involvement. According to some experts, in just a few years, if not sooner, this technology will be advanced enough to be used against people.
Therefore, a growing number of states, as well as the United Nations (UN) Secretary-General, the International Committee of the Red Cross (ICRC), and non-governmental organizations, are appealing to the international community for regulation or a ban on LAWS due to a host of fundamental ethical, moral, legal, accountability and security concerns. The demand for a ban on killer robots has firm support from more than 70 countries; over 3,000 experts in robotics and artificial intelligence, including leading scientists such as Stephen Hawking, Elon Musk (Tesla) and Demis Hassabis (Google); 116 artificial intelligence and robotics companies; 160 religious leaders; and 20 Nobel Peace Laureates. China is the first permanent member of the UN Security Council to call for a legally binding instrument, similar to the CCW protocol on blinding laser weapons, within the CCW.
But why? Why should we ban LAWS in the first place? What are the risks posed by lethal
autonomous weapons? In this post, we will look at four reasons why we should ban lethal
autonomous weapon systems (LAWS) worldwide.

Predictability & Reliability


Fully autonomous weapons systems remain a source of fear, mostly because LAWS contain inherent imperfections and can never be entirely predictable or reliable. There is a level of uncertainty in LAWS, especially when technologies such as machine learning are used to guide their decision-making processes, meaning that no one can guarantee desirable results.
Besides, at a technical level, the question arises whether and how ethical standards and
international law could be incorporated into the algorithms guiding the weapons systems.
Experts argue that the technology needs to have a certain level of trust and confidence before it
can be used for military purposes. Highly unpredictable weapons systems are most likely not to
be used if they cannot guarantee a successful outcome. These weapons could be highly

dangerous because of their nature, particularly in their interaction with other autonomous
systems and if they are capable of self-learning.

Arms Race And Proliferation


Many fear that the development of LAWS might lead to a global arms race among nations, and that proliferation might become impossible to prevent over time because the technology is relatively cheap and simple to copy. This increases proliferation risks and thus might enable dictators, nonstate armed
actors or terrorists to acquire fully autonomous weapons. As fully autonomous weapons react
and interact with each other at speeds beyond human control, these weapons could also lead to
accidental and rapid escalation of conflict.
In addition to their proliferation, some people raise concerns about their domestic use against
populations and by terrorist groups. In 2014, AI specialist Steve Omohundro warned that “an autonomous weapons arms race is already taking place.” Elon Musk and Stephen Hawking signed the “Lethal Autonomous Weapons Pledge” in 2018, calling for a global ban on autonomous weapons. In 2017, the Future of Life Institute organized an even bigger pledge, cautioning against the start of a potential arms race between the global powers.

Humanity In Conflict: Ethical Concerns


Many argue that machines are unable to replace human judgment and should not be allowed to decide life and death. Making such decisions requires human attributes such as compassion and intuition, which robots do not possess. Allowing robots to decide on human life goes against the principles of human dignity and the right to life. This decision cannot be left to an algorithm; outsourcing it would mean outsourcing morality, they argue. LAWS may be good at making quick and precise decisions, but they are not as good as human judgment at evaluating context.

Responsibility And Accountability


Fully autonomous weapons create an accountability vacuum regarding who is responsible for an
unlawful act. If an autonomous weapon carries out a lethal attack, who is responsible for this
attack? The robot, the developer, or the military commander? As LAWS encompass many nodes in a military chain of responsibility, it might be challenging to pinpoint who is accountable, and there are fears that unclear accountability could lead to impunity. Law is addressed to humans, and legal responsibility and accountability cannot be transferred to a machine.

Autonomous weapons that kill must be banned, insists UN chief
United Nations 2019 (United Nations, 3-25-2019,
https://news.un.org/en/story/2019/03/1035381, UN News, Autonomous weapons that kill must
be banned, insists UN chief)

In a message to the Group of Governmental Experts, the UN chief said that “machines with the
power and discretion to take lives without human involvement are politically unacceptable,
morally repugnant and should be prohibited by international law”.
No country or armed force is in favour of such “fully autonomous” weapon systems that can take
human life, Mr Guterres insisted, before welcoming the panel’s statement last year that “human
responsibility for decisions on the use of weapons systems must be retained, since accountability
cannot be transferred to machines”.
Although this 2018 announcement was an “important line in the sand” by the Group of Governmental Experts – which meets under the auspices of the Convention on Certain Conventional Weapons (CCW) – the UN chief noted in his statement that some Member States believe new legislation is required, while others would prefer less stringent political measures and guidelines that could be agreed on.
Nonetheless, it is time for the panel “to deliver” on LAWS, the UN chief said, adding that “it is
your task now to narrow these differences and find the most effective way forward…The world
is watching, the clock is ticking and others are less sanguine. I hope you prove them wrong.”
The LAWS meeting is one of two planned for this year, which follow earlier Governmental
Expert meetings in 2017 and 2018 at the UN in Geneva.
The Group’s agenda covers technical issues related to the use of lethal autonomous weapons
systems, including the challenges the technology poses to international humanitarian law, as well
as human interaction in the development, deployment and use of emerging tech in LAWS.
In addition to the Governmental Experts, participation is expected from a wide array of
international organizations, civil society, academia, and industry.
The CCW’s full name is the 1980 Convention on Prohibitions or Restrictions on the Use of
Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have
Indiscriminate Effects, which entered into force on 2 December 1983.
The Convention currently has 125 States Parties. Its purpose is to prohibit or restrict the use of
specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to
combatants or to affect civilians indiscriminately.

In previous comments on AI, the Secretary-General likened the technology to “a new frontier”
with “advances moving at warp speed”.
“Artificial Intelligence has the potential to accelerate progress towards a dignified life, in peace
and prosperity, for all people,” he said at the AI for Good Global Summit in 2017, adding that
there are also serious challenges and ethical issues which must be taken into account – including
cybersecurity, human rights and privacy.

Lethal autonomous weapons and World War III: it's not too late to stop the rise of 'killer
robots'
Toby Walsh 2021 (Toby Walsh, 8-1-2021, https://newsroom.unsw.edu.au/news/science-
tech/lethal-autonomous-weapons-and-world-war-iii-its-not-too-late-stop-rise-killer, UNSW
Newsroom, Lethal autonomous weapons and World War III: it's not too late to stop the rise of
'killer robots')

Last year, according to a United Nations report published in March, Libyan government forces
hunted down rebel forces using “lethal autonomous weapons systems” that were “programmed to
attack targets without requiring data connectivity between the operator and the munition”. The
deadly drones were Turkish-made quadcopters about the size of a dinner plate, capable of
delivering a warhead weighing a kilogram or so.
Artificial intelligence researchers like me have been warning of the advent of such lethal
autonomous weapons systems, which can make life-or-death decisions without human
intervention, for years. A recent episode of 4 Corners reviewed this and many other risks posed
by developments in AI.
Around 50 countries are meeting at the UN offices in Geneva this week in the latest attempt to
hammer out a treaty to prevent the proliferation of these killer devices. History shows such
treaties are needed, and that they can work.
The lesson of nuclear weapons
Scientists are pretty good at warning of the dangers facing the planet. Unfortunately, society is
less good at paying attention.
In August 1945, the United States dropped atomic bombs on the Japanese cities of Hiroshima
and Nagasaki, killing up to 200,000 civilians. Japan surrendered days later. The second world
war was over, and the Cold War began.
The world still lives today under the threat of nuclear destruction. On a dozen or so occasions
since then, we have come within minutes of all-out nuclear war.
Well before the first test of a nuclear bomb, many scientists working on the Manhattan Project
were concerned about such a future. A secret petition was sent to President Harry S. Truman in
July 1945. It accurately predicted the future:
The development of atomic power will provide the nations with new means of destruction.
The atomic bombs at our disposal represent only the first step in this direction, and there
is almost no limit to the destructive power which will become available in the course of
their future development. Thus a nation which sets the precedent of using these newly

liberated forces of nature for purposes of destruction may have to bear the responsibility
of opening the door to an era of devastation on an unimaginable scale.
If after this war a situation is allowed to develop in the world which permits rival powers
to be in uncontrolled possession of these new means of destruction, the cities of the
United States as well as the cities of other nations will be in continuous danger of sudden
annihilation. All the resources of the United States, moral and material, may have to be
mobilized to prevent the advent of such a world situation …
Billions of dollars have since been spent on nuclear arsenals that maintain the threat of mutually
assured destruction, the “continuous danger of sudden annihilation” that the physicists warned
about in July 1945.
A warning to the world
Six years ago, thousands of my colleagues issued a similar warning about a new threat. Only this
time, the petition wasn’t secret. The world wasn’t at war. And the technologies weren’t being
developed in secret. Nevertheless, they pose a similar threat to global stability.
The threat comes this time from artificial intelligence, and in particular the development of lethal
autonomous weapons: weapons that can identify, track and destroy targets without human
intervention. The media often like to call them “killer robots”.
Our open letter to the UN carried a stark warning.
The key question for humanity today is whether to start a global AI arms race or to
prevent it from starting. If any major military power pushes ahead with AI weapon
development, a global arms race is virtually inevitable. The endpoint of such a
technological trajectory is obvious: autonomous weapons will become the Kalashnikovs
of tomorrow.
Strategically, autonomous weapons are a military dream. They let a military scale its operations
unhindered by manpower constraints. One programmer can command hundreds of autonomous
weapons. An army can take on the riskiest of missions without endangering its own soldiers.
Nightmare swarms
There are many reasons, however, why the military’s dream of lethal autonomous weapons will
turn into a nightmare. First and foremost, there is a strong moral argument against killer robots.
We give up an essential part of our humanity if we hand to a machine the decision of whether a
person should live or die.
Beyond the moral arguments, there are many technical and legal reasons to be concerned about
killer robots. One of the strongest is that they will revolutionise warfare. Autonomous weapons
will be weapons of immense destruction.

Previously, if you wanted to do harm, you had to have an army of soldiers to wage war. You had
to persuade this army to follow your orders. You had to train them, feed them and pay them.
Now just one programmer could control hundreds of weapons.
In some ways lethal autonomous weapons are even more troubling than nuclear weapons. To
build a nuclear bomb requires considerable technical sophistication. You need the resources of a
nation state, skilled physicists and engineers, and access to scarce raw materials such as uranium
and plutonium. As a result, nuclear weapons have not proliferated greatly.
Autonomous weapons require none of this, and if produced they will likely become cheap and
plentiful. They will be perfect weapons of terror.
Can you imagine how terrifying it will be to be chased by a swarm of autonomous drones? Can
you imagine such drones in the hands of terrorists and rogue states with no qualms about turning
them on civilians? They will be an ideal weapon with which to suppress a civilian population.
Unlike humans, they will not hesitate to commit atrocities, even genocide.
Time for a treaty
We stand at a crossroads on this issue. It needs to be seen as morally unacceptable for machines to decide who lives and who dies, and the diplomats at the UN need to negotiate a treaty limiting their use, just as we have treaties to limit chemical, biological and other weapons. In this way,
we may be able to save ourselves and our children from this terrible future.

Why we need to regulate non-state use of arms
Stuart Russell 2022 (Stuart Russell, 5-18-2022,
https://www.weforum.org/agenda/2022/05/regulate-non-state-use-arms/, World Economic
Forum, Non state actors can now create lethal autonomous weapons from civilian products)

Using open-source AI and lightweight onboard processing, civilian devices, such as drones used
for photography, can be converted into lethal autonomous weapons.
Once the software of commercial unmanned aerial vehicles (UAVs) is adapted for lethal
purposes, global dissemination is inevitable and there is no practical way to prevent the spread of
this code.
States must agree on anti-proliferation measures so that non-state actors cannot create very large
numbers of weapons by repurposing civilian products.
An emerging arms race between major powers in the area of lethal autonomous weapons systems
is attracting a great deal of attention. Negotiations on a potential treaty to ban such weapons have
stalled while the technology rapidly advances.
Less attention has been paid to the fact that open-source artificial intelligence (AI) capabilities
and lightweight, low-power onboard processing make it possible to create “home-made”
autonomous weapons by converting civilian devices, such as camera-equipped quadcopters.
Home-made lethal autonomous weapons
Non-state actors can now deploy home-made, remotely piloted drones, as well as weapons that,
like cruise missiles, can pilot themselves to designated target locations and deliver explosive
materials. Examples include an attack on Russian bases in Syria involving 13 drones and an
assassination attempt against the Prime Minister of Iraq. The Washington Post reports that “[t]he
[Iranian] Quds Force has supplied training for militants on how to modify commercial UAVs
[unmanned aerial vehicles] for military use.”
Will home-made, fully autonomous weapons be the next logical step? Such weapons could evade
or destroy defensive systems, locate and attack targets based on visual criteria or hunt down
individual humans using face or gait recognition.
Already, commercial UAVs can manoeuvre through fields of dense obstacles and lock onto
specific humans. Once such software is adapted for lethal purposes, global dissemination is
inevitable. There is no practical way to prevent the spread of this code. The only plausible
countermeasure is to prevent the proliferation of the underlying physical platforms.
There seems to be no obvious way to stop the manufacture and use of small numbers of home-
made autonomous weapons. These would not present a very different threat from small numbers
of remotely piloted weapons, which non-state actors can easily deploy. With either technology,
we might expect civilian casualties in “atrocity” attacks to rise from two to three figures. On the
other hand, with no need for communication links, which can be jammed or traced, remote
assassination could become more of a problem.
The real threat, however, is from large numbers of weapons: swarms capable of causing tens of
thousands to, eventually, millions of casualties. Already, aerobatic displays involving more than
3,000 small, centrally controlled UAVs are becoming routine at corporate events. At present,
making such UAVs fully autonomous requires onboard processing and power that only relatively
large (50cm or so) quadcopters can carry. However, ASICs – low-power,
special-purpose chips with all the AI built-in – could lead to lethal autonomous weapons only a
few centimetres in diameter. A million such devices could fit inside a standard shipping
container. In other words, entities able to put together large numbers of small autonomous
weapons can create weapons of mass destruction.
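The "million per container" figure is easy to sanity-check with rough arithmetic. The worked estimate below uses illustrative numbers that are not taken from the article: a standard 40-foot container interior of roughly 12 m by 2.35 m by 2.39 m, and a packaged device volume of about a 4 cm cube.

\[
V_{\text{container}} \approx 12\,\text{m} \times 2.35\,\text{m} \times 2.39\,\text{m} \approx 67\,\text{m}^3 \approx 6.7 \times 10^{7}\,\text{cm}^3
\]
\[
N \approx \frac{V_{\text{container}}}{V_{\text{device}}} \approx \frac{6.7 \times 10^{7}\,\text{cm}^3}{(4\,\text{cm})^3} \approx 1.0 \times 10^{6}
\]

On those assumptions, devices only a few centimetres across do put the claim in the right order of magnitude.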
It would make sense, therefore, for nation-states to agree on anti-proliferation measures so that
non-state actors cannot create very large numbers of weapons by repurposing civilian products.
There are ample precedents for this: for example in the Nuclear Non-Proliferation Treaty’s rules
on ownership and transfer of nuclear materials, reflected in the Nuclear Regulatory
Commission’s detailed rules on how much uranium individuals can purchase or possess; and in
the procedures of the Organization for the Prohibition of Chemical Weapons to ensure that
industrial chemicals are not diverted into the production of weapons.
Specifically, all manufacturers of civilian UAVs such as quadcopters and small, fixed-wing
planes should implement measures such as know-your-customer rules, geofencing and hardware
kill switches. Nation-states can also use intelligence measures to detect and prevent attempts to
accumulate components in large quantities or build assembly lines. It would also be helpful for
global professional societies in AI – including the Association for Computing Machinery, the
Association for the Advancement of Artificial Intelligence and the Institute of Electrical and
Electronics Engineers – to adopt policies and promulgate codes of conduct prohibiting the
development of software that can choose to kill humans.
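Of the measures listed above, geofencing is the most concrete from a software standpoint: the flight controller simply refuses commands that would take the aircraft into defined restricted regions. The sketch below is a minimal illustration of that idea in Python, with hypothetical zone data and function names; it is not drawn from any real manufacturer's firmware.

import math

# Hypothetical restricted zones: (latitude, longitude, radius in metres).
NO_FLY_ZONES = [
    (48.8584, 2.2945, 5000),
    (51.5007, -0.1246, 3000),
]

def distance_m(lat1, lon1, lat2, lon2):
    # Great-circle (haversine) distance between two points, in metres.
    r = 6371000  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def waypoint_allowed(lat, lon):
    # Reject any commanded waypoint that falls inside a restricted zone.
    return all(distance_m(lat, lon, zlat, zlon) > zrad
               for zlat, zlon, zrad in NO_FLY_ZONES)

Real deployments would pair a check like this with signed, regularly updated zone databases and, as the article argues, hardware-level safeguards that a software change alone cannot bypass.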
Lethal autonomous weapons have low support
As remotely piloted weapons become widespread tools of war, it is also important to ensure that
they cannot easily be converted to autonomous operation via software changes. A small terrorist
group can deploy only a small number of remotely piloted weapons, but it can deploy a very
large number of autonomous weapons, for the simple reason that they do not require human
pilots. Conversion can be made much more difficult if these weapons are designed with no
connection between onboard processors and firing circuits.
Of course, global efforts to prevent the large-scale diversion of civilian technology to create
lethal autonomous weapons are pointless if non-state actors can easily obtain lethal autonomous
weapons direct from the manufacturer. For example, Turkey’s STM sells the Kargu drone,
announced in 2017 with a 1.1kg warhead and claimed to possess “autonomous hit” capability,
face recognition, and so on. Kargus have been delivered to non-state actors and used in 2020 in
Libya despite an arms embargo, according to the UN.
If it makes sense to prevent non-state actors from building their own weapons of mass
destruction, then it also makes sense to prevent arms manufacturers from doing it for them. In
other words, the world's major powers have every reason to support, rather than block, a treaty
banning the development, manufacture, deployment and use of lethal autonomous weapons.
CON Arguments
The world just blew a ‘historic opportunity’ to stop killer robots—and that might be a
good thing
Jeremy Kahn 2021 (Jeremy Kahn, 12-22-2021, https://fortune.com/2021/12/22/killer-robots-
ban-fails-un-artificial-intelligence-laws/, Fortune, The world just blew a 'historic opportunity' to
stop killer robots)
It was billed as “a historic opportunity” to stop killer robots. It failed.
That was the alarming news out of a United Nations disarmament committee held in Geneva at
the end of last week. The committee had spent eight years debating what, if anything, to do about
the rapid development of weapons that use artificial intelligence to locate, track, attack, and kill
targets without human intervention. Many countries want to see such weapons banned, but the
UN group operates by consensus and several states, including the U.S., Russia, the United
Kingdom, India, and Israel, were opposed to any legally binding restrictions.
In the end, the committee could agree only to keep talking, with no clear objective for their
future discussions. This kicking-of-the-can was an outcome that the U.S. representative to the
UN committee called “a dream mandate” for all countries because it did not foreclose any
particular outcome.
Many other nations and activists saw that outcome as something quite different—a woefully
inadequate response.
“It was a complete failure, a disaster really,” said Noel Sharkey, an emeritus professor of
robotics and A.I. at the University of Sheffield, in the U.K., and one of several spokespersons for
the Stop Killer Robots campaign.
“It was a car crash,” Verity Coyle, a senior adviser to Amnesty International focused on its
campaign for a ban of the weapons, said of the UN committee’s meeting.
Not science fiction any more
A.I.-guided weapons were once the stuff of science fiction—and were still largely in that realm
when the UN committee first began talking about autonomous weapons in 2014. But real
systems with the ability to select targets without human oversight are now starting to be
deployed on battlefields around the globe. Civil society groups and scientists who are convinced
that they pose a grave danger have dubbed them “killer robots.” Their more technical moniker is
lethal autonomous weapons, or LAWS.
A UN report on the Libyan civil war said earlier this year that an autonomous weapon, the
Kargu-2 quadcopter, produced by a company in Turkey, was likely used to track and target
fleeing fighters in one engagement. There have been reports of similar autonomous munitions
being used by Azerbaijan in its recent war with Armenia, as well as autonomous gun systems
being deployed by Israel in its most recent conflict with Hamas.
These developments have led many to fear that the world is running out of time to take action to
stop or slow the widespread use of these weapons. “The pace of technology is really beginning to
outpace the rate of diplomatic talks," said Clare Conboy, another spokesperson for the Stop
Killer Robots campaign.
Silver lining
While campaigners were bitterly disappointed with the results of this month's meetings of the UN
committee, some say its failure may counterintuitively present the best opportunity in years for
effective international action to restrict their development. That’s because it provides an
opportunity to move the discussion of an international treaty limiting LAWS to a different
diplomatic venue where a handful of states won’t be able to thwart progress.
“I think it is an exciting moment for those states calling for a legally binding instrument to come
together and think about what is the best next step,” Coyle said. “The 60-odd countries that are in
favor of a ban need to take a decision on starting a parallel process, somewhere where consensus
rules could not be used to block the will of the majority.”
Coyle said that discussions at the UN committee, although failing to reach an agreement, had at
least helped to raise awareness of the danger posed by autonomous weapons. A growing number
of nations have come out in favor of a ban, including New Zealand, Germany, Austria, Norway,
the Netherlands, Pakistan, China, Spain, and the 55 countries that make up the African Union.
In addition, thousands of computer scientists and artificial intelligence researchers have signed
petitions calling for a ban on autonomous weapons and pledged not to work on developing them.
Now, Coyle said, it was important to take that momentum and use it in another forum to push for
a ban.
Coyle said that an advantage of taking the discussion outside the UN’s Convention on Certain
Conventional Weapons, or CCW—the UN committee that has been discussing LAWS regulation
for the better part of a decade—is that the use of these weapons by law enforcement agencies and
in civil wars is outside the scope of that UN committee’s mandate. Those potential uses of killer
robots are of grave concern to many civil society groups, including Amnesty International and
Human Rights Watch.
Campaigners say there are several possible alternatives to the CCW. Coyle said that the Group of
Eight industrialized nations might become a forum for further discussion. Another option would
be for one of the states currently in favor of a ban to try to push something through the UN
General Assembly.
Coyle and Sharkey also both pointed to the process that led to the international treaty prohibiting
the use of anti-personnel land mines, and a similar convention barring the use of cluster
munitions, as potential models for how to achieve a binding international treaty outside the
formal UN process.
In both of those examples, a single nation—Canada for land mines, Norway for cluster
munitions—agreed to host international negotiations. Among the countries that some
campaigners think might be persuaded to take on that role for LAWS are New Zealand, whose
government in November committed the country to playing “a leadership role” in pushing for a
ban. Others include Norway, Germany, and the Netherlands, whose governments have all made
similar statements over the past several months.
Hosting such negotiations requires a large financial commitment from one country, running into
perhaps millions of dollars, Sharkey said, which is one reason that the African Union, for
instance, which has come out in favor of a ban, is unlikely to volunteer to host an international
negotiation process.
Another drawback of this approach is that whatever treaty is developed would be binding only
on those countries that choose to sign it, rather than something that might cover all UN members
or all signatories to the Geneva Convention. The U.S., for instance, has not acceded to either the
land mine or the cluster munitions treaties.
But advocates of this approach note that these treaties establish an international norm and exert a
high degree of moral pressure even on those countries that decline to sign them. The land mine
treaty resulted in many arms makers ceasing production of the weapons, and the U.S.
government in 2014 promised not to use the munitions outside of the Korean Peninsula, where
American military planners have argued land mines are essential to the defense of South Korea
against a possible invasion by North Korea.
“Slaughterbots”
Not everyone is convinced a process outside the CCW will work this time around. For one thing,
countries that fear a particular adversary will acquire LAWS are unlikely to agree to unilaterally
abandon the deterrent of having that capability themselves, said Robert Trager, a professor of
international relations at the University of California, Los Angeles, who attended last week’s
Geneva discussions as a representative of the Center for the Governance of AI.
Trager also noted that in the case of land mines and cluster munitions, the use—and, critically,
the limitations—of those technologies were well established at the time treaties were negotiated.
Countries understood exactly what they were giving up. That is not the case with LAWS, which
are only just being developed and have barely ever been deployed, he said.
Max Tegmark, a physics professor at MIT and cofounder of the Future of Life Institute, which
seeks to address “existential risks” to humanity, and Stuart Russell, an A.I. researcher at the
University of California at Berkeley, have proposed that some countries currently standing in the
way of binding restrictions on killer robots might be persuaded to support a ban on autonomous
weapons below a certain size or weight threshold that are designed to primarily target individual
people.
Tegmark said that these “slaughterbots,” which might be small drones that could attack in
swarms, would essentially represent “a poor man’s weapon of mass destruction.” They could be
deployed by terrorists or criminals to either commit mass murder or assassinate individuals, such
as judges or politicians. This would be highly destabilizing to the existing international order and
so existing powers, such as the U.S. and Russia, ought to be in favor of banning or restricting the
proliferation of slaughterbots, he said.
Progress toward a ban had been made more difficult by the conflation of these small autonomous
weapons with larger systems designed to attack ships, aircraft, tanks, or buildings. Established
powers would be more hesitant to give up these weapons, he said.
Sharkey said that Tegmark’s and Russell’s position represented a fringe view and was not the
position of the Campaign to Stop Killer Robots. He said civil society groups were just as
concerned about larger autonomous weapons, such as A.I.-piloted drones that could take off
from the U.S. and fly across oceans to bomb targets, possibly killing civilians in the process, as
they were about smaller systems that could target individual people.
Campaigners also say they worry about the sincerity of some of the countries that have said they
support a binding legal agreement restricting LAWS. For instance, China has supported a
binding legal agreement at the CCW, but has also sought to define autonomous weapons so
narrowly that much of the A.I.-enabled military equipment it is currently developing would fall
outside the scope of such a ban. China has sold drones with autonomous attack capabilities to
other countries, including Pakistan. Pakistan has also been a leading proponent of a binding
agreement at the CCW, but has taken the position that without a ban that would cover all
countries, it needs such weapons to counter India’s development of similar systems.
Banning autonomous weapons is not the answer
John Letzing 2018 (John Letzing, 5-2-2018,
https://www.weforum.org/agenda/2018/05/banning-autonomous-weapons-is-not-the-answer/,
World Economic Forum, Banning autonomous weapons is not the answer)
Over-hyped, catastrophic visions of the imminent dangers from ‘killer robots’ stem from a
genuine need to consider the ethical and legal implications of developments in artificial
intelligence and robotics when they are applied in a military context.
Despite this, international agreement on the best way to address these challenges is hampered,
partly by the inability to even converge on a definition of the capabilities which cause concern.
One of the biggest issues is ensuring that human operators retain control over the application of
lethal force, but even this specific-sounding approach groups together a number of distinct ways
in which control could be lost.
A machine gun triggered with a heat sensor would be a misuse of simple automation. Systems
which overload human operators with information and don’t give them sufficient time to make
decisions could be considered technically impossible to control. The human crew responsible for
deciding the targets of a remotely piloted aircraft might feel sufficiently disassociated from their
decisions to lose effective control of them.
These and other possibilities were considered at the UN’s Group of Government Experts (GGE)
meeting last week discussing lethal autonomous weapon systems. The forum is the right place to
address the implications of new technology, but it clearly needs to be more specific about the
risks it is considering.
Calls for a ban
Automated systems, which respond to operator inputs with predictable actions, have existed for a
long time. But as automation gets more sophisticated, concerns – and calls for a prohibition on
the development of lethal autonomous systems – are growing, including from states which do not
have the intent or internal capability to develop highly automated military equipment.
These concerns have already come to the attention of many states who independently undertake
efforts to identify and mitigate the risks posed by new technologies and ensure they are used in
compliance with international humanitarian law, and also in accordance with national policies
which are often more exacting.
In states which develop weapons responsibly, a huge range of control measures on weapons
procurement and use exists. These aim to ensure that military commanders and operators are
competent and informed, and that their control over the use of force is enhanced, not
undermined, by the tools they use.
Positive benefits
Some have pointed out that well-designed automated systems used under effective human
control can enhance adherence to principles of international humanitarian law. These positive
benefits are part of the reason why a prohibition is not the right answer.
Developments in AI mean it is feasible to imagine an unpredictable system which gives ‘better’
outcomes than human decision-making. A machine like this would not actually be making
decisions in the same way humans do, however strong our tendency to anthropomorphize them
might be. They have no moral agency.
Without such a moral agent it is not possible to ensure compliance with international
humanitarian law. The principle of proportionality, for example, requires assessments of whether
harm and loss to civilians is excessive in relation to anticipated military advantage. However
sophisticated new machines may be, that is beyond their scope; so international humanitarian law
already effectively prohibits the use of any such systems. If the legal argument is compelling, the
tactical one is even more so – common sense alone precludes allowing automated execution of
unpredictable outcomes when applying lethal force.
The UK comes under fire in the report of the House of Lords Select Committee on AI published
last Monday for maintaining a definition of autonomy describing a system capable of
independently understanding higher intent and orders – something that has been widely criticized
for being unrealistic. In fact, in the complex and intrinsically human arena of warfare, it is the
idea of deliberately ceding responsibility to a machine at all that is unrealistic. The UK’s
definition represents the minimum standard at which this could be considered – it almost
certainly does not now, and may well never, exist.
Thus, a prohibition on the development and use of lethal autonomous weapons systems is not the
simple solution it appears to be.
This is not, however, where the discussion should end. Sophisticated automation does present
new challenges to human control. The GGE is a forum that can help states surmount these
challenges by sharing best practices and garnering input from experts, academics and civil
society. It also gives an opportunity to consider the possible impacts raised by the scale and risk
balance associated with automated weapons on the norms of war.
First and foremost, though, states must take responsibility for applying their already existing
international legal and ethical obligations in the development and use of new weapons
technology.
Banning Autonomous Weapons is not the Solution
Saahil Dama 2018 (Saahil Dama, 4-26-2018, https://script-ed.org/blog/banning-autonomous-
weapons-is-not-the-solution/, SCRIPTed Blog, Banning Autonomous Weapons is not the
Solution)
Over recent years, there have been growing demands[2] for banning lethal autonomous weapon
systems (“LAWS”) by means of an international treaty. These demands are based on the
apprehension that LAWS would lead to an arms race between nations, resulting in violent wars
that are fought on a larger scale. There are also fears that LAWS would be acquired or developed
by rogue nations and terrorist groups. The international community, therefore, considers it
imperative to introduce a treaty that mandates States to eschew LAWS.
This post will argue that instead of banning LAWS, an international framework should be
developed to encourage safe and responsible use of such weapons.
LAWS are defined as weapons that can select and engage targets without human intervention.[3]
Upon activation, all decisions are made by the weapon autonomously including detecting
locations, selecting targets and executing attacks. Humans can exercise supervisory control, but
LAWS are largely expected to function with humans out of the loop.
Arms treaties can be classified into two types – those that have seen almost universal
ratification/accession and those that have failed to do so. The Chemical Weapons Convention
and the Biological Weapons Convention fall squarely in the former category. Due to the pain and
suffering that these weapons can inflict, they are globally perceived as being morally abhorrent
and have been banned under the 1925 Geneva Protocol.[4]
Nuclear weapons are treated differently. Even though the Nuclear Non-Proliferation Treaty
(“NPT”) has been signed by most countries, it would be a stretch to call the NPT a success.[5]
There are two reasons for this. First, superpowers such as the US, the UK, France, Russia and
China remain lawfully in possession of nuclear weapons despite being parties to the NPT.
Second, and this is largely due to the first reason, nuclear-positive countries such as India and
Pakistan have refused to sign the NPT.
The NPT restricts “nuclear-weapon States” from transferring nuclear weapons to non-nuclear-
weapons States and also prevents non-nuclear-weapons States from diverting nuclear energy
from peaceful uses to nuclear weapons. However, the problem is that it does not place any
restriction on possession of nuclear weapons by “nuclear-weapon States”, which are defined
under Article IX as States that manufactured and exploded a nuclear weapon before 1 January
1967. There are five countries that meet this highly selective criteria – the US, the UK, France,
Russia and China.
In effect, the NPT prevents all countries other than these five from possessing nuclear weapons
thus creating a form of minority rule, as was noted by a South African delegate in the 2015 NPT
Review Conference.[6] It is due to the discriminatory nature of the treaty that India and Pakistan
have refused to sign it.
As the NPT shows, the powers of the world do not want to give up nuclear weapons. This is
made even more evident by the fact that no nuclear-weapon State has expressed support for the
recently introduced Treaty on Prohibition of Nuclear Weapons.[7] However, these countries’
desire to retain nuclear weapons is understandable since nuclear weapons provide them with
significant military and strategic advantages. The fear of mutually assured destruction deters
nuclear warfare, and given that countries such as North Korea might be developing nuclear
warheads, it becomes necessary for other countries to possess nuclear weapons to prevent
attacks.
A treaty banning LAWS seems doomed to take the NPT route in the sense that it is unlikely to be
accepted by the powers that be. As of 16 November 2017, only twenty-two countries had
expressed support for banning LAWS,[8] and the list does not include major stakeholders such
as the US, the UK, and China.
The reason for such reluctance is clear. Like nuclear weapons, LAWS will give States a distinct
military advantage – greater efficiency, cost-effectiveness, convenience, and fewer human
failures. Studies indicate that LAWS are increasingly able to perform functions that were
previously attributable solely to humans, including detecting possession of weapons[9] and
minimizing collateral damage.[10] If trained and used legitimately, LAWS will reduce military
casualties and injuries along with saving soldiers from mental trauma resulting from armed
conflicts. It would make little sense to send soldiers to risk their lives in wars when LAWS can
take their place and do the job as well, if not better.
Further, unlike biological and chemical weapons, LAWS can be legitimately used in several
cases, including serving as a deterrent. Critics have argued that it would not be long before
these weapons become available in the black market and are obtained by rogue groups because
they are relatively inexpensive and easy-to-produce.[11] If that indeed is the case, then it would
become essential for countries to have their own armoury of autonomous weapons to deter
attacks. Imposing a ban would only create a power imbalance between nations that abide by such
a ban and the nations and militant groups that refuse to.
Instead of a ban, the world would be better served by a treaty that lays down minimum standards
for training, developing, testing, and operating LAWS. Such an approach has already been
adopted in the US through a Department of Defense Directive which, inter alia, establishes
conditions for operational testing, exercise of human judgement, minimisation of failures and
prevention against tampering.[12] A treaty of this nature would ensure that LAWS are developed
and used in compliance with international law with a view to protecting human rights. Such a
treaty would also garner more support than a treaty that bans LAWS since States would be
reluctant to accept an outright ban given the advantages LAWS would provide them with.
Do Killer Robots Save Lives?
Michael C. Horowitz and Paul Scharre 2014 (Michael C. Horowitz and Paul Scharre, 11-19-
2014, https://www.politico.com/magazine/story/2014/11/killer-robots-save-lives-113010/,
POLITICO Magazine, Do Killer Robots Save Lives?)
When the robots start thinking for themselves in the movies or on TV, it’s never a good sign for
the humans. Whether the setting is Earth or Caprica, letting robots think and giving them
weapons is a precursor to human destruction. Fearing such a result, a coalition of NGOs is
advocating that the international community ban lethal autonomous weapons systems, which
these groups call “killer robots.” It’s a striking phrase, but what exactly is a killer robot? Homing
munitions, like torpedoes, incorporate some autonomy and have been in existence since World
War II. Today, nearly every modern military uses them. Are they “killer robots”?
As technology advances, it’s important that we recognize the difference between “killer robots”
and merely “smart weapons.” It’s a line that some activists have blurred—pushing to ban not just
the sci-fi robot warriors of the future, but also the precision-guided munitions that have
dramatically reduced civilian casualties in wars over the past 6 decades. Such a move would be a
mistake. Precision-guided munitions lie at the heart of efforts to make the use of force safer for
civilians and more effective at quickly achieving its objectives.
Smarter Bombs, Saving Civilians
One of the most significant developments in the twentieth century toward making warfare more
humane and reducing civilian casualties came not in the form of a weapon that was banned, but a
new weapon that was created: the precision-guided munition. In World War II, in order to have a
90 percent probability of hitting an average-sized target, the United States had to drop over 9,000
bombs, using over 3,000 bombers to conduct the attack. This level of saturation was needed
because the bombs themselves were wildly inaccurate, with only a 50/50 chance of landing
inside a circle 1.25 miles in diameter. The result was the widespread devastation of cities as
nations blanketed each other with bombs, killing tens of thousands of civilians in the process.
Aerial warfare was deemed so inhumane, and so inherently indiscriminate, that there were
attempts early in the twentieth century to ban bombardment from the air, efforts which obviously
failed.
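The 9,000-bomb figure is simply the compounding of a very small per-bomb hit probability. As a rough check, with the per-bomb probability inferred from the quoted numbers rather than stated in the article: if each bomb independently hits with probability p, then n bombs give

\[
1 - (1 - p)^{n} \ge 0.9 \quad\Longrightarrow\quad n \ge \frac{\ln(0.1)}{\ln(1 - p)},
\]
\[
\text{so with } p \approx 2.5 \times 10^{-4}, \qquad n \approx \frac{2.3}{2.5 \times 10^{-4}} \approx 9{,}200.
\]

A per-bomb hit probability on the order of a few in ten thousand, roughly what a 1.25-mile 50/50 circle implies for a building-sized target, is what forced saturation attacks on this scale.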
By Vietnam, most US bombs had a 50/50 chance of landing inside an 800-foot diameter circle, a
big improvement over 1.25 miles. Even still, over 150 bombs launched from over 40 aircraft
were required to hit a standard-sized target. It is not surprising that civilian casualties from air
bombing still occurred frequently and in large numbers.
The Gulf War was the first conflict where the use of precision-guided weapons entered the public
consciousness. Video footage from “smart bombs” replayed on American televisions provided a
dramatic demonstration of how far military power had advanced in a half century.
Today, the weapons that the United States and many advanced militaries around the world use
are even more precise. Some are even accurate to within 5 feet, meaning targets are destroyed
with fewer bombs and, importantly, fewer civilian casualties. Militaries prefer them because they
are more effective in destroying the enemy, and human rights groups prefer them because they
save civilian lives. In fact, Human Rights Watch recently asserted that the use of unguided
munitions in populated areas violates international law.
How Smart is Too Smart?
Lethal autonomous weapon systems (LAWS) stand in stark contrast to homing munitions and
“smart” bombs, which use automation to track onto targets selected by humans. Instead, LAWS
would choose their own targets. While simple forms of autonomous weapons are possible today,
LAWS generally do not currently exist—and, as far as we know, no country is actively
developing them.
Yet fearing that the pace of technological advancement means that the sci-fi future may not be
far off, in 2013, NGOs launched a Campaign to Stop Killer Robots. Led by Jody Williams and
some of the same activists that led the Ottawa and Oslo treaties banning land mines and cluster
munitions, respectively, the Campaign has called for an international ban on autonomous
weapons to preempt their development.
The NGO campaign against “killer robots” has generally focused, up to this point, on the
autonomous weapons of the future, not the smart bombs of today. Campaign spokespersons have
claimed that they are not opposed to automation in general, but only to autonomous weapons that
would select and engage targets without human approval.
Recent moves by activists suggest their sights may be shifting, however. Activists have now
raised concerns about a number of next-generation precision-guided weapons, including the UK
Brimstone missile, the U.S. long-range anti-ship missile (LRASM), and Norway’s Joint Strike
Missile. While defense contractors love to pepper the descriptions of their weapons with the
word “autonomous,” emphasizing their advanced features, actual technical descriptions of these
weapons indicate that a person selects the targets they are engaging. They’re more like the
precision-guided weapons that have saved countless civilian lives over the last generation, not
the self-targeting “killer robots” of our nightmares.
Nevertheless, some activists seem to think that these further enhancements to weapons’ accuracy
go too far towards creating “killer robots.” Mark Gubrud, of the International Committee for
Robotic Arms Control, described LRASM in a recent New York Times article as “pretty
sophisticated stuff that I would call artificial intelligence outside human control.” Similarly, the
Norwegian Peace League, a member of the Campaign to Stop Killer Robots, has spoken out
against development of the Joint Strike Missile.
The Wrong Target
Are activists genuinely opposed to increased automation in any form, even if humans are
choosing the targets? Defining the precision-guided munitions used by countries around the
world as the same thing as the Terminator-style robots of the movies is a mistake. We, as a
civilized society, want weapons technology to continue to improve to be more precise and
humane.
Leaders in next-generation military unmanned systems, like France, the United Kingdom and the
United States, have shown a surprising willingness to engage the Campaign to Stop Killer
Robots. Recognizing that fully autonomous weapons raise challenging issues, the Convention on
Certain Conventional Weapons (CCW), a United Nations forum for states to discuss weapons-
related issues, has held talks on autonomous weapons, and just last week, the CCW agreed to
hold a second round of discussions on lethal autonomous weapons next spring.
This international dialogue has hinged, however, on the understanding that what is on the table
for discussion are not today’s precision-guided weapons, but potential future self-targeting
autonomous weapons. An expansion of the agenda to include precision-guided weapons would
most likely end CCW discussions. NGOs could try to launch an independent effort to ban the
technology, as they did with land mines and cluster munitions. Unlike those campaigns,
however, if the Campaign to Stop Killer Robots decides to strike out on their own in an attempt
to ban existing precision-guided weapons, they are unlikely to find serious state partners.
Even if they could achieve a ban, however, eliminating precision-guided weapons with any
degree of automation would likely have the opposite effect to that desired by activists—it would
increase civilian suffering in war, and not just because of the immediate effects of less-accurate
bombs.
As weapons have become more precise, social norms about what level of civilian casualties are
acceptable in war have shifted as well. Today, we debate about whether even one or two civilian
casualties, in the context of a drone strike, is too many. It’s a valid discussion, but it is also
striking in contrast to the massive devastation wrought on Dresden, Hamburg, London, Tokyo
and many other cities in World War II when tens of thousands of civilians were killed with
unguided weapons. We should celebrate the progress we’ve made.
The scrutiny that activists bring to bear on new weapons is useful because it helps us think about
the functions and consequences of new weapons. Autonomous weapons that would select their
own targets do raise serious issues worthy of further debate, but knee-jerk reactions to any new
degree of autonomy are likely to do more harm than good. Ultimately, such protests could
deprive the world of a key means of reducing civilian casualties in war.