HUMAN RIGHTS QUARTERLY

Speculative Human Rights: Artificial Intelligence and the Future of the Human

James Dawes

ABSTRACT
This essay takes seriously the claims of many experts in artificial intelligence
that AGI (artificial general intelligence) could emerge as early as 2040.
Some of the questions raised by AGI include: Does human-like intelligence
entail consciousness, and does consciousness entail rights? Will the rise
of AGI enhance or endanger human life? If the former, are there certain
perceived enhancements that run counter to notions of human rights? If
the latter, what are our collective duties, right now, to future generations?
How can a human rights framework help us to negotiate these questions?

I. INTRODUCTION

What will it mean for human rights when artificial intelligence transcends
the human?
Before I answer that question, I should explain why I think that it is
an important one. On the one hand, our culture is saturated with anxiety
about the hurtling speed of technological development. Public figures as
different as Stephen Hawking, Bill Gates, Henry Kissinger, and Elon Musk
have all warned that we are on the verge of developing AI so complicated
that we will neither be able to understand it nor control it. As Hawking

James Dawes, professor at Macalester College, is the author of The Novel of Human
Rights (Harvard, 2018), Evil Men (Harvard, 2013), That the World May Know: Bearing
Witness to Atrocity (Harvard, 2007), and The Language of War (Harvard, 2002).

Human Rights Quarterly 42 (2020) 573-593 © 2020 by Johns Hopkins University Press

once dramatically said, AI could "spell the end of the human race."1 On
the other hand, the distant risk posed by the potential emergence of super-
intelligent AI seems, especially for those working in human rights, like an
intellectual exercise at best and a distraction at worst, given that we face a
terrifying array of immediate existential concerns right now. This year in this
journal, philosopher Mathias Risse challenged that latter stance, highlighting
the lack of attention artificial intelligence has received in the human rights
community. Noting the "exponential technological advancement" of the
last decades, he called for immediate attention to the human rights risks
presented by superintelligence, arguing that it is "urgent to get this matter
on the agenda." 2 I believe he is right. This essay is an attempt to help with
that agenda.
For those skeptical of futurist dystopianism, I want to begin by explain-
ing how I got to the point where the opening question began to worry me.
My research focus, broadly speaking, concerns conduct in war. The pull to
thinking about AI, in general, began with work I was doing on the problem
of war crimes and autonomous weapon systems (AWS). For those unfamiliar
with the term, AWS are weapons that do not have a human in the loop.
Drones, however concerning they might be, are still traditional weapons:
there is a human operator who makes the moral calls. AWS are designed to
exclude humans from moral intervention. AWS are built to operate entirely
independently, to make their own decisions and develop their own tactics.
Human Rights Watch has been on the front edge of concern about AWS,
describing them as "killer robots" and calling for a "preemptive ban on the
weapons' development, production, and use."3
As an international movement, "The Campaign to Stop Killer Robots"
traces its origin to a 2007 article in The Guardian by roboticist Noel Shar-
key, which looked at the use of armed battlefield robots in Iraq. Sharkey
warned against the development of fully autonomous robots and called for
their international regulation. 4 The article was called "Robot Wars Are a
Reality," and at the time I lacked the imaginative capacity to fully appreciate
the threat because I just could not get past how much it sounded like the
Terminator movie franchise. I started paying attention in 2012 after Human
Rights Watch published its first report detailing the dangers of autonomous

1. Rory Cellan-Jones, Stephen Hawking Warns Artificial Intelligence Could End Mankind,
BBC NEWS (2 Dec. 2014), https://www.bbc.com/news/technology-30290540.
2. Mathias Risse, Human Rights and Artificial Intelligence: An Urgently Needed Agenda,
41 HUM. RTS. Q. 1, 2, 5 (2019).
3. Human Rights Watch (HRW), Heed the Call: A Moral and Legal Imperative to Ban Killer
Robots, (21 Aug. 2018), https://www.hrw.org/report/2018/08/21/heed-call/moral-and-
legal-imperative-ban-killer-robots. For more, see PAUL SCHARRE, ARMY OF NONE: AUTONOMOUS
WEAPONS AND THE FUTURE OF WAR (2018). For a defense of AWS, see AMIR HUSAIN, THE SENTIENT
MACHINE: THE COMING AGE OF ARTIFICIAL INTELLIGENCE 87-108 (2017).
4. Noel Sharkey, Robot Wars Are a Reality, THE GUARDIAN (17 Aug. 2007), https://www.
theguardian.com/commentisfree/2007/aug/18/comment.military.

weapon systems.5 It was not until 2017 that I published anything at all on
the threat, which I confess with chagrin given that the changing nature of
war has been a key research area throughout my career. I simply could not
rise above my biases: science fiction took it seriously, so I did not.
It is now twelve years since Sharkey's loopy-sounding, Schwarzenegger-
invoking early warning, and today, right now, major militaries around the
world are aggressively investing in research to develop and deploy AWS.
The US alone budgeted $18 billion for AWS from 2016 to 2020.6 Of course,
national militaries express full confidence in their ability to control AWS.
But the history of maintaining control over weapons technology is not en-
couraging. The most likely near-term scenario is an AWS race with the same
devastating escalations and risks as the Cold War nuclear arms race and the
post-Cold War nuclear proliferation problem. Indeed, we can also expect
new versions of both the Kalashnikov problem and the dirty bomb problem:
the technology will be simplified, made cheaper and less controllable, and
will become easily accessible to a range of non-state actors. In a best-case
scenario, AWS will undermine the structure of international humanitarian
law and war crimes prosecution. We will inhabit a world of deeply confused
chains of control and moral responsibility, where crimes against humanity
happen before humans form thoughts about them. In a worst-case scenario,
AWS will be designed to do what tactical nuclear weapons can do, and
they could become an extinction-level development.
Unless we do something collectively and dramatically now, most of us
will live to see these weapons deployed, likely to catastrophic effect. But
there are real solutions available, and that is one thing I repeatedly emphasize
when giving talks on AWS: however dismaying the future might seem, there
are things we can do right now that will matter. Just imagine if something
like our current anti-nuclear proliferation strategies had been put in place
before the Manhattan project.
My worries about AWS naturally led to researching more about the fu-
ture of weaponized AI and then researching the capacities of AI generally.
And this led me to the question I started with: What will it mean for human
rights when artificial intelligence transcends the human?
Edmund Burke coined the phrase "speculative rights" to describe rights
that are weakly grounded because they are based on reason, which he judged

5. HRW, Losing Humanity: The Case against Killer Robots, (19 Nov. 2012), https://hrw.
org/report/2012/11/19/losing-humanity/case-against-killer-robots. Roboticist Ronald Ar-
kin's work on devising an "ethical governor" for AWS, while not sound as a matter of
weapons control, is an anticipatory model for establishing research pathways sensitive
to the threats of AI. See RONALD ARKIN, GEORGIA INSTITUTE OF TECHNOLOGY, GOVERNING LETHAL
BEHAVIOR: EMBEDDING ETHICS IN A HYBRID DELIBERATIVE/REACTIVE ROBOT ARCHITECTURE 1, 20 (2008),
https://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf.
6. Don't Let Robots Pull the Trigger, SCI. AM. (1 Mar. 2019), https://www.scientificamerican.
com/article/dont-let-robots-pull-the-trigger/.

fallible, and not on tradition, which he trusted.7 I am using the phrase specu-
lative human rights in this essay to deliberately invoke both Burke and the
genre of speculative fiction, to invite you to indulge with me in an exercise
in reason, a science-fiction thought experiment that anticipates the rights
that tradition cannot. Speculative human rights is about threats to human
flourishing that are beyond the current horizon of possibility, threats that
are so far down the technological path we are on that it is just hard to feel
anxious about them. But part of the anxiety I want to pass on to you is the
thought that the very same thing could have once been said for the current
array of existential risks we face.
In this article, I will be focusing primarily on AGI (artificial general intel-
ligence). To quickly specify terms: AI is information processing for specialized
tasks-like chess-playing or, in a human rights and humanitarianism context,
wartime bomb disposal. AGI would be generalized intelligence that could
perform all the cognitive tasks that humans can perform at least as well as
humans can; artificial intelligence that could, for instance, not only calculate
the permutations of chess but also navigate a physical object over cluttered
terrain for the purpose of bomb disposal; artificial intelligence that could
not just learn something, but learn anything.
My goal in this essay will be to comprehensively forecast the concerns
AGI would raise when looked at from the perspective of human rights. To
do that, I am going to rely upon the politically problematic but still con-
ceptually useful division between civil and political rights and economic,
social, and cultural rights.
First, what happens when we look at AGI from the perspective of civil
and political rights, focusing on protection from discrimination, arbitrary
harm, and coercion? Philosopher Nick Bostrom was among the first to think
seriously about dystopian futures in which AGI unleashed calamitous violence
or established terrifying new regimes of coercion, but others like cosmologist
Max Tegmark have followed in his path, spinning out scenarios of expunged
rights under benevolent AGI dictators and totalitarian surveillance states.
Second, what happens when we look at AGI from the perspective of
economic, social, and cultural rights, focusing in particular on the right to
work and the right to health? People like Nobel laureate economist Joseph
Stiglitz, among others, argue that the near-future impact of Al on work will
be historically unprecedented, triggering a global labor disruption that could
be politically destabilizing, generating new asymmetries of wealth and power
that could make our current era look like a golden age of equality.8

7. EDMUND BURKE, ON TASTE, ON THE SUBLIME AND BEAUTIFUL, REFLECTIONS ON THE FRENCH REVOLUTION,
A LETTER TO A NOBLE LORD 180 (1909).
8. Ian Sample, Joseph Stiglitz on Artificial Intelligence: We're Going Towards a More Divided
Society, THE GUARDIAN (8 Sept. 2018), https://www.theguardian.com/technology/2018/
sep/08/joseph-stiglitz-on-artificial-intelligence-were-going-towards-a-more-divided-
society.

At the moment, I think it is possible to comprehensively capture all of
these problems, and more, in what I think of as the five C's of AI and
speculative human rights. These are the problems of capital, cybernetics,
classification, control, and consciousness. Together, they will fundamentally
change our liberties, our jobs, our healthcare, our wars, and our basic con-
ception of what a human life is.

II. THE THOUGHT EXPERIMENT

To begin, I need to ask you to participate in a thought experiment, to accept
as a premise the following five assumptions. The first is that we can
define intelligence as the ability to use information to affect an environment
to achieve a goal.9 The second is that the intelligence of the humans and
computers existing now does not represent anywhere near the peak of intelligence
possible in the universe. The third is that the functional equivalent of human-
level intelligence can exist in things other than human bodies. The fourth is
that, barring civilizational collapse, the relentless technological hunger of
capitalism means AGI is not just a possibility; it is a likelihood. And all of
these assumptions together naturally lead to the final assumption. Barring
civilizational collapse, we face the very real possibility of an intelligence
explosion.
The idea of an intelligence explosion-sometimes referred to as the
technological singularity-was first developed in 1965 by British mathema-
tician Irving Good. He writes: "Let an ultraintelligent machine be defined
as a machine that can far surpass all the intellectual activities of any man
however clever. Seeing as the design of machines is one of these intellectual
activities, an ultraintelligent machine could design even better machines;
there would then unquestionably be an 'intelligence explosion', and the
intelligence of man would be left far behind. Thus the first ultraintelligent
machine is the last invention that man need ever make, provided that the
machine is docile enough to tell us how to keep it under control."10 David
Chalmers was among the first major contemporary philosophers to publish
arguments for the plausibility of a technological singularity.11 Contemporary
computer scientist Pedro Domingos argues that today's work in machine
learning now marks a clear first step toward Good's vision. "The Industrial
Revolution automated manual work and the Information Revolution did

9. Steven Pinker's more careful definition is this: "Intelligence, then, is the ability to attain
goals in the face of obstacles by means of decisions based on rational (truth-obeying)
rules." STEVEN PINKER, HOW THE MIND WORKS 62 (2009).
10. Cited in MAX TEGMARK, LIFE 3.0: BEING HUMAN IN THE AGE OF ARTIFICIAL INTELLIGENCE 4 (2017).
11. See David J. Chalmers, The Singularity: A Philosophical Analysis, 17 J. CONSCIOUSNESS STUD.
7 (2010).

the same for mental work," he writes, "but machine learning automates
automation itself." 12
When AGI begins researching AGI, it will be able to produce the next
generation of smarter machines faster than humans; this next, smarter
generation will be able to do so even more quickly, and so on. The pace
of technology will accelerate and then-because research will be bound
neither by the limits of human intelligence nor by the limits of each succes-
sive parent-generation of AI-the acceleration will begin to accelerate. This
kind of asymptotic, accelerating acceleration means that even if the run-up
to human-level AGI is quite long, the next major step to superintelligence
could be quite rapid. The changes wrought on the planet in the sliver of
geologic time since the emergence of human intelligence have been ef-
fectively immeasurable and, arguably, ruinous for everything nonhuman.
What will happen with the emergence of superintelligence-and perhaps
more important, how fast will it happen? Unless we begin preparing for
that moment long in advance, we will likely have no time to coordinate
a response when it occurs. As Henry Kissinger recently wrote, urging the
US government, along with AI developers, to develop a national vision for
containing the perils of AGI: "If we do not start this effort soon, before long
we shall discover that we started too late." 13
And with that strong claim, I am at the end of the thought experiment.
To ground us back in sober realism as we move forward, I should emphasize
that everything about the previous three paragraphs is up for debate. There
are many sound arguments against the logic of an intelligence explosion,
including Stuart Russell's suggestion that "it is logically possible that there are
diminishing returns to intelligence improvements, so that the process peters
out rather than exploding."1 4 And while surveys are often cited showing
that most experts in artificial intelligence believe AGI will eventually be
achieved, the predicted timelines range from 20 years to 100 years and
even, in Noam Chomsky's words, to "eons."15 Pioneering AI researcher Ju-
dea Pearl, winner of the Turing Award (the highest distinction in computer
science), argues that we will eventually create AGI but that to do so we
will need a major paradigm shift, abandoning current AI which is defined
by associational reasoning-that is, pattern identification in large data sets,
or "curve fitting"-to develop AI capable of causal reasoning, which Pearl

12. PEDRO DOMINGOS, THE MASTER ALGORITHM: HOW THE QUEST FOR THE ULTIMATE LEARNING MACHINE
WILL REMAKE OUR WORLD 9-10, 286 (2015).
13. Henry Kissinger, How the Enlightenment Ends, THE ATLANTIC (June 2018), https://www.
theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-
human-history/559124/.
14. STUART RUSSELL, HUMAN COMPATIBLE: ARTIFICIAL INTELLIGENCE AND THE PROBLEM OF CONTROL 143
(2019).
15. Interview by Singularity Weblog with Noam Chomsky, American Linguist, on YouTube,
(10 June 2016: 1:06), https://www.youtube.com/watch?v=Ck9zKihlYLE.

characterizes as the ability to move beyond mere identification of correlations
to the hierarchically superior task of asking counterfactual questions.16
Advances like this have no clear pathways; their timelines are indefinite. But
as Hawking famously wrote, together with a group of prominent physicists
and computer scientists, in response to the argument that AGI is too distant
a possibility to begin worrying about now: "If a superior alien civilization
sent us a message saying, 'We'll arrive in a few decades,' would we just
reply, 'OK, call us when you get here-we'll leave the lights on'? Probably
not-but this is more or less what is happening with AI."17
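
The shape of the disagreement can be made concrete with a toy calculation. The sketch below is my own illustration, not a model drawn from Good, Russell, or anyone else cited here: it assumes each AI generation raises capability by an increment proportional to a power of the current level, so that an exponent above one stands in for Good's runaway feedback loop and an exponent below one for Russell's diminishing returns. All parameter values are invented.

```python
# Toy recursion illustrating the two positions above:
#     I(n+1) = I(n) + r * I(n) ** alpha
# alpha > 1: each doubling of capability arrives faster than the last
#            (a stand-in for the "intelligence explosion").
# alpha < 1: each doubling takes longer than the last
#            (a stand-in for diminishing returns).
# The numbers are illustrative assumptions, not forecasts.

def generations_per_doubling(alpha: float, rate: float = 0.05,
                             doublings: int = 4) -> list[int]:
    """Count how many generations each successive doubling of capability takes."""
    level, target, counts, steps = 1.0, 2.0, [], 0
    while len(counts) < doublings:
        level += rate * level ** alpha
        steps += 1
        if level >= target:
            counts.append(steps)
            target *= 2
            steps = 0
    return counts

if __name__ == "__main__":
    print("alpha = 1.5 (explosive):  ", generations_per_doubling(1.5))
    print("alpha = 0.5 (diminishing):", generations_per_doubling(0.5))
```

Under these invented numbers the first trajectory's doublings arrive ever faster while the second's take ever longer; nothing in the arithmetic settles which regime describes actual AI research.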
If you are willing for now to accept my opening assumptions and what
they entail, then we are faced with several potential threats to human rights
specifically and human flourishing generally: the five C's of capital, cybernet-
ics, classification, control, and consciousness. In what follows, I will address
each of these areas in three parts. First, I will identify a recognizable version
of the danger. With classification, capital, and control, for instance, the crises
are already upon us, if only in the very early stages; they do not require a
transition to AGI, only sufficiently complicated AI. Second, I will identify a
speculative, beyond-the-horizon version of each threat-that is, what will
the scale and scope of these problems be if and when we achieve AGI? In
some cases, the difference will be a matter of degree; in others, of nature.
Third and finally, I will identify a set of things we can do now to address both
the recognizable and the speculative versions of the problem. Importantly,
it will not just be the deficits created by AGI that will need addressing with
a rights framework, but also the surplus it creates, the value it adds to the
world. AGI's surpluses have the capacity to disrupt and endanger as surely
as its deficits do.
I will begin by focusing on second-generation rights concerns, focusing
on work and health. These are the problems of capital and cybernetics. I
will tip my hand by acknowledging that I am starting here because these
are the concerns least likely to test your tolerance for speculation-but by
the end of the paper, we will have landed squarely in the realm of science
fiction. Skeptics would be wise to remember, however, that science fiction
has had uncanny successes when forecasting future weapons and threats,
from Jules Verne's submarine to H. G. Wells' atomic weapons.

16. See Judea Pearl, Theoretical Impediments to Machine Learning with Seven Sparks from
the Causal Revolution, 3 (15 Jan. 2018), https://arxiv.org/pdf/1801.04016.pdf
17. Stephen Hawking, Stuart Russell, Max Tegmark & Frank Wilczek, Stephen Hawking:
"Transcendence Looks at the Implications of Artificial Intelligence But are we Taking
AI Seriously Enough?", THE INDEPENDENT (1 May 2014), https://www.independent.co.uk/
news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-
intelligence-but-are-we-taking-9313474.html.

III. CAPITAL AND CYBERNETICS

A. Capital

The near-term, recognizable version of the capital problem is that with the
emergence of superintelligent machines, the productivity of capital will
increase while the wages commanded by human laborers will plummet.
Worries over AI and automation are, of course, not new. In 1964, only eight
years after the field of artificial intelligence was founded at the Dartmouth
Summer Research Project on Artificial Intelligence, a group of prominent
activists, academics, and technology experts brought their concerns about
"cybernation" to President Lyndon B. Johnson in a report entitled "The Triple
Revolution."18 With the intellectual heft of scholars like Nobel laureate econo-
mist Gunnar Myrdal behind it, the group argued that the only useful historical
model for anticipating the change AI would force upon the economy was
the transition from the agricultural era to the industrial era. "The industrial
production system," they argued, "is no longer viable."19 The government
must take radical action to manage such dramatic disruption, they asserted,
to prevent soaring unemployment, a tumbling labor force participation rate,
socially destabilizing inequality, and a vicious cycle of decreasing income
that leads to decreasing demand that leads to more unemployment that, in
turn, leads to further decreasing income, spiraling ever downward.
AI entrepreneur Martin Ford argues that the authors of "The Triple
Revolution" were not wrong, despite the decades of booming industrial
production that followed their dolorous predictions. They "simply sound[ed]
the alarm far too soon."20 Current forecasters working in economics and
AI differ, but one of the more frequent claims circulating now is around a
50% redundancy rate across the entirety of the human workforce within
the lifetimes of the younger people reading this essay, including areas like
higher education. While some believe this is hyperbolic, even the most sober,
conservative predictions are game-changers for the global economy. A 2017
study by McKinsey Global Institute estimated that "60% of occupations have
30% or more of their constituent activities that could be automated," which
could lead to a 15 percent displacement of the global workforce by 2030.21
Transportation and radiology are often coupled together in such job vulner-
ability studies as a way of showing the unprecedented comprehensive reach
automation will have into human work. In their grimly titled working paper

18. MARTIN FORD, RISE OF THE ROBOTS: TECHNOLOGY AND THE THREAT OF A JOBLESS FUTURE 30-31 (2015).
19. The Ad Hoc Committee on the Triple Revolution, The Triple Revolution, http://www.
educationanddemocracy.org/FSCfiles/CCC2aTripleRevolution.htm.
20. FORD, supra note 18, at 33.
21. Cited in Laura D'Andrea Tyson, Zukunftsmusik, 33 BERLIN J. 16 (2019).

"Smart Machines and Long-Term Misery," economists Jeffrey Sachs and Lau-
rence Kotlikoff offer the hypothesis that, even worse, such automation could
in theory disproportionately affect the opportunities of the young, creating
a self-reinforcing intergenerational disparity that will lower "the wellbeing
of today's young generation and all future generations."22
Economists like Stiglitz hold faith that artificial intelligence will be, ul-
timately, yet another instance of Joseph Schumpeter's creative destruction,
a technological disruption that will lead, after a painful transition, to a new
kind of economy with new and better work for humans. This kind of confi-
dence is deep-rooted in Western thinking, dating as far back as 350 B.C.E.
when Aristotle argued that automation would lead to the end of slavery.23
Nonetheless, even contemporary optimists acknowledge that the looming
AI disruption will be historically unprecedented. If we fail to plan now for
the transition, they warn, the human cost will be staggering and, ultimately,
injurious to democracy. "If we don't change our overall economic and policy
framework," says Stiglitz,

what we're going towards is greater wage inequality, greater income and wealth
inequality and probably more unemployment and a more divided society. But
none of this is inevitable. By changing the rules, we could wind up with a richer
society, with the fruits more equally divided, and quite possibly where people
have a shorter working week. We have gone from a 60-hour work week to a
45-hour week and we could go to 30 or 25.24

In 2017, Amnesty International declared that workers' rights in the age of
automation would be central to its "AI and human rights initiative." The
Aspen Institute recommends a comprehensive array of remedies for workers,
including worker training tax credits, expanded apprenticeships, regional
workforce partnerships, lifelong and affordable skills-training opportunities,
an increased minimum wage, and strengthened unemployment insurance.25
Stiglitz has emphasized that new ways of thinking about wealth redistribu-
tion, education, and the bargaining power of labor can only be the begin-
ning of a solution. Both Stiglitz and the Aspen Institute, notably, stop short
of arguing for a universal basic income.
In the more extreme, speculative version of this capital problem, AGI
causes a workforce disruption that is not temporary. It is not just another
iteration of Schumpeter's creative destruction. It does not lead to more and

22. Jeffrey Sachs & Laurence Kotlikoff, Smart Machines and Long-Term Misery, NATIONAL
BUREAU OF ECONOMIC RESEARCH WORKING PAPER SERIES 16 (2012).
23. ARISTOTLE, POLITICS, bk. 1, pt. 4, at 80 (Benjamin Jowett trans., 1999), https://socialsciences.
mcmaster.ca/econ/ugcm/313/aristotle/Politics.pdf.
24. Sample, supra note 8.
25. Conor McKay, Ethan Pollak & Alastair Fitzpayne, The Aspen Inst. Future of Work Initiative,
Automation and a Changing Economy, Part Two: Policies for Shared Prosperity (2019),
https://www.aspeninstitute.org/publications/automation-and-a-changing-economy-
policies-for-shared-prosperity/.

better jobs in the future, but rather to a jobless future. Calum Chace has
used the history of horses as an illuminating analogy. In 1900, there were
21 million horses in the United States. By 1960, with the ascendance of
the car and the mechanization of agriculture, the population had shrunk
to 3 million. For horses, becoming economically un-useful led to an extra
25 percent population loss over and above the percentage of human lives
claimed by the Black Death in Europe. Discussing a dire 2015 report on
automation and redundancy from Bank of America Merrill Lynch, the news-
paper The Guardian raises the issue of economically un-useful humans, and
asks: "What if we're the horses to Al's humans?"26 Princeton economists Anne
Case and Angus Deaton have used the chilling phrase "deaths of despair" to
account for the increase in mortality rates seen in groups affected by, among
other things, loss of economic opportunity based on technological change.27
In his dystopian novel Player Piano (1952), Kurt Vonnegut explores the
existential concerns of a fully automated society. What will life be like for
people stripped not only of their income but also of their sense of meaning
and purpose in work? What happens when capitalism cuts labor down to
the point where the economy can no longer generate demand? Or when
capitalism just does not need so many human bodies to function? Vonnegut
imagines revolution not as a political problem to be managed with minimally
ameliorative measures; he depicts it instead as a solution. The authors of "The
Triple Revolution," anticipating this more speculative future, argued aggres-
sively that we must therefore affirm an "unqualified right to an income,"28
or a universal basic income. Domingos argues that in the world of AGI, the
latter solution will eventually be recognized as an inevitability. "When the
unemployment rate rises above 50%, or even before," he writes, "attitudes
about redistribution will radically change. The newly unemployed major-
ity will vote for generous lifetime unemployment benefits and the sky-high
taxes needed to fund them."29

B. Cybernetics

The next second-generation problem is cybernetics. The recognizable version of
this problem is a matter of ethics rather than rights, at least by mainstream
political standards. It is a commonplace observation that many of us are
already cyborgs, augmenting our bodies with medical implants and convert-

26. Heather Stewart, Artificial Intelligence: "Homo Sapiens Will Be Split into a Handful of
Gods and the Rest of Us," THE GUARDIAN (7 Nov. 2015), https://www.theguardian.com/
business/2015/nov/07/artificial-intelligence-homo-sapiens-split-handful-gods.
27. ANNE CASE & ANGUS DEATON, DEATHS OF DESPAIR AND THE FUTURE OF CAPITALISM (2020).
28. The Triple Revolution, supra note 19.
29. DOMINGOS, supra note 12, at 278-79.

ing our biological memories into digital data. The ethical dilemma of our
current cyborg lives is something like the dilemma of higher education. The
technologies of health and knowledge provide life-enhancing opportunities
that exacerbate asymmetries of wealth and power because they are made
available disproportionately to the wealthy and powerful. In the US, most
seem, at least for now, to have accepted such disparities as infelicitous rather
than damning; amelioration is judged aspirational rather than a matter of
basic dignity.
According to some experts, however, the speculative scenario of more
extreme cybernetic lives may not be that far off in the future-with dispari-
ties correspondingly deeper, disparities that could change our conception
of basic dignity such that they rise to the level of a human rights concern.
Science-fiction writer Iain Banks's 1996 novel Excession imagined a "neural
lace"-a digital layer above the cortex-that could help humans achieve
symbiosis with artificial intelligence. Today, neuroscientists like Harvard's
Charles Lieber are researching "electronic mesh" to inject into brain tissue,
and Elon Musk's company Neuralink has invested, by the last count, over
$150 million to specifically research Banks' "neural lace" concept. The US
Department of Defense's Defense Advanced Research Projects Agency (com-
monly known as DARPA) continues to invest heavily in both surgical and
nonsurgical brain-machine interfaces, hoping to give combat-ready soldiers
superhuman speed and to help wounded soldiers more fully recover from
injury (DARPA teams have had preliminary success with neural control
of prosthetic limbs).30 Going further, futurist Ray Kurzweil, the Director of
Engineering for Google, predicts that by 2050 nanobots will integrate with
the immune system, generating what has been dubbed "actuarial escape
velocity" 31-that is, the point when technology can add more than one
year to your life for each year you remain alive, creating the possibility of
an indefinite lifespan. As Noah Harari writes, characterizing the mood of
wild optimism in tech culture: Death is "a technical problem," and "every
technical problem has a technical solution."32
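
The arithmetic behind "escape velocity" is simple enough to sketch. The figures below are invented for illustration only: they compare a person whose medicine adds a fraction of a year of remaining life expectancy per calendar year with one whose medicine adds more than a full year.

```python
# Toy arithmetic for "actuarial escape velocity": each calendar year you age
# one year, but medical technology adds `gain_per_year` years of remaining
# life expectancy. If the gain is under one year the horizon keeps closing;
# over one year, it keeps receding. Starting values and gains are invented.

def remaining_life(initial_years: float, gain_per_year: float, horizon: int = 40) -> list[float]:
    remaining = [initial_years]
    for _ in range(horizon):
        nxt = remaining[-1] - 1.0 + gain_per_year  # age one year, gain some back
        remaining.append(max(nxt, 0.0))
    return remaining

if __name__ == "__main__":
    slow_medicine = remaining_life(30.0, gain_per_year=0.3)
    fast_medicine = remaining_life(30.0, gain_per_year=1.2)
    print(f"gain 0.3 yr/yr: {slow_medicine[-1]:.0f} years left after 40 years")  # horizon shrinks
    print(f"gain 1.2 yr/yr: {fast_medicine[-1]:.0f} years left after 40 years")  # horizon recedes
```

In the first case the horizon steadily closes; in the second it recedes indefinitely, which is all Kurzweil's phrase asserts.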
Such radical optimists will, no doubt, meet with bitter disappointment,
but along the way, they will also witness breathtaking health gains. To see
why such surplus is not just a thrilling opportunity but also a destabilizing
problem, it is helpful to look back at Plato's views on healthcare. Plato was
comfortable withholding healthcare from the old and the chronically ill.33
That seems morally misguided to most of us today; we see healthcare as a

30. See Al Emondi, Revolutionizing Prosthetics, DEFENSE ADVANCED RES. PROJECTS AGENCY, https://
www.darpa.mil/program/revolutionizing-prosthetics.
31. See RAY KURZWEIL, THE SINGULARITY IS NEAR: WHEN HUMANS TRANSCEND BIOLOGY (2005); see also
RAY KURZWEIL, THE AGE OF SPIRITUAL MACHINES: WHEN COMPUTERS EXCEED HUMAN INTELLIGENCE (1999).
32. YUVAL NOAH HARARI, HOMO DEUS: A BRIEF HISTORY OF TOMORROW 23 (2017).
33. See his discussion of Asclepius in ROBIN WATERFIELD, PLATO REPUBLIC 105-08 (1993).

human right. But I do not believe that is because our moral intuitions about
healthcare have changed so radically since Plato. Rather, it is that what
healthcare can offer has changed, and therefore along with it our idea of
what is just. Healthcare is a disparity-generating technology. When dispari-
ties become vast enough, passing a moral tipping point into the crushingly
unfair and politically destabilizing, they change our conception of basic
human dignity. Advances in well-being like those imagined by DARPA and
Google, while welcome, will require a reconceptualization of what we
can expect from a human life. Cybernetic disparities will become a matter
of basic dignity rather than aspirational ethics. Almost certainly, the future
work of rights activists will involve redefining the benefits of cutting-edge
technology not as a privilege of wealth but rather as a shared entitlement.
For now, the solution to the problem of AI and the right to health, as with
AI and the right to work, is preemptive distributive justice that will minimize
the human costs of delayed equal access.

IV. CLASSIFICATION, CONTROL, AND CONSCIOUSNESS

AGI and the problems of classification, control, and consciousness present
challenges not only to basic liberties and first-generation rights like protec-
tion from arbitrary harm, but also to human flourishing more generally. Let
me start with classification.

A. Classification

The recognizable, current problems of classification are typically bundled
together as "the problem of privacy in the digital age." Such concerns include
the harvesting of data for corporations or insurers, along with the use of data
for political surveillance or deep fake videos. Human Rights Watch's work on
the use of mobile apps to carry out illegal mass surveillance is emblematic
here. The Human Rights Commission of the US House of Representatives
held a hearing on such threats in 2018, taking testimony that argued data
collection in the age of AI makes possible abuses "at a scale not even Orwell
could have imagined."34
As important as such work is, it is critical to emphasize here that the
problem of big data is not only a surveillance and privacy problem. Fixating
on privacy can sometimes mean ignoring or setting aside concerns about

34. PAUL SCHARRE, ARTIFICIAL INTELLIGENCE: THE CONSEQUENCES FOR HUMAN RIGHTS, CTR. FOR NEW AM.
SECURITY, TESTIMONY BEFORE THE TOM LANTOS HUMAN RIGHTS COMMISSION (23 May 2018), https://
humanrightscommission.house.gov/sites/humanrightscommission.house.gov/files/docu-
ments/AITranscriptpublic.pdf.

how big data is reconfiguring racism and sexism, how the brave new world
of technology reproduces the same old discriminations. Here I would point
to a range of sources, including Safiya Noble's work on the racism "baked
in" and masked in purportedly neutral search engines; the ACLU's work on
the racism of predictive policing algorithms like PredPol; two 2017 reports
submitted to the UN Human Rights Council emphasizing the human rights
risks of algorithmic discrimination based on age and gender; and a 2017
report from the Human Rights, Big Data and Technology Project (HRBDT)
to the House of Lords Select Committee on Artificial Intelligence, which
cites the same concern in its program for developing a human rights-based
approach to the development and use of AI.35 Privacy narrowly conceived
of course matters, but a fuller conception of the value of privacy should
include distortions of the environment within which privacy is constituted.
That is, the problem is not simply the unprecedented ways we can collect
and use information but also our conception of information itself. Stuart
Russell argues, for instance, that Articles 18 and 19 of the Universal Dec-
laration of Human Rights (freedom of thought and expression) need to be
supplemented now, if they wish to remain robust as conceptions of dignity
in the age of Al, by a declaration of a "right to mental security"-that is, the
"right to live in a largely true information environment."36
Regulatory bodies and rights organizations have been slow to respond to
the challenge of statistical discrimination based upon "bad" (unrepresentative,
insufficiently detailed, or distorted by racist inputs) information. But to be
fair, the challenge is difficult, given that acquiring unbiased data poses not
only a range of technical problems and cost issues but also the intractable
puzzle of rights trade-offs: acquiring more detailed, unbiased information
would almost certainly in many cases require sacrifices of privacy rights.37
And as the HRBDT notes, regulation of online content by social media
platforms can impact freedom of expression. Nonetheless, as the cases cited
above show, movement to address these issues is accelerating with a wide
range of recommendations for redress-for instance, by using human rights
impact assessments to "complement the artificial intelligence design and
deployment process, instead of focusing solely on post-facto accountability."38
What will the classification problem look like in a world of AGI, and
what can we do about it? That is a question that must be subsumed under
the control problem, as should become clear shortly.

35. See SAFIYA NOBLE, ALGORITHMS OF OPPRESSION: HOW SEARCH ENGINES REINFORCE RACISM (2018). See
also The Human Rights, Big Data and Technology Project Written Evidence (AIC0196)
(6 Sept. 2017), http://data.parliament.uk/writtenevidence/committeeevidence.svc/
evidencedocument/artificial-intelligence-committee/artificial-inteligence/written/69717.
html.
36. RUSSELL, supra note 14, at 107.
37. See Lior Jacob Strahilevitz, Privacy Versus Antidiscrimination, U. CHI. L. REV. 363 (2008).
38. Big Data and Technology Project, supra note 35.

B. Control

The recognizable, near-term version of the control problem is autonomous
weapon systems (AWS), discussed above, along with magnified versions
of contemporary cyberwar that targets civilian infrastructure. The more
extreme, speculative version of the AGI control problem is an apocalyptic
amplification of the law of unintended consequences, so painfully familiar
to researchers in human rights. It is perhaps best summarized in philosopher
Nick Bostrom's now-canonical AGI thought experiment, in which superin-
telligence emerges unexpectedly from a recursively self-improving AI at a
paperclip manufacturing company.39 Bostrom writes: "This could result...
in a superintelligence whose top goal is the manufacturing of paperclips,
with the consequence that it starts transforming first all of Earth and then
increasing portions of space into paperclip manufacturing facilities."40 What
is important about Bostrom's example is that the human-extinction scenario
does not require an AGI developed in a military context, as with AWS. It does
not require the superintelligence to be conscious, to see humans as threats
to be eliminated, to have a menacing personality, or even a survival instinct
produced under evolutionary pressure. The AGI simply needs to be what it
is-that is, a system that optimizes for paperclip production, but with more
capacity for doing so than we were able to predict. Bostrom's "instrumental
convergence thesis" persuasively demonstrates that simple goal optimization
will generate AI behavior that is functionally equivalent to a relentless and
competitive survival instinct, as task completion will necessarily include
subgoals like self-preservation and resource-maximization.41
The extinction by paperclip scenario illuminates what is known as the
"value alignment problem." Given the vast number of goals an optimizing
system might have, and the even more vast number of subgoals it might have
in service of its larger goals, the odds that an emergent superintelligence will
behave according to values that align with ours are quite low. Just as nonhu-
man mammals could not imagine what the goals of early humans would be
and what that would mean for their lives on this planet, we cannot imagine
what a superintelligence will have as its goals and what that will mean for
us. Risse offers the comforting possibility that superintelligence could very
well entail ethics, pointing to Kant's categorical imperative as a potentially
necessary aspect of higher reasoning.42 Musk, in sharp contrast and with
characteristic theatrics, predicts the possibility of total demise, arguing that
the best way to think of humanity, in the end, might be to say that we were

39. Nick Bostrom, Ethical Issues in Advanced Artificial Intelligence (2003), https://www.
nickbostrom.com/ethics/ai.html.
40. Id.
41. NICK BOSTROM, SUPERINTELLIGENCE: PATHS, DANGERS, STRATEGIES 131-39 (2014).
42. Risse, supra note 2, at 7.

the "biological bootloader for digital superintelligence."43 A bootloader, for


those new to the term, is the minimal code required for a computer to start.
Importantly, value alignment would be a problem even with an AGI that
was somehow constrained by a rule dictating that it must act only so as to
produce human happiness. Bostrom, Tegmark, and others have imagined
multiple scenarios where AGI logic achieves happiness for humans through
benign yet rights-nullifying domination, including enslavement to prevent
us from harming each other and our environment. Russell affirms such vi-
sions of "mental asphyxiation" through "political control."44 He goes on to
elaborate this as a version of the "King Midas Problem": task AGI with curing
cancer, AGI induces tumors in all living humans to carry out medical trials;
task AGI with solving environmental problems, AGI restores the ocean's pH
levels with a process that depletes 25% of the oxygen in the atmosphere;
etc. Bostrom sees such "perverse instantiation" of human-endorsed goals as
a question with ever-receding answers:
Final goal: "Make us smile"
Perverse instantiation: Paralyze human facial musculatures into constant beam-
ing smiles
Final goal: "Make us smile without directly interfering with our facial muscles"
Perverse instantiation: Stimulate the part of the motor cortex that controls our
facial musculature in such a way as to produce constant beaming smiles
Final goal: "Make us happy"
Perverse instantiation: Implant electrodes into the pleasure centers of our brains45
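
The pattern underlying these examples can be restated as an ordinary optimization failure: hand a sufficiently capable optimizer a measurable proxy for what we want, and it will push the proxy to its extreme at the expense of the thing itself. The sketch below is a toy illustration of that failure mode under invented functions and numbers; the "smile" proxy is my stand-in, not Bostrom's formalism.

```python
# Toy "perverse instantiation": an optimizer told to maximize a measurable
# proxy (observed smiles) rather than the thing we actually care about
# (wellbeing). Every function and number here is invented for illustration.

import numpy as np

def observed_smiles(intensity):
    # The proxy rises monotonically with intervention intensity:
    # more intervention, more (eventually coerced) smiling.
    return 1.0 - np.exp(-3.0 * intensity)

def true_wellbeing(intensity):
    # What we actually wanted peaks at a moderate level of intervention
    # and collapses as the intervention becomes total (intensity -> 1).
    return 4.0 * intensity * (1.0 - intensity)

if __name__ == "__main__":
    intensity = np.linspace(0.0, 1.0, 1001)   # 0 = leave people alone, 1 = total control
    chosen = intensity[np.argmax(observed_smiles(intensity))]
    wanted = intensity[np.argmax(true_wellbeing(intensity))]
    print(f"optimizer's choice (max smiles): intensity = {chosen:.2f}, "
          f"wellbeing there = {true_wellbeing(chosen):.2f}")
    print(f"what we wanted (max wellbeing):  intensity = {wanted:.2f}, "
          f"wellbeing there = {true_wellbeing(wanted):.2f}")
```

The optimizer is not malicious; it simply maximizes exactly what it was told to maximize.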

I should note here, for clarity, two things that are perhaps already obvi-
ous. First, Bostrom's quirky examples are not unusual for the genre of the
philosophical thought experiment, and are meant to function as stand-ins for
more serious abstract categories. For instance, the paperclip of his paperclip
maker is a stand-in for the concept of 'anything not useful to human final
goals,' and the rictus smile of the happiness machine is best understood as
'anything we think we want that is difficult to limit definitionally.' Second,
those most concerned with value-alignment tend to regard with suspicion
control plans that rely on programming guardrails like digital apoptosis, kill
switches, and ongoing supervision through human overrides. While it is not
impossible that such techniques could work, the reigning assumption is that
a superintelligent learning system optimizing for goals not only could but
would find ways to escape digital handcuffs.
Scholars working on the control problem, along with for-profit corpora-
tions like OpenAI and volunteer organizations like the Future of Life Institute,

43. World Artificial Intelligence Conference, Jack Ma and Elon Musk Debate, YOUTUBE (29
Aug. 2019), https://www.youtube.com/watch?v=f3lUEnMaiAU.
44. RUSSELL, supra note 14, at 138.
45. BOSTROM, supra note 41, at 146-47.

generally recommend a combined approach for minimizing such existential
risk. First, many believe it is important to make AI research globally transpar-
ent and cooperative.46 AGI can become an extinction-level problem even if it
never emerges, even if, for reasons we do not yet understand, AGI turns out
to be in principle impossible. What might a nation with nuclear weapons do
if its intelligence agencies gathered what they felt was compelling evidence
that their enemies were on the verge of developing the permanently decisive
advantage of military superintelligence? Recall Vladimir Putin's declaration,
when discussing research in artificial intelligence: "the one who becomes
the leader in this sphere will be the ruler of the world." 47
Second, we need to radically change our priorities. Russell describes how
his perspective on AI research evolved after the first edition of his AI textbook
(arguably, the most widely used in the world) in 1995. In interviews, he recalls
how he, like everybody else, was at that time entirely focused on making
AI smarter, asking the question: "How can we succeed?"48 By 2013, he had
come to believe that the field was moving in the entirely wrong direction.49
AI researchers now urgently needed to focus on the question "What if we do
succeed?" Given emerging awareness about the risks of artificial intelligence,
Russell was convinced that this was currently the "most important question
facing humanity."50 He became an early and strong advocate for the position
that we must not just race for raw AGI-we must, instead, undertake the
more complex task of making safe and ethical AGI.51 In 2017, he was part
of the group of AI and tech luminaries that drafted the oft-cited Asilomar
AI Principles, which argue that the goal of AI research should be "to create
not undirected intelligence, but beneficial intelligence."52
Some approach the "friendly AI" programming challenge with the idea
of an "ethical governor" or rule systems based on values we can endorse,
not unlike Isaac Asimov's famous "Three Laws of Robotics." Russell sees
this as a hopelessly Sisyphean task, calling for a process-based rather than
a rule-based ethical system. He argues persuasively that AI research must
abandon the reigning "standard" model of computer intelligence, which sees
intelligence narrowly as goal optimization and designs machines to pursue
fixed, exogenously supplied objectives.53 Instead, he proposes a new foun-
dation for Al research focusing on inverse reinforcement learning (learning

46. See Amanda Askell, Miles Brundage & Jack Clark, Why Responsible AI Development
Needs Cooperation on Safety, OPENAI (10 July 2019), https://openai.com/blog/cooperation-
on-safety/.
47. Putin: Leader in Artificial Intelligence Will Rule World, CNBC (4 Sept. 2017), https://
www.cnbc.com/2017/09/04/putin-leader-in-artificial-intelligence-will-rule-world.html.
48. RUSSELL, supra note 14, at 138.
49. Id.
50. Id.
51. Id.
52. Asilomar Al Principles, FUTURE OF LIFE INSTITUTE (2017), https://futureoflife.org/ai-principles/.
53. RUSSELL, supra note 14, at 138.

human preferences through observation).54 At the most basic level, we must
reconceive AI not as a standalone system but rather as a "coupled system"
constituted by its relation to humans.55 Safe AI, in other words, requires
rethinking AI altogether.
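
A minimal sketch can suggest what "learning human preferences through observation" means at its simplest. The example below is my own toy construction, not Russell's proposal: a machine watches a human choose repeatedly between two options and infers, by maximum likelihood under a softmax choice model, how much more the human values one outcome than the other, rather than being handed a fixed objective in advance.

```python
# Minimal sketch of inferring preferences from observed choices (the germ of
# inverse reinforcement learning). A human repeatedly picks between option A
# and option B; we estimate the relative value of A over B by maximum
# likelihood under a logistic (softmax) choice model. Data are invented.

import math
import random

def simulate_choices(true_value_gap: float, n: int, rng: random.Random) -> list[int]:
    """1 = human chose A, 0 = chose B; choices are noisy, not perfectly rational."""
    p_choose_a = 1.0 / (1.0 + math.exp(-true_value_gap))
    return [1 if rng.random() < p_choose_a else 0 for _ in range(n)]

def estimate_value_gap(choices: list[int], lr: float = 0.1, epochs: int = 2000) -> float:
    """Fit the value gap by gradient ascent on the log-likelihood."""
    gap = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + math.exp(-gap))
        grad = sum(c - p for c in choices)   # d log-likelihood / d gap
        gap += lr * grad / len(choices)
    return gap

if __name__ == "__main__":
    rng = random.Random(0)
    observed = simulate_choices(true_value_gap=1.5, n=500, rng=rng)
    print(f"human chose A in {sum(observed)} of {len(observed)} trials")
    print(f"inferred value of A over B: {estimate_value_gap(observed):.2f} (true: 1.50)")
```

Real inverse reinforcement learning works over sequential behavior and keeps its estimates uncertain, but the basic move, treating human choices as evidence about human values, is the same.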

C. Consciousness

The next first-generation-rights problem is the problem of consciousness.
And it is here, with the most speculative of speculations, that I will bring this
essay to its conclusion. The control problem is about the harm AGI might
do to us, but the consciousness problem is about the harm we might do to
AGI. To borrow from Thomas Nagel's famous characterization of subjective
experience: What if there is something that it is like to be an AGI? Could
AGI achieve consciousness? If the control problem is the ultimate deficit
scenario, the consciousness problem is the ultimate surplus scenario.
In 2016, the European Parliament proposed drafting regulations to grant
"electronic personhood" to sufficiently complicated Al, following the model
of corporate personhood. But the issue goes deeper than that. We recognize
that we have duties to more kinds of conscious life than just human life.
The Great Apes Project was initiated in 1994 to extend basic human rights
to apes, drawing upon the philosophical work of Paola Cavalieri and Peter
Singer.56 In 2008 it scored a major victory in the Spanish Parliament when
the environmental committee passed a resolution urging compliance with
the Great Apes Project's recommendations. In 2015, New Zealand passed
the Animal Welfare Amendment Act, becoming the first country in the world
to pass legislation recognizing animals as "sentient" beings to whom we
have duties.57 The Nonhuman Rights Project is seeking similar victories in
the US. "Environmental personhood" is a developing concept that has led
to, for instance, legal personhood for rivers. For some, the "human" part of
human rights may no longer be the key term. Perhaps consciousness, not
humanness, entails rights. Or perhaps the relevant term is even as broad as
"cognition," as Katherine Hayles argues. She calls for an ethical remapping
of the entire human "cognitive ecology," attending to the moral status of
both conscious and nonconscious cognizers, including "human-technical
assemblages." 8

54. Id.
55. See id. at 1-2, 171-83. See also Lucas Perry, Human Compatible: Artificial Intelligence
and the Problem of Control with Stuart Russell, AI ALIGNMENT PODCAST (8 Oct. 2019),
https://futureoflife.org/2019/10/08/ai-alignment-podcast-human-compatible-artificial-
intelligence-and-the-problem-of-control-with-stuart-russell/.
56. See PAOLA CAVALIERI & PETER SINGER, THE GREAT APE PROJECT: EQUALITY BEYOND HUMANITY (1993).
57. See Animal Welfare Amendment Act (No 2) (2015), http://www.legislation.govt.nz/act/
public/2015/0049/latest/whole.html#DLM5174807.
58. See N. KATHERINE HAYLES, UNTHOUGHT: THE POWER OF THE COGNITIVE NONCONSCIOUS 30-32, 39
(2017).

Nobody understands consciousness well enough now to do anything but
speculate about its potential emergence in AGI. Theories of consciousness
abound. People of faith embrace dualism. Philosophers discuss higher-order
theories. Neuroscientists, who feel confident they are beginning to win the
debate, argue over global neuronal workspace theory (GNW) and information
integration theory (IIT). The former argues that consciousness arises when
information in the brain is made available to multiple cognitive systems in
a central "workspace." The latter argues that consciousness is integrated in-
formation, the greater whole that arises from information's parts in a system
defined by high internal connectivity. IIT argues further-testing even my
tolerance for speculative science-that any system's level of consciousness
can be measured and given a specific numerical value.
Consciousness research, in sum, is deep and variegated at the same
time that it is embryonic. So could a robot have feelings? On one end of
the spectrum, philosopher Susan Schneider explains, biological essentialists
insist that this is an absurd worry. Consciousness is a property of biological
organisms; wires and chips made out of silicon and copper cannot produce
subjective experience or self-awareness. But this objection simply begs the
question. That is, if we are asking "Can machines be conscious?," then the
response "No, as a matter of definition" is a way of avoiding the question
rather than making an argument. On the other end of the spectrum, techno-
optimists insist that consciousness is computational through and through and
therefore an inevitability for sufficiently complex AI. But this also is a way
of defining away the question, insisting consciousness just is, by definition,
the kind of thing computers can have.59 Currently, we are simply unable
to make definitive claims about the potential existence of consciousness in
AGI, or even, if we take the philosophical problem of other minds seriously,
in other humans.
This problem of the final opacity of consciousness will be particularly
distressing with AI because we can expect it to seem vividly, agonizingly,
irresistibly conscious, even if it is not. That is just how we like our AI to
look. But will it be? Or will it be a zombie, in the way philosophers use that
term-that is, an entity that is indistinguishable from humans in its behavior
but that lacks the capacity for what philosophers call qualia, or, effectively,
subjective experience, the feeling of being something? Physically abusing a
philosophical zombie might be a property-related crime, and arguably it could
lead down a slippery slope to violence against humans, as the Campaign
against Sex Robots and other groups argue 60-but it is not a crime in any

59. See SUSAN SCHNEIDER, ARTIFICIAL YOU: AI AND THE FUTURE OF YOUR MIND 19-26 (2019). For more
on computationalism and its limits, see DAVID GELERNTER, THE TIDES OF MIND: UNCOVERING THE
SPECTRUM OF CONSCIOUSNESS (2016).
60. For more on this debate, see Carlotta Rigotti, Sex Robots and Human Rights, OPENDEM-
OCRACY (8 May 2019), https://www.opendemocracy.net/en/democraciaabierta/sex-robots-
and-human-rights/.

meaningful sense against the zombie itself. However, if what you thought
was a zombie is, in fact, conscious, if there is something that it is like to
be that thing, then the moral stakes change dramatically. Bostrom warns
that discarding such AIs when they are no longer useful would constitute
"mind crime."61 Philosopher Susan Schneider asserts that forcing conscious
Al "to serve us would be akin to slavery" and insists that thinking otherwise
is speciesism. 6 2
Philosophers Mara Garza and Eric Schwitzgebel, who also worry about
this problem, urge us to establish Al research pathways that follow what
they call "the principle of the excluded middle"-that is, only produce
intelligences that are very clearly not-conscious or very clearly conscious.
No in-betweens, no zombies.63 Schneider has also begun developing various
conceptual tests to determine the likelihood that any particular AI is con-
scious, all based on the premise that a zombie AI that was "boxed in"-that
is, not yet connected to external information sources-would be incapable
of understanding concepts based upon the felt quality of consciousness
(such as, "Could you survive the permanent deletion of your program?") or
forming coherent answers to questions about their experience of conscious-
ness (such as, "What is it like to be you right now?").64 In the meantime,
Schneider urges the use of a precautionary principle that requires us to grant
rights rather than deny them when we cannot be sure.
Schneider's moral generosity comes, however, with potentially wrenching
costs. Philosophers like Matthew Liao, along with Garza and Schwitzgebel,
have begun testing our moral intuitions with thought experiments that mimic
Philippa Foot's original trolley problem.65 Would you put a human at risk
to save a larger number of seemingly conscious androids, or would you
abandon the helpless AI with the moral ease of recycling an iPhone? Until
you know what consciousness is and who or what has it, they assert, you
cannot really give a satisfying moral answer. Erring in favor of the human
risks moral catastrophe, if the android represents a new kind of conscious
life ushered into the universe. But erring in favor of androids with the precau-
tionary principle could also be moral catastrophe, no differently than if we
now, at the expense of needful humans, invested precious, zero-sum-game
resources to protect the dignity of today's pre-zombies, which we can buy
now: Pepper ($25,000), Kuri ($899), CHiP ($199.99), Lynx ($799.99), the

61. BOSTROM, supra note 41, at 153-54.
62. SCHNEIDER, supra note 59, at 4.
63. See Eric Schwitzgebel & Mara Garza, A Defense of the Rights of Artificial Intelligences,
39 MIDWEST STUD. IN PHIL. 98 (2015); see also SCHNEIDER, supra note 59, at 68.
64. SCHNEIDER, supra note 59, at 51-56.
65. See also Matthew Liao, Artificial Intelligence and Moral Status, YOUTUBE (9 Sept. 2017),
https://www.youtube.com/watch?v=qPIqZlrs-j8; Mara Garza & Eric Schwitzgebel, The
Rights of Artificial Intelligences, YOUTUBE (10 Sept. 2017), https://www.youtube.com/
watch?v=54-FI4qpwa8. See also SCHNEIDER, supra note 59, at 68.

lovable Paro ($6400), and Sophia (not for sale; she is a legal citizen in Saudi
Arabia).66 Tegmark foresees final calamity here, arguing that assuming con-
sciousness where it might not exist could lead to a future in which humans
allow themselves to be replaced, thereby abandoning our most basic moral
duty: to sustain consciousness in the universe. He writes: "[I]f we enable
high-tech descendents that we mistakenly think are conscious, would this
be the ultimate zombie apocalypse, transforming our grand cosmic endow-
ment into nothing but an astronomical waste of space?"67
The robot rights argument has already begun-just Google the phrase.
The most likely future development (assuming a successful control scenario)
is neither the extreme of full inclusion nor full exclusion, but rather an ac-
celerated version of the incrementalism we have seen with animal rights.
But it will be an urgent, messy, febrile incrementalism, born of strife. The
argument for consciousness could very well alter our conceptions of rights
and moral duties at a fundamental level. It could also radically expand our
sense of moral possibilities. After all, in an existentially terrifying, infinitely
empty and lonely universe, the only company we will ever be able to have,
our only hope for shared meaning, is with other conscious minds. And if
this last claim holds any truth, it may also hold our most hopeful answer to
the control problem. Like our own, most deeply felt experiences of empathy
reveal: consciousness matters to consciousness. As Schneider writes: "The
value that an Al places on us may hinge on whether it believes it feels like
something to be us." 68

V. CONCLUSION

Let me close here, at the extreme end of AI fabulation, by addressing what
we might call the "Chicken Little" problem of rights futurism. In the old Eu-
ropean folk tale, a chick panics when an acorn falls on its head, concluding
that the world must be ending because, in the hapless critter's timeless and
mortifying catchphrase, "The sky is falling!" Chicken Little is a cautionary tale
about the embarrassment of overreacting and the danger of fearmongering
("Chicken Little Syndrome" is a state of despair and passivity induced by
imagined calamity). I am mindful of such concerns, but believe a more apt
characterization for our time, and a more relevant theory of embarrassment
can be found in a metaphor offered by AI researcher Eliezer Yudkowsky, who
has argued: "There is no fire alarm for artificial general intelligence." The

66. For a review of these robots, see Patricia Marx, Learning to Love Robots, THE NEW YORKER
(26 Nov. 2018), https://www.newyorker.com/magazine/2018/11/26/learning-to-love-
robots.
67. TEGMARK, supra note 10, at 282.
68. SCHNEIDER, supra note 59, at 40.

point of this essay is not to provide evidence of a fire so that humans can
take precautionary measures. That is not even the point of a fire alarm. As
Yudkowsky points out, seeing evidence that there is a fire actually has very
little impact on our willingness to take measures to protect ourselves. Study
after study shows people doing nothing as they sit in rooms filling with smoke,
paralyzed by the embarrassing possibility that they might be overreacting to
signs of danger. "The real function of the fire alarm," Yudkowsky says when
discussing the AI control problem, "is the social function of telling you that
everyone else knows there is a fire and you can now exit the building in
an orderly fashion without looking panicky or losing face socially."69 We do
not need to start running from a fire that may never ignite. We only need to
build a fire alarm and map pathways to the nearest exits.70

69. Sam Harris, AI: Racing toward the Brink: A Conversation with Eliezer Yudkowsky, MAKING
SENSE PODCAST (28 Feb. 2018), https://samharris.org/podcasts/116-ai-racing-toward-brink/.
70. For more on the risks of distant existential threats and our current incapacity to prepare
for or even adequately imagine them, see Nick Bostrom, The Vulnerable World Hypoth-
esis, 10 GLOBAL POL. 455 (2019), https://www.nickbostrom.com/papers/vulnerable.pdf.
