HUMAN RIGHTS QUARTERLY

Speculative Human Rights

James Dawes
ABSTRACT
This essay takes seriously the claims of many experts in artificial intelligence
that AGI (artificial general intelligence) could emerge as early as 2040.
Some of the questions raised by AGI include: Does human-like intelligence
entail consciousness, and does consciousness entail rights? Will the rise
of AGI enhance or endanger human life? If the former, are there certain
perceived enhancements that run counter to notions of human rights? If
the latter, what are our collective duties, right now, to future generations?
How can a human rights framework help us to negotiate these questions?
I. INTRODUCTION
What will it mean for human rights when artificial intelligence transcends
the human?
Before I answer that question, I should explain why I think that it is
an important one. On the one hand, our culture is saturated with anxiety
about the hurtling speed of technological development. Public figures as
different as Stephen Hawking, Bill Gates, Henry Kissinger, and Elon Musk
have all warned that we are on the verge of developing AI so complicated
that we will be able neither to understand nor to control it. As Hawking
James Dawes, professor at Macalester College, is the author of The Novel of Human
Rights (Harvard, 2018), Evil Men (Harvard, 2013), That the World May Know: Bearing
Witness to Atrocity (Harvard, 2007), and The Language of War (Harvard, 2002).
Human Rights Quarterly 42 (2020) 573-593 © 2020 by Johns Hopkins University Press
once dramatically said, AI could "spell the end of the human race."1 On
the other hand, the distant risk posed by the potential emergence of
superintelligent AI seems, especially for those working in human rights, like an
intellectual exercise at best and a distraction at worst, given that we face a
terrifying array of immediate existential concerns right now. This year in this
journal, philosopher Mathias Risse challenged that latter stance, highlighting
the lack of attention artificial intelligence has received in the human rights
community. Noting the "exponential technological advancement" of the
last decades, he called for immediate attention to the human rights risks
presented by superintelligence, arguing that it is "urgent to get this matter
on the agenda."2 I believe he is right. This essay is an attempt to help with
that agenda.
For those skeptical of futurist dystopianism, I want to begin by explaining
how I got to the point where the opening question began to worry me.
My research focus, broadly speaking, concerns conduct in war. The pull
toward thinking about AI began with work I was doing on the problem
of war crimes and autonomous weapon systems (AWS). For those unfamiliar
with the term, AWS are weapons that do not have a human in the loop.
Drones, however concerning they might be, are still traditional weapons:
there is a human operator who makes the moral calls. AWS are designed to
exclude humans from moral intervention. AWS are built to operate entirely
independently, to make their own decisions and develop their own tactics.
Human Rights Watch has been on the front edge of concern about AWS,
describing them as "killer robots" and calling for a "preemptive ban on the
weapons' development, production, and use."3
As an international movement, "The Campaign to Stop Killer Robots"
traces its origin to a 2007 article in The Guardian by roboticist Noel
Sharkey, which looked at the use of armed battlefield robots in Iraq. Sharkey
warned against the development of fully autonomous robots and called for
their international regulation.4 The article was called "Robot Wars Are a
Reality," and at the time I lacked the imaginative capacity to fully appreciate
the threat because I just could not get past how much it sounded like the
Terminator movie franchise. I started paying attention in 2012 after Human
Rights Watch published its first report detailing the dangers of autonomous
1. Rory Cellan-Jones, Stephen Hawking Warns Artificial Intelligence Could End Mankind,
BBC NEWS (2 Dec. 2014), https://www.bbc.com/news/technology-30290540.
2. Mathias Risse, Human Rights and Artificial Intelligence: An Urgently Needed Agenda,
41 HUM. RTS. Q. 1, 2, 5 (2019).
3. Human Rights Watch (HRW), Heed the Call: A Moral and Legal Imperative to Ban Killer
Robots (21 Aug. 2018), https://www.hrw.org/report/2018/08/21/heed-call/moral-and-
legal-imperative-ban-killer-robots. For more, see PAUL SCHARRE, ARMY OF NONE: AUTONOMOUS
WEAPONS AND THE FUTURE OF WAR (2018). For a defense of AWS, see AMIR HUSAIN, THE SENTIENT
MACHINE: THE COMING AGE OF ARTIFICIAL INTELLIGENCE 87-108 (2017).
4. Noel Sharkey, Robot Wars Are a Reality, THE GUARDIAN (17 Aug. 2007), https://www.
theguardian.com/commentisfree/2007/aug/18/comment.military.
2020 Speculative Human Rights 575
weapon systems.5 It was not until 2017 that I published anything at all on
the threat, which I confess with chagrin given that the changing nature of
war has been a key research area throughout my career. I simply could not
rise above my biases: science fiction took it seriously, so I did not.
It is now twelve years since Sharkey's loopy-sounding, Schwarzenegger-
invoking early warning, and today, right now, major militaries around the
world are aggressively investing in research to develop and deploy AWS.
The US alone budgeted $18 billion for AWS from 2016 to 2020.6 Of course,
national militaries express full confidence in their ability to control AWS.
But the history of maintaining control over weapons technology is not en-
couraging. The most likely near-term scenario is an AWS race with the same
devastating escalations and risks as the Cold War nuclear arms race and the
post-Cold War nuclear proliferation problem. Indeed, we can also expect
new versions of both the Kalashnikov problem and the dirty bomb problem:
the technology will be simplified, made cheaper and less controllable, and
will become easily accessible to a range of non-state actors. In a best-case
scenario, AWS will undermine the structure of international humanitarian
law and war crimes prosecution. We will inhabit a world of deeply confused
chains of control and moral responsibility, where crimes against humanity
happen before humans form thoughts about them. In a worst-case scenario,
AWS will be designed to do what tactical nuclear weapons can do, and
they could become an extinction-level development.
Unless we do something collectively and dramatically now, most of us
will live to see these weapons deployed, likely to catastrophic effect. But
there are real solutions available, and that is one thing I repeatedly emphasize
when giving talks on AWS: however dismaying the future might seem, there
are things we can do right now that will matter. Just imagine if something
like our current anti-nuclear proliferation strategies had been put in place
before the Manhattan Project.
My worries about AWS naturally led to researching more about the future
of weaponized AI and then researching the capacities of AI generally.
And this led me to the question I started with: What will it mean for human
rights when artificial intelligence transcends the human?
Edmund Burke coined the phrase "speculative rights" to describe rights
that are weakly grounded because they are based on reason, which he judged
5. HRW, Losing Humanity: The Case against Killer Robots (19 Nov. 2012), https://hrw.
org/report/2012/11/19/losing-humanity/case-against-killer-robots. Roboticist Ronald Ar-
kin's work on devising an "ethical governor" for AWS, while not sound as a matter of
weapons control, is an anticipatory model for establishing research pathways sensitive
to the threats of AI. See RONALD ARKIN, GEORGIA INSTITUTE OF TECHNOLOGY, GOVERNING LETHAL
BEHAVIOR: EMBEDDING ETHICS IN A HYBRID DELIBERATIVE/REACTIVE ROBOT ARCHITECTURE 1, 20 (2008),
https://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf.
6. Don't Let Robots Pull the Trigger, SCI. AM. (1 Mar. 2019), https://www.scientificamerican.
com/article/dont-let-robots-pull-the-trigger/.
fallible, and not on tradition, which he trusted.7 I am using the phrase
speculative human rights in this essay to deliberately invoke both Burke and the
genre of speculative fiction, to invite you to indulge with me in an exercise
in reason, a science-fiction thought experiment that anticipates the rights
that tradition cannot. Speculative human rights is about threats to human
flourishing that are beyond the current horizon of possibility, threats that
are so far down the technological path we are on that it is just hard to feel
anxious about them. But part of the anxiety I want to pass on to you is the
thought that the very same thing could have once been said for the current
array of existential risks we face.
In this article, I will be focusing primarily on AGI (artificial general intel-
ligence). To quickly specify terms: AI is information processing for specialized
tasks, like chess-playing or, in a human rights and humanitarianism context,
wartime bomb disposal. AGI would be generalized intelligence that could
perform all the cognitive tasks that humans can perform at least as well as
humans can; artificial intelligence that could, for instance, not only calculate
the permutations of chess but also navigate a physical object over cluttered
terrain for the purpose of bomb disposal; artificial intelligence that could
not just learn something, but learn anything.
My goal in this essay will be to comprehensively forecast the concerns
AGI would raise when looked at from the perspective of human rights. To
do that, I am going to rely upon the politically problematic but still con-
ceptually useful division between civil and political rights and economic,
social, and cultural rights.
First, what happens when we look at AGI from the perspective of civil
and political rights, focusing on protection from discrimination, arbitrary
harm, and coercion? Philosopher Nick Bostrom was among the first to think
seriously about dystopian futures in which AGI unleashed calamitous violence
or established terrifying new regimes of coercion, but others like cosmologist
Max Tegmark have followed in his path, spinning out scenarios of expunged
rights under benevolent AGI dictators and totalitarian surveillance states.
Second, what happens when we look at AGI from the perspective of
economic, social, and cultural rights, focusing in particular on the right to
work and the right to health? Nobel laureate economist Joseph
Stiglitz, among others, argues that the near-future impact of AI on work will
be historically unprecedented, triggering a global labor disruption that could
be politically destabilizing, generating new asymmetries of wealth and power
that could make our current era look like a golden age of equality.8
7. EDMUND BURKE, ON TASTE, ON THE SUBLIME AND BEAUTIFUL, REFLECTIONS ON THE FRENCH REVOLUTION,
A LETTER TO A NOBLE LORD 180 (1909).
8. Ian Sample, Joseph Stiglitz on Artificial Intelligence: We're Going Towards a More Divided
Society, THE GUARDIAN (8 Sept. 2018), https://www.theguardian.com/technology/2018/
sep/08/joseph-stiglitz-on-artificial-intelligence-were-going-towards-a-more-divided-
society.
9. Steven Pinker's more careful definition is this: "Intelligence, then, is the ability to attain
goals in the face of obstacles by means of decisions based on rational (truth-obeying)
rules." STEVEN PINKER, HOW THE MIND WORKS 62 (2009).
10. Cited in MAX TEGMARK, LIFE 3.0: BEING HUMAN IN THE AGE OF ARTIFICIAL INTELLIGENCE 4 (2017).
11. See David J. Chalmers, The Singularity: A Philosophical Analysis, 17 J. CONSCIOUSNESS STUD.
7 (2010).
the same for mental work," he writes, "but machine learning automates
automation itself."12
When AGI begins researching AGI, it will be able to produce the next
generation of smarter machines faster than humans; this next, smarter
generation will be able to do so even more quickly, and so on. The pace
of technology will accelerate and then, because research will be bound
neither by the limits of human intelligence nor by the limits of each
successive parent-generation of AI, the acceleration will begin to accelerate. This
kind of asymptotic, accelerating acceleration means that even if the run-up
to human-level AGI is quite long, the next major step to superintelligence
could be quite rapid. The changes wrought on the planet in the sliver of
geologic time since the emergence of human intelligence have been ef-
fectively immeasurable and, arguably, ruinous for everything nonhuman.
What will happen with the emergence of superintelligence-and perhaps
more important, how fast will it happen? Unless we begin preparing for
that moment long in advance, we will likely have no time to coordinate
a response when it occurs. As Henry Kissinger recently wrote, urging the
US government, along with AI developers, to develop a national vision for
containing the perils of AGI: "If we do not start this effort soon, before long
we shall discover that we started too late." 13
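The compounding logic described in the paragraph above can be illustrated with a toy model; the numbers below are arbitrary assumptions of mine, not forecasts from any of the sources cited. Hold the research effort needed to build the next generation of AI fixed, and let each generation work through that effort proportionally faster than its parent:

```python
# A toy model of recursively accelerating AI research, offered only as an
# illustration of the argument above; the parameters are arbitrary assumptions.
def years_to_next_generation(intelligence, base_effort=10.0):
    """Research effort is fixed; a smarter generation completes it
    proportionally faster."""
    return base_effort / intelligence

def timeline(generations=6, growth=1.5):
    """Each generation is `growth` times as capable as its parent.
    Returns the elapsed years at which each generation appears."""
    intelligence, elapsed, arrivals = 1.0, 0.0, []
    for _ in range(generations):
        elapsed += years_to_next_generation(intelligence)
        arrivals.append(round(elapsed, 2))
        intelligence *= growth
    return arrivals

# The gaps between successive arrivals shrink geometrically:
# the acceleration itself accelerates.
print(timeline())
```

Under these assumptions each generational gap is two-thirds of the last, so the arrival times form a converging geometric series: however long the run-up to the first generation, all later generations arrive within a bounded window after it.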
And with that strong claim, I am at the end of the thought experiment.
To ground us back in sober realism as we move forward, I should emphasize
that everything about the previous three paragraphs is up for debate. There
are many sound arguments against the logic of an intelligence explosion,
including Russell's suggestion that "it is logically possible that there are
diminishing returns to intelligence improvements, so that the process peters
out rather than exploding."14 And while surveys are often cited showing
that most experts in artificial intelligence believe AGI will eventually be
achieved, the predicted timelines range from 20 years to 100 years and
even, in Noam Chomsky's words, to "eons."15 Pioneering AI researcher
Judea Pearl, winner of the Turing Award (the highest distinction in computer
science), argues that we will eventually create AGI but that to do so we
will need a major paradigm shift, abandoning current AI, which is defined
by associational reasoning (that is, pattern identification in large data sets,
or "curve fitting") to develop AI capable of causal reasoning, which Pearl
12. PEDRO DOMINGOS, THE MASTER ALGORITHM: HOW THE QUEST FOR THE ULTIMATE LEARNING MACHINE
WILL REMAKE OUR WORLD 9-10, 286 (2015).
13. Henry Kissinger, How the Enlightenment Ends, THE ATLANTIC (June 2018), https://www.
theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-
human-history/559124/.
14. STUART RUSSELL, HUMAN COMPATIBLE: ARTIFICIAL INTELLIGENCE AND THE PROBLEM OF CONTROL 143
(2019).
15. Interview by Singularity Weblog with Noam Chomsky, American Linguist, on YouTube,
(10 June 2016: 1:06), https://www.youtube.com/watch?v=Ck9zKihlYLE.
16. See Judea Pearl, Theoretical Impediments to Machine Learning with Seven Sparks from
the Causal Revolution, 3 (15 Jan. 2018), https://arxiv.org/pdf/1801.04016.pdf.
17. Stephen Hawking, Stuart Russell, Max Tegmark & Frank Wilczek, Stephen Hawking:
"Transcendence Looks at the Implications of Artificial Intelligence But are we Taking
AI Seriously Enough?", THE INDEPENDENT (1 May 2014), https://www.independent.co.uk/
news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-
intelligence-but-are-we-taking-9313474.html.
A. Capital
The near-term, recognizable version of the capital problem is that with the
emergence of superintelligent machines, the productivity of capital will
increase while the wages commanded by human laborers will plummet.
Worries over AI and automation are, of course, not new. In 1964, only eight
years after the field of artificial intelligence was founded at the Dartmouth
Summer Research Project on Artificial Intelligence, a group of prominent
activists, academics, and technology experts brought their concerns about
"cybernation" to President Lyndon B. Johnson in a report entitled "The Triple
Revolution."18 With the intellectual heft of scholars like Nobel laureate econo-
mist Gunnar Myrdal behind it, the group argued that the only useful historical
model for anticipating the change AI would force upon the economy was
the transition from the agricultural era to the industrial era. "The industrial
production system," they argued, "is no longer viable."19 The government
must take radical action to manage such dramatic disruption, they asserted,
to prevent soaring unemployment, a tumbling labor force participation rate,
socially destabilizing inequality, and a vicious cycle of decreasing income
that leads to decreasing demand that leads to more unemployment that, in
turn, leads to further decreasing income, spiraling ever downward.
AI entrepreneur Martin Ford argues that the authors of "The Triple
Revolution" were not wrong, despite the decades of booming industrial
production that followed their dolorous predictions. They "simply sound[ed]
the alarm far too soon."20 Current forecasters working in economics and
AI differ, but one of the more frequent claims circulating now is around a
50% redundancy rate across the entirety of the human workforce within
the lifetimes of the younger people reading this essay, including areas like
higher education. While some believe this is hyperbolic, even the most sober,
conservative predictions are game-changers for the global economy. A 2017
study by McKinsey Global Institute estimated that "60% of occupations have
30% or more of their constituent activities that could be automated," which
could lead to a 15 percent displacement of the global workforce by 2030.21
Transportation and radiology are often coupled together in such job vulner-
ability studies as a way of showing the unprecedented comprehensive reach
automation will have into human work. In their grimly titled working paper
18. MARTIN FORD, RISE OF THE ROBOTS: TECHNOLOGY AND THE THREAT OF A JOBLESS FUTURE 30-31 (2015).
19. The Ad Hoc Committee on the Triple Revolution, The Triple Revolution, http://www.
educationanddemocracy.org/FSCfiles/CCC2aTripleRevolution.htm.
20. FORD, supra note 18, at 33.
21. Cited in Laura D'Andrea Tyson, Zukunftsmusik, 33 BERLIN J. 16 (2019).
"Smart Machines and Long-Term Misery," economists Jeffrey Sachs and
Laurence Kotlikoff offer the hypothesis that, even worse, such automation could
in theory disproportionately affect the opportunities of the young, creating
a self-reinforcing intergenerational disparity that will lower "the wellbeing
of today's young generation and all future generations."22
Economists like Stiglitz hold faith that artificial intelligence will be,
ultimately, yet another instance of Joseph Schumpeter's creative destruction,
a technological disruption that will lead, after a painful transition, to a new
kind of economy with new and better work for humans. This kind of confi-
dence is deep-rooted in Western thinking, dating as far back as 350 B.C.E.
when Aristotle argued that automation would lead to the end of slavery.23
Nonetheless, even contemporary optimists acknowledge that the looming
Al disruption will be historically unprecedented. If we fail to plan now for
the transition, they warn, the human cost will be staggering and, ultimately,
injurious to democracy. "If we don't change our overall economic and policy
framework," says Stiglitz,
what we're going towards is greater wage inequality, greater income and wealth
inequality and probably more unemployment and a more divided society. But
none of this is inevitable. By changing the rules, we could wind up with a richer
society, with the fruits more equally divided, and quite possibly where people
have a shorter working week. We have gone from a 60-hour work week to a
45-hour week and we could go to 30 or 25.24
22. Jeffrey Sachs & Laurence Kotlikoff, Smart Machines and Long-Term Misery, NATIONAL
BUREAU OF ECONOMIC RESEARCH WORKING PAPER SERIES 16 (2012).
23. ARISTOTLE, POLITICS, bk. 1, pt. 4, at 80 (Benjamin Jowett trans., 1999), https://socialsciences.
mcmaster.ca/econ/ugcm/313/aristotle/Politics.pdf.
24. Sample, supra note 8.
25. Conor McKay, Ethan Pollak & Alastair Fitzpayne, The Aspen Inst. Future of Work Initiative,
Automation and a Changing Economy, Part Two: Policies for Shared Prosperity (2019),
https://www.aspeninstitute.org/publications/automation-and-a-changing-economy-
policies-for-shared-prosperity/.
better jobs in the future, but rather to a jobless future. Calum Chace has
used the history of horses as an illuminating analogy. In 1900, there were
21 million horses in the United States. By 1960, with the ascendance of
the car and the mechanization of agriculture, the population had shrunk
to 3 million. For horses, becoming economically un-useful led to an extra
25 percent population loss over and above the percentage of human lives
claimed by the Black Death in Europe. Discussing a dire 2015 report on
automation and redundancy from Bank of America Merrill Lynch, the
newspaper The Guardian raises the issue of economically un-useful humans and
asks: "What if we're the horses to AI's humans?"26 Princeton economists Anne
Case and Angus Deaton have used the chilling phrase "deaths of despair" to
account for the increase in mortality rates seen in groups affected by, among
other things, loss of economic opportunity based on technological change.27
In his dystopian novel Player Piano (1952), Kurt Vonnegut explores the
existential concerns of a fully automated society. What will life be like for
people stripped not only of their income but also of their sense of meaning
and purpose in work? What happens when capitalism cuts labor down to
the point where the economy can no longer generate demand? Or when
capitalism just does not need so many human bodies to function? Vonnegut
imagines revolution not as a political problem to be managed with minimally
ameliorative measures; he depicts it instead as a solution. The authors of "The
Triple Revolution," anticipating this more speculative future, argued
aggressively that we must therefore affirm an "unqualified right to an income,"28
or a universal basic income. Domingos argues that in the world of AGI, the
latter solution will eventually be recognized as an inevitability. "When the
unemployment rate rises above 50%, or even before," he writes, "attitudes
about redistribution will radically change. The newly unemployed major-
ity will vote for generous lifetime unemployment benefits and the sky-high
taxes needed to fund them."29
B. Cybernetics
26. Heather Stewart, Artificial Intelligence: "Homo Sapiens Will Be Split into a Handful of
Gods and the Rest of Us," THE GUARDIAN (7 Nov. 2015), https://www.theguardian.com/
business/2015/nov/07/artificial-intelligence-homo-sapiens-split-handful-gods.
27. ANNE CASE & ANGUS DEATON, DEATHS OF DESPAIR AND THE FUTURE OF CAPITALISM (2020).
28. The Triple Revolution, supra note 19.
29. DOMINGOS, supra note 12, at 278-79.
ing our biological memories into digital data. The ethical dilemma of our
current cyborg lives is something like the dilemma of higher education. The
technologies of health and knowledge provide life-enhancing opportunities
that exacerbate asymmetries of wealth and power because they are made
available disproportionately to the wealthy and powerful. In the US, most
seem, at least for now, to have accepted such disparities as infelicitous rather
than damning; amelioration is judged aspirational rather than a matter of
basic dignity.
According to some experts, however, the speculative scenario of more
extreme cybernetic lives may not be that far off in the future-with dispari-
ties correspondingly deeper, disparities that could change our conception
of basic dignity such that they rise to the level of a human rights concern.
Science-fiction writer Iain Banks's 1996 novel Excession imagined a "neural
lace," a digital layer above the cortex, that could help humans achieve
symbiosis with artificial intelligence. Today, neuroscientists like Harvard's
Charles Lieber are researching "electronic mesh" to inject into brain tissue,
and Elon Musk's company Neuralink has invested, by the last count, over
$150 million to specifically research Banks's "neural lace" concept. The US
Department of Defense's Defense Advanced Research Projects Agency (com-
monly known as DARPA) continues to invest heavily in both surgical and
nonsurgical brain-machine interfaces, hoping to give combat-ready soldiers
superhuman speed and to help wounded soldiers more fully recover from
injury (DARPA teams have had preliminary success with neural control
of prosthetic limbs).30 Going further, futurist Ray Kurzweil, the Director of
Engineering for Google, predicts that by 2050 nanobots will integrate with
the immune system, generating what has been dubbed "actuarial escape
velocity" 31-that is, the point when technology can add more than one
year to your life for each year you remain alive, creating the possibility of
an indefinite lifespan. As Yuval Noah Harari writes, characterizing the mood of
wild optimism in tech culture: Death is "a technical problem," and "every
technical problem has a technical solution."32
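The notion of actuarial escape velocity defined above reduces to a simple threshold condition, which can be sketched in a toy simulation; the starting life expectancy and annual gain rates below are my own illustrative assumptions, not Kurzweil's figures. So long as technology adds less than one year of remaining life expectancy per year lived, lifespan stays finite; once it adds a year or more, the clock never runs down:

```python
# A minimal sketch of "actuarial escape velocity": each year lived spends one
# year of remaining life expectancy but medical progress adds some back.
# The starting expectancy and gain rates are assumptions for illustration.
def years_survived(remaining=10.0, annual_gain=0.5, horizon=200):
    """Count years lived before remaining life expectancy reaches zero,
    capped at `horizon` to represent an effectively indefinite lifespan."""
    lived = 0
    while remaining > 0 and lived < horizon:
        remaining += annual_gain - 1.0  # one year spent, some years gained
        lived += 1
    return lived

# Below escape velocity (gain < 1) the lifespan is finite; at or above it
# (gain >= 1) the simulation only stops at the cap.
print(years_survived(annual_gain=0.5), years_survived(annual_gain=1.1))
```

With a gain of half a year per year, a ten-year expectancy stretches to twenty years and then ends; with a gain above one year per year, it never ends at all, which is the whole force of the "escape velocity" metaphor.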
Such radical optimists will, no doubt, meet with bitter disappointment,
but along the way, they will also witness breathtaking health gains. To see
why such surplus is not just a thrilling opportunity but also a destabilizing
problem, it is helpful to look back at Plato's views on healthcare. Plato was
comfortable withholding healthcare from the old and the chronically ill.33
That seems morally misguided to most of us today; we see healthcare as a
30. See Al Emondi, Revolutionizing Prosthetics, DEFENSE ADVANCED RES. PROJECTS AGENCY, https://
www.darpa.mil/program/revolutionizing-prosthetics.
31. See RAY KURZWEIL, THE SINGULARITY IS NEAR: WHEN HUMANS TRANSCEND BIOLOGY (2005); see also
RAY KURZWEIL, THE AGE OF SPIRITUAL MACHINES: WHEN COMPUTERS EXCEED HUMAN INTELLIGENCE (1999).
32. YUVAL NOAH HARARI, HOMO DEUS: A BRIEF HISTORY OF TOMORROW 23 (2017).
33. See his discussion of Asclepius in ROBIN WATERFIELD, PLATO REPUBLIC 105-08 (1993).
human right. But I do not believe that is because our moral intuitions about
healthcare have changed so radically since Plato. Rather, it is that what
healthcare can offer has changed, and therefore along with it our idea of
what is just. Healthcare is a disparity-generating technology. When dispari-
ties become vast enough, passing a moral tipping point into the crushingly
unfair and politically destabilizing, they change our conception of basic
human dignity. Advances in well-being like those imagined by DARPA and
Google, while welcome, will require a reconceptualization of what we
can expect from a human life. Cybernetic disparities will become a matter
of basic dignity rather than aspirational ethics. Almost certainly, the future
work of rights activists will involve redefining the benefits of cutting-edge
technology not as a privilege of wealth but rather as a shared entitlement.
For now, the solution to the problem of Al and the right to health, as with
Al and the right to work, is preemptive distributive justice that will minimize
the human costs of delayed equal access.
A. Classification
34. PAUL SCHARRE, ARTIFICIAL INTELLIGENCE: THE CONSEQUENCES FOR HUMAN RIGHTS, CTR. FOR NEW AM.
SECURITY, TESTIMONY BEFORE THE TOM LANTOS HUMAN RIGHTS COMMISSION (23 May 2018), https://
humanrightscommission.house.gov/sites/humanrightscommission.house.gov/files/docu-
ments/AITranscriptpublic.pdf.
how big data is reconfiguring racism and sexism, how the brave new world
of technology reproduces the same old discriminations. Here I would point
to a range of sources, including Safiya Noble's work on the racism "baked
in" and masked in purportedly neutral search engines; the ACLU's work on
the racism of predictive policing algorithms like PredPol; two 2017 reports
submitted to the UN Human Rights Council emphasizing the human rights
risks of algorithmic discrimination based on age and gender; and a 2017
report from the Human Rights, Big Data and Technology Project (HRBDT)
to the House of Lords Select Committee on Artificial Intelligence, which
cites the same concern in its program for developing a human rights-based
approach to the development and use of AI.35 Privacy narrowly conceived
of course matters, but a fuller conception of the value of privacy should
include distortions of the environment within which privacy is constituted.
That is, the problem is not simply the unprecedented ways we can collect
and use information but also our conception of information itself. Stuart
Russell argues, for instance, that Articles 18 and 19 of the Universal Dec-
laration of Human Rights (freedom of thought and expression) need to be
supplemented now, if they wish to remain robust as conceptions of dignity
in the age of Al, by a declaration of a "right to mental security"-that is, the
"right to live in a largely true information environment."36
Regulatory bodies and rights organizations have been slow to respond to
the challenge of statistical discrimination based upon "bad" (unrepresentative,
insufficiently detailed, or distorted by racist inputs) information. But to be
fair, the challenge is difficult, given that acquiring unbiased data poses not
only a range of technical problems and cost issues but also the intractable
puzzle of rights trade-offs: acquiring more detailed, unbiased information
would almost certainly in many cases require sacrifices of privacy rights.37
And as the HRBDT notes, regulation of online content by social media
platforms can impact freedom of expression. Nonetheless, as the cases cited
above show, movement to address these issues is accelerating with a wide
range of recommendations for redress-for instance, by using human rights
impact assessments to "complement the artificial intelligence design and
deployment process, instead of focusing solely on post-facto accountability."38
What will the classification problem look like in a world of AGI, and
what can we do about it? That is a question that must be subsumed under
the control problem, as should become clear shortly.
35. See SAFIYA NOBLE, ALGORITHMS OF OPPRESSION: HOW SEARCH ENGINES REINFORCE RACISM (2018). See
also The Human Rights, Big Data and Technology Project Written Evidence (AIC0196)
(6 Sept. 2017), http://data.parliament.uk/writtenevidence/committeeevidence.svc/
evidencedocument/artificial-intelligence-committee/artificial-intelligence/written/69717.
html.
36. RUSSELL, supra note 14, at 107.
37. See Lior Jacob Strahilevitz, Privacy Versus Antidiscrimination, U. CHI. L. REV. 363 (2008).
38. Big Data and Technology Project, supra note 35.
B. Control
39. Nick Bostrom, Ethical Issues in Advanced Artificial Intelligence (2003), https://www.
nickbostrom.com/ethics/ai.html.
40. Id.
41. NICK BOSTROM, SUPERINTELLIGENCE: PATHS, DANGERS, STRATEGIES 131-39 (2014).
42. Risse, supra note 2, at 7.
I should note here, for clarity, two things that are perhaps already obvi-
ous. First, Bostrom's quirky examples are not unusual for the genre of the
philosophical thought experiment, and are meant to function as stand-ins for
more serious abstract categories. For instance, the paperclip of his paperclip
maker is a stand-in for the concept of 'anything not useful to human final
goals,' and the rictus smile of the happiness machine is best understood as
'anything we think we want that is difficult to limit definitionally.' Second,
those most concerned with value-alignment tend to regard with suspicion
control plans that rely on programming guardrails like digital apoptosis, kill
switches, and ongoing supervision through human overrides. While it is not
impossible that such techniques could work, the reigning assumption is that
a superintelligent learning system optimizing for goals not only could but
would find ways to escape digital handcuffs.
Scholars working on the control problem, along with for-profit corporations like OpenAI and volunteer organizations like the Future of Life Institute,
43. World Artificial Intelligence Conference, Jack Ma and Elon Musk Debate, YOUTUBE (29
Aug. 2019), https://www.youtube.com/watch?v=f3lUEnMaiAU.
44. RUSSELL, supra note 14, at 138.
45. BOSTROM, supra note 41, at 146-47.
46. See Amanda Askell, Miles Brundage & Jack Clark, Why Responsible AI Development Needs Cooperation on Safety, OPENAI (10 July 2019), https://openai.com/blog/cooperation-on-safety/.
47. Putin: Leader in Artificial Intelligence Will Rule World, CNBC (4 Sept. 2017), https://www.cnbc.com/2017/09/04/putin-leader-in-artificial-intelligence-will-rule-world.html.
48. RUSSELL, supra note 14, at 138.
49. Id.
50. Id.
51. Id.
52. Asilomar AI Principles, FUTURE OF LIFE INSTITUTE (2017), https://futureoflife.org/ai-principles/.
53. RUSSELL, supra note 14, at 138.
C. Consciousness
54. Id.
55. See id. at 1-2, 171-83. See also Lucas Perry, Human Compatible: Artificial Intelligence and the Problem of Control with Stuart Russell, AI ALIGNMENT PODCAST (8 Oct. 2019), https://futureoflife.org/2019/10/08/ai-alignment-podcast-human-compatible-artificial-intelligence-and-the-problem-of-control-with-stuart-russell/.
56. See PAOLA CAVALIERI & PETER SINGER, THE GREAT APE PROJECT: EQUALITY BEYOND HUMANITY (1993).
57. See Animal Welfare Amendment Act (No 2) (2015), http://www.legislation.govt.nz/act/public/2015/0049/latest/whole.html#DLM5174807.
58. See N. KATHERINE HAYLES, UNTHOUGHT: THE POWER OF THE COGNITIVE NONCONSCIOUS 30-32, 39
(2017).
59. See SUSAN SCHNEIDER, ARTIFICIAL YOU: AI AND THE FUTURE OF YOUR MIND 19-26 (2019). For more on computationalism and its limits, see DAVID GELERNTER, THE TIDES OF MIND: UNCOVERING THE SPECTRUM OF CONSCIOUSNESS (2016).
60. For more on this debate, see Carlotta Rigotti, Sex Robots and Human Rights, OPENDEMOCRACY (8 May 2019), https://www.opendemocracy.net/en/democraciaabierta/sex-robots-and-human-rights/.
meaningful sense against the zombie itself. However, if what you thought
was a zombie is, in fact, conscious, if there is something that it is like to
be that thing, then the moral stakes change dramatically. Bostrom warns
that discarding such AIs when they are no longer useful would constitute
"mind crime."61 Philosopher Susan Schneider asserts that forcing conscious
AI "to serve us would be akin to slavery" and insists that thinking otherwise
is speciesism.62
Philosophers Mara Garza and Eric Schwitzgebel, who also worry about
this problem, urge us to establish Al research pathways that follow what
they call "the principle of the excluded middle"-that is, only produce
intelligences that are very clearly not-conscious or very clearly conscious.
No in-betweens, no zombies.63 Schneider has also begun developing various
conceptual tests to determine the likelihood that any particular AI is conscious,
all based on the premise that a zombie AI that was "boxed in"-that
is, not yet connected to external information sources-would be incapable
of understanding concepts based upon the felt quality of consciousness
(such as, "Could you survive the permanent deletion of your program?") or
forming coherent answers to questions about its experience of consciousness
(such as, "What is it like to be you right now?").64 In the meantime,
Schneider urges the use of a precautionary principle that requires us to grant
rights rather than deny them when we cannot be sure.
Schneider's moral generosity comes, however, with potentially wrenching
costs. Philosophers like Matthew Liao, along with Garza and Schwitzgebel,
have begun testing our moral intuitions with thought experiments that mimic
Philippa Foot's original trolley problem.65 Would you put a human at risk
to save a larger number of seemingly conscious androids, or would you
abandon the helpless AI with the moral ease of recycling an iPhone? Until
you know what consciousness is and who or what has it, they assert, you
cannot really give a satisfying moral answer. Erring in favor of the human
risks moral catastrophe, if the android represents a new kind of conscious
life ushered into the universe. But erring in favor of androids with the precau-
tionary principle could also be moral catastrophe, no differently than if we
now, at the expense of needful humans, invested precious, zero-sum-game
resources to protect the dignity of today's pre-zombies, which we can buy
now: Pepper ($25,000), Kuri ($899), CHiP ($199.99), Lynx ($799.99), the
lovable Paro ($6400), and Sophia (not for sale; she is a legal citizen in Saudi
Arabia).66 Tegmark foresees final calamity here, arguing that assuming con-
sciousness where it might not exist could lead to a future in which humans
allow themselves to be replaced, thereby abandoning our most basic moral
duty: to sustain consciousness in the universe. He writes: "[I]f we enable
high-tech descendants that we mistakenly think are conscious, would this
be the ultimate zombie apocalypse, transforming our grand cosmic endowment
into nothing but an astronomical waste of space?"67
The robot rights argument has already begun-just Google the phrase.
The most likely future development (assuming a successful control scenario)
is neither the extreme of full inclusion nor full exclusion, but rather an ac-
celerated version of the incrementalism we have seen with animal rights.
But it will be an urgent, messy, febrile incrementalism, born of strife. The
argument for consciousness could very well alter our conceptions of rights
and moral duties at a fundamental level. It could also radically expand our
sense of moral possibilities. After all, in an existentially terrifying, infinitely
empty and lonely universe, the only company we will ever be able to have,
our only hope for shared meaning, is with other conscious minds. And if
this last claim holds any truth, it may also hold our most hopeful answer to
the control problem. As our own most deeply felt experiences of empathy
reveal: consciousness matters to consciousness. As Schneider writes: "The
value that an Al places on us may hinge on whether it believes it feels like
something to be us."68
V. CONCLUSION
66. For a review of these robots, see Patricia Marx, Learning to Love Robots, THE NEW YORKER
(26 Nov. 2018), https://www.newyorker.com/magazine/2018/11/26/learning-to-love-
robots.
67. TEGMARK, supra note 10, at 282.
68. SCHNEIDER, supra note 59, at 40.
point of this essay is not to provide evidence of a fire so that humans can
take precautionary measures. That is not even the point of a fire alarm. As
Yudkowsky points out, seeing evidence that there is a fire actually has very
little impact on our willingness to take measures to protect ourselves. Study
after study shows people doing nothing as they sit in rooms filling with smoke,
paralyzed by the embarrassing possibility that they might be overreacting to
signs of danger. "The real function of the fire alarm," Yudkowsky says when
discussing the Al control problem, "is the social function of telling you that
everyone else knows there is a fire and you can now exit the building in
an orderly fashion without looking panicky or losing face socially."69 We do
not need to start running from a fire that may never ignite. We only need to
build a fire alarm and map pathways to the nearest exits. 70
69. Sam Harris, AI: Racing toward the Brink: A Conversation with Eliezer Yudkowsky, MAKING SENSE PODCAST (28 Feb. 2018), https://samharris.org/podcasts/116-ai-racing-toward-brink/.
70. For more on the risks of distant existential threats and our current incapacity to prepare
for or even adequately imagine them, see Nick Bostrom, The Vulnerable World Hypoth-
esis, 10 GLOBAL POL. 455 (2019), https://www.nickbostrom.com/papers/vulnerable.pdf.