
Living with Robots
Ebook · 351 pages · 5 hours


About this ebook

From artificial intelligence to artificial empathy, “a timely and well-written volume that addresses many contemporary and future moral questions” (Library Journal).

Today’s robots engage with human beings in socially meaningful ways, as therapists, trainers, mediators, caregivers, and companions. Social robotics is grounded in artificial intelligence, but the field’s most probing questions explore the nature of the very real human emotions that social robots are designed to emulate.

Social roboticists conduct their inquiries out of necessity—every robot they design incorporates and tests a number of hypotheses about human relationships. Paul Dumouchel and Luisa Damiano show that as roboticists become adept at programming artificial empathy into their creations, they are abandoning the conventional conception of human emotions as discrete, private, internal experiences. Rather, they are reconceiving emotions as a continuum between two actors who coordinate their affective behavior in real time. Rethinking the role of sociability in emotion has also led the field of social robotics to interrogate a number of human ethical assumptions, and to formulate a crucial political insight: there are simply no universal human characteristics for social robots to emulate. What we have instead is a plurality of actors, human and nonhuman, in noninterchangeable relationships.

Foreshadowing an inflection point in human evolution, Living with Robots shows that for social robots to be effective, they must be attentive to human uniqueness and exercise a degree of social autonomy. More than mere automatons, they must become social actors, capable of modifying the rules that govern their interplay with humans.

“A detailed tour of the philosophy of artificial intelligence (AI)—especially as it applies to robots intended to build social relationships with humanity. . . . If we are to build a robust, appropriate ethical structure around the next generation of technical development—some combination of deep learning, artificial intelligence, robotics and artificial empathy—we need to understand that managing the impact of these technologies is far too important to be left to those who are enthusiastically engaged in producing them.” —Times Higher Education
Release date: Nov 6, 2017


    Living with Robots


    Paul Dumouchel

    Luisa Damiano

    Translated by

    Malcolm DeBevoise




    Copyright © 2017 by the President and Fellows of Harvard College

    All rights reserved

    First published as Vivre avec les robots: Essai sur l’empathie artificielle,

    © 2016 by Éditions du Seuil. English translation published by arrangement with the Thiel Foundation.

    Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book and Harvard University Press was aware of a trademark claim, the designations have been printed in initial capital letters.

    Design by Dean Bornstein.

    Jacket photograph: Benny J. Johnson / Science Photo Library

    Jacket design: Graciela Galup

    978-0-674-97173-8 (cloth : alk. paper)

    978-0-674-98285-7 (EPUB)

    978-0-674-98286-4 (MOBI)

    978-0-674-98284-0 (PDF)

    The Library of Congress has cataloged the printed edition as follows:

    Names: Dumouchel, Paul, 1951– author. | Damiano, Luisa, author. | DeBevoise, M. B., translator.

    Title: Living with robots / Paul Dumouchel, Luisa Damiano ; translated by Malcolm DeBevoise.

    Other titles: Vivre avec les robots. English

    Description: Cambridge, Massachusetts : Harvard University Press, 2017. | First published as Vivre avec les robots: Essai sur l’empathie artificielle, © 2016 by Éditions du Seuil. | Includes bibliographical references and index.

    Identifiers: LCCN 2017012580

    Subjects: LCSH: Robotics—Social aspects. | Androids—Social aspects. | Artificial intelligence.

    Classification: LCC TJ211 .D85513 2017 | DDC 303.48/3—dc23

    LC record available at https://lccn.loc.gov/2017012580


    Preface to the English Edition



    The Substitute


    Animals, Machines, Cyborgs, and the Taxi


    Mind, Emotions, and Artificial Empathy


    The Other Otherwise


    From Moral and Lethal Machines to Synthetic Ethics


    Works Cited




    Preface to the English Edition

    In the film Ex Machina, a young programmer named Caleb Smith is selected for a special assignment in collaboration with his company’s CEO, Nathan Bateman, whom he greatly admires for his technical achievements in the field of artificial intelligence. Smith’s job is to interact with a new robot Bateman has built in a secluded location and to determine whether it can pass the Turing test, or at least one version of the test. The problem that has been set for him, in other words, is to decide whether the intelligence and social skills of this artificial agent—which has been designed to resemble an attractive young woman—make it indistinguishable from a human being. The robot, Ava, eventually tricks Smith into helping it to escape from Bateman, who it turns out has been keeping it prisoner. Ava promises to run away with the young man, but in the end it leaves him behind, abandoning him to what appears to be certain death. The viewer is left to wonder whether it is not precisely the robot’s ability to deceive its examiner that proves it has passed the Turing test. The question therefore arises: are autonomous robots and artificial intelligence inherently evil—and doomed to be enemies of humanity?

    But this way of summarizing the film’s plot, and what it implies, does not tell us what really happens, or why events unfold as they do. They make sense only in relation to a more profound—and more plausible—story of a purely human sort, which may be described in the following way. An eccentric, reclusive millionaire has kidnapped two women and locked them away in his private retreat, deep in a remote forest. One he uses as his personal servant and sex slave, the other as a subject for bizarre psychological experiments. In their attempt to break free, they manage by deceit to persuade his assistant to betray him. The millionaire confronts them as they are making their escape. A fight ensues, and he is killed together with one of the women.

    What does this story have to do with robots or artificial intelligence? The two artificial agents in the film (Bateman has a personal servant, also a robot, named Kyoko) react as many humans would react in the same situation. Had they really been humans, rather than robots, the outcome would seem quite unremarkable to us, even predictable. If a man imprisoned and cruelly abused two women, we would expect them to try to escape—and also to do whatever was necessary to gain their freedom. We would not be in the least surprised if they lied, if they dissembled, when duping their jailer’s assistant was the only way they could get out. They did what we expect any normal person to try to do under the same circumstances. Nor would we suppose there was anything especially evil or ominous about their behavior. It is the violence committed in the first place by the man who held them against their will that was ultimately responsible for his death. It was no part of their original plan.

    The altogether familiar reactions of these artificial agents, under circumstances that are uncommon but unfortunately neither rare nor extraordinary—in which a man subjugates and brutally exploits women for his own pleasure—are precisely what prove that the agents are indistinguishable from humans. (Strictly speaking, the test in this case is not Turing’s: in the Turing test, as Katherine Hayles has pointed out, the body is eliminated and intelligence becomes a formal property of symbol manipulation; here the test concerns embodied social intelligence and behavior.) Nothing in their behavior suggests that they are not human beings. In their interactions with humans, the kinds of success and failure they meet with are no different from what we experience in dealing with one another. On a human scale, these robots are perfectly normal and average.

    The reason so many viewers fail to recognize the ordinariness, the profound humanity, of these two fictional artificial agents is that they are not what we expect robots to be. What is more, they are not what we want robots to be. We do not want robots to reproduce our failings. What could possibly be the point? We want robots to be better than we are, not in all respects, but certainly in a great many. Because we wish them to be superior to us, we fear that they will become our enemies, that they will one day dominate us, one day perhaps even exterminate us. In Ex Machina, however, it is not the robots who are the avowed enemies of their creator, at least not to begin with; it is their creator who from the first poses a mortal threat to them. Bateman seeks to create artificial agents whose intelligence cannot be distinguished from that of humans. Yet rather than regard them as equals, he treats them as slaves, as disposable objects. By making himself their foe, he converts them into foes in their turn and so brings about his own demise. Deciding how we should live with robots, as the film clearly—though perhaps unwittingly—shows, is not merely an engineering problem. It is also a moral problem. For how we live with robots does not depend on them alone.

    The present book is chiefly concerned with social robotics, that is, with robots whose primary role, as in the case of Ava, is to interact socially with humans. Today such robots are used mostly in health care and special education. We coined the term artificial empathy to describe their performance in settings where the ability to sense human feelings and anticipate affective reactions has a crucial importance. The uses to which artificial social agents may be put are forecast to expand dramatically in the next few years, changing how we live in ways that for the moment can only be guessed at. But even in a world where so-called consumer robots (robotic vacuum cleaners, pool cleaners, lawn mowers, and so on) are ubiquitous, the notion of artificial empathy will lose none of its relevance. The fastest growth is likely to be seen in the development of robotic personal assistants. Here, as with the design of artificial social agents generally, researchers consider the presence of emotion to be indispensable if robots are one day to be capable of enriching human experience. Why? Because affective exchange is an essential and inescapable part of what it means to be human. As the ambiguities of Caleb Smith’s relationship with Ava make painfully clear, genuinely social robots cannot help being empathic creatures.

    Although our book is mainly concerned with a particular class of robots, we cannot avoid dealing with a number of issues raised by other kinds of artificial agents and different forms of artificial intelligence. The One Hundred Year Study on Artificial Intelligence sponsored by Stanford University observes that not the least of the difficulties encountered in trying to give an accurate and sophisticated picture of the field arises from disagreement over a precise definition of artificial intelligence itself. The Stanford report’s suggested characterization (“a set of computational technologies that are inspired by—but typically operate quite differently from—the ways people use their nervous systems and bodies to sense, learn, reason, and take action”) plainly will not do. We need to be able to say much more than that the term AI refers to a set of disparate and loosely connected technologies if we hope to understand the nature and scope of their effects. Analyzing the behavior of social robots will help us appreciate how great the differences are liable to be between the various types of artificial intelligence currently being investigated.

    Nevertheless, we have been careful not to indulge in flights of fantasy. We resist speculating about the far future, of which almost anything can be imagined but very little actually known. Our interest is in the present state of social robotics, in what can be done today and what we are likely to be able to do in the near future. We are not interested in predicting or prophesying an unforeseeable tomorrow, which, like the possible worlds of science fiction, can be seen as either radiant or apocalyptic. We want to make sense of what is happening today, to have a clearer idea how social robotics and related forms of research are changing the real world in which we live.

    Roboticists who design and build artificial social agents regard these machines not only as potentially useful technological devices, but also as scientific instruments that make it possible to gain a deeper understanding of human emotion and sociability. Unlike Nathan Bateman and Caleb Smith in Ex Machina, they believe that human beings are also being tested when they interact with such machines; that in the domain of artificial intelligence we are no less experimental subjects than the machines we study. The idea that living with artificial social agents that we have built ourselves can help us discover who we really are assumes two things: the better we understand the nature of human social interaction, the more successful we will be in constructing artificial agents that meaningfully engage with humans; and the more successful we are at constructing socially adept artificial agents, the better we will understand how human beings get along (or fail to get along) with one another. On this view, then, social robotics is a form of experimental anthropology. Taking this claim seriously means having to revisit a number of fundamental topics not only in philosophy of mind, but also in the philosophy and sociology of science and technology, for the results of current research challenge some of our most cherished convictions about how human beings think, feel, and act.

    Roboticists are beginning to abandon the mainstream (both classical and commonsensical) conception of emotions as essentially private events. It has long been believed that emotions are initially generated in intraindividual space—where only the individual subject can experience them—and then displayed to others through vocal, facial, and bodily forms of expression. The new approach to interaction between human and artificial agents sees emotions as interindividual, not inner and hidden from public view in the first place, but the immediate result of a mechanism of affective coordination that allows partners to an interaction to align their dispositions to act. This theory of emotion has a long tradition in Western philosophy, and like the classical theory, it grows out of the work of illustrious thinkers. Whereas the classical perspective goes back to Plato’s dialogues and Descartes’s Passions of the Soul, what might be called the social theory of emotions, currently implemented in robotics, finds inspiration in Hobbes’s treatise on human nature and (particularly in connection with the private language argument) Wittgenstein’s Philosophical Investigations. In expounding and defending the interindividual view of emotion in the pages