Pain, Pleasure, and the Greater Good: From the Panopticon to the Skinner Box and Beyond
Ebook · 452 pages · 5 hours


About this ebook

How should we weigh the costs and benefits of scientific research on humans? Is it right that a small group of people should suffer in order that a larger number can live better, healthier lives? Or is an individual truly sovereign, unable to be plotted as part of such a calculation?
 
These are questions that have bedeviled scientists, doctors, and ethicists for decades, and in Pain, Pleasure, and the Greater Good, Cathy Gere presents the gripping story of how we have addressed them over time. Today, we are horrified at the idea that a medical experiment could be performed on someone without consent. But, as Gere shows, that represents a relatively recent shift: for more than two centuries, from the birth of utilitarianism in the eighteenth century, the doctrine of the greater good held sway. If a researcher believed his work would benefit humanity, then inflicting pain, or even death, on unwitting or captive subjects was considered ethically acceptable. It was only in the wake of World War II, and the revelations of Nazi medical atrocities, that public and medical opinion began to change, culminating in the National Research Act of 1974, which mandated informed consent. Showing that utilitarianism is based on the idea that humans are motivated only by pain and pleasure, Gere cautions that greater-good thinking is on the upswing again today and that the lesson of history is in imminent danger of being lost.
 
Rooted in the experiences of real people, and with major consequences for how we think about ourselves and our rights, Pain, Pleasure, and the Greater Good is a dazzling, ambitious history.
Language: English
Release date: Oct 19, 2017
ISBN: 9780226501994
Author

Cathy Gere

Cathy Gere is a lecturer in the History of Science at the University of Chicago. She has published on a wide range of topics from witchcraft to brain banking. Her book on Knossos in Crete is forthcoming.


    Book preview



    Pain, Pleasure, and the Greater Good

    From the Panopticon to the Skinner Box and Beyond

    CATHY GERE

    The University of Chicago Press

    Chicago and London

    The University of Chicago Press, Chicago 60637

    The University of Chicago Press, Ltd., London

    © 2017 by The University of Chicago

    All rights reserved. No part of this book may be used or reproduced in any manner whatsoever without written permission, except in the case of brief quotations in critical articles and reviews. For more information, contact the University of Chicago Press, 1427 E. 60th St., Chicago, IL 60637.

    Published 2017

    Printed in the United States of America

    26 25 24 23 22 21 20 19 18 17    1 2 3 4 5

    ISBN-13: 978-0-226-50185-7 (cloth)

    ISBN-13: 978-0-226-50199-4 (e-book)

    DOI: 10.7208/chicago/9780226501994.001.0001

    Library of Congress Cataloging-in-Publication Data

    Names: Gere, Cathy, 1964– author.

    Title: Pain, pleasure, and the greater good : from the Panopticon to the Skinner box and beyond / Cathy Gere.

    Description: Chicago ; London : The University of Chicago Press, 2017. | Includes bibliographical references and index.

    Identifiers: LCCN 2017009044 | ISBN 9780226501857 (cloth : alk. paper) | ISBN 9780226501994 (e-book)

    Subjects: LCSH: Utilitarianism—History. | Utilitarianism—England—History. | Utilitarianism—United States—History. | Psychology and philosophy. | Philosophy and science. | Medicine—Philosophy.

    Classification: LCC B843 .G46 2017 | DDC 144/.6—dc23

    LC record available at https://lccn.loc.gov/2017009044

    This paper meets the requirements of ANSI/NISO Z39.48-1992 (Permanence of Paper).

    For C.M.H.V.G.,

    for her patience and forbearance

    CONTENTS

    Introduction: Diving into the Wreck

    1  Trial of the Archangels

    2  Epicurus at the Scaffold

    3  Nasty, British, and Short

    4  The Monkey in the Panopticon

    5  In Which We Wonder Who Is Crazy

    6  Epicurus Unchained

    Afterword: The Restoration of the Monarchy

    Notes

    Bibliography

    Index

    INTRODUCTION

    Diving into the Wreck

    I came to see the damage that was done

    And the treasures that prevail.

    —Adrienne Rich, Diving into the Wreck, 1973

    THE TREASURES THAT PREVAIL

    March 8, 1973, dawned gray and mild in Washington, DC, but on the political horizon conditions were ominous, with headlines roiling on the front page of the Washington Post. A story about POWs coming back from Vietnam was accompanied by a photo of an ecstatic young woman being swept off her feet by her war-hero husband, but there was no glossing over the fact that America had just suffered the first military defeat in its history. Three headshots running down the right side of the page represented a chain of people linking the Watergate break-in with the president’s personal attorney, finally rendering Nixon’s involvement in the affair an absolute certainty. Below that, in a piece about the dire economic outlook, the Federal Reserve chairman urged fast action . . . to restore confidence in paper money. Meanwhile, in South Dakota, the government sternly warned Indians holding the Wounded Knee settlement today that they must be out . . . or face the consequences.¹ From the humiliating outcome in Vietnam, to the ballooning Watergate scandal, to the tanking economy, to the attention the American Indian Movement kept drawing to the genocide on which the United States had been founded, the nation was plunging fast into a full-blown crisis of moral legitimacy.²

    A little before 9:30 a.m., Edward Kennedy arrived at a stately, wood-paneled room on the fourth floor of the Dirksen Senate Office Building, just northeast of the Capitol, and settled into his day’s work. Kennedy, the Democratic senator from Massachusetts, was then forty years old, his life shattered by the assassinations of two of his older brothers, his back barely healed from being broken in a plane crash, his reputation shadowed by the fatal car accident on Chappaquiddick Island only four years before. His presidential ambitions scuppered by tragedy and scandal, he had recently embarked on the second act of his political career, focusing on medical reform through his chairmanship of the Senate Subcommittee on Health. In that capacity, he was grappling with an issue that seemed to exemplify the prevailing national mood of crisis, paralysis, and paranoia. The four-decade-long Tuskegee Study of Untreated Syphilis in the Male Negro had finally hit the headlines the previous summer, but despite official handwringing and public outrage, nothing had yet been done even to obtain treatment for the research subjects, let alone redress. In response to this foot-dragging, and hoping to clear up the matter once and for all, Kennedy had organized a series of investigative hearings on the topic of human experimentation.

    March 8 was the sixth and last day of the hearings. Dozens of witnesses had already testified. A woman on welfare who had been forced onto the injectable birth-control drug Depo-Provera told her story and described the debilitating side effects. Double-helix discoverer James Watson talked about the dangers and promise of genetic modification. The aristocratic British rabble-rouser Jessica Mitford delivered a scathing report on prison experimentation. Three ex-convicts testified about taking part in pharmaceutical trials in Philadelphia jails. The director of the National Institutes of Health discussed research funding. Various senators proposed schemes for legal oversight of scientific research. Now, finally, on the last day of the proceedings, two survivors of the Tuskegee Study were to bear witness to their experiences. Their lawyer—the African-American civil rights attorney Fred Gray—thanked Kennedy, noting for the record that this was “the first time that any governmental agency has permitted them to present their side of the story.”

    Sixty-seven-year-old Charles Pollard was the first Tuskegee survivor to speak. He described the start of the study, more than forty years earlier: “After they give us the blood tests, all up there in the community, they said we had bad blood. After then they started giving us the shots and give us the shots for a good long time.” Pollard was unable to recall exactly the schedule of the shots, but one thing did stick in his memory: “After they got through giving us those shots, they gave me a spinal tap. That was along in 1933.” Kennedy sympathized, noting that he had also undergone a spinal puncture—“they stick that long needle into your spine”—and knew that it was “rather unpleasant.” Pollard admitted, “It was pretty bad with me,” remembering that he had spent “10 days or two weeks” in bed after the procedure.

    The second survivor to testify, Lester Scott, described how he had taken shots from the researchers every other week for decades, figuring they were “doing me good.” Kennedy then asked: “You thought you were being treated?”

    “I thought I was being treated then,” Scott replied.

    “And you were not.”

    “I was not.”

    “That is not right, is it?”

    “No, it is not right.”³

    It is rare that the historical record presents us with such a clear and significant turning point. The syphilis study is now so notorious that the single word Tuskegee evokes a nightmare of the healing art gone horribly awry, of medical research conducted with contemptuous disregard for the health and well-being of the subjects, of a scientific establishment so profoundly racist that it was prepared to sacrifice hundreds of African-American lives on the altar of knowledge of dubious value. When the senator from Massachusetts and the farmer from Alabama agreed that what had happened was not right, a line was crossed. In front of that line stand our current standards of scientific research ethics; behind it lies a shadowy Cold War America in which thousands of men, women, and children were pressed into service as unwitting laboratory animals.

    As a result of the Kennedy hearings, the National Research Act was passed, mandating informed consent. From that time forward, scientists have been legally obligated to obtain the free, prior, voluntary, and comprehending agreement of their research subjects. This was a change in day-to-day ethical practice of lasting significance. Mandatory informed consent has spread from research to therapy, from life science to social science. It underpins definitions of sexual harassment and rape; it has emerged as the preferred structure for negotiations between nation-states and indigenous peoples; it may provide the most intuitive framework for practical ethical action in the contemporary world.⁴ Tempting as it is to be cynical about the routine, bureaucratic, and often excessive practice of informed consent today, it is also difficult to imagine life without it. On March 8, 1973, a flawed, death-haunted man swam down to the moral nadir of liberal democracy, and salvaged mandatory informed consent from the wreckage.

    MEDICAL UTILITY

    Despite, or perhaps because of, all that has been said and written about informed consent, a host of confusions, obfuscations, exaggerations, and denials still spin around that moment of agreement between Lester Scott and Edward Kennedy. This book will argue that without a fresh understanding of what was at stake in that encounter, we are in danger of letting its most important lessons slip out of our grasp. The key to the analysis is a simple insight: the drama of stopping Cold War human experimentation was not a battle between good and evil, but rather a conflict between two conceptions of the good. Both were philosophically defensible. One was victorious. And when the other was defeated, it was consigned to the dustbin of incomprehension. In a breathless sprint for the moral high ground, scholars have denounced the Tuskegee researchers as incorrigible racists, unable or reluctant to understand their actions in any other terms. This has made it harder to analyze with any precision what went wrong beyond the question of racial prejudice. And now, as a result, the core problem that informed consent was supposed to solve is making a stealthy but steady comeback.

    Tuskegee is such a loaded subject that it is well-nigh impossible to present a neutral statement of the facts of the case. In 1932, after a syphilis treatment program in the community was derailed by the Great Depression, government doctors identified around four hundred black men with latent-stage syphilis, presumed to be noninfectious, in Macon County, Alabama, and embarked on a study of the progression of the untreated disease, using two hundred uninfected men as a control group. At the time, the best treatments for syphilis (mercury rubs and derivatives of arsenic) were dangerous, burdensome, and unreliable. Told only that they had bad blood—the local vernacular for syphilis—the subjects were given aspirin and iron tonics, along with some deliberately inadequate treatment, under the guise of proper medical care. One purpose was to see if late-stage syphilis would, as many physicians assumed, spontaneously remit without treatment, and if so, in what percentage of cases. Another goal was to ascertain racial differences: a study of untreated syphilis had already been done in Norway; the Tuskegee researchers wanted to compare those results in whites with the progression of the disease in blacks. More specifically, the consensus was that whites suffered from neurological complications from late-stage syphilis, while blacks were more likely to experience cardiovascular problems.

    At the end of the first year, the subjects were sent a letter telling them to meet the nurse at a designated spot so that she could drive them to Tuskegee Hospital for an overnight visit. “REMEMBER,” the letter announced, “THIS IS YOUR LAST CHANCE FOR SPECIAL FREE TREATMENT.” The “treatment” was, in fact, the administration of a diagnostic spinal tap, looking for evidence of neurosyphilis. This is an extremely unpleasant procedure, and in 1933 it was cruder than today. Side effects included leakage of cerebrospinal fluid, causing headaches, nausea, and vomiting that could last a week or more. The spinal taps should have heralded the end of the study, but the man who had confected the deceptive letter persuaded his bosses that “the proper procedure is the continuance of the observation of the Negro men used in the study with the idea of eventually bringing them to autopsy.”⁵ In the end, the research went on for decades, long past the time that penicillin became available as an easy-to-administer and effective treatment.

    There is no evidence that anyone in the study was deliberately infected with the disease; there is abundant evidence that strenuous measures were taken to keep the men away from antibiotics. In 1932 it was thought that latent syphilis would remit in a percentage of long-term cases; by 1953 it was acknowledged that men in the infected group were dying faster than the controls. Still, nothing was done to treat them. Finally, in 1972, a whistle-blower at the Public Health Service informed a journalist friend, whose news agency wrote a story about the research that was published on the front page of the New York Times. Coming on top of a host of prior research scandals, and with its explosive racial dimension, the Tuskegee exposé ensured that informed consent in medical research became the law of the land.

    Figure 1. Fred Gray, attorney for the Tuskegee Study survivors, with Martin Luther King Jr., around 1955. Gray defended Rosa Parks and worked with King on the legal dimensions of the civil rights agenda. His decisive role in bringing the Tuskegee Study to a close connects informed consent to the struggle for voting rights. Medical ethics reform was part of the broad postwar challenge for the United States to live up to the universalistic philosophy on which it had supposedly been founded. Reprinted from Bus Ride to Justice by Fred Gray, with the permission of NewSouth Books.

    In 1978 the crusading historian Allan Brandt rendered his verdict on the cause and meaning of these events: “There can be little doubt that the Tuskegee researchers regarded their subjects as less than human. As a result, the ethical canons of experimenting on human subjects were completely disregarded.”⁷ Most subsequent retrospective accounts of the study have followed Brandt’s lead, arguing that racist white doctors dehumanized black people and were therefore comfortable with violating the usual standards of research ethics.

    Without denying the importance and timeliness of Brandt’s denunciation of scientific racism, I venture to suggest that his analysis of the Tuskegee researchers’ motives is a little sweeping. One aspect of the story that has vexed and puzzled commentators since the time of the exposé, for example, is the enthusiastic involvement of African-American medical professionals in the study. Surely nurse Eunice Rivers, the much-adored liaison between doctors and research subjects, who lived through the controversy and remained staunch in her defense of the research, did not simply regard the participants as subhuman? Nor, I am convinced, did the widely respected doctor Eugene Dibble, who managed the project at the Tuskegee Institute Hospital. Dibble died of cancer in 1968, four years before the exposé, after a long career devoted to improving the health of his community.

    Historian Susan Reverby has a chapter about Dibble in her valuable 2009 book Examining Tuskegee. In it, she asks this pivotal question about the doctor’s motives: “Is it possible that Dibble was enough of a science man that he was willing to sacrifice men to the research needs, justifying the greater good that would hopefully come from the research?”⁸ Notwithstanding the slightly incredulous tone in which the question is posed—“is it possible?”—the answer must be a straightforward yes. Dibble regarded himself as working for the greater good. He believed that scientists and doctors could and should make decisions on behalf of others in order to maximize societal benefit. Dibble was, in other words, a utilitarian when it came to the ethics of medical research on human subjects. Utilitarianism is the moral creed that considers, above all, the consequences of any given course of action for the balance of pain and pleasure in the world. In contrast to the idea of absolute rules of conduct—that everyone has a right to informed consent, for example—utilitarianism asks us to weigh the costs and benefits of our actions and choose the path that will maximize happiness and pleasure or minimize pain and suffering.

    To understand the ideology that conscripted six hundred black men into the medical research enterprise for forty years without their knowledge or consent, we need to locate Tuskegee in the larger context of medical utilitarianism. Before the Kennedy hearings, the utilitarian credo “the greatest good for the greatest number” was the driver of most human experimentation in the United States. The cost-benefit analysis that is the essence of utilitarian morality found its most compelling application in scientific medicine. In the face of the devastation wrought by uncontrolled disease, a medical breakthrough was the clearest example of an outcome that justified whatever it took to get there: the harm to research subjects was limited; the payoff was potentially infinite. Not only does this ethos help explain the participation of African-American doctors and nurses in the study, it also accounts for the thousands of Americans of every race drafted into similar research projects around the mid-twentieth century. Eugene Dibble was not a race traitor, psychological puzzle, or victim of false consciousness. He was a good physician working hard for the betterment of his people within the widely accepted (if not explicitly articulated) framework of medical utilitarianism. As one distinguished pediatrician recently recalled, reflecting on research he undertook with children in an orphanage in the early 1950s, “I use the hackneyed expression that the end justifies the means. And that was, I think, the major ethical principle that we were operating under.”

    Since 1973 a small number of articles, mostly in medical journals, have, in fact, defended the Tuskegee research in precisely these terms. Summarizing these in 2004, University of Chicago anthropologist Richard Shweder asks, “What if, consistent with the moral logic that every person’s welfare should be weighted equally, the PHS researchers would have made exactly the same utilitarian assessment, regardless who was in the study [black or white]?”¹⁰ Shweder seems to think that an affirmative answer to this question would amount to a full exoneration. If the scientists would have made exactly the same utilitarian assessment regardless of the racial identity of the subjects, then no wrong was done.

    Even if we accept Shweder’s version of the story, I think we can still agree that something went morally wrong. He is correct that the Tuskegee subjects were not drafted into the syphilis study just because they were African-American. They faced this predicament as a result of a systematic bias built into utilitarian medicine that permitted any vulnerable citizen to be sacrificed on the altar of science. Time and again, nonconsensual research on terminally ill, marginal, impoverished, incapacitated, undereducated, or institutionalized human subjects was rationalized on cost-benefit grounds: terminal patients would die soon anyway; poor rural populations were otherwise out of reach of medical care; illiterate people could not be expected to understand risk; state asylums were already rife with communicable diseases; the lives of institutionalized people with disabilities were so unpleasant that research procedures represented little extra burden. Even when such assessments were done with care and sensitivity, even when the harms and benefits were weighed with sincerity, gravity, and good will, under the terms of this calculus, anyone with little enough to lose was fair game in the cause of medical progress. The legacies of slavery in Macon County, Alabama—poverty, illiteracy, medical neglect—were a natural fit with this larger system of moral justification.

    During the buildup to Tuskegee, as scandal after medical scandal broke in the press, utilitarianism was often identified as the core ethical problem in human experimentation. But in the early 1970s, when all the issues of scientific power and responsibility were finally put on the table, utilitarianism was conflated with fascism, and the opportunity to critique it on its own terms was lost. Leading up to that moment of reckoning between Edward Kennedy and Lester Scott were the aftermath of World War II, the rise of the civil rights movement, and the dissolution of the European empires. The philosophically complex question of the rights and wrongs of medical utilitarianism got lost in the gravity and urgency of the issues—racism, Nazi atrocities, civil rights—that were invoked in agonized discussions of the Tuskegee Study.

    There is no denying the historical freight carried by the human experimentation controversy. The idea that informed consent should be an absolute mandate originated at the 1947 Nuremberg Medical Trial, at which twenty-three Nazi physicians and medical administrators were prosecuted by an American legal team. At the end of the proceedings, one of the judges read out a ten-point code of research ethics, which had been drafted to serve as the foundation for the convictions. The first provision of the so-called Nuremberg Code was an emphatic command: “The voluntary consent of the human subject is absolutely essential.”

    Perhaps forgivably, the medical profession on the side that had defeated fascism did not regard the Nuremberg Code as binding, however. After the trial, most American research scientists proceeded to ignore the requirement to obtain consent from their human subjects. Instead, they held to the prevailing norms of their scientific vocation, according to which they were trusted to judge the balance of harm to subjects against good to future patients.

    For a decade or so, this rather flagrant bit of victors’ justice went unremarked. But as civil rights cases began to accumulate in the courts, domestic violations of the right to informed consent could no longer be ignored. Starting in 1963, human experimentation scandals began to break with disconcerting frequency. Radicalized lawyers denounced medical researchers as human rights abusers. Journalists drew invidious comparisons between Nazi medicine and American research. When news of the Tuskegee Study broke in 1972, it seemed to confirm the diagnosis of fascist callousness. If American scientists could treat the descendants of slaves with such lethal contempt, how different could they be from their Nazi counterparts? The Kennedy hearings on human experimentation followed. The first draft of the legislation mandating informed consent took its wording straight out of the Nuremberg Code.

    It took outraged comparisons with the Nazis to stimulate the root and branch reform of medical research ethics; nothing less could overcome the professional autonomy of American medicine. But once the necessity for reform had been accepted, both sides backed off. Comparisons with the Nazis were wide enough of the mark that the activists gained no legal traction, while the medical profession quietly embraced informed consent. In the quiet after the storm, utilitarianism was left momentarily chastened but basically intact. As it turned out, the calculation of benefits and harms proved to be so deeply woven into scientific medicine, and so necessary to its operation, that it was impossible to disentangle it neatly from the fabric of medical practice and hold it to the light.

    A peace treaty was duly organized. In 1974 a commission was appointed to hammer out a philosophical foundation for the new medical ethics. Four years later, the group issued its report. Medical ethics was founded on three timeless and universal principles, it announced: autonomy, beneficence, and justice. Autonomy meant that individuals have the right to decide what happens to them. Securing informed consent was therefore the primary duty of all investigators. Beneficence dealt with the risks and rewards of the research, and demanded that any given study satisfy a cost-benefit analysis. Here was the guiding principle of medical utilitarianism, now demoted to second place in the ethical ranking. Finally, justice required that research subjects not be drawn exclusively from the most vulnerable members of society. As a matter of practical action, this third principle was quickly assimilated into the procedural niceties of informed consent. Anyone whose freedom was compromised by exceptionally difficult life circumstances was (at least in theory) protected from exploitation under guidelines barring undue inducement—for example, monetary payment to people in great financial need or commutation of sentences for prisoners.

    With the principle of justice absorbed into informed consent, the ultimate importance of the commission’s report lay in the fact that it ranked autonomy above beneficence. By placing autonomy first—as the Nuremberg Code had done—it effectively made informed consent a threshold condition for any sort of human experimentation.¹¹

    Since then, the preeminence of informed consent has been resoundingly confirmed. The ranking of the principles is now firmly established. Autonomy rules in contemporary research ethics and clinical practice, but utilitarianism, in the guise of beneficence, happily coexists with it as a close runner-up. Meanwhile, unvarnished utility calculations still underpin many aspects of modern medicine, from medical resource allocation and emergency medicine, to responses to epidemics, global health initiatives, and most domestic public health measures.

    Making peace with utilitarianism was thus a necessary compromise with the bedrock realities of modern medicine. As a piece of pragmatic moral reasoning, the so-called principles approach, with informed consent at its heart, works extremely well. The ever-expanding regulatory reach of informed consent may be the most important development in practical ethics in our secular age. But I think it is important to understand not only that it is right, but also why. The demotion of utilitarianism from first to second place in the ethical hierarchy of modern medicine offers us a golden opportunity to understand how, and to what extent, greater-good reasoning fails as a guide to action. We can theorize about such matters, but the history of medicine offers us a wealth of data about how such moral abstractions play out in real life.

    Utilitarian reasoning will never be exorcised from modern medical research. By its very nature, a clinical trial weighs risks and benefits to present subjects against potential benefit to future patients. Cost-benefit calculations cannot be avoided in the context of modern medical provision, from rationing drugs in socialized medical systems to setting insurance premiums in privatized ones. Utilitarianism and scientific medicine are deeply intertwined, interdependent, and to a great extent inseparable. The delicate trick is to balance acknowledgment of utilitarianism’s indispensability with an understanding of why informed consent arose as a necessary corrective.

    I find it helpful to think of the relation between informed consent and utility as analogous to that between figure and ground: in the ubiquitous diagram, two silhouetted profiles face one another; the space between them has the shape of a vase. Which do you see? Vase or faces? Faces or vase? Imagine that the profiles are those of doctor and patient, negotiating a treatment. The image of two human faces, engaged in an exchange as equals, represents the practice of informed consent. Utility is the vase, the course of action that emerges once the two sets of concerns and interests are brought into symmetry, made mirror images of each other. Informed consent says that doctor and patient must engage in truthful dialog until they view the situation in the same way, and have agreed on the path ahead. Then, and only then, does the vase come into view.

    At his arrogant worst, the utilitarian focuses exclusively on the vase. Assuming that he knows the contours of a rational profile—mirroring his own, of course—he believes the vase’s dimensions can be calculated with just pencil and paper, without the need to negotiate with a human being of equal worth. It is the purpose of this book to persuade you that this is a moral error as devastating as it is subtle.

    JEREMY BENTHAM

    I first became interested in the history of medical utilitarianism during a postdoctoral fellowship in medical ethics and health policy at Cambridge University. Our research group was formed in response to an ongoing scandal in the British medical profession. In the course of an inquiry into poor outcomes in a pediatric cardiology unit, it emerged that certain British hospitals, under the auspices of the National Health Service, had routinely removed and stored the internal organs of children who died within their walls, without informing relatives or asking their permission. Bereaved parents began to call the hospitals where their children had been treated. It turned out that one particular hospital in Liverpool had retained almost all of the internal organs of children who died there. In December 1999 the coroner for Liverpool suggested that the practice was unlawful. There was a media outcry, the trickle of phone calls from parents turned into a flood, and the government announced that there would be an independent inquiry into organ and tissue retention in the NHS.

    A support group for affected parents was set up, which obtained a change in the law to allow for the cremation of organs after the burial of a body. Many of the parents subsequently went through second, third, and even fourth funerals as parts of their children’s bodies—from organs and organ systems down to tissue blocks and microscope slides—were returned, literally piecemeal, by various hospitals around the country. As a result of these events, and the sympathetic government response, the informed consent revolution that had swept the American medical profession in the 1970s finally made its way into every aspect of medical practice in Britain.

    In 2001 the Wellcome Trust, a British medical charity, put out a call for proposals to investigate aspects of the crisis with a view to contributing to the discussion about policy reform. I was coming down the home stretch of a PhD in the history and philosophy of science at Cambridge, and was interested to see if my hard-won skills had any real-world application. A colleague and I applied for and received a grant to explore the legal and ethical dimensions of human tissue storage, using the brain bank at the Cambridge University Hospital as a case study.

    Our first step was to try to understand the historical preconditions and precedents for the scandal. It turned out that the legislation governing tissue collection practices had its roots in the Anatomy Act of 1832, a piece of legal reform inspired by Jeremy Bentham, the most famous and influential exponent of utilitarian philosophy, sometimes known as the father of utilitarianism.¹² This direct connection to the history of utilitarianism made the situation unfolding before our eyes—with utility and autonomy so clearly opposed—seem especially stark.

    That the organ retention scandal was about the bodies of the dead rather than the living also sharpened the dichotomy between cost-benefit thinking and the enshrining of inalienable rights. The use of archived human tissue for research purposes has such a strong utilitarian justification that an explicit appeal must be made to another, overriding principle, such as the right of citizens in a liberal polity to dispose of their remains or those of their kin as they see fit. As a consequence of this involvement with a homegrown medical scandal and its fallout, utilitarianism became visible to me not just as a philosophical abstraction but as a political reality, with very material consequences for the practice of scientific medicine.

    Primed by this close encounter with the implementation of informed consent in Britain, I continued to register a tension between utility and autonomy in medical ethics after I moved to the United States. My first academic teaching job was at the University of Chicago, where I attended the weekly meetings of the ethics consulting team at the University Hospital. One doctor repeatedly registered his protest at the terms of reformist medical ethics. Was it really the place of the medical profession to promote autonomy above all other values? Surely it should dedicate itself first and foremost to the relief of suffering? Another member of the group was an eloquent champion of patient autonomy, but even she returned again and again to the same question: was not anyone who refused the hospital’s beneficent treatment irrational by definition, and therefore unable to give informed consent? Despite a deep institutional and professional commitment to it, patient autonomy seemed a fragile thing amid the clamoring reality of a large urban hospital.

    As I listened to the deliberations of the ethics team at the University of Chicago Hospital, it struck me that the issues dividing the assembled doctors were as much about human nature as about moral principle. Underlying each position voiced around that conference table was an implicit set of views about human motivation, behavior, and decision-making. One weary veteran of such meetings has described these views as competing and incompatible conceptions of the self. She observes that they tend to be implicit and therefore largely invisible, but they tell us "what
