
This article was downloaded by: [University of Rhodes]

On: 29 August 2013, At: 23:51


Publisher: Routledge
Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered
office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Twenty-First Century Society: Journal of the Academy of Social Sciences
Publication details, including instructions for authors and subscription information:
http://www.tandfonline.com/loi/rsoc20

The ethical case against ethical regulation in humanities and social science research
Robert Dingwall
Institute for Science and Society, University of Nottingham, University Park, Nottingham, NG7 2RD, UK
Published online: 22 Jan 2008.

To cite this article: Robert Dingwall (2008) The ethical case against ethical regulation in
humanities and social science research, Twenty-First Century Society: Journal of the Academy of
Social Sciences, 3:1, 1-12, DOI: 10.1080/17450140701749189

To link to this article: http://dx.doi.org/10.1080/17450140701749189

21st Century Society
Vol. 3, No. 1, 1–12, February 2008

The ethical case against ethical regulation in humanities and social science research
Robert Dingwall

Institute for Science and Society, University of Nottingham, UK

The system of pre-emptive ethical regulation developed in the biomedical sciences has become a
major threat to research in the humanities and the social sciences (HSS). Although there is
growing criticism of its effects, most commentators have tended to accept the principle of
regulation. This paper argues that we should not make this concession and that ethical regulation
is fundamentally wrong because the damage that it inflicts on a democratic society far exceeds any
harm that HSS research is capable of causing to individuals.

Introduction
The paper begins by reviewing the history of ethical regulation in the biomedical
sciences, and the reasons why this is appropriate and should be supported. It then
considers the extension of this model to HSS research and some of the perverse con-
sequences that have arisen as a result. Finally, it sets out some arguments against this
regulatory intervention.

The rise of ethical regulation in biomedicine


There is a conventional history of the development of bioethics that pivots on the
Nuremberg Medical Trials of 1946–47 and the code of ‘permissible medical experiments’ set out in the final judgement (Annas & Grodin, 1992; Jonsen, 1998;
Schmitt, 2004; Weindling, 2004). The excesses of the Nazi doctors, it is argued,
demonstrate the consequences of unregulated scientific activity. Regulation is
essential to prevent such abuses from ever occurring again. End of story.
If that were indeed true, this would be a very short paper. It is not. First, it seems
that 1930s Germany had the most elaborated system of regulation for scientific
research of any developed country, which did not do much to prevent the wartime
atrocities (Morin, 1998). Second, at the beginning of the Nuremberg trials, there
was no statement of ‘accepted ethical practice’ and one had to be hastily invented
by the US prosecutors, in collaboration with the American Medical Association, in
order to sustain the charges against the Nazi doctors (Hazelgrove, 2002). Neverthe-
less, third, there is quite good evidence of an energetic system of informal professional
control, certainly in the US during the 1930s, where scientists considered by their
peers to be unethical were excluded from important social and intellectual networks.
However, this system clearly began to break down after World War II (Halpern, 2004;
Lederer, 1995). Fourth, the result was the perception that the Nuremberg Trials had
represented ‘victors’ justice’, and the Allies did not actually take the code of practice
very seriously in application to themselves. This perception of an ethical vacuum was set out in both internal critiques of biomedical research, from physicians like Henry
Beecher (1959; 1966) in the US and Maurice Pappworth (1967) in the UK, and by
HSS investigations of a number of acknowledged scandals (Jones, 1981; Rothman,
1993). However, the disconnection between Allied scientific research and the
Nuremberg Code only seems to have become an issue alongside the cultural reinven-
tion of the Holocaust by US Jewish leaders as a response to identity crises within their
own community from the end of the 1960s onwards (Novick, 1999).
Therefore, the result is an emerging revisionist history that regards the simple story
of descent from the Nuremberg Medical Trial as a ‘creation myth’ developed to legit-
imise the interests served by the institutionalisation of ethical regulation. It points to
the breakdown of the established professional mechanisms of peer control with the
massive expansion of biomedical research after World War II and sees regulation as
one of the ways in which traditional elites tried to retain their power and influence.
It considers the so-called ‘rights revolution’ of the 1960s, especially in the US,
where the established social order faced a crisis of legitimation in the face of
demands from historically underrepresented or excluded groups like women,
people with disabilities, or members of minority ethnic communities. Patients and
experimental subjects were located among these oppressed classes and ethical regu-
lation was seen as a strategy for legitimising biomedical science in the face of these
demands. This history identifies the growing vested interests among those who staff
systems of ethical regulation and their material stake in expanding the scale and jur-
isdiction of the enterprise. This applies both to employees and to those who sit on
ethical review committees, which may become important resources in organisational
politics, in struggles, for example, to define whether a research institution is to be
dominated by a biomedical or an HSS culture. Finally, it recognises that much of
this regulatory approach is distinctively Anglo-American and that Mainland Europe
has found different ways to address the same problems, although these are under
growing pressure from the Anglophone hegemony in scientific governance and
publication (De Vries et al., forthcoming).
Nevertheless, it is not difficult to articulate a justification for pre-emptive ethical
regulation in biomedical science, including some areas of psychology. This does not
rest on the historical experience so much as on the nature of the interventions and
the relationship between experimenter and subject. Much of this research has a
potential for harm that cannot necessarily be identified when subjects are recruited
or easily reversed should problems develop during the course of the intervention.
The nature of the research also means that there is a strong asymmetry of knowledge
between researcher and researched. In March 2006, for example, six men experienced
multiple organ failure as a result of the administration of an experimental drug, TGN
1412, in a clinical trial at Northwick Park. All survived but are suffering a range of
long-term health problems. Although the subsequent investigation revealed a
number of procedural lapses, none were relevant to the outcome: in the areas that
mattered, regulatory clearance had been obtained and adopted. There has, of
course, been further debate about the adequacy of the current regulatory regime in
dealing with the new class of drugs represented by TGN 1412 and future trials will be conducted rather differently. However, the impact of these events on confidence in clinical and experimental research has clearly been contained by the evidence of
good faith regulatory review: in a situation where research participants were not
well able to make judgements for themselves, the regulatory systems had provided
a check. The adverse outcome could be explained as entirely untoward and not
reasonably foreseeable, precisely because the investigators had not been judge and
jury in their own cause. The known risks had been described to the participants
and they had voluntarily accepted these. The regulatory institutions have functioned
to supply legitimacy to the institutions of biomedical science.
The issue of legitimacy is discussed further at a later point. However, it is important
to note now that HSS research presents no equivalent risk. HSS researchers do
nothing that begins to compare with injecting someone with potentially toxic green
stuff that cannot be neutralised or rapidly eliminated from their body if something
goes wrong. At most there is a potential for causing minor and reversible emotional
distress or some measure of reputational damage. However, these are risks that
research participants are well able to assess for themselves. All HSS research is
based on the same methods that ordinary people use in their everyday life: observing
other people, asking them questions, reading documents or looking at pictures.
Research participants can use this methodological knowledge to manage their own
risk. HSS researchers are guests in their lives and, like any guest, are likely to be
asked to leave if their behaviour is inappropriate (Murphy & Dingwall, 2007).
Consent, then, is not a one-off agreement to a potentially irreversible intervention
but an evolving relationship.
This difference is crucial to the argument about the inappropriateness or, as I shall
argue, illegitimacy of extending the biomedical model of ethical regulation to the HSS
disciplines. Whether we recall the Nazi medical experiments or the abuses of 1950s
and 1960s biomedical science, no evidence of comparable harm has ever been
adduced for HSS research. We do not ever seem to have killed, maimed or caused per-
manent mental disability to anyone in the name of science. Therefore, when we turn
to consider the recent rise of ethical regulation in HSS, we should remember that this
does not rest on evidence of harm or, indeed, of the invention of new methods of
investigation that create new risks.
The rise of ethical regulation in HSS


The first UK statement on research with human subjects seems to have been issued by
the Medical Research Council (1964) in its 1962–63 Annual Report and reprinted by
the British Medical Journal in 1964. This did not explicitly cover HSS research, which
seems to have received its first mention in a 1975 MRC document on access to
medical information. Ethical regulation has mostly developed in the UK since
1991, under the stimulus of the attempt by the NHS to manage access to its staff
and patients. However, there has been a parallel, but relatively undocumented,
growth of regulation within universities. In some research-intensive universities, this
has been driven by compliance with the conditions for receiving US government
research funds, and largely restricted to biomedical research, but a number of less research-intensive universities seem to have set up institution-wide ethical regulation committees from around the year 2000. Their influence was strongly apparent on
early drafts of the ESRC’s Research Ethics Framework. These were heavily criticised
by senior researchers, particularly from the Academy for the Social Sciences, and the
draft was significantly modified before publication in 2005. However, most univer-
sities seem to have declined to make use of the greater flexibility that this permitted
and to have created systems that look very like US Institutional Review Boards,
and their analogues in Canada and Australia. They exemplify the process that has
come to be called ‘regulatory creep’ (Haggerty, 2004), where systems with narrowly
defined missions have gradually expanded their jurisdiction.
If this process is not driven by risk, why is it happening? I have already referred to
the issue of legitimacy. This is where I need to do a more technical piece of sociology.
There is an approach to the study of organisations called neo-institutional analysis
that examines how organisations are affected by their competition for resources in a
changing environment. One aspect of this competition is the pursuit of legitimacy.
This can be defined as the degree of cultural support that can be derived from the
environment and translated into access to material resources. Organisations both
compete to gain the symbolic resources associated with legitimacy and exchange
these resources to build strategic alliances with others (Scott, 1991). As they
pursue legitimacy, organisations tend towards isomorphism, convergence on the
model of the most successful competitors in their market. This movement is driven
by three processes: coercive, mimetic, and normative. Coercive isomorphism often
has a legal basis, although this is not essential. However, it certainly involves some
relatively explicit formal or informal external pressure on an organisation to
conform to an expected model. Mimetic isomorphism is driven by organisation
members themselves as a strategy for managing risk: if everybody else is doing X,
then nobody can criticise us for doing the same, even if an adverse event occurs. Nor-
mative isomorphism is primarily associated with professionalisation and occurs where
behaviour is imposed on an organisation as a result of the cross-cutting ties of core
members and their desire to sustain legitimacy in the eyes of those others. Isomorph-
ism is unrelated to efficiency or effectiveness, but is critical to external perceptions of
the organisation as reasonable, rational, competent, ethical, etc. However, those
perceptions may be more important in securing the flow of material and symbolic
resources necessary to sustain the organisation than its actual economic performance,
especially in public sector organisations where performance is hard to define and
measure in appropriate ways (DiMaggio & Powell, 1991).
The remorseless spread of ethical regulation is essentially isomorphic as
organisations seek to copy fashions set by the market leaders, who, in turn, cement
their advantages by circumscribing the opportunities of others. Let us consider the
ESRC’s Research Ethics Framework. There is no justification for its introduction
in terms of a change in the risks of social science research. The case can be found
at paragraphs 4.1.2.1 and 4.1.2.3.1. In summary, these justify regulation in terms of:

• Increase in interdisciplinary and transdisciplinary research
• Globalisation
• New codes in government departments
• New developments in COREC1
• Public demands for transparency
• Demand for formality

ESRC’s argument is essentially that everyone else is doing this sort of thing, so we
need to do it as well or they will not treat us as legitimate. The first item, the
spread of interdisciplinary and transdisciplinary research, is an attempt at normative
isomorphism, trying to claim professional dominance in shared projects; the second,
third and fourth are mainly mimetic and the last two are coercive. In the process of
mimicking other organisations in this institutional field, of course, it must be noted
that ESRC is supplying legitimacy to them, as much as deriving it. If ESRC copies
the Department of Health, the Department’s Research Governance Framework
becomes more obviously acceptable and the opportunities to resist it become more
constrained. The need for regulation is unquestioned.
What are the consequences? These have been most fully documented in North
America, where there is a growing body of evidence of perverse consequences
(AAUP, 2000):

• A linguist seeking to study language development in a preliterate tribe was instructed by the IRB to have the subjects read and sign a consent form before the study could proceed.
• A political scientist who had bought a commercial mailing list of names for a survey of voting behaviour was required by the IRB to get written informed consent from the subjects before sending them the survey.
• A Caucasian PhD student, seeking to study career expectations in relation to ethnicity, was told by the IRB that African American PhD students could not be interviewed because it might be traumatic for them to be interviewed by the student.
• An IRB attempted to block publication of an English professor’s essay that drew on anecdotal information provided by students about their personal experiences with violence because the students, although not identified by name, might be distressed by reading the essay.
• An IRB attempted to deny an MA student her diploma because she did not obtain prior approval for phoning newspaper executives to request copies of printed material generally available to the public.

Similar accounts are beginning to emerge in the UK. A hapless graduate student who
was planning to study victims of medical accidents, identified through their involve-
ment with a charity, met a demand from her university ethics committee that she
get NHS ethical clearance, and a refusal to accept authoritative declarations from
the NHS that they did not have jurisdiction because the sample was not recruited
through their records. At another UK university where this paper was presented, a researcher described how he had been welcomed for observational fieldwork in a large factory in an Asian country. He wanted to conclude this study by formally interviewing the plant management, but was required by his university ethics committee to
obtain signed consent for this. The managers were grossly offended by the implied
lack of trust and disrespect. The interviews produced meaningless data and his
access to the plant was withdrawn. As he put it, a high trust society had been polluted
by the low trust of the Anglo-Saxon world.
The increased regulation of HSS research has coincided with the rise of a
surveillance society, where citizens’ privacy is routinely invaded in far less respectful
ways. HSS are always in competition with a wide range of other social observers
and commentators to describe and understand their society (Strong, 1983). Partici-
pant observation in both anthropology and sociology may be dying at the hands of
ethical regulators (Katz, 2006) but CCTV observation of both public and semi-
private spaces is constantly expanding. Homeland security agencies are assembling
vast unregulated databases of identifiable and sensitive information. Journalists
regularly use deception in pursuit of stories, whether of celebrity trivia or serious
wrongdoing. A good example is the work of Barbara Ehrenreich (2001; 2005),
whose recent books, Nickel and Dimed and Bait and Switch, made the New York
Times Bestsellers list for their explorations of the conditions of low-wage employment
and of redundant middle-managers respectively. However, both depend on covert
research, where Ehrenreich faked CVs and references to conceal her identity as a jour-
nalist and social investigator. The Guardian journalist, Polly Toynbee (2003) was
inspired by Nickel and Dimed to write a similar book on low-paid groups in the UK.
Such works are widely assigned to undergraduates and held up as examples of the
type of interesting books that social scientists ought to write—but none of these
could receive ethical approval. Ethical regulation is becoming a smokescreen
behind which our rivals in social investigation and commentary can proceed
unchecked, while those of us whose practice is disciplined by a professional ethic
and a regulative ideal of truth-telling are handicapped in our access to the public
realm. By picking on a politically weak group, academics, regulators create an appearance of concern for citizens’ rights, while security, media and corporate interests can range unchecked.
Another irony is the determination of ethical regulators to overprotect citizens who do fall within their jurisdiction. Historians and political scientists routinely interview
people who are happy to speak on the record: indeed, their motive for co-operation
is often precisely a desire to document their version of events for posterity. However,
they are encountering growing problems with ethical regulation and demands that
all data are anonymised, which are being reinforced by journal editors’ deference.
Another UK case I have collected involved an oral historian who had completed a
study of publicly identified community health activists before the introduction of
ethical regulation by his university but was then compelled by a major journal
to remove their names from his report (Smith, 2007). This over-protection extends
to so-called ‘vulnerable groups’. There is something slightly odd about the scale of
activity devoted to empowering people with learning disabilities, for example, while simultaneously denying them the right to make their own decisions about being interviewed. Clearly, their intellectual limitations may be important in understanding the
risks of a complex biomedical experiment, and protection is appropriate to avoid
some of the past abuses documented by Rothman (1993). However, there do not
seem to be widespread reports of people in this category having difficulty in refusing
to participate in social research that makes them feel uncomfortable. There may be
scope to debate to what extent such groups require protection and whether this is
best achieved by ethical regulation or by access gatekeepers, who will, in any case,
make their own decisions regardless of whatever a university committee thinks.
However, if double jeopardy is to be avoided, it may be better not to pre-empt the
response of those with a wider responsibility for the protection of those in their care.2
Unfortunately, such observations tend to sound like whingeing about the unfairness
of the world and are not, in my view, a very persuasive argument for those whose
support will be necessary to roll back ethical regulation. They invite the response
that the solution is to regulate everyone equally and that HSS scholars should just
see themselves as the first in line rather than the fall guys. One Utopian day every-
one—journalists, TV reality shows and secret police—will be subject to ethical
regulation.

The case against regulation


There are two main ways in which the case can be made against regulation.
One, which has been attracting growing attention in the US, focuses on its
interference with the constitutional rights of HSS researchers. The other focuses
more on the damage to wider interests in acquiring reliable and valid information
about the social, political, economic and cultural life of our society.

Regulation as censorship
In a recent paper, Philip Hamburger (2005), a law professor at Columbia University,
has argued that the Institutional Review Board system breaches the First Amendment
to the US Constitution, which reads as follows:
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

This is the entrenched provision that bars Congress from making laws that would
abridge either the freedom of speech or the freedom of the press. In a legal sense
HSS research is a form of speech, and research publication is covered by the definition
of ‘the press’. IRB review is a form of licensing of speech. In its legal effect, it consti-
tutes censorship of ideas, so that only those approved by the prior scrutiny of
government agents may enter the public domain. Since censorship of the press is
unlawful in the US, unless it can be shown to be strongly outweighed by compelling
government interests, then so is censorship of researchers. Most possible government interests are trumped by a collective interest in the results of the research and their
contribution to an informed public domain. Unfortunately, as Hamburger also
notes, the costs and uncertainties of litigation have thus far deterred professional
associations in the social sciences and humanities from pursuing a legal challenge
to this unconstitutional system. This is partly a question of the relative poverty of
these associations, and their inability to finance the kind of litigation that would be
necessary to sustain the argument and partly a question of their desire to maintain
good relations with the Federal agencies that fund so much of their members’ work.
In the UK we do not, regrettably, benefit from such a robust approach to freedom of
speech and the press. Compare the First Amendment with the parallel clause in the
European Convention on Human Rights:
Everyone has the right to freedom of expression. This right shall include freedom to
hold opinions and to receive and impart information and ideas without interference
by public authority and regardless of frontiers. . . . The exercise of these freedoms . . .
may be subject to such formalities, conditions, restrictions or penalties as are prescribed
by law and are necessary in a democratic society, in the interests . . . the protection of the
reputation or the rights of others, for preventing the disclosure of information received
in confidence . . .

The European Convention has a much more qualified approach, and it would be
more difficult to argue against government censorship on constitutional grounds,
although the state would still need to show proportionality in its interventions. Free
speech in Europe is not such an absolute as in the US. A rights-oriented, litigation-
based strategy has much less to commend it in such an environment. Moreover
HSS associations in the UK are even less well resourced to sustain litigation than
their counterparts in the US. This strategy also has some of the weaknesses of the
unfairness argument, implying that we cannot make a political argument but must
look to the courts for protection.
However, it is worth considering why the authors of the US constitution felt that
this was such an important principle. The First Amendment formed part of the Bill
of Rights sponsored by Jefferson and the Anti-Federalists, who were concerned to
prevent the creation of a dominant central state power in the new nation. Free
inquiry, and free dissemination of the results through a public realm accessible to
all citizens, was a fundamental check on authoritarian government and its abuses.
That observation remains as true today as it ever was, and it explains why govern-
ments in an age of rising authoritarianism are so hostile to unregulated investigations
in the social sciences and humanities. Of course, there were, and still are, many issues
about whose voices are heard in the public realm and about whose interests and inqui-
ries are supported. Nevertheless, the underlying principle is of the greatest
importance: the abridgement of free speech, free commentary and free inquiry is a
step on the road to tyranny. As such, the installation of a system of censorship runs
contrary to the interests of citizens in a free society.

What are the costs of regulation?


Therefore, the turn to interest-based arguments seems more promising than one
based primarily on rights, which are vulnerable to problems in enforcement and
which may lack legitimacy. The challenge is to consider the interests that are served
by the creation of rights and to recognise that the denial of these rights may
damage the denier as much as, if not more than, the rights-holder. This, then, is
not an argument based on equity between HSS scholars and other commentators
or on the self-interest of those scholars in claiming rights but on the wider social
damage caused by pre-emptive systems of regulation. Can it be shown that the
social costs of ethical regulation exceed whatever private benefits it confers? If we
are to make this case, we need to return to fundamental issues about why freedom
of inquiry matters in democratic societies. This argument may have three elements.
The first is quite narrow but far-reaching. It acknowledges that a great deal of social
science research is funded by governments to determine whether tax revenues are
being spent efficiently and effectively, and, at least in European social democracies,
equitably and humanely. This forms part of the government’s moral contract with tax-
payers that tax demands will be kept to the minimum necessary to achieve public
service objectives and that public services will achieve the objectives for which they
are designed. If this cannot be demonstrated, taxation becomes extortion (von
Mises, 1996). When ethical regulation obstructs HSS research of this kind, it inter-
feres with this basic contract between government and citizens. For example, a col-
league and I were recently commissioned by the NHS Patient Safety Research
Programme to study the national incidence and prevalence of the reuse of single-use
surgical and anaesthetic devices, and to consider why this practice persisted in the face
of strict prohibitions. Part of this involved an online survey, using well-established
techniques from criminology to encourage self-reporting of deviant behaviour, so
that relevant staff in about 350 hospitals could complete the forms without us ever
needing to leave Nottingham. However, a change in NHS ethical regulation meant
that we needed approval from each site, potentially generating about 1600 signatures
and 9000 pages of documentation. Although we never planned to set foot in any site,
it would also have required my colleague to undergo around 300 occupational health
examinations and criminal record checks. As a result, we were unable to carry out the
study as commissioned and delivered a more limited piece of work (Rowley &
Dingwall, 2007). Other estimates suggest that the practice we were studying leads to
about seven deaths every year in the UK and a significant number of post-operative
infections. The ethical cost of the NHS system can be measured by the lives that
will not be saved because our study could not investigate the problems of compliance
as thoroughly as it was originally designed to. Moreover, UK tax revenues are clearly
not being spent appropriately since the health care system is killing or injuring people
it is supposed to be benefiting: this is not an effective use of the funds that have been
redistributed from private citizens to achieve public goals. It compromises the ethical
basis on which that taxation has been levied. We should, then, see both research com-
missioners and at least some citizen groups as potential allies in resistance to
over-reaching by ethical regulators.
Second, this narrow case may be capable of extension to consider the wider issue of
the role of trust in democratic societies. This has been a recurrent theme in social
and political theory for the last 2000 years. In small-scale societies, trust may be sus-
tained without a specialised cadre of auditors or investigators. In the contemporary
world, citizens depend upon a great deal of expert knowledge in order to make
good judgements about each other and about the social institutions that they encoun-
ter. The quality of that knowledge depends crucially on free competition between
information providers. If what has traditionally been the most disinterested source
of information, the universities, becomes systematically handicapped in that compe-
tition, then all citizens lose out. When we give up doing participant observation with
vulnerable or socially marginal groups because of the regulatory obstacles, then a
society becomes less well-informed about the condition of those whom it excludes
and more susceptible to their explosions of discontent. How helpful is it when the
only ethnographers of Islamic youth in the UK are undercover police or security
service agents?
Finally, there is the argument that societies that over-regulate speech and ideas
ultimately ossify. The great English sociologist, Herbert Spencer, drew an important contrast between industrial and militant societies. The latter, exemplified by the former Soviet Union and its East European satellites, were, he argued, doomed to lose out in global competition because their authoritarian structures blocked diversity and innovation. Both socially and economically,
they were frozen by their command systems. IRBs and other forms of pre-emptive
ethical regulation begin to look like the precursors of the surveillance states that
are being increasingly entrenched in the US and the UK. Their incursions into
liberty are justified in the name of security, but may well have unanticipated con-
sequences in terms of prosperity. Wherever dissident voices are silenced, innovation
eventually dies.
Theoretical arguments like this need a more popular framing to carry wide appeal.
However, it seems to me that if we can show that ethical regulation does not actually
contribute to a better society, but to a waste of public funds, serious information def-
icits for citizens, and long-term economic and, hence, political decline, then we may
have identified a set of arguments that might lead to a more sceptical approach to the
self-serving claims of those who sustain that system.
What is to be done?
What strategies are available to us for resisting the advance of censorship in the name
of ethics? One is represented by commentaries like this paper itself, namely the
importance of denying the oxygen of legitimacy to the self-appointed sanitary inspec-
tors of HSS. The importance of legitimacy to the survival of organisations was
discussed earlier. As a corollary, the refusal of legitimacy can be a powerful instrument
for the correction of institutional abuse. Ethical regulation in HSS is not motherhood
and apple pie but a deeply flawed and problematic notion. Its advocates must be con-
stantly challenged to show both that there is a disorder and that the cure is
proportionate to the harm that is alleged to result. In the absence of this, we should
refuse engagement with such systems and respect for those who service them.
The health and well-being of HSS research in UK universities rests on a tipping-point. There is an opportunity for the Academy to exercise leadership in subjecting the social movement behind ethical regulation to serious intellectual
questioning and in demanding that it produce evidence to justify its claims that
there is a sufficiently serious social problem for pre-emptive regulation to be a proportionate
response. There is also an opportunity to exercise leadership in exploring alternatives
to the stultifying and perverse systems that have been created elsewhere in the
Anglophone world.

Acknowledgement
Previous versions of this paper were presented as a Plenary Address to the BSA
Medical Sociology Group Annual Conference, Edinburgh 2006 and the University
of Nottingham Humanities Research Centre 1st Annual Lecture, 15 May 2007.

Notes
1. COREC was the Central Office for Research Ethics Committees, the body that
co-ordinated ethical regulation in the National Health Service. It was rebranded as the
National Research Ethics Service (NRES) from 1 April 2007.
2. This paper was accepted before the implementation of the Mental Capacity Act 2005 from 1
October 2007. This vastly extends the jurisdiction of the NHS regulatory system to all
research on those who are deemed to lack capacity, regardless of the means by which
participants have been recruited, and introduces complex new processes for obtaining
proxy consent. Although there is not an identity between ‘vulnerable’ groups and ‘lack of
capacity’ in the legal sense, this is likely to introduce significant new procedural and
institutional barriers to HSS research. Very preliminary inquiries have failed to identify
any consultation with HSS representatives in the drafting of this legislation. I am grateful
to my colleague, Professor Peter Bartlett, for drawing my attention to this new development.

References
AAUP (American Association of University Professors) (2006) Research on human subjects: academic
freedom and the Institutional Review Board. Available online at: http://www.aaup.org/AAUP/
About/committees/committee+repts/CommA/ResearchonHumanSubjects.htm (accessed
20 August 2007).
Annas, G. J. & Grodin, M. A. (1992) Nazi doctors and the Nuremberg Code: human rights in human
experiments (Oxford, Oxford University Press).
Beecher, H. (1959) Experimentation in man, Journal of the American Medical Association, 169,
461–478.
Beecher, H. (1966) Ethics and clinical research, New England Journal of Medicine, 274, 1354–1360.
De Vries, R., Dingwall, R. & Orfali, K. (forthcoming) Regulating professions: the profession of
ethics and the ethics of professions, Current Sociology.
DiMaggio, P. J. & Powell, W. W. (1991) The iron cage revisited: institutional isomorphism and col-
lective rationality in organizational fields, in: W. W. Powell & P. J. DiMaggio (Eds) The new
institutionalism in organizational analysis (Chicago, University of Chicago Press), 63–82.
Ehrenreich, B. (2001) Nickel and dimed: on (not) getting by in America (New York, Henry Holt).
Ehrenreich, B. (2005) Bait and switch: the (futile) pursuit of the American dream (New York, Henry
Holt).
Haggerty, K. (2004) Ethics creep: governing social science research in the name of ethics,
Qualitative Sociology, 27, 391–414.
Halpern, S. A. (2004) Lesser harms: the morality of risk in medical research (Chicago, University of
Chicago Press).
Hamburger, P. (2005) The new censorship: Institutional Review Boards, The Supreme Court Review,
2004, 271–354.
Hazelgrove, J. (2002) The old faith and the new science: The Nuremberg Code and human exper-
imentation ethics in Britain 1946–73, Social History of Medicine, 15, 109–135.
Jones, J. (1981) Bad blood: the Tuskegee syphilis experiment (New York, Free Press).
Jonsen, A. (1998) The birth of bioethics (New York, Oxford University Press).
Katz, J. (2006) Ethical escape routes for underground ethnographers, American Ethnologist, 33,
499–506.
Lederer, S. E. (1995) Subjected to science: human experimentation in America before the Second World
War (Baltimore, Johns Hopkins Press).
Medical Research Council (1964) Annual Report for 1962 – 63. Cmnd 2382 (London, HMSO). Rep-
rinted as: Responsibility in investigations on human subjects, British Medical Journal, 1964, 2,
178–180.
Morin, K. (1998) The standard of disclosure in human subject experimentation, Journal of Legal
Medicine, 19, 157–221.
Murphy, E. & Dingwall, R. (2007) Informed consent and ethnographic practice, Social Science and
Medicine, 65(11), 2223–2234.
Novick, P. (1999) The Holocaust in American life (New York, Houghton Mifflin).
Papworth, M. H. (1967) Human guinea pigs (London, Routledge and Kegan Paul).
Rothman, D. J. (1993) Strangers at the bedside: a history of how law and bioethics transformed medical
decision making (New York, Basic Books).
Rowley, E. & Dingwall, R. (2007) The use of single-use devices in anaesthesia: balancing the risks to
patient safety, Anaesthesia, 62, 569–574.
Schmidt, U. (2004) Justice at Nuremberg: Leo Alexander and the Nazi doctors’ trial (London, Palgrave).
Scott, W. R. (1991) Unpacking institutional arguments, in: W. W. Powell & P. J. DiMaggio (Eds) The
new institutionalism in organizational analysis (Chicago, University of Chicago Press), 164–182.
Smith, G. (2007) Re-expressing the division of British medicine under the NHS: The importance of
locality in general practitioners’ oral histories, Social Science & Medicine, 64, 938–948.
Strong, P. M. (1983) The rivals: an essay on the sociological trades, in: R. Dingwall & P. Lewis (Eds)
The sociology of the professions: lawyers, doctors and others (London, Macmillan), 59–83.
Toynbee, P. (2003) Hard work: life in low-pay Britain (London, Bloomsbury).
Von Mises, L. (1996) Human action (4th edn) (Irvington, Foundation for Economic Education).
Weindling, P. (2004) Nazi medicine and the Nuremberg Trials: from medical war crimes to informed
consent (London, Palgrave).
