Maximilian Fochler
Annina Müller
Michael Strassnig
Unruly Ethics:
On the Difficulties of a Bottom-up
Approach to Ethics in the Field
of Genomics
October 2009
reprint
STS
Department of Social Studies of Science
University of Vienna 2009
Copyright
You are allowed to download this paper for personal use only. This paper must not be
published elsewhere without the author’s explicit permission. The paper must not be
used for commercial purposes.
The final, definitive version of this paper was published as Felt, Ulrike, Fochler, Maxi-
milian, Müller, Annina and Strassnig, Michael (2009). Unruly Ethics: On the Difficulties of a
Bottom-up Approach to Ethics in the Field of Genomics. Public Understanding of Science
18(3): 354-371.
Please cite, if possible, this final version of the article.
If only the reprint is available, you should cite this paper in the following way:
Felt, Ulrike, Fochler, Maximilian, Müller, Annina and Strassnig, Michael (2009). Unruly Eth-
ics: On the Difficulties of a Bottom-up Approach to Ethics in the Field of Genomics. Pub-
lished by the Department of Social Studies of Science, University of Vienna,
October 2009. Available at http://sciencestudies.univie.ac.at/publications
http://sciencestudies.univie.ac.at
Unruly Ethics
On the Difficulties of a Bottom-up Approach
to Ethics in the Field of Genomics
1. Introduction
At the dawn of the twenty-first century, questions concerning the relation of
emerging technosciences and society seem more pressing than ever. Genomics and
nanotechnology, for example, raise fundamental ethical and social questions. Public
controversies hint at the fragility of current socio-technical arrangements and at a lack
of “social robustness” of scientific innovations (Nowotny et al., 2001). In these moments of conflict and uncertainty, societies have started to experiment with ways to reflexively engage (Beck, 1997; Giddens, 1990) with the implications of technosciences for
social order. Public engagement exercises as well as ethics committees and ethical review boards have been established as possible answers to this new situation.
The involvement of citizens in technoscientific decision-making has gained high
symbolic importance in recent policy discourses in Europe. The European Commis-
sion’s White Paper on Governance outlines the need for more “participation through-
out the policy chain—from conception to implementation. Improved participation is
likely to create more confidence in the end result and in the institutions which deliver
policies … Legitimacy today depends on involvement and participation.” It is a clear
1 This research was funded by the GEN-AU/ELSA program of the Austrian Federal Ministry of Education, Science and Culture. We thank the lay participants and the genome scientists who shared their
thoughts and convictions with us. Furthermore, we want to thank the invited experts of the Round
Table. Special thanks go to our colleagues and collaborators Marcus Düwell, Priska Gisler, Michaela
Glanz, Sandra Karner, Astrid Mager, Silke Schicktanz, Rosmarie Waldner, Bernhard Wieser and Brian
Wynne who have been helpful in many different ways in carrying out the project and discussing our
ideas at different stages of the project. An earlier version of this paper was presented at the 4S con-
ference in Pasadena in October 2005. Finally, we want to thank two anonymous reviewers for their
valuable comments.
plea for new forms of governance replacing “the linear model of dispensing policies
from above” (EC, 2001: 10–11). Still, the “details” (EC, 2005: 18) of this endeavor, such as
when engagement should take place in the innovation process, or who should be in-
volved and how, remain open. This becomes visible in the recent call to move more
“upstream” to allow for a much earlier discussion of “deeper questions about the
values, visions, and vested interests that motivate scientific endeavor” (Wilsdon and
Willis, 2004: 18).
The ethical dimensions of research and innovation are also receiving increasing
attention in the policy realm. But contrary to the public engagement rhetoric, the task
of dealing with these questions is largely ascribed to expert ethics committees at
national and European level (EC, 2002a). The relation of this “commitology”2 approach
to public engagement remains largely unclear (Tallacchini, 2006). And even if there is
a theoretical debate on “empirical ethics” (Borry et al., 2005), actual public engage-
ment with ethics neither belongs to the repertoire of suggested actions nor is it dis-
cussed as a serious alternative or complement to the decision-making by expert com-
mittees.
The focus of this article is the exploration of this blind spot concerning the
possibilities and difficulties of more upstream participatory approaches to issues of
ethics and science. The empirical basis of our reflections is six Round Table debates
between genomics researchers and laypeople we organized between October 2004
and April 2005 in Austria. Our aim was to create an open space in which a broad range
of issues concerning the ethical and social implications of genome research in general
and the work of this research group in particular could be discussed. In what follows
we analyze how ethical issues were approached, framed, debated, displaced or closed
in one of these Round Tables which had an explicit focus on ethical dimensions of ge-
nome research. We elaborate on the possibilities and limits of dealing with ethics in
such a participatory setting and hint at what should be taken into consideration when
approaching issues of science and ethics more “upstream.”
We start by discussing the relations between public engagement and expert
ethics, followed by describing our empirical approach and its central underlying as-
sumptions. The core of this article analyzes three different ways in which the partici-
pants performed ethical debates. Finally, summing up our findings, we will point at the
difficulties and limits we encountered in the debate of ethical issues of genome re-
search between researchers and laypeople, and what this could mean in terms of em-
pirical ethics.
In the classical public understanding of science (PUS) (Bodmer, 1985) approach, the public was conceptualized as having
knowledge deficits on technoscientific issues, which would inhibit them from seeing
the “true” benefits of the technoscientific innovations envisioned by experts. Exper-
tise in this model remained on the side of science and largely unchallenged—it was
the public that was in need of “education.”
Over recent years, this much criticized approach has lost its dominance in con-
ceptualizing and performing science–society relations. On a discursive level, the
rhetoric of “deficit” and “education” has been sidelined by a new kind of speech: “pub-
lic engagement with science,” “public participation” or “dialogue” are the new buzz-
words. In this “reinforced culture of consultation and dialogue” (EC, 2002b; see also
Irwin, 2006) “citizens” and “civil society” are actors to be more actively involved in the
policy process, even though it remains largely unclear who, concretely, is to speak in the name of society. On the one hand this shift is linked to a perceived crisis in
traditional forms of expertise (Gerold and Liberatore, 2001). Public controversies such
as the one surrounding BSE have highlighted the failure of traditional expert systems
owing to their highly problematic entanglement with the policy world (Jasanoff, 1997;
Irwin, 2001). In this context, public participation is seen as a means to increase the
visibility and accountability of policy processes (EC, 2001) and, subsequently, to in-
crease public trust in science and scientific expertise. On the other hand, in the en-
gagement paradigm the public is no longer seen as a passive entity to be convinced by
the superior rationality of science. Rather it is imagined as carrying its own legitimate
values and opinions towards science and in some areas such as in patient movements
in biomedicine even being able to contribute its own “lay expertise” (Epstein, 1996).
Science and technology studies (STS), especially the critical PUS tradition
(Wynne, 1992; Irwin and Wynne, 1996; Irwin and Michael, 2003) have played an im-
portant role in the development of this engagement paradigm and its theoretical
foundations. Another important reference point, at least in Europe, is a tradition of
staging participatory experiments in the context of Technology Assessment, which
chronologically precedes the recent wave of attention towards engaging the public.
Starting from the first consensus conference in Denmark in 1987, a number of count-
ries in Europe have “imported” this model and a range of new methods including na-
tion-wide “public debates” have been developed and experimented with (Felt et al.,
2003; Joss and Bellucci, 2002).
However, recent contributions (Wilsdon and Willis, 2004; Wilsdon et al., 2005)
have criticized many of these experiments for being situated too far “downstream” in
the innovation process. This implies that by focusing on assessing the risks linked to
imminent application or implementation, public engagement sets in at a point at
which many institutional commitments concerning a technoscientific development
are already in place. Thus, instead of opening up more fundamental questions related
to basic values such as “Do we need such an innovation?” or “What kind of society is
implied in the visions this innovation carries?,” the issue may be narrowed down to
questions of risk and safety. This argument is of high relevance for a “public engage-
ment on ethics” as in our understanding the aim of engaging laypeople in discussing
the ethical questions of genome research entails bringing up and discussing exactly
this type of “upstream” question.
The renewed interest in and the increasing institutionalization of ethics in the
domain of technoscience may also be seen as a response to a perceived need for a dif-
ferent form of interaction between science and society. Over recent years, ethics as a
resource for science and technology policymaking has gained importance (Evans,
2002), with (bio-)ethics committees increasingly being established on national and
supranational levels3 (Jasanoff, 2005; Salter and Salter, 2007). It may be argued that
these committees deal with similar issues and questions as those envisaged for “up-
stream” public engagement: also concerned with “fundamental values,” an important
task of many ethics committees is to give policy advice on the ethical legitimacy of
socio-technical trajectories. In doing so, they always draw on presumptions of public
interest (Burgess, 2004). In contrast to public engagement approaches, many ethics bodies are organized as pure expert bodies, without any kind of public involvement4 or transparency concerning their proceedings. Thus, the interesting paradox
arises that (expert) ethics seems to be one of the few areas unchallenged by the crisis
of expertise outlined above (Bogner and Menz, 2005). How this “commitology” relates
to the need for more transparency and public engagement otherwise so strongly
stressed in the area of science and technology policy is not addressed.
It is not only on the policy level that the relation between ethical expertise and
public engagement (or social science expertise producing representations of the pub-
lic in public engagement) is unclear. The significance of empirical social sciences for
ethical reasoning has been vividly discussed in bioethics journals under the label of
“empirical ethics” over recent years (e.g., Haimes, 2002). Given the competing role of
both approaches in the policy realm, this debate may also be read as an example of
disciplinary boundary work (Gieryn, 1999) on who is legitimized to give advice on is-
sues concerning science–society relations. Indeed, as a reason for doing “empirical
ethics,” some authors have argued that ethics needs to take into account the implica-
tions of the social sciences more strongly in assessing the role of new technologies, as
it otherwise runs the risk of being marginalized in the policy sphere owing to the lack
of a representation of “societal interests” (Hedgecoe, 2004).
For those arguing for a stronger inclusion of empirical approaches in bioethics,
the principlist deductive approach of expert ethics is the main point of critique
(Hedgecoe, 2004; Lopez, 2004). This approach, it is argued, is ill-prepared to meet the
complexity of the issues at stake in shaping socio-technical futures. In particular, ap-
plied ethics approaches in policy advice often are alleged to feature quite narrow
definitions of the issues at stake, or as Burgess (2004: 10) puts it, “The institutionaliza-
tion of ethics as a source of policy analysis encourages narrow definitions of issues and
terms of reference in response to short time frames.” He further argues that an inte-
gration of ethical reflection and public consultation may benefit both sides of the eth-
ics/social science divide. While social science research may contribute to a more fine-
grained understanding of the societal complexities at stake, ethical analysis may play a
key role in analyzing the values involved in different scenarios and political decision-
making options. In analogy to risk assessment, ethical analysis could then provide a
range of “morally permissible” alternatives, the decision between which (and be-
tween the values behind these options) would have to be a political one.
From the side of established ethics the value of social science contributions, let
alone of public engagement, to ethical questions is little appreciated. One of the main
3 For example the European Group on Ethics in Science and New Technologies to the EC.
4 The very notion of who can be attributed to the “public” is itself highly contested; see Michael and Brown (2005).
arguments is that while ethics is concerned with values and what society “ought” to
do, the social sciences do research on the “is,” or on the “facts” of social reality (Levitt
and Häyry, 2005). From a STS perspective, this sharp distinction between facts and val-
ues must be seen as problematic at best: facts are shown as fabricated (Knorr-Cetina,
1981), social values play a key role in how scientific facts and technological artifacts are
shaped (Pinch and Bijker, 1984) and visions and versions of social orders are inscribed
in these facts and artifacts (Latour, 1991, 1992). Further, it often is precisely these
“facts,” the way they may be conceptualized and the values associated with them
which are at the heart of public controversies on technoscience (Limoges, 1993). Since this evidently also holds for social scientists producing “facts” on social realities, assuming the division between facts and values described above seems highly questionable.
Another common argument against public involvement in this context is that
the “moral” convictions of laypeople do not necessarily have any significance to (ex-
perts’) ethical reflection. In this line of argumentation, validity and quality of an ethical
argument do not stem from its support by public opinion but from the sophistication
of the argument itself (Crosthwaite, 1995). Similar to classical PUS approaches, expert
rationality associated with ethics is assumed to be a priori more “rational” than “moral
sentiments” of laypeople. An analogy to positions in critical PUS research (e.g., Wynne,
1992) may be drawn here, which have shown that laypeople may have very differenti-
ated positionings towards technoscience and the questions associated with it on the
basis of their experiences and situated perspectives.
Taking up these arguments we started from the assumption that there seems to
be no substantive reason not to have public engagement with ethical issues. Whether
an issue is framed as “social,” “political,” “scientific” or “ethical” often much more
seems to be related to boundary work strategies than to any specific features of the
issue itself. Thus, why should “social” and “political” questions be subject to public engagement, while “ethical” ones are delegated to expert committees?
Still, even though much of the empirical ethics discussion may be seen as boun-
dary work meant to demarcate areas of competences and responsibilities rather than
as a substantive contribution to a better integration or at least mutual understanding
of ethics and social science, much may be learned for our analysis of an engagement
setting from the more reflexive contributions to the debate. In his discussion of the
“empirical turn” in bioethics Ashcroft (2003) argues that from a Foucauldian perspec-
tive this shift may be seen as typically “modern” in the sense that a reflexive represen-
tation of society in the production of (ethical) knowledge becomes a central episte-
mological claim. However, these representations also play a central role in producing
social order (Rose, 1999). Empirical methods as well as public engagement designs are
to be considered not only as representing but also as performing (Law, 2004) social
realities and ethical norms. Reflexivity concerning the politics implied in doing em-
pirical ethics and public engagement thus needs to be a central methodological issue.
This argument has strong resemblance to the critical discussion of participatory de-
signs in the STS context. Irwin (2001) has argued that participatory designs may also be
read as “technologies of community” (Rose, 1999) and that implicit models of the po-
litical and citizenship are built into these designs. It has even been claimed that formal
mechanisms of participation “make” citizens through “formalized mechanisms of voic-
ing” (Michael and Brown, 2005). Thus, a reflection of our empirical design is of central
relevance to this article.
5 http://www.science-et-cite.ch/projekte/tableronde/de.aspx (accessed 8 February 2007).
In our project the Round Tables brought together fourteen laypeople6 with seven genome researchers of a larger consortium working on lipid disorders. Their
research project served as “anchor” and example to discuss the social and ethical as-
pects of genome research. It fulfilled our requirements as it had clear and explicit vi-
sions of the societal problems to be addressed (obesity, diabetes) but still was quite far
from any possible clinical or other application. Thus, the genome research project
was—as a “basic research” project—situated quite far “upstream” in a possible devel-
opment of “anti-obesity” drugs while at the same time already incorporating—even
though very vague—promises for the future.
The actual Round Tables were whole-day discussions, which took place in a
seminar room of the researchers’ laboratory. A series of six meetings were held over a
period of seven months (2004–5). The first three were dedicated to discussing the
genome research project itself including a laboratory visit. During these Round Tables
the participants collectively identified topics to be discussed in the remaining meet-
ings: (1) science and the media, (2) ethical issues of genome research and (3) regula-
tory issues. A specific expert was invited for each of these thematic discussions (a jour-
nalist, an ethicist, a representative of a state regulatory body). Qualitative interviews
were conducted with all participants before the first and after the last Round Table to
trace changes in the participants’ positions and opinions. Generally, discussions took
place in the plenary, however, at some meetings small group discussions were in-
cluded to allow for a different discussion dynamic or to develop inputs for the plenary.
All discussions and interviews were taped, transcribed and coded. The overall
aims of our research project were to investigate the Round Table as a participatory
setting, to study how both laypeople and scientists perceive and discuss the ethical
dimensions of genome research, how images of science/scientists are mobilized as
strategic elements in this discussion, and how potential learning processes may be
conceptualized.
The focus of this article is on the dynamics of debating ethical issues, and especially on the strategies used to close these debates. Hence, we will focus our
investigation on the Round Table explicitly devoted to the ethical dimension of ge-
nome research. There, an ethicist was present as an expert, giving an input statement,
which was followed by small group discussions in which both laypeople and scientists
separately developed their respective positions towards the issues. These positions
were then presented and debated in a plenary discussion.
6 The laypeople were selected from a nation-wide call for participants. The call was made via posters
and leaflets displayed at public institutions like museums as well as bulk mail and a newspaper ad-
vertisement in the city where the Round Table took place.
7 By “societal facts” we mean presumptions about society, which are assumed to be common knowledge. These facts were often referred to by both laypeople and scientists.
To better understand the power of the fact-value divide we will provide two ex-
emplary discussion sequences. The first started when laypeople presented the out-
comes of a small peer group debate on ethical aspects of genome research to the ple-
nary. Relating to the concepts of ethics given by the invited ethicist they wondered
what makes research worthwhile to be funded and what criteria are drawn upon in
making such decisions: “Does one rather consider insights and knowledge or is it eco-
nomic interests which are in the foreground? Does the end justify the means?” (L5f).8
They compared research on remedies for malaria with research on lipid disorders and
asked why “nobody” is doing research on the former. Their implicit assumption was
that currently society values research on lipid metabolism disorders as economically
more beneficial than malaria research as the former affects more people, especially in
the first world— “there is not so much potential and not so many sick people, of
course” (L5f).
This is a classical case of anecdotal evidence, yet one that addresses much wider questions than the opposition between malaria and lipid metabolism disorders alone. In the picture of science sketched out by the laypeople, mainly those questions are investigated
that have high economic potential and answer problems of the First World. To them,
this clearly seemed to be an ethical issue.
The response given by a scientist is as follows:
Unfortunately, that is not true. There is a huge EU project at the Technical University.
They are working only on malaria. This [project] has, I think, 8 million Euros. But …
it is not done for the people who are there [in nations affected with malaria] but of
course for the tourists. Because their numbers are high enough, so it pays off for the
EU to fund this. [They] receive twice as much money for malaria than we do for lipid
research. (S1f)
There are two main elements in the scientist’s answer: she stresses that the laypeople’s argument was built on wrong facts; however, she also confirms their point that research money is apparently granted on the basis of economic relevance.
In response to this statement the laypeople tried to find another example, i.e., another illness on which no research is done, in order to uphold their more general point. But the discussion became very fragmented—people interrupted each
other and the topic meandered for a short time around malaria research and rare ill-
nesses—and then led to the discussion of a completely different issue. Apparently for
the laypeople, having made a wrong fact statement somehow devalued their more
general claim that science is economically driven.
By mobilizing their professional knowledge, the scientists upheld hierarchies and performed the model that if people had the “right” knowledge, certain ethical questions would no longer be at stake. The laypeople somehow felt caught using
“wrong” facts for building their ethical argument and did not feel able to challenge
the scientists on a more abstract level. This might explain why they did not take up the
second part of the scientists’ argument (that malaria research is done only for tourists)
to continue the ethical debate. As this example shows, there was a general implicit
8 All quotes translated by the authors. L or S indicates layperson or scientist, the number is to identify
the speaker, and m or f stands for male or female.
feeling of a necessity to use “the right facts” also when building value-based argu-
ments.
Closing an ethical discussion by reformulating an initially ethical question into a
scientific problem is the second constellation we want to address. To exemplify this
point, we enter a discussion on obesity treatment. The question was whether obesity
is to be regarded as a medical condition or if—as the laypeople often argued—it could
also be addressed in a broader psychosocial way or by lifestyle changes. The scientists
conceptualized obesity as a severe health problem, linked with an assumed societal
desire for being slim as well as with the “fact” that diets and changes in lifestyle rarely
lead to positive outcomes—“It does not work!” was a recurrent affirmation made by different scientists. This line of argumentation—an assumed public ideal in collusion with medical facts—renders alternatives to genome research hardly arguable. In the strand of argumentation described below, we show the rhetorical transformation of a “lay-sociologist” argument based on personal observations into an argument based on “hard” scientific facts.
One layperson questions the notion that there is a societal desire for slimness
and that being overweight is necessarily a problem: “there is the saying … the fat ones
are jovial. … They do not want to lose weight at all” (L3m). A scientist giving anecdotal
evidence for the societal desire for being slim counters this statement:
If you have a look at how many people buy dietary journals or such things … For
some, it is obviously a want. If you walk by the women’s journal shelf there is a new
diet on every cover page. In the top sellers lists of non-fiction literature there is the
South Beach diet on the first rank and the Atkins diet on the second—and in fact for
months. (S1f)
The scientist is making a sweeping “lay-sociologist” argument, using the number
of books and magazines dealing with diets as an indicator for a broad public ideal.
Hence, the scientist left her terrain of expertise, drew on experience assumed to be common, and transformed it into a factual statement.
In the following, a debate between scientists and laypeople starts on what a
“good life” should look like and what genome research may contribute to it. This
quickly opens up a fundamental ethical dilemma, which is expressed by a layperson
asking: “does this [if people being overweight are happy or not] depend on genome
research?” (L7m).
What is at stake is whether obesity is to be seen as a medical condition and whether genome research has the right to intervene in this issue. Faced with this challenge, the scientist gradually reframes the problem she had formerly argued on the basis of her lay-sociologist observations as a scientific one, thus retreating to her core domain of expertise: “excuse me, but obesity is not, is not only a psychological
problem. It is not about if someone is happy and fat. It is unhealthy, even if one is happy
with it” (S1f). As this explanation still turns out to be too vague and insufficiently convincing, she introduces medical “facts” to the discussion:
Fact is that 50% of the people have too many kilos—that is fact. Whether a particu-
lar person thinks she may be too fat or if she really is, that is an individual problem.
Fact is, half of the people would live healthier if they would reduce their weight.
And 20% have to reduce massively. That is fact, and whether they are psychologi-
cally happy with it or not, that makes no difference for obesity being unhealthy.
(S1f)
After this statement the discussion on this ethical issue stopped instantly and
the debate shifted to another topic.
In this exchange the scientist successfully manages to rule out dimensions of
obesity addressed by the laypeople by redefining the framework in which obesity is
seen to be a problem. In a first try the scientist left her own area of expertise and made
value-based arguments—that there is a societal desire for being slim. While she was
arguing with values and lay observations the discussion remained open and laypeople
felt able to challenge the arguments with their lay expertise. The scientist prioritized
health rather than happiness as a common value and once obesity was reframed as a
medical problem based on scientific facts, authority was reinstalled and closing the
discussion became possible. What formerly was negotiable—the meaning and impli-
cations of being obese—is rendered indisputable by putting the problem in the realm
of “hard” facts.
Both examples show the uneasiness of our participants to discuss ethically rel-
evant issues by referring to values “only.” To know the “right” facts seemed to be es-
sential to re-direct the discussion of ethical questions. Both laypeople and scientists
seemed to have internalized the model that facts are a better argumentative resource
than mere values. The scientists were able to close ethical debates either by mobilizing professional knowledge and the values hidden behind hard facts, or by discursively changing the frame of reference for a problem.
The laypeople implicitly subscribed to this expert model too. While they called
for a certain self-responsibility of each individual, they also stressed the necessity for
more institutionalized forms of ethical reflection and responsibility. As one layperson
states:
the question of responsibility is mainly a question of avoiding that something bad
happens. And a good way of preventing this would be to constantly question if the
path taken is still the right one. That means that … society takes care that there is
always someone who interferes and questions if the path is the right one. (L1m)
This statement was supported by the claim “actually, more money for ethicists” (L1m). Displacing ethical problems to the expertise of ethicists was an appropriate way
to handle ethical problems in research for large parts of both groups of participants.
The assumption that professional ethics should ideally deal with ethical problems and
that it is the legitimate place in society to do so was widely shared.
But not only professional ethicists were called for. The displacement of ethical
issues to the regulatory/legal space was also a frequent exit strategy. To highlight this
delegation-to-experts we will use a discussion on animal experimentation.
A layperson raised the question whether it is morally acceptable to kill animals
in basic research where there is no explicit benefit but only the “pure purpose to
satisfy the human thirst for knowledge” (L1m). The reaction from the scientists’ side
exemplifies our point:
S1f: But you are not allowed to do that! There are regulations. … You don’t get a free ticket to do all animal experiments. Every time we, for example, want to make a certain knock-out-mouse with a certain gene, we have to apply for, explain … what kind of function that gene has, and what our presumption is, and why we need that.
L1m: And according to which criteria is that decided then, whether that is ok or not?
S1f: That’s a good question, I have no idea. You have to ask the person from the ministry, he knows that. I don’t know.
In her immediate answer the scientist argued, largely supported by her colleagues,
that she did not need to deal with this ethical question, as legal permission is always
required for experiments on mice. The issue is regulated and thus regarded as
unproblematic by the scientists. What is legally permissible is equated with "there
is no ethical problem," and therefore with no need for discussion. This argument
managed to end the ethical debate on the decision criteria for animal experiments
almost instantaneously.
This exchange exemplifies the readiness to reduce complex and multilayered
ethical issues to regulatory problems. For the scientists, this reduction meant a simpli-
fication of their practice. Their argument was that ethical considerations have to be
undertaken by society before political regulations are made. Once regulations are in
place and research practices comply with them, re-opening this "black-box of ethics"
seems neither necessary nor useful.
these discussions as an independent single event and not to see the dense network of
ethical arguments that were present. Secondly, "ethics" somehow remained in a space
apart, not tied into wider societal expert-led debates on ethics.
A second kind of absent presence was created by the way discussions in the peer
group meetings (laypeople or scientists)—where potentially controversial ethical
issues were discussed—did or did not make their way to the plenary. Apparently, the
peer groups were perceived as sufficiently socially robust to allow rather delicate is-
sues to be openly addressed. Both groups developed an implicit understanding of
what issues from the peer group should go into the plenary. While it is not astonishing
that the scientists developed this capacity, as they are used to perceiving themselves
also as an “interest group,” it is remarkable how this took place for the laypeople as
well. They seemed to form an "imagined community" (Anderson, 1983) of non-
scientists, a perception strong enough to allow for considerable internal openness. The
plenary, by contrast, was seen by both groups as a much more fragile setting, in
which people preferred not to address certain ethical questions. Both laypeople and
scientists felt more comfortable discussing among themselves in the small groups,
which were experienced as a safer, more private setting.
To give an example, in the peer group discussion on ethics the laypeople identi-
fied animal experimentation as a crucial ethical topic. In this setting some of them
held rather strong positions; one layperson, for example, stated that he was against
animal experimentation in general. When the outcomes of this discussion were
presented in the plenary, the issue of animal experimentation remained unspoken.
Even when the issue of killing mice for research was brought up by the scientists
themselves, the laypeople did not take this opportunity to raise their concerns.
Obviously, they had implicitly decided not to share their position in the way they had
expressed it in the peer group. The interviews conducted with the participants after
the Round Table meetings support this, as one layperson argued: "there [in the
plenary] were however two fronts, I would say. And this inhibition was overcome
when they [the scientists] were not here anymore. There was a more casual and more
direct talking to each other" (L3m, ex-post interview).
Not only the laypeople but also the scientists addressed issues differently in the
peer group. In the scientists’ peer group there was a debate on the status of animals in
society. They identified a paradox in the treatment of animals: society sees no need
for ethical discussion when animals are killed for food, yet it does see a need for
ethical consideration of animal experiments in research. As one scientist put it:
there were two among [the lay] who say they don’t eat meat … because they have a
pity on animals. But they are consistent at least. I think someone who eats meat is
not allowed to be upset with animal experiments that are done according to the
highest standards. And this I think is an important argument concerning animal
experiments, not only if but also how. (S1f)
This argument was never made explicit by the scientists in the plenary discussion
either, as there was apparently a common agreement that not all discussions from
the peer group should find their way into the plenary. It is thus an important issue
for public engagement settings to reflect on how different spaces are perceived,
whether they are understood as "private" or "public," and to find ways of organizing
a translation between these spaces.
5. Conclusions
When planning and carrying out our Round Tables on the ethical and social di-
mensions of genome research, we considered a number of previous experiences.
Among others, our argument was that participatory settings often did not allow for a
true mutual dialogue between scientists and laypeople, either due to asymmetrical
levels of involvement, as in consensus conferences, or due to overly short timeframes
of interaction. Further, given the boundary conditions of the project, we organized a
setting as far "upstream" as possible by choosing as partners researchers on a genome
project whose work was quite far from any imminent application, while possible
trajectories of its development and its societal consequences were clearly discernible.
The aim in doing so was to allow for an ethical discussion that both had a concrete
reference point and was as open as possible to basic questions about the values of
genome research.
When analyzing our data with a focus on the ethical debates at the Round Table, we
were struck by the multiplicity of mechanisms that often resulted in the closure or
displacement of ethical issues. We titled this article "Unruly Ethics" in order to address
the difficulties of grasping, framing and engaging with ethics. We thus decided to put
emphasis on the mechanisms by which potentially controversial issues were closed
down, and hope that the analysis of these mechanisms offers new empirical insights
and contributes to the academic debate on empirical ethics by pointing to some of the
complexities of such undertakings.
Concretely, we draw three main conclusions. First, the difficult and uneasy
relation between "facts" and "values" is not only at the heart of the controversy and
the boundary work surrounding "empirical ethics." It is also central to understanding
the interaction processes taking place in ethical debates between laypeople and
scientists. As described above, all our participants shared, or at least accepted, the
implicit assumption that arguments drawing on the authority of (scientific) facts are
superior to "mere values." We even witnessed debates in which the discussion was
closed because the initial statement had been made on the basis of "wrong" facts.
This raises two points that seem crucial to consider. On the one hand, it gives
scientists a "head start" in being able to close ethical debates they feel uneasy with,
as they are usually more adept than laypeople at building the authority of factual
statements. They even felt comfortable leaving their own terrain to make rather
broad "factual" statements about society more generally, thus performing as "lay
sociologists." On the other hand, it shows a general uneasiness at work when
questions of values are discussed without easy recourse to the "bedrock" of hard
facts. This relates to the discussion on upstream engagement, and it might be
productive to consider whether or not our experiment was really situated upstream.
Indeed, it was upstream if one takes the degree of transformation of scientific
knowledge into applications and products as an indicator. However, this did not
mean that basic values connected to the scientific endeavor were discussed, because
of the deeply rooted cultural belief that scientific facts are stronger than any other
type of value-based knowledge and experience. When facts were seen as missing, the
ethical debate was displaced rather than carried out on the basis of values. In that
sense, it seems an illusion that this dilemma could be solved by "simply" moving
further upstream. Rather, the relation to scientific "facts" has to be actively put in
question within the engagement exercise itself.
Secondly, paying attention to the social dynamics both within the groups of lay-
people and scientists and between them, we observed that what may be opened up
as an ethical issue depends strongly on the actual social context. At times, rather
strong ethical statements and feelings were expressed in the small group discussions,
where laypeople and scientists were among themselves. However, these statements
were rarely carried into the plenary, even though the explicit aim of the groups was to
develop input for the plenary discussion. Thus the "space of trust" envisaged in
designing the Round Tables as a long-term interaction could, to a certain degree, not
be developed. Rather, what we observed was a process of "mutual taming," in which
issues that were potentially controversial for the respective others, or which could be
read as too direct a critique, were not brought to the table, in order not to endanger
the stability of the social setting. As the laypeople did not feel immediately affected
by the ethical implications of the project at hand, they were reluctant to risk an open
conflict. This could be interpreted as a general tendency in the Austrian cultural
context to be compliant in settings perceived as hierarchical.
Thirdly, it became visible in our discussions that strongly rooted linear models of
innovation were invoked to make the point that ethical issues were not to be dis-
cussed "here and now." A common displacement strategy was to shift the ethical
discussion "downstream," to an imagined place where more facts would be available
to assess possible consequences. This points to the conclusion that even though an
upstream setting may offer more possibilities to discuss basic value questions, the
relation between the upstream design and the participants’ imaginations of science
and the innovation process has to be addressed in order to do so.
These three points relate strongly to the discussions on empirical ethics and up-
stream engagement. In our setting, micro versions of the fact–value conflict in
empirical ethics, or of the upstream/downstream discussions concerning participatory
design, often seemed to be played out. As a practical conclusion and a possible
recommendation for similar efforts, we would like to make two suggestions.
To start with, it seems important to challenge some of the most basic assumptions
about the development of technoscience in its relation with society. As long as linear
innovation models remain the basis of discussion, as long as scientific knowledge is
automatically accepted as superior to any value position, and as long as simple
regulations are longed for as a central aim, participatory processes concerning
ethical issues may quickly reach their limits.
Finally, we would like to make a plea for taking the performative dimension of
staging a participatory design more seriously. Reflexivity about the visions of citizen-
ship, politics and related issues embedded in a participatory design is often asked
for, but perhaps not as often taken seriously. Moreover, researchers and practitioners
designing participatory settings on ethical issues need to reflect on their position
regarding the relation between facts and values, as well as the limits and possibilities
of upstream engagement. Beyond reflection, it appears central to make these assump-
tions explicit to the participants as well, in order to facilitate a reflexive discussion
space.
References
Anderson, B. (1983) Imagined Communities: Reflections on the Origin and Spread of Nation-
alism. London: Verso.
Ashcroft, R.E. (2003) “Constructing Empirical Bioethics: Foucauldian Reflections on the Empirical
Turn in Bioethics Research,” Health Care Analysis 11(1): 3–13.
Beck, U. (1997) The Reinvention of Politics: Rethinking Modernity in the Global Social Order.
Cambridge: Polity Press.
Bodmer, W. (1985) The Public Understanding of Science. London: Royal Society.
Bogner, A. and Menz, W. (2005) “Bioethical Controversies and Policy Advice: The Production of
Ethical Expertise and its Role in the Substantiation of Political Decision-making,” in S.
Maasen and P. Weingart (eds) Democratization of Expertise: Exploring Novel Forms of Sci-
entific Advice in Political Decision-making, pp. 21–40. Dordrecht: Springer.
Borry, P., Schotsmans, P. and Dierickx, K. (2005) “The Birth of the Empirical Turn in Bioethics,”
Bioethics 19(1): 49–71.
Burgess, M.M. (2004) “Public Consultation in Ethics: An Experiment in Representative Ethics,”
Journal of Bioethical Inquiry 1(1): 4–13.
Crosthwaite, J. (1995) “Moral Expertise: A Problem in the Professional Ethics of Professional Ethi-
cists,” Bioethics 9(5): 361–379.
Epstein, S. (1996) Impure Science: AIDS, Activism, and the Politics of Knowledge. Berkeley: Univer-
sity of California Press.
European Commission (EC). (2001) European Governance: A White Paper. COM(2001) 428 final,
25 July 2001. Brussels: European Commission.
European Commission (EC). (2002a) Science and Society—Action Plan. Luxembourg: Office for
Official Publications of the European Communities.
European Commission (EC). (2002b) Towards a Reinforced Culture of Consultation and Dia-
logue: General Principles and Minimum Standards for Consultation of Interested Parties
by the Commission. COM(2002) 704 final, 11 December 2002. Brussels: European Com-
mission.
European Commission (EC). (2005) Science in Society Forum 2005: Setting the Stage. Brussels:
European Commission.
Evans, J.H. (2002) Playing God? Human Genetic Engineering and the Rationalization of Public
Bioethical Debate. Chicago and London: University of Chicago Press.
Felt, U. (2003) “Sciences, Science Studies and their Publics: Speculating on Future Relations,” in B.
Joerges and H. Nowotny (eds) Social Studies of Science and Technology: Looking Back
Ahead, pp. 11–31. Dordrecht: Kluwer Academic Publishers.
Felt, U., Fochler, M. and Müller, A. (2003) Sozial robuste Wissenspolitik: Analyse des Wandels von
dialogisch orientierten Interaktionen zwischen Wissenschaft, Politik und Öffentlichkeit.
Gutachten für das Büro für Technikfolgenabschätzung (TAB) beim Deutschen
Bundestag.
Gerold, R. and Liberatore, A. (2001) Democratising Expertise and Establishing Scientific Reference
Systems: Report of the Working Group 1b for the White Paper on Governance. Brussels:
European Commission.
Giddens, A. (1990) The Consequences of Modernity. Cambridge: Polity Press.
Gieryn, T. (1999) Cultural Boundaries of Science: Credibility on the Line. Chicago: University of
Chicago Press.
Haimes, E. (2002) “What can the Social Sciences Contribute to the Study of Ethics? Theoretical,
Empirical and Substantive Considerations,” Bioethics 16(2): 89–113.
Hedgecoe, A.M. (2004) “Critical Bioethics: Beyond the Social Science Critique of Applied Ethics,”
Bioethics 18(2): 120–43.
Irwin, A. (2001) “Constructing the Scientific Citizen: Science and Democracy in the Biosciences,”
Public Understanding of Science 10(1): 1–18.
Irwin, A. (2006) “The Politics of Talk: Coming to Terms with the ‘New Scientific Governance,’” Social
Studies of Science 36(2): 299–320.
Irwin, A. and Michael, M. (2003) Science, Social Theory and Public Knowledge. Maidenhead:
Open University Press.
Irwin, A. and Wynne, B., eds (1996) Misunderstanding Science? The Public Reconstruction of
Science and Technology. Cambridge: Cambridge University Press.
Jasanoff, S. (1997) “Civilization and Madness: The Great BSE Scare of 1996,” Public Understand-
ing of Science 6: 221–32.
Jasanoff, S. (2005) Designs on Nature: Science and Democracy in Europe and the United States.
Princeton and Oxford: Princeton University Press.
Joss, S. and Bellucci, S., eds (2002) Participatory Technology Assessment: European Perspectives.
London: Centre for the Study of Democracy (CSD) at University of Westminster in associa-
tion with TA Swiss.
Knorr-Cetina, K. (1981) The Manufacture of Knowledge: An Essay on the Constructivist and Con-
textual Nature of Science. Oxford: Pergamon Press.
Latour, B. (1991) “Technology is Society Made Durable,” in J. Law (ed.) A Sociology of Monsters:
Essays on Power, Technology and Domination, pp. 103–31. London: Routledge.
Latour, B. (1992) “Where are the Missing Masses? Sociology of a Few Mundane Artefacts,” in W.
Bijker and J. Law (eds) Shaping Technology, Building Society: Studies in Sociotechnical
Change, pp. 225–58. Cambridge, MA: MIT Press.
Law, J. (2004) After Method: Mess in Social Science Research. Abingdon and New York: Rout-
ledge.
Levitt, M. (2003) “Public Consultation in Bioethics: What’s the Point in Asking the Public When
They Have Neither Scientific nor Ethical Expertise?,” Health Care Analysis 11(1): 15–25.
Levitt, M. and Häyry, M. (2005) “Overcritical, Overfriendly? A Dialogue between a Sociologist and
a Philosopher on Genetic Technology and its Applications,” Medicine, Health Care and
Philosophy 8: 377–83.
Limoges, C. (1993) “Expert Knowledge and Decision-making in Controversy Contexts,” Public
Understanding of Science 2(4): 417–26.
Lopez, J. (2004) “How Sociology Can Save Bioethics … Maybe,” Sociology of Health and Illness
26(7): 875–96.
Michael, M. and Brown, N. (2005) “Scientific Citizenships: Self-representations of Xenotransplan-
tation’s Publics,” Science as Culture 14(1): 39–57.
Nowotny, H., Scott, P. and Gibbons, M. (2001) Re-Thinking Science: Knowledge and the Public in
an Age of Uncertainty. Cambridge: Polity Press.
Pinch, T.J. and Bijker, W.E. (1984) “The Social Construction of Facts and Artifacts: Or How the Soci-
ology of Science and the Sociology of Technology Might Benefit Each Other,” in W.E. Bi-
jker, T.P. Hughes and T.J. Pinch (eds) The Social Construction of Technological Systems,
pp. 17–50. Cambridge, MA: MIT Press.
Rose, N. (1999) Powers of Freedom: Reframing Political Thought. Cambridge: Cambridge Univer-
sity Press.
Salter, B. and Salter, C. (2007) “Bioethics and the Global Moral Economy: The Cultural Politics of
Human Embryonic Stem Cell Science,” Science, Technology & Human Values 32(5): 554–
581.
Tallacchini, M. (2006) “Politics of Ethics and EU Citizenship,” Politeia XXII(83): 101–113.
Wilsdon, J. and Willis, R. (2004) See-through Science: Why Public Engagement Needs to Move
Upstream. London: Demos.
Wilsdon, J., Wynne, B. and Stilgoe, J. (2005) The Public Value of Science: Or How to Ensure that
Science Really Matters. London: Demos.
Wynne, B. (1992) “Misunderstood Misunderstanding: Social Identities and Public Uptake of Sci-
ence,” Public Understanding of Science 1(3): 281–304.
Wynne, B. (1996) “May the Sheep Safely Graze? A Reflexive View of the Expert-Lay Knowledge
Divide,” in B. Szerszynski, S. Lash and B.E. Wynne (eds) Risk, Environment and Modernity:
Towards a New Ecology, pp. 44–83. London: SAGE.