
Ulrike Felt

Maximilian Fochler
Annina Müller
Michael Strassnig

Unruly Ethics:
On the Difficulties of a Bottom-up
Approach to Ethics in the Field
of Genomics

October 2009

reprint

STS
Department of Social Studies of Science
University of Vienna 2009
Copyright

You are allowed to download this paper for personal use only. This paper must not be published elsewhere without the authors' explicit permission. The paper must not be used for commercial purposes.

The final, definitive version of this paper was published as Felt, Ulrike, Fochler, Maximilian, Müller, Annina and Strassnig, Michael (2009). Unruly Ethics: On the Difficulties of a Bottom-up Approach to Ethics in the Field of Genomics. Public Understanding of Science 18(3): 354-371.
Please cite, if possible, this final version of the article.

If only the reprint is available, you should cite this paper in the following way:
Felt, Ulrike, Fochler, Maximilian, Müller, Annina and Strassnig, Michael (2009). Unruly Ethics: On the Difficulties of a Bottom-up Approach to Ethics in the Field of Genomics. Published by the Department of Social Studies of Science, University of Vienna, October 2009. Available at http://sciencestudies.univie.ac.at/publications

Address for correspondence:


Ulrike Felt
Department of Social Studies of Science
University of Vienna
Sensengasse 8/10
A-1090 Vienna, Austria
T: ++43 1 4277 49611
E-Mail: ulrike.felt@univie.ac.at

http://sciencestudies.univie.ac.at
Unruly Ethics
On the Difficulties of a Bottom-up Approach
to Ethics in the Field of Genomics

Ulrike Felt, Maximilian Fochler, Annina Müller and Michael Strassnig1


This paper explores the difficulties of addressing ethical questions of genome research in a public engagement setting in which laypeople and scientists met over a longer period of time. While professional ethics mostly ignores public meaning, we aimed at a bottom-up approach to ethics in order to broaden the way in which ethical aspects of genomics can be addressed. However, within this interaction we identified a number of difficulties that constrained an open discussion of ethical issues. Thus, we analyze how ethical issues were approached, framed, debated, displaced or closed. We then elaborate on the possibilities and limits of dealing with ethics in such a participatory setting. We conclude by hinting at what should be taken into consideration when approaching issues of science and ethics more "upstream."

1. Introduction
At the dawn of the twenty-first century, questions concerning the relation of emerging technosciences and society seem more pressing than ever. Genomics and nanotechnology, for example, raise fundamental ethical and social questions. Public controversies hint at the fragility of current socio-technical arrangements and at a lack of "social robustness" of scientific innovations (Nowotny et al., 2001). In these moments of conflict and uncertainty, societies have started to experiment with ways to reflexively engage (Beck, 1997; Giddens, 1990) with the implications of technosciences for social order. Public engagement exercises as well as ethics committees and ethical review boards have been established as possible answers to this new situation.
The involvement of citizens in technoscientific decision-making has gained high symbolic importance in recent policy discourses in Europe. The European Commission's White Paper on Governance outlines the need for more "participation throughout the policy chain—from conception to implementation. Improved participation is likely to create more confidence in the end result and in the institutions which deliver policies … Legitimacy today depends on involvement and participation." It is a clear plea for new forms of governance replacing "the linear model of dispensing policies from above" (EC, 2001: 10–11). Still, the "details" (EC, 2005: 18) of this endeavor, such as when engagement should take place in the innovation process, or who should be involved and how, remain open. This becomes visible in the recent call to move more "upstream" to allow for a much earlier discussion of "deeper questions about the values, visions, and vested interests that motivate scientific endeavor" (Wilsdon and Willis, 2004: 18).

1 This research was funded by the GEN-AU/ELSA program of the Austrian Federal Ministry of Education, Science and Culture. We thank the lay participants and the genome scientists who shared their thoughts and convictions with us. Furthermore, we want to thank the invited experts of the Round Table. Special thanks go to our colleagues and collaborators Marcus Düwell, Priska Gisler, Michaela Glanz, Sandra Karner, Astrid Mager, Silke Schicktanz, Rosmarie Waldner, Bernhard Wieser and Brian Wynne, who have been helpful in many different ways in carrying out the project and discussing our ideas at different stages of the project. An earlier version of this paper was presented at the 4S conference in Pasadena in October 2005. Finally, we want to thank two anonymous reviewers for their valuable comments.
The ethical dimensions of research and innovation are also receiving increasing attention in the policy realm. But contrary to the public engagement rhetoric, the task of dealing with these questions is largely ascribed to expert ethics committees at national and European level (EC, 2002a). The relation of this "commitology"2 approach to public engagement remains largely unclear (Tallacchini, 2006). And even if there is a theoretical debate on "empirical ethics" (Borry et al., 2005), actual public engagement with ethics neither belongs to the repertoire of suggested actions nor is it discussed as a serious alternative or complement to decision-making by expert committees.
The focus of this article is the exploration of this blind spot concerning the possibilities and difficulties of more upstream participatory approaches to issues of ethics and science. The empirical basis of our reflections is six Round Table debates between genomics researchers and laypeople which we organized between October 2004 and April 2005 in Austria. Our aim was to create an open space in which a broad range of issues concerning the ethical and social implications of genome research in general, and the work of this research group in particular, could be discussed. In what follows we analyze how ethical issues were approached, framed, debated, displaced or closed in one of these Round Tables, which had an explicit focus on the ethical dimensions of genome research. We elaborate on the possibilities and limits of dealing with ethics in such a participatory setting and hint at what should be taken into consideration when approaching issues of science and ethics more "upstream."
We start by discussing the relations between public engagement and expert ethics, followed by a description of our empirical approach and its central underlying assumptions. The core of this article analyzes three different ways in which the participants performed ethical debates. Finally, summing up our findings, we point to the difficulties and limits we encountered in the debate of ethical issues of genome research between researchers and laypeople, and to what this could mean in terms of empirical ethics.

2. Public engagement and (expert) ethics: Two approaches—one problem?
From a historical perspective, the public as an actor regularly referred to in relation to new technosciences appeared in both scientific and policy discussion in the mid 1980s, albeit in a very passive role (Felt, 2003). In the "classical" public understanding of science (PUS) (Bodmer, 1985) approach, the public was conceptualized as having knowledge deficits on technoscientific issues, which would inhibit them from seeing the "true" benefits of the technoscientific innovations envisioned by experts. Expertise in this model remained on the side of science and largely unchallenged—it was the public that was in need of "education."

2 The European Parliament defines commitology "as a process for adopting measures to implement legislative acts. In this process measures are adopted by the Commission, assisted by a committee of experts from the Member States." See http://www.europarl.europa.eu/igc1996/fiches/fiche21_en.htm (accessed 17 October 2006).
Over recent years, this much criticized approach has lost its dominance in conceptualizing and performing science–society relations. On a discursive level, the rhetoric of "deficit" and "education" has been sidelined by a new kind of speech: "public engagement with science," "public participation" or "dialogue" are the new buzzwords. In this "reinforced culture of consultation and dialogue" (EC, 2002b; see also Irwin, 2006), "citizens" and "civil society" are actors to be more actively involved in the policy process, even though it largely remains unclear who concretely is to speak in the name of society. On the one hand, this shift is linked to a perceived crisis in traditional forms of expertise (Gerold and Liberatore, 2001). Public controversies such as the one surrounding BSE have highlighted the failure of traditional expert systems owing to their highly problematic entanglement with the policy world (Jasanoff, 1997; Irwin, 2001). In this context, public participation is seen as a means to increase the visibility and accountability of policy processes (EC, 2001) and, subsequently, to increase public trust in science and scientific expertise. On the other hand, in the engagement paradigm the public is no longer seen as a passive entity to be convinced by the superior rationality of science. Rather, it is imagined as carrying its own legitimate values and opinions towards science and, in some areas such as patient movements in biomedicine, even as being able to contribute its own "lay expertise" (Epstein, 1996).
Science and technology studies (STS), and especially the critical PUS tradition (Wynne, 1992; Irwin and Wynne, 1996; Irwin and Michael, 2003), have played an important role in the development of this engagement paradigm and its theoretical foundations. Another important reference point, at least in Europe, is a tradition of staging participatory experiments in the context of Technology Assessment, which chronologically precedes the recent wave of attention towards engaging the public. Starting from the first consensus conference in Denmark in 1987, a number of countries in Europe have "imported" this model, and a range of new methods including nation-wide "public debates" have been developed and experimented with (Felt et al., 2003; Joss and Bellucci, 2002).
However, recent contributions (Wilsdon and Willis, 2004; Wilsdon et al., 2005) have criticized many of these experiments for being situated too far "downstream" in the innovation process. This implies that, by focusing on assessing the risks linked to imminent application or implementation, public engagement sets in at a point at which many institutional commitments concerning a technoscientific development are already in place. Thus, instead of opening up more fundamental questions related to basic values such as "Do we need such an innovation?" or "What kind of society is implied in the visions this innovation carries?," the issue may be narrowed down to questions of risk and safety. This argument is of high relevance for a "public engagement on ethics," as in our understanding the aim of engaging laypeople in discussing the ethical questions of genome research entails bringing up and discussing exactly this type of "upstream" question.
The renewed interest in and the increasing institutionalization of ethics in the domain of technoscience may also be seen as a response to a perceived need for a different form of interaction between science and society. Over recent years, ethics as a resource for science and technology policymaking has gained importance (Evans, 2002), with (bio-)ethics committees increasingly being established on national and supranational levels3 (Jasanoff, 2005; Salter and Salter, 2007). It may be argued that these committees deal with similar issues and questions as those envisaged for "upstream" public engagement: also concerned with "fundamental values," an important task of many ethics committees is to give policy advice on the ethical legitimacy of socio-technical trajectories. In doing so, they always draw on presumptions of public interest (Burgess, 2004). Unlike public engagement approaches, many ethics bodies are organized as pure expert bodies without any kind of public involvement4 or transparency concerning their proceedings. Thus, the interesting paradox arises that (expert) ethics seems to be one of the few areas unchallenged by the crisis of expertise outlined above (Bogner and Menz, 2005). How this "commitology" relates to the need for more transparency and public engagement otherwise so strongly stressed in the area of science and technology policy is not addressed.
It is not only on the policy level that the relation between ethical expertise and public engagement (or social science expertise producing representations of the public in public engagement) is unclear. The significance of the empirical social sciences for ethical reasoning has been vividly discussed in bioethics journals under the label of "empirical ethics" over recent years (e.g., Haimes, 2002). Given the competing roles of both approaches in the policy realm, this debate may also be read as an example of disciplinary boundary work (Gieryn, 1999) on who is legitimized to give advice on issues concerning science–society relations. Indeed, as a reason for doing "empirical ethics," some authors have argued that ethics needs to take the implications of the social sciences into account more strongly in assessing the role of new technologies, as it otherwise runs the risk of being marginalized in the policy sphere owing to the lack of a representation of "societal interests" (Hedgecoe, 2004).
For those arguing for a stronger inclusion of empirical approaches in bioethics, the principlist deductive approach of expert ethics is the main point of critique (Hedgecoe, 2004; Lopez, 2004). This approach, it is argued, is ill-prepared to meet the complexity of the issues at stake in shaping socio-technical futures. In particular, applied ethics approaches in policy advice are often alleged to feature quite narrow definitions of the issues at stake, or as Burgess (2004: 10) puts it, "The institutionalization of ethics as a source of policy analysis encourages narrow definitions of issues and terms of reference in response to short time frames." He further argues that an integration of ethical reflection and public consultation may benefit both sides of the ethics/social science divide. While social science research may contribute to a more fine-grained understanding of the societal complexities at stake, ethical analysis may play a key role in analyzing the values involved in different scenarios and political decision-making options. In analogy to risk assessment, ethical analysis could then provide a range of "morally permissible" alternatives, the decision between which (and between the values behind these options) would have to be a political one.
From the side of established ethics, the value of social science contributions, let alone of public engagement, to ethical questions is little appreciated. One of the main arguments is that while ethics is concerned with values and what society "ought" to do, the social sciences do research on the "is," or on the "facts" of social reality (Levitt and Häyry, 2005). From a STS perspective, this sharp distinction between facts and values must be seen as problematic at best: facts are shown to be fabricated (Knorr-Cetina, 1981), social values play a key role in how scientific facts and technological artifacts are shaped (Pinch and Bijker, 1984), and visions and versions of social orders are inscribed in these facts and artifacts (Latour, 1991, 1992). Further, it is often precisely these "facts," the way they may be conceptualized and the values associated with them, which are at the heart of public controversies on technoscience (Limoges, 1993). As this evidently also holds for social scientists producing "facts" on social realities, assuming the division between facts and values described above seems highly questionable.

3 For example, the European Group on Ethics in Science and New Technology to the EC.
4 The very notion of who can be counted as part of the "public" is also highly contested; see Michael and Brown (2005).
Another common argument against public involvement in this context is that the "moral" convictions of laypeople do not necessarily have any significance for (experts') ethical reflection. In this line of argumentation, the validity and quality of an ethical argument do not stem from its support by public opinion but from the sophistication of the argument itself (Crosthwaite, 1995). Similar to classical PUS approaches, the expert rationality associated with ethics is assumed to be a priori more "rational" than the "moral sentiments" of laypeople. An analogy may be drawn here to positions in critical PUS research (e.g., Wynne, 1992), which have shown that laypeople may have very differentiated positionings towards technoscience and the questions associated with it on the basis of their experiences and situated perspectives.
Taking up these arguments, we started from the assumption that there seems to be no substantive reason not to have public engagement with ethical issues. Whether an issue is framed as "social," "political," "scientific" or "ethical" often seems to be related much more to boundary-work strategies than to any specific features of the issue itself. Thus, why should "social" and "political" questions be subject to public engagement while "ethical" ones are left to expert committees?
Still, even though much of the empirical ethics discussion may be seen as boundary work meant to demarcate areas of competence and responsibility rather than as a substantive contribution to a better integration, or at least mutual understanding, of ethics and social science, much may be learned for our analysis of an engagement setting from the more reflexive contributions to the debate. In his discussion of the "empirical turn" in bioethics, Ashcroft (2003) argues that from a Foucauldian perspective this shift may be seen as typically "modern" in the sense that a reflexive representation of society in the production of (ethical) knowledge becomes a central epistemological claim. However, these representations also play a central role in producing social order (Rose, 1999). Empirical methods as well as public engagement designs are to be considered not only as representing but also as performing (Law, 2004) social realities and ethical norms. Reflexivity concerning the politics implied in doing empirical ethics and public engagement thus needs to be a central methodological issue. This argument strongly resembles the critical discussion of participatory designs in the STS context. Irwin (2001) has argued that participatory designs may also be read as "technologies of community" (Rose, 1999) and that implicit models of the political and of citizenship are built into these designs. It has even been claimed that formal mechanisms of participation "make" citizens through "formalized mechanisms of voicing" (Michael and Brown, 2005). Thus, a reflection on our empirical design is of central relevance to this article.

3. Setting and method: A collective experiment in public participation
The basic idea of the project this article is based on was to stage a "collective experiment" in public engagement with the ethical and social dimensions of genome research. It was an "experiment" since our explicit goal was to selectively modify a number of key elements and dimensions of classical participatory designs in order to test several hypotheses about participatory events. And it was "collective" in the sense that both the scientists and the laypeople participating were involved in deciding where the experiment should go. Building on prior research surveying and analyzing public engagement activities and their role in a "socially robust politics of knowledge" in several European countries (Felt et al., 2003), our aim was to build a setting that was "different" in a number of key features.
A first central objective was to allow both the laypeople and the scientists to engage equally in a process of mutual learning. This objective was based on the observation that in many classical engagement models (e.g., consensus conferences) the idea of the public "talking back to science" is taken so far that there is actually very little room left for scientists to take part outside their role of providing expertise.
Secondly, to facilitate this dialogue, we created a rather open discussion space imposing as few rules as possible. Accordingly, the moderator was briefed to intervene as much as necessary to maintain a fair discussion and keep it in the vicinity of the topic to be discussed, but to intervene as little as possible beyond that. Building on the critical PUS tradition, which highlighted the importance of the "social" dimensions of lay–scientist interactions (Wynne, 1996), our goal was to allow for a long-term interaction in which a social group could form. Thus, ethical issues might be discussed beyond the entrenched frontiers of "science" and the "public," and a mutual relationship of trust might develop between scientists and laypeople, which would also reduce the hierarchies usually inherent in this relation.
Thirdly, our objective was to situate the discussion as far "upstream" as possible, because we believe that the discussion of ethical aspects in particular depends on being able to address typical "upstream issues," such as the values underlying specific technoscientific trajectories.
To fulfill these "experimental parameters," we chose to adapt a design of the Swiss foundation Science et Cité—the Round Table.5 Its basic principle is to have a group of laypeople accompany a topic over a longer period of time. The Round Table is strongly process-oriented, and the precise structure of the engagement design is not predefined but may and should be developed by the participants in the ongoing process. Further, no output is predefined, in order to keep the discussion as open as possible without creating pressure to narrow down issues for a final consensus statement.
5 http://www.science-et-cite.ch/projekte/tableronde/de.aspx (accessed 8 February 2007).
In our project the Round Tables brought together fourteen laypeople6 with seven genome researchers from a larger consortium working on lipid disorders. Their research project served as an "anchor" and example for discussing the social and ethical aspects of genome research. It fulfilled our requirements, as it had clear and explicit visions of the societal problems to be addressed (obesity, diabetes) but was still quite far from any possible clinical or other application. Thus, the genome research project was—as a "basic research" project—situated quite far "upstream" in a possible development of "anti-obesity" drugs, while at the same time already incorporating promises for the future, however vague.
The actual Round Tables were whole-day discussions, which took place in a seminar room of the researchers' laboratory. A series of six meetings was held over a period of seven months (2004–5). The first three were dedicated to discussing the genome research project itself, including a laboratory visit. During these Round Tables the participants collectively identified topics to be discussed in the remaining meetings: (1) science and the media, (2) ethical issues of genome research and (3) regulatory issues. A specific expert was invited for each of these thematic discussions (a journalist, an ethicist, a representative of a state regulatory body). Qualitative interviews were conducted with all participants before the first and after the last Round Table to trace changes in the participants' positions and opinions. Generally, discussions took place in the plenary; however, at some meetings small group discussions were included to allow for a different discussion dynamic or to develop inputs for the plenary.
All discussions and interviews were taped, transcribed and coded. The overall aims of our research project were to investigate the Round Table as a participatory setting, to study how both laypeople and scientists perceive and discuss the ethical dimensions of genome research, how images of science and scientists are mobilized as strategic elements in this discussion, and how potential learning processes may be conceptualized.
The focus of this article is on the dynamics of debating ethical issues, and especially on the strategies used to close these debates. Hence, we will focus our investigation on the Round Table explicitly devoted to the ethical dimension of genome research. There, an ethicist was present as an expert, giving an input statement, which was followed by small group discussions in which laypeople and scientists separately developed their respective positions towards the issues. These positions were then presented and debated in a plenary discussion.

4. Discussing ethics in a participatory setting: Closure and displacement
In order to trace "ethical issues," a working concept of ethics was developed. Rather than opting for a general and abstract definition, we chose "to follow the actors" in our setting and use the concept of ethics laid out by the invited ethicist in his opening statement. He introduced a rather broad definition of ethics, hinting at the large variety of constellations in which moral assumptions play a role in discussing genomics. However, in the following discussions relatively little effort was devoted to unpacking and discussing ethical issues, and debates were often closed down quickly. We thus decided to take a closer look at the repertoire of closure mechanisms, assuming that they would represent an important obstacle to public engagement with issues of science and ethics.

6 The laypeople were selected from a nation-wide call for participants. The call was made via posters and leaflets displayed at public institutions like museums, as well as bulk mail and a newspaper advertisement in the city where the Round Table took place.
We identified three groups of mechanisms the participants deployed in order to close ethical discussions. The first subsection, "Values" meet "facts," analyzes the role knowledge hierarchies played in the Round Table setting. We investigate where, when and by whom knowledge was implicitly or explicitly employed to counter value arguments and to end discussions on ethics. Secondly, we discuss displacement strategies, addressing the observation that both laypeople and scientists quite often tried either to shift ethical questions to institutionalized forms of expertise or to displace them in time or frame of reference. Thirdly, under the heading Ethical questions as absent presences, we investigate the phenomenon that issues were addressed selectively in the plenary or in subgroups, or that they were labeled as "social" or "political" when they could clearly have been addressed as ethical. This hints at an implicit model of what can be debated with whom and in which context.

"Values" meet "facts"

Referring to "facts"—be they scientific or societal facts—was one of the central resources framing the discussions at the Round Table7 on ethics and values connected to genome research. The Round Table therefore became a space where "values" met "facts" in many different constellations.
The mobilization of scientific facts happened in two ways. First, possessing the "right" facts about a specific topic—which was, not surprisingly, mainly the case for the scientists—allowed the discussion to be guided in a specific direction, or often even to be closed. An ethical problem put on the table was quite frequently "solved" by introducing the "right" information. Secondly, by mobilizing facts, a problem could be reframed in such a way that ethically motivated doubts no longer had a place.
While this might be read as a usual account of lay–expert hierarchies, we would argue that there is an additional layer of complexity to be considered, which makes this observation of special relevance to understanding public engagement with ethics. We suggest that there was a second, analytically separable, hierarchy at work: the hierarchy between values and facts. Indeed, it was not only the scientists who employed knowledge as an instrument of power. The laypeople also performed the assumption that scientific knowledge ("facts") stands above values in general, as well as above their own lay knowledge in particular. In the given setting the fact–value hierarchy was to a certain degree "naturalized" and remained unquestioned. There was implicit agreement that facts speak for themselves and that the right kind of knowledge overrules value-based ethical objections. This evidently also had an important impact on identifying what an ethical issue was and on being able to discuss it.
7 By "societal facts" we mean presumptions about society which are assumed to be common knowledge. These facts were often referred to by both laypeople and scientists.
To better understand the power of the fact–value divide, we will provide two exemplary discussion sequences. The first started when the laypeople presented the outcomes of a small peer group debate on ethical aspects of genome research to the plenary. Relating to the concepts of ethics given by the invited ethicist, they wondered what makes research worth funding and what criteria are drawn upon in making such decisions: "Does one rather consider insights and knowledge, or is it economic interests which are in the foreground? Does the end justify the means?" (L5f).8 They compared research on remedies for malaria with research on lipid disorders and asked why "nobody" is doing research on the former. Their implicit assumption was that society currently values research on lipid metabolism disorders as economically more beneficial than malaria research, as the former affects more people, especially in the First World—"there is not so much potential and not so many sick people, of course" (L5f).
This is a classical case of anecdotal evidence which, however, addresses much wider questions than the opposition of malaria vs. lipid metabolism disorders alone. In the picture of science sketched out by the laypeople, mainly those questions are investigated that have high economic potential and answer the problems of the First World. To them, this clearly seemed to be an ethical issue.
The response given by a scientist is as follows:
Unfortunately, that is not true. There is a huge EU project at the Technical University.
They are working only on malaria. This [project] has, I think, 8 million Euros. But …
it is not done for the people who are there [in nations affected with malaria] but of
course for the tourists. Because their numbers are high enough, so it pays off for the
EU to fund this. [They] receive twice as much money for malaria than we do for lipid
research. (S1f)
There are two main elements in the scientist's answer: she stresses that the laypeople's argument was built on wrong facts; at the same time, she confirms their point that research money is apparently granted on the basis of economic relevance.
In response, the laypeople tried to find another example, i.e., another illness on which no research is done, in order to uphold their more general point. But the discussion became very fragmented—people interrupted each other and the topic meandered briefly around malaria research and rare illnesses—before shifting to a completely different issue. Apparently, for the laypeople, having made a wrong factual statement somehow devalued their more general claim that science is economically driven.
By mobilizing their professional knowledge, the scientists upheld hierarchies and enacted the model that if people had the "right" knowledge, certain ethical questions would no longer be at stake. The laypeople somehow felt caught using "wrong" facts to build their ethical argument and did not feel able to challenge the scientists on a more abstract level. This might explain why they did not take up the second part of the scientist's argument (that malaria research is done only for tourists) to continue the ethical debate. As this example shows, there was a general implicit
feeling that it was necessary to use "the right facts" also when building value-based arguments.

8 All quotes are translated by the authors. L or S indicates a layperson or a scientist; the number identifies the speaker, and m or f stands for male or female.
Closing an ethical discussion by reformulating an initially ethical question into a
scientific problem is the second constellation we want to address. To exemplify this
point, we enter a discussion on obesity treatment. The question was whether obesity
is to be regarded as a medical condition or if—as the laypeople often argued—it could
also be addressed in a broader psychosocial way or by lifestyle changes. The scientists
conceptualized obesity as a severe health problem, linked with an assumed societal
desire for being slim as well as with the “fact” that diets and changes in lifestyle rarely
lead to positive outcomes—“It does not work!” was a recurrent affirmation made by
different scientists. This line of argumentation—an assumed public ideal in collusion with medical facts—renders alternatives to genome research hardly arguable. In the strand of argumentation described in the following, we show the rhetorical transformation of a "lay-sociologist" argument based on personal observations into an argument based on "hard" scientific facts.
One layperson questions the notion that there is a societal desire for slimness
and that being overweight is necessarily a problem: “there is the saying … the fat ones
are jovial. … They do not want to lose weight at all" (L3m). A scientist counters this statement, giving anecdotal evidence for the societal desire to be slim:
If you have a look at how many people buy dietary journals or such things … For
some, it is obviously a want. If you walk by the women’s journal shelf there is a new
diet on every cover page. In the top sellers lists of non-fiction literature there is the
South Beach diet on the first rank and the Atkins diet on the second—and in fact for
months. (S1f)
The scientist makes a sweeping "lay-sociologist" argument, using the number of books and magazines dealing with diets as an indicator of a broad public ideal. She thus left her terrain of expertise, drew on experience assumed to be common, and transformed it into a factual statement.
In the following, a debate starts between scientists and laypeople on what a "good life" should look like and what genome research may contribute to it. This quickly opens up a fundamental ethical dilemma, expressed by a layperson who asks: "does this [whether overweight people are happy or not] depend on genome research?" (L7m).
What is at stake is whether obesity is to be seen as a medical condition and whether genome research has the right to intervene in this issue. Faced with this challenge, the scientist gradually tries to reframe the problem, which she had formerly argued in the framework of her lay-sociologist observations, as a scientific one, thus retreating to her core domain of expertise: "excuse me, but obesity is not, is not only a psychological problem. It is not about if someone is happy and fat. It is unhealthy, even if one is happy with it" (S1f). As this explanation still turns out to be too vague and not convincing enough, she introduces medical "facts" into the discussion:
Fact is that 50% of the people have too many kilos—that is fact. Whether a particu-
lar person thinks she may be too fat or if she really is, that is an individual problem.
Fact is, half of the people would live healthier if they would reduce their weight.
And 20% have to reduce massively. That is fact, and whether they are psychologically happy with it or not, that makes no difference for obesity being unhealthy. (S1f)
After this statement the discussion on this ethical issue stopped instantly and
the debate shifted to another topic.
In this exchange the scientist successfully rules out the dimensions of obesity addressed by the laypeople by redefining the framework in which obesity is seen as a problem. In a first attempt, the scientist had left her own area of expertise and made value-based arguments, claiming that there is a societal desire to be slim. While she was arguing with values and lay observations, the discussion remained open and the laypeople felt able to challenge her arguments with their lay expertise. The scientist then prioritized health rather than happiness as the common value, and once obesity was reframed as a medical problem based on scientific facts, authority was reinstated and closing the discussion became possible. What was formerly negotiable—the meaning and implications of being obese—was rendered indisputable by placing the problem in the realm of "hard" facts.
Both examples show our participants' uneasiness about discussing ethically relevant issues by referring to values "only." Knowing the "right" facts seemed essential for directing the discussion of ethical questions. Both laypeople and scientists seemed to have internalized the model that facts are a better argumentative resource than mere values. The scientists were able to close ethical debates either by mobilizing professional knowledge and the values hidden behind hard facts, or by discursively changing the frame of reference for a problem.

Displacing ethical questions


A second dominant way of dealing with ethical questions was to identify them
while displacing them to the margins or outside the realm of what should and can be
discussed here and now. Three dominant forms of these displacements were per-
formed: (1) displacement to ethical expert systems; (2) linking “ethical problems” to “a
very different kind of research”; (3) shifting ethical issues downstream to the applica-
tion side of the innovation process.

Displacement to ethical expert systems


Throughout the discussions all participants argued that experts, i.e. people and
institutions seen as holding epistemic authority on ethics, need to deal with ethics.
This attitude was performed differently by scientists and laypeople, and it was especially deployed when controversial issues were likely to arise and conflict might have endangered the social relationship (perceived as fragile) between laypeople and scientists. In such moments, a model of society was often invoked that builds on a functional division of labor in which each segment holds specialized expertise. This was explicitly expressed in
a small group discussion among scientists:
[The ethicist] should do that, he actually is an ethicist … because we are scientists
in the field of molecular biology and that's why we do that. We are not expecting ethicists to do our work, and that's why the ethicists should not expect us to do theirs. (S1f)

The laypeople implicitly subscribed to this expert model too. While they called
for a certain self-responsibility of each individual, they also stressed the necessity for
more institutionalized forms of ethical reflection and responsibility. As one layperson
states:
the question of responsibility is mainly a question of avoiding that something bad
happens. And a good way of preventing this would be to constantly question if the
path taken is still the right one. That means that … society takes care that there is
always someone who interferes and questions if the path is the right one. (L1m)
This statement was backed up by the demand for "actually, more money for ethicists" (L1m). For large parts of both groups of participants, displacing ethical problems to the expertise of ethicists was an appropriate way of handling ethical problems in research. The assumption that professional ethics should ideally deal with ethical problems, and that it is the legitimate place in society to do so, was widely shared.
But not only professional ethicists were called for. The displacement of ethical
issues to the regulatory/legal space was also a frequent exit strategy. To illustrate this delegation to experts, we will use a discussion on animal experimentation.
A layperson raised the question whether it is morally acceptable to kill animals
in basic research where there is no explicit benefit but only the “pure purpose to
satisfy the human thirst for knowledge” (L1m). The reaction from the scientists’ side
exemplifies our point:
S1f: But you are not allowed to do that! There are regulations. … You don't get a free ticket to do all animal experiments. Every time we, for example, want to make a certain knock-out mouse with a certain gene, we have to apply for, explain … what kind of function that gene has, and what our presumption is, and why we need that.
L1m: And according to which criteria is that decided then, whether that is ok or not?
S1f: That's a good question, I have no idea. You have to ask the person from the ministry, he knows that. I don't know.
In her immediate answer the scientist argued—and this was largely supported by her colleagues—that she does not need to deal with this ethical question, as legal permission is always required for mouse experiments. The issue is regulated and thus regarded as unproblematic by the scientists. What is legally permissible is equated with "there is no ethical problem," and hence there is no need for discussion. This argument managed to end the ethical debate on the decision criteria for animal experiments almost instantaneously.
This exchange exemplifies the readiness to reduce complex and multilayered
ethical issues to regulatory problems. For the scientists this reduction meant a simpli-
fication of their practice. Their argument is that ethical considerations have to be un-
dertaken by society before political regulations are made. Once regulations are in
place and research practices comply with them, the re-opening of this “black-box of
ethics” seems neither necessary nor useful.


Linking ethical problems with “a very different kind of research”


Displacements of ethical questions were also performed by the scientists in el-
aborating on different “kinds” of research as well as on the importance of the place
where scientific work is carried out. They developed two sets of distinctions that al-
lowed them to position themselves on the “safe side”. Having succeeded in doing so,
they could consequently claim that they were simply not the right addressees for
ethical questions.
The first strategy uses the classical distinction between basic and applied science, which was only rarely contested by the laypeople. The debate analyzed here follows the introductory statement of the invited ethicist, in which he argued that
genome research always needs to be subject to ethical reflection as it justifies its
financing by moral arguments such as contributing to the solution of a societal prob-
lem (obesity). He thus brought up the ethical issue of whether research is funded for
its scientific quality or also for specific kinds of promises of future benefits, and chal-
lenged the scientists to reflect on this question. He provoked the following reaction:
We are writing purely scientific research proposals to get the money; there is no sin-
gle ethical argument … Our research is absolutely not oriented towards applica-
tion. And that [allegation] annoyed me a little bit … That is not our research. (S2m)
In applying the distinction between pure and applied research, the scientist tries to position himself and his research group on the "safe side," describing his work as a quest for pure knowledge devoid of all ethical concerns. This purification process (cutting off the implications for society) even denies one of the central aims of the program through which the project received its funding—to realize innovative products, technologies and services. The laypeople, however, openly questioned the idea that
researchers are only funded on the basis of their scientific excellence and not also
because of possible societal benefits or future applications of genome research.
I believe society has a very concrete idea of what it expects from that. Society wants
to have something from that in the end. And I do not think that you would get the
money if society had no expectation that there is some benefit for society later.
(L1m)
Here, the simple distinction between basic research and application no longer holds. The notion of "excellent science" as the sole criterion for funding is perceived as an internalistic view lacking a wider and more realistic vision of why research receives public money. The laypeople dealt intensively with this line of argumentation, with strong ambivalence. They understood the sidelining of ethical questions by the scientists as
self-protection in order to be able to work, while at the same time they expressed the
strong wish that scientists should engage much more with the ethical issues raised by
their work and deal with them more openly. Another quite powerful distinction was
introduced later in the discussion by the same scientist—the distinction between
what one does within the laboratory and what happens outside in “the real world.”
As a researcher I strongly separate between the things I do in the laboratory, where I
am forced by the law to let nothing escape—to take care that my mutants really
stay in the lab. And as it is not able to do harm, I am in a very different position than
a plant physiologist, who plants corn; or any company, whatever, which releases
genetically modified potatoes, tomatoes or whatever on the open field … But I as a natural, as a laboratory scientist … I am in an entirely different position. (S2m)
As a “laboratory scientist”—he stressed—he is not concerned by ethical con-
siderations as he sees the laboratory as a closed space with impenetrable boundaries
towards society. The epistemic culture of the biological laboratory is perceived as
neutral, bereft of any ethical and social dimensions—the latter simply do not apply to
his practice. Risk comes into play only if laboratory and society intermix. Therefore,
societal, political, and ethical concerns should be kept out of the laboratory in order
not to hinder the creation of pure knowledge. For the scientists, setting up this boundary was a means of refusing confrontation with the ethical questions that may lie in their daily work—ethical issues should have their place elsewhere.

Shifting ethics downstream


The question of the right moment to ask ethical questions is closely related to
the distinction between basic and applied science. The main argument developed in a
number of statements is that ethical consequences of research would have to be
treated at a later stage in the innovation process but not during basic research. Thus,
an argument of temporality is made here: ethical considerations, yes, but not now. Confronted with the laypeople's statement that only the extensive promises of future applications would secure the generous funding of the field, the scientists introduced an even more refined distinction:
within our research aims we have classified it like this: we have direct aims, which
are ours in the laboratory. Our direct aim is to identify genes and to clarify meta-
bolic pathways … Then there are indirect and long-term aims … And the long-term
aims would be to reduce obesity, to reduce arteriosclerosis, heart attacks, cancers
and so forth. But these indirect and long-term aims are not our aims. These are only
societal aims, which are realized by others. We do not do them ourselves. We can-
not even do it. (S1f)
The scientist argues that even if their research is somehow driven by societal
aims, these are not immediately linked to their present work. Somebody else will take
these steps. Future and present are tied together in particular ways in order to end up
on the ethically safe side. The future made present—labeled as “long-term aims”—
serves to justify the funding of their research. The societal goal and desire to cure diseases legitimizes the funding of genomics. But as present knowledge is seen to enter society only in the future, through the work of others, no need is perceived for discussing ethical concerns at present. Hence, only applications are perceived as relevant for ethical considerations. Furthermore, as future applications remain unclear, the facts needed for an ethical evaluation are seen as missing. Thus ethics is pushed
downstream, to a moment when concrete applications can be discussed. Then, ethical
reasoning should be done by the society using this knowledge. A social displacement
of ethics therefore goes along with this temporal argument. It is society which has to
consider ethics once scientific knowledge is turned into applications.
Another way of displacing ethical questions was to switch the level of narratives (from micro to macro narratives or vice versa) to argue that ethics has to be discussed further downstream in the innovation process. Especially micro narratives, which mainly referred to small episodes, individual scientists, elements of history and anecdotal
cases, served to uphold the borders of basic/applied research as well as to demonstrate that upstream discussions of ethical problems are hardly possible.
This can be illustrated by a discussion triggered by the ethicist’s statement that a
debate on responsibility and consequences of research has to start earlier in the pro-
cess of research, a position also supported by the laypeople. This was discussed using
the example of the fission bomb. The ethicist’s argument was transferred by the scien-
tists to the level of individual responsibility and linked with the question of the know-
ledge available to the scientists: “When is in time? Before Niels Bohr learned about the
structure of atoms, is that in time? Or before the [Manhattan] project started?” (S1f).
The ethicist then argued that not every single researcher has to pose this question
individually, but rather society as a whole. However, the scientist insisted on request-
ing a normative statement about the moment when ethical considerations should
come into play: “Where do we have to stop? Niels Bohr, the atom model, because we
would not have been allowed to develop this? Or does it start with the Manhattan
project itself?” (S1f). Several times she insisted on this, culminating in the question: “Is
Darwin responsible for euthanasia?” (S1f).
Starting from the figure of the individual researcher, who is unable to assess the full consequences of his or her research, this argument is extended to science and society as a whole. As neither the individual researcher nor society can have sufficient knowledge at any point in time, nobody can prevent unintended negative consequences of research. Consequently, the ethical debate could be closed by arguing that dealing with the ethical consequences of research should happen further downstream—when concrete applications are foreseeable. Carried to the extreme, the argument of individual responsibility made the very idea of discussing ethics already in basic research seem unrealistic. Furthermore, and connected to the argument above on the division between facts and values, the example shows that as "facts" were seen as unavailable so far upstream in the innovation process, addressing ethical issues was considered insufficiently grounded and thus of little use.

Ethical questions as absent presences


Our third set of observations focuses on the non-said, or on "absent presences" (Law, 2004). The latter term stresses that in numerous situations ethical issues seemed to be on the participants' minds during the Round Table discussions; however, for a complex set of reasons, they either did not explicitly address them or tried to label them differently, calling them social or political issues. The participants discussed the
uneasy relationship of economic interests and scientific choices or complained about
the unsatisfactory political handling of hot technoscientific issues, but avoided
labeling them as ethical. As outlined above, a common agreement was that for discuss-
ing ethics some kind of formal expertise was needed. It seemed possible to debate and
make judgments concerning even quite complex social, political or economic issues,
but ethical issues seemed to hold a place apart. In that sense “ethics” was in many ways
an absent presence—not spoken out loud, not labeled, not explicitly addressed, but
nevertheless omnipresent.
The non-use of the term ethics has important consequences: first, by using dif-
ferent terminologies for debating the same issues it was possible to treat each of
these discussions as an independent single event and to overlook the dense network of ethical arguments that was present. Second, "ethics" somehow remained in a space apart, not tied into or linked with wider societal expert-led debates on ethics.
A second kind of absent presence was created by the way discussions in the peer
group meetings (laypeople or scientists)—where potentially controversial ethical
issues were discussed—did or did not make their way to the plenary. Apparently, the
peer groups were perceived as sufficiently socially robust to allow rather delicate is-
sues to be openly addressed. Both groups developed an implicit understanding of
what issues from the peer group should go into the plenary. While it is not astonishing
that the scientists developed this capacity, as they are used to perceiving themselves
also as an “interest group,” it is remarkable how this took place for the laypeople as
well. They seemed to form an "imagined community" (Anderson, 1983) of non-scientists, a perception strong enough to allow for considerable internal openness. The plenary, on the contrary, was seen by both groups as a much more fragile setting in which people preferred not to address certain ethical questions. Both laypeople and scientists felt more comfortable when discussing among themselves in the small groups, which were experienced as a safer and more private setting.
To give an example, in the peer group discussion on ethics the laypeople identi-
fied animal experimentation as a crucial ethical topic. In this setting some of them had
rather strong positions, e.g., one layperson stated that he is against animal experimen-
tation in general. When presenting the outcomes of this discussion in the plenary, the
issue of animal experimentation remained unspoken. Even when the issue of killing mice for research was brought up by the scientists themselves, the laypeople did not take this opportunity to raise their concerns. Obviously, they had implicitly decided not to share their position in the way they had expressed it in the peer group. The interviews conducted with the participants after the Round Table meetings support this, as one layperson argued: "there [in the plenary] were however two fronts, I would say. And this inhibition was overcome when they [the scientists] were not here anymore. There was a more casual and more direct talking to each other" (L3m, ex-post interview).
Not only the laypeople but also the scientists addressed issues differently in the
peer group. In the scientists’ peer group there was a debate on the status of animals in
society. They identified a paradox in the treatment of animals: society sees no need for ethical discussion when animals are killed for food, yet it does see a need for ethical consideration of animal experiments in research. As one scientist put it:
there were two among [the laypeople] who say they don't eat meat … because they feel pity for animals. But at least they are consistent. I think someone who eats meat is
not allowed to be upset with animal experiments that are done according to the
highest standards. And this I think is an important argument concerning animal
experiments, not only if but also how. (S1f)
This argument was also never made explicit by the scientists in the plenary dis-
cussion, as there was apparently a common agreement that not all discussions from
the peer group should find their way into the plenary discussion. Thus it is important for public engagement settings to reflect on how different spaces are perceived, whether they are understood as "private" or "public," and to find ways of organizing a translation between these spaces.


5. Conclusions
When planning and carrying out our Round Tables on the ethical and social di-
mensions of genome research, we considered a number of previous experiences.
Among others, our argument was that participatory settings often do not allow for a true mutual dialogue between scientists and laypeople, either because of asymmetrical levels of involvement, as in consensus conferences, or because of overly short timeframes of interaction. Further, we organized a setting as far "upstream" as possible considering
the boundary conditions for the project by choosing researchers of a genome project
as partners whose research was quite far from any imminent application—while pos-
sible trajectories of its development and its societal consequences were clearly dis-
cernible. The aim in doing so was to allow for an ethical discussion, which both has a
concrete reference point and is as open as possible to ask basic questions about the
values of genome research.
When analyzing our data focusing on the ethical debates at the Round Table, we
were struck by the multiplicity of mechanisms that often resulted in the closure or
displacement of ethical issues. We titled this article “Unruly Ethics” in order to address
the difficulties in grasping, framing and engaging with ethics. Thus, we decided to put
emphasis on the closing mechanisms of potentially controversial issues and hope that
the analysis of these mechanisms offers new empirical insights and contributes to the
academic debate on empirical ethics by pointing to some complexities of such under-
takings.
Concretely, we draw three main conclusions: first, we conclude that the difficult
and uneasy relation between “facts” and “values” is not only at the heart of the contro-
versy and the boundary work efforts surrounding “empirical ethics.” Considering this
question is also central in understanding the interaction processes taking place in
ethical debates between laypeople and scientists. As described above, all our participants shared or at least accepted the implicit assumption that arguments invoking the authority of (scientific) facts are to be considered superior to "mere values." We even witnessed debates in which the discussion was closed because the initial statement had been made on the basis of "wrong" facts.
This raises two points that seem crucial to consider. First, it gives scientists a "head start" in closing ethical debates they feel uneasy with, as they are usually more adept at building the authority of factual statements than laypeople are. They would even feel comfortable leaving their terrain and making rather broad "factual" statements about society more generally, thus performing as "lay sociologists." Second, it shows a general uneasiness at work when questions of values are discussed without easy recourse to the "bedrock" of hard facts. This relates to the
discussion on upstream engagement, and it might be productive to consider whether
or not our experiment was really situated upstream. Indeed, it was upstream if one
takes the degree of transformation of scientific knowledge into applications and pro-
ducts as an indicator. However, this did not mean that basic values connected to the
scientific endeavor were discussed, because of the deeply rooted cultural belief that
scientific facts are stronger than any other type of value-based knowledge and experience. When facts were seen as missing, the ethical debate was displaced rather than carried out on the basis of values. In that sense it seems an illusion to try to solve this
dilemma by "simply" moving further upstream. Rather, the relation to scientific "facts" has to be actively put into question within the engagement exercise itself.
Secondly, paying attention to the social dynamics both within the groups of lay-
people and scientists as well as between them, we observed that what may be opened
up as an ethical issue strongly depends on the actual social context. At times rather
strong ethical statements and feelings were expressed in the small group discussions
where laypeople and scientists were among themselves. However, these statements
were rarely carried into the plenary, even though the explicit aim of the groups was to
develop input for the plenary discussion. Thus, the "space of trust" envisaged in designing the Round Tables as a long-term interaction could, to a certain degree, not be developed. Rather, what we observed was a process of "mutual taming," in which issues that were potentially controversial for the respective others, or that could be read as too direct a critique, were not brought to the table, in order not to endanger the stability of the social setting. As the laypeople did not feel immediately
affected by the ethical implications of the project at hand, they were reluctant to risk
an open conflict. This could be interpreted as a general tendency in the Austrian cul-
tural context to be compliant in settings perceived as hierarchical.
Thirdly, it became visible in our discussions that strongly rooted linear models of
innovation were invoked to make the point that ethical issues were not to be discussed
“here and now.” A common displacement strategy was to shift the ethical discussion
“downstream”: to an imagined place where more facts would be available
to assess possible consequences. This points to the conclusion that even though
an upstream setting may offer more possibilities to discuss basic value questions,
doing so requires addressing the relation between the upstream design and the
participants’ imaginations of science and the innovation process.
These three points relate strongly to the discussions on empirical ethics and upstream
engagement. In our setting, micro-versions of the fact–value conflict in
empirical ethics, and of the upstream/downstream debates concerning participatory
design, often seemed to be played out. As a practical conclusion and a possible
recommendation for similar efforts, we would like to make two suggestions.
To start with, it seems important to challenge some of the most basic
assumptions about the development of technoscience in its relation to society.
As long as linear innovation models remain the basis of discussion, as long as scientific
knowledge is automatically accepted as superior to any value position, and as long as
simple regulation is longed for as the central aim, participatory processes concerning
ethical issues may quickly reach their limits.
Finally, we would like to make a plea for taking the performative dimension of
staging a participatory design more seriously. Reflexivity about the visions of citizenship,
politics and related issues embedded in a participatory design is often called
for, but perhaps not as often taken seriously. Moreover, researchers and practitioners
designing participatory settings on ethical issues need to reflect on their own position
regarding the relation between facts and values, as well as on the limits and possibilities of
upstream engagement. Beyond reflection, it appears central to make these assumptions
explicit to the participants as well, in order to facilitate a reflexive discussion space.

Department of Social Studies of Science | University of Vienna 2009




 

20 Felt/Fochler/Müller/Strassnig: Unruly Ethics

References
Anderson, B. (1983) Imagined Communities: Reflections on the Origin and Spread of Nation-
alism. London: Verso.
Ashcroft, R.E. (2003) “Constructing Empirical Bioethics: Foucauldian Reflections on the Empirical
Turn in Bioethics Research,” Health Care Analysis 11(1): 3–13.
Beck, U. (1997) The Reinvention of Politics: Rethinking Modernity in the Global Social Order.
Cambridge: Polity Press.
Bodmer, W. (1985) The Public Understanding of Science. London: Royal Society.
Bogner, A. and Menz, W. (2005) “Bioethical Controversies and Policy Advice: The Production of
Ethical Expertise and its Role in the Substantiation of Political Decision-making,” in S.
Maasen and P. Weingart (eds) Democratization of Expertise: Exploring Novel Forms of Sci-
entific Advice in Political Decision-making, pp. 21–40. Dordrecht: Springer.
Borry, P., Schotsmans, P. and Dierickx, K. (2005) “The Birth of the Empirical Turn in Bioethics,”
Bioethics 19(1): 49–71.
Burgess, M.M. (2004) “Public Consultation in Ethics: An Experiment in Representative Ethics,”
Journal of Bioethical Inquiry 1(1): 4–13.
Crosthwaite, J. (1995) “Moral Expertise: A Problem in the Professional Ethics of Professional
Ethicists,” Bioethics 9(5): 361–379.
Epstein, S. (1996) Impure Science: AIDS, Activism, and the Politics of Knowledge. Berkeley: Univer-
sity of California Press.
European Commission (EC). (2001) European Governance: A White Paper. COM(2001) 428 final,
25 July 2001. Brussels: European Commission.
European Commission (EC). (2002a) Science and Society—Action Plan. Luxembourg: Office for
Official Publications of the European Communities.
European Commission (EC). (2002b) Towards a Reinforced Culture of Consultation and Dia-
logue: General Principles and Minimum Standards for Consultation of Interested Parties
by the Commission. COM(2002) 704 final, 11 December 2002. Brussels: European Com-
mission.
European Commission (EC). (2005) Science in Society Forum 2005: Setting the Stage. Brussels:
European Commission.
Evans, J.H. (2002) Playing God? Human Genetic Engineering and the Rationalization of Public
Bioethical Debate. Chicago and London: University of Chicago Press.
Felt, U. (2003) “Sciences, Science Studies and their Publics: Speculating on Future Relations,” in B.
Joerges and H. Nowotny (eds) Social Studies of Science and Technology: Looking Back
Ahead, pp. 11–31. Dordrecht: Kluwer Academic Publishers.
Felt, U., Fochler, M. and Müller, A. (2003) Sozial robuste Wissenspolitik: Analyse des Wandels von
dialogisch orientierten Interaktionen zwischen Wissenschaft, Politik und Öffentlichkeit.
Gutachten für das Büro für Technikfolgenabschätzung (TAB) beim Deutschen
Bundestag.
Gerold, R. and Liberatore, A. (2001) Democratising Expertise and Establishing Scientific Reference
Systems: Report of the Working Group 1b for the White Paper on Governance. Brussels:
European Commission.
Giddens, A. (1990) The Consequences of Modernity. Cambridge: Polity Press.
Gieryn, T. (1999) Cultural Boundaries of Science: Credibility on the Line. Chicago: University of
Chicago Press.
Haimes, E. (2002) “What can the Social Sciences Contribute to the Study of Ethics? Theoretical,
Empirical and Substantive Considerations,” Bioethics 16(2): 89–113.
Hedgecoe, A.M. (2004) “Critical Bioethics: Beyond the Social Science Critique of Applied Ethics,”
Bioethics 18(2): 120–43.

Irwin, A. (2001) “Constructing the Scientific Citizen: Science and Democracy in the Biosciences,”
Public Understanding of Science 10(1): 1–18.
Irwin, A. (2006) “The Politics of Talk: Coming to Terms with the ‘New Scientific Governance,’” Social
Studies of Science 36(2): 299–320.
Irwin, A. and Michael, M. (2003) Science, Social Theory and Public Knowledge. Maidenhead:
Open University Press.
Irwin, A. and Wynne, B., eds (1996) Misunderstanding Science? The Public Reconstruction of
Science and Technology. Cambridge: Cambridge University Press.
Jasanoff, S. (1997) “Civilization and Madness: The Great BSE Scare of 1996,” Public Understand-
ing of Science 6: 221–32.
Jasanoff, S. (2005) Designs on Nature: Science and Democracy in Europe and the United States.
Princeton and Oxford: Princeton University Press.
Joss, S. and Bellucci, S., eds (2002) Participatory Technology Assessment: European Perspectives.
London: Centre for the Study of Democracy (CSD) at University of Westminster in associa-
tion with TA Swiss.
Knorr-Cetina, K. (1981) The Manufacture of Knowledge: An Essay on the Constructivist and Con-
textual Nature of Science. Oxford: Pergamon Press.
Latour, B. (1991) “Technology is Society Made Durable,” in J. Law (ed.) A Sociology of Monsters:
Essays on Power, Technology and Domination, pp. 103–31. London: Routledge.
Latour, B. (1992) “Where are the Missing Masses? Sociology of a Few Mundane Artefacts,” in W.
Bijker and J. Law (eds) Shaping Technology, Building Society: Studies in Sociotechnical
Change, pp. 225–58. Cambridge, MA: MIT Press.
Law, J. (2004) After Method: Mess in Social Science Research. Abingdon and New York: Rout-
ledge.
Levitt, M. (2003) “Public Consultation in Bioethics. What’s the Point in Asking the Public When
They Have Neither Scientific nor Ethical Expertise?” Health Care Analysis 11(1): 15–25.
Levitt, M. and Häyry, M. (2005) “Overcritical, Overfriendly? A Dialogue between a Sociologist and
a Philosopher on Genetic Technology and its Applications,” Medicine, Health Care and
Philosophy 8: 377–83.
Limoges, C. (1993) “Expert Knowledge and Decision-making in Controversy Contexts,” Public
Understanding of Science 2(4): 417–26.
Lopez, J. (2004) “How Sociology Can Save Bioethics … Maybe,” Sociology of Health and Illness
26(7): 875–96.
Michael, M. and Brown, N. (2005) “Scientific Citizenships: Self-representations of Xenotransplan-
tation’s Publics,” Science as Culture 14(1): 39–57.
Nowotny, H., Scott, P. and Gibbons, M. (2001) Re-Thinking Science: Knowledge and the Public in
an Age of Uncertainty. Cambridge: Polity Press.
Pinch, T.J. and Bijker, W.E. (1984) “The Social Construction of Facts and Artifacts: Or How the
Sociology of Science and the Sociology of Technology Might Benefit Each Other,” in W.E.
Bijker, T.P. Hughes and T.J. Pinch (eds) The Social Construction of Technological Systems,
pp. 17–50. Cambridge, MA: MIT Press.
Rose, N. (1999) Powers of Freedom: Reframing Political Thought. Cambridge: Cambridge Univer-
sity Press.
Salter, B. and Salter, C. (2007) “Bioethics and the Global Moral Economy: The Cultural Politics of
Human Embryonic Stem Cell Science,” Science, Technology & Human Values 32(5): 554–
581.
Tallacchini, M. (2006) “Politics of Ethics and EU Citizenship,” Politeia XXII(83): 101–113.
Wilsdon, J. and Willis, R. (2004) See-through Science: Why Public Engagement Needs to Move
Upstream. London: Demos.
Wilsdon, J., Wynne, B. and Stilgoe, J. (2005) The Public Value of Science: Or How to Ensure that
Science Really Matters. London: Demos.


Wynne, B. (1992) “Misunderstood Misunderstanding: Social Identities and Public Uptake of Sci-
ence,” Public Understanding of Science 1(3): 281–304.
Wynne, B. (1996) “May the Sheep Safely Graze? A Reflexive View of the Expert-Lay Knowledge
Divide,” in B. Szerszynski, S. Lash and B.E. Wynne (eds) Risk, Environment and Modernity:
Towards a New Ecology, pp. 44–83. London: SAGE.
