
RESEARCH NOTE

ABSTRACT Perhaps because there is no precise definition of scientific originality, studies of peer review in scientific journals have shown that reviews can be arbitrary
or ineffective. In this Note, a potential reference point in evaluating originality that
may be useful in analyzing the problem is presented and tested – a typology of
scientific originality based on a structural analysis of the scientific paper. As reported
in a paper, each of three elements of scientific work (hypothesis, methods and
results) either has been previously reported in the scientific literature, or is newly
reported. Scientific originality can then be defined as a permutation of new and old
information: eight types of originality, ranging from all three elements being
previously reported, to all elements being new. To determine if the typology has face
validity, highly experienced scientists were asked by mail survey to use the typology
in two exercises: rating the eight originality types, and assigning an originality type
to highly-cited articles the scientists had written. Of 301 scientists, 206 (68%)
responded: between them, they had authored a total of 230 articles. The eight
originality types were rated by 84% of the scientists, and an originality type was
assigned to 209 of the 230 articles: the most frequent type was new hypothesis/
previously-reported methods/new results. To see if scientific journals might vary in
the type of originality they prefer, the articles were divided into two equal groups,
by the age of the journals publishing them: younger (0–29 years, n = 106) or older
(30–185 years, n = 103). The distribution of originality types was virtually identical
for the two groups. The results of this study indicate that the typology merits further
study as a means of investigating or evaluating scientific originality.

A Measure of Originality:
The Elements of Science
Lynn Dirk

Within science, ‘the gap between the enormous emphasis placed upon
original discovery and the great difficulty a good many scientists experi-
ence in making one’,1 is made wider by the lack of either a precise
definition or an objective measure of ‘originality’. Indeed, a long-time
medical editor who has closely studied journal peer review (which involves
evaluating originality) commented that there is not much ‘point of asking
[referees] to classify a paper’s originality or scientific credibility into one of
five categories unless the editor has first defined what each of these is’.2

Social Studies of Science 29/5 (October 1999), 765–76


© SSS and SAGE Publications (London, Thousand Oaks CA, New Delhi)
[0306-3127(199910)29:5;765–76;010273]

Although the process of evaluating scientific work is called ‘peer review’, it is a conundrum that originators, by definition, have no peers.3 Such
problems related to originality and its evaluation through peer review can
adversely affect science, as both a body of knowledge and a community of
individuals.4 For example, the recognition of valuable new ideas can be
delayed for long periods, or can be thwarted in other ways. One Nobel
Laureate described the following experience:
The original paper describing [our Nobel-awarded discovery] was re-
jected by Science and initially rejected by The Journal of Clinical Investiga-
tion. A compromise with the editors eventually resulted in acceptance of
the paper but only after we omitted [a term for the newly discovered
substance] from the title and documented our conclusion that [the new
substance] met the definition of [a well-known substance] given in a
standard textbook of bacteriology and immunity.5

Sadly, some originators can become so dispirited by reactions to their new ideas that it affects them personally in tragic ways.6
Thus there is a double jeopardy: the difficulty in being original is made
more difficult by the problem of evaluating originality. Some reference
point in evaluating scientific originality might therefore be helpful. One
analysis of originality that has been proposed uses an ‘underutilized
resource in empirical examinations of science’,7 the scientific paper. That
analytic approach, which is also used in this Note, is based on the way
sections of the scientific paper (Introduction, Methods and Results) de-
scribe the corresponding elements of scientific work (hypothesis, methods
and results). In a particular paper, each element is presented either as
having been previously reported in the scientific literature, or as being
newly reported. Portraying originality as a permutation of previously- or
newly-reported elements produces a typology of eight types of originality,
ranging from all three elements being previously reported (P-P-P) to all
three elements being new (N-N-N).8 In between would be, for example, a
work with a newly-reported hypothesis, previously-reported methods, and
newly-reported results (N-P-N). Interestingly, the interaction between
newly- and previously-reported elements in scientific work shows how the
two critical, yet opposite, processes in science – replicability and originality
– are parts in one sum. Each may rarely, if ever, be expressed in a ‘pure’
form (all elements either previously- or newly-reported): like a figure/
ground image, the original and the replicable together compose science,
and always border each other; attention to one makes the other a hidden
background. This may explain how both originality and replicability can
seem to be lacking in the scientific literature.9
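The permutation logic described above can be enumerated mechanically. As an illustrative sketch (the function name and ordering are mine, not part of the published typology), a few lines of Python generate all eight P/N combinations:

```python
from itertools import product

def originality_types():
    """Enumerate the 8 originality types: each of hypothesis, methods,
    and results is previously reported (P) or newly reported (N)."""
    return ["-".join(combo) for combo in product("PN", repeat=3)]

types = originality_types()
print(len(types))           # 8
print(types[0], types[-1])  # P-P-P N-N-N
```

With three binary elements the typology is necessarily exhaustive: every paper falls into exactly one of the 2³ = 8 types.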
Aside from problems due to the nature of originality itself, non-
scientific factors can also influence which scientific work is published.10
For example, an older, high-circulation journal may prefer a type of
original work that engenders less risk (that is, work with only one or two
new elements), and such a journal will have first access to those papers
because scientists likely prefer that their work reach the greatest number of
colleagues. Therefore, newer journals may be more likely to receive and
publish work that is riskier (for example, work with two or three new
elements), as well as work with no new elements (replication).
The best judges of whether this originality typology accurately reflects
scientific work are the persons who are continuously involved in it: scien-
tists. Therefore, to determine if the typology has face validity, both the
willingness and the ability of scientists to work with the typology were
tested in an international survey. Highly experienced scientists were asked
to use the originality typology in two exercises: rating the eight originality types, and assessing published papers with it. Their assessments of the
papers were also used to examine whether journals vary by the originality
type they publish.

Methods
To simultaneously obtain a sample of experienced scientists and of pub-
lished papers proven to have value to other scientists, I selected authors
and papers featured as ‘Citation Classics’ in Current Contents, Life Sciences
over a five-year period.11 The survey was approved by the Institutional
Review Board of the University of Florida, and was conducted in 1995.
The survey followed a method that has been shown to produce a high response rate,12 which included sending the scientists two reprints about peer review to encourage their participation and to thank them.13 By means of a three-page questionnaire (see Appendix, below), the scientists were asked to use
the originality typology in two exercises: to rate the types, and to assign one
to their papers.
The rating exercise was presented first, so as to familiarize the scien-
tists with each type before they assigned one to their papers. Each of the
eight types was rated for its scientific value on a scale from 1 (least
valuable) to 5. Rating was used instead of ranking because, ideally, each
type could be equally valuable in contributing to science. For example, the
type in which all three elements have been previously published would be
‘replication’, which lacks originality but is critical to science as its self-
correcting mechanism. For the second exercise (assigning an originality
type to his or her own paper), to make the exercise more realistic, the
scientists could indicate that an element in the paper had been previously
reported in part, as well as previously- or newly-reported. However, in
analyzing the data, all responses of ‘previously-reported in part’ were
treated as ‘previously reported’, to highlight originality.
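The recoding step above can be made concrete. In a hypothetical sketch (the function and data layout are mine; the response labels follow the questionnaire's 'established/modified/new' options), collapsing 'modified' into 'previously reported' looks like this:

```python
def originality_type(hypothesis, methods, results):
    """Collapse three questionnaire responses ('established', 'modified',
    or 'new') into a P/N originality type; 'modified' (previously
    reported in part) is treated as previously reported (P)."""
    code = lambda response: "N" if response == "new" else "P"
    return "-".join(code(r) for r in (hypothesis, methods, results))

# A paper with a new hypothesis, modified methods, and new results
# is classed as N-P-N, the most frequent type in the sample.
print(originality_type("new", "modified", "new"))  # N-P-N
```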
To examine whether journals varied by the type of originality they
published, the age of journals at the time they published the papers was
recorded. Journal age was used because it is very objective and easily
determined; it may also, however, reflect the integral relationship between
journals and the development of disciplines over time.14 Journal age was
calculated from information in a university on-line library system. If a
journal’s title changed over time, the first year of the earliest title was
considered to be the first year of the journal. This was done because, even
though the title changed, the journal itself would be familiar to readers and
authors at that time; this would not be the case for a totally new journal.
At the end of the questionnaire, the scientists were asked: to report the
extent of their experience in science and in peer review; if they thought the
typology might be helpful in peer review; and to comment on any aspect of
the typology or scientific journal peer review.
The data were not subjected to parametric statistical testing because
the study was designed primarily to determine if the typology has face
validity. Also, as a result of the way in which the samples of scientists and
papers were selected, they could not be considered representative of even
highly-cited authors or papers: the papers featured in the ‘Citation Classic’
column were self-selected; they were featured because the authors could be
reached by the Current Contents editor, and were willing to write an essay
about their papers.15

Results
Of 301 authors surveyed, 68% (n = 206) responded, and some reported
on more than one paper (n = 230). The respondents were highly experi-
enced, not only in doing science but also in evaluating it: they were
primary authors of an average of 127 peer-reviewed papers, and were very

TABLE 1
Demographics of Experienced Scientists Sampled for a Survey on Scientific Originality

                                            Frequency in Sample
                                               %       n
Total respondents                             100     206
Country of residence^a
  English-speaking (including America)         77     159
  Non-English-speaking                         23      47
Professional position^b
  Professor/professor emeritus                 73     150
  Assistant/associate professor                 7      14
  Other^c                                      14      28
Type of peer review experience^d
  Peer reviewer                                96     198
  Editorial board member                       84     173
  Editor                                       19      40

a. Non-American English-speaking countries represented were England (12%, n = 25), Canada (7%, n = 14), and Australia (1%, n = 3). Non-English-speaking countries or regions represented were: Belgium, Czechoslovakia, France, Germany, Israel, Italy, Japan, Mexico, the Netherlands, Russia, Scandinavia, South America, Spain, Switzerland, and the Ukraine (all < 10 each).
b. No data for 14.
c. For example, retired or a corporate scientist.
d. Extent of experience (averages): Respondents served as peer reviewers for 12 journals over 25 years; as editorial board members for 14 journals over 12 years; and as editors for 1.4 journals over 9 years.

TABLE 2
Ratings of 8 Types of Scientific Originality by Scientists Compared with the Frequency
of Originality Types Assigned by the Same Scientists to their Highly-Cited Papers

Originality     Authors’ Ratings of     Frequency of Originality Type
Type^a          Originality Type^b      among Papers (n = 209)
                Median     Mode            %      n

N-P-N             5          3             42     87
N-N-N             5          5             15     30
N-P-P             4          5              4      9
P-N-N             3          4             11     22
P-P-N             3          2             11     22
N-N-P             3          3              1      3
P-N-P             2          3              3      7
P-P-P             1          1             13     29

a. The hypothesis, methods, and results reported in a paper are characterized, respectively, as
previously reported in the scientific literature (P) or newly reported (N).
b. On a 5-point scale, 5 being most valuable.

experienced as peer reviewers (Table 1). When asked whether the typology
might help to make journal peer review fairer, 92% (n = 189) responded:
16% (n = 33) said no, and the number who were uncertain (40%, n =
82) was similar to the number who said yes (36%, n = 74).
The eight originality types were rated by 84% of the respondents (173
of 206) (Table 2). Ratings were partially completed by 7% (n = 15), and
not provided at all by 9% (n = 17). Some scientists were not able to grasp
all the types, as indicated by question marks or comments. Others,
however, used the ‘grammar’ of the typology in their comments discussing
peer review, the following being the most extensive examples: ‘Established
hypotheses need new methods (at least) and new results to lead to
originality, but new methods can expand our understanding of established
hypothesis’; ‘New valid methods are always original and can lead to new
hypotheses/results’; ‘New results (if valid) are original and seminal to new
hypotheses’; ‘Established hypotheses by definition need some degree of
unoriginal confirmation’.
A complete originality type (all three elements included) was provided
for 91% of the papers (209 of 230). The most frequent originality type was
N-P-N (42%) (Table 2). Of those who did not provide a complete
originality type (n = 21), almost half (n = 12) were authors of ‘review’
papers, who thought that none of the types applied to reviews. The sample
of papers, however, contained 43 reviews, and the authors of most (72%,
n = 31) assigned a complete originality type to their reviews. Others who
had problems, similar to the review authors, often stated explicitly that the
paper was a ‘method’ or ‘theory’ paper.
When the papers were divided into two groups of equal number by
‘younger’ and ‘older’ age of the journals that published them,16 the
distribution of originality types was virtually identical for both groups

TABLE 3
Frequency of Originality Types in a Sample of Highly-Cited Papers Divided into Two
Equally Sized Groups by Age of Journal Publishing the Papers

                       Frequency of Originality Types^b
                    Journals 0 to 29           Journals 30 to 185
                    Years of Age (n = 106)     Years of Age (n = 103)
Originality Type^a     %      n                   %       n

N-P-P                  3      3                   6       6
P-P-N                 10     11                  10.5    11
P-P-P                 15     16                  12.5    13
N-P-N                 42     44                  42      43
N-N-N                 16     17                  12.5    13
P-N-N                  9     10                  11.5    12
P-N-P + N-N-P          5      5 (4 + 1)           5       5 (3 + 2)

a. The hypothesis, methods, and results reported in a paper are characterized, respectively, as
previously reported in the scientific literature (P) or newly reported (N). The originality types
were assigned to the papers by their authors.
b. The distribution of the originality types is arranged as a type of curve, types with previously
reported methods and newly reported methods falling on either side of the most frequent
type.

(Table 3).17 Further, the distribution of originality types could be arranged as a type of curve, with types containing previously-reported methods
falling on one side of the most frequent type (N-P-N), and types contain-
ing new methods on the other side (Table 3).
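The similarity of the two distributions can be verified directly from the counts in Table 3. The following sketch (counts transcribed from the table; the combined P-N-P/N-N-P row follows the table's own grouping) recomputes the percentages for each journal-age group:

```python
# Originality-type counts from Table 3: younger journals (0-29 years,
# n = 106) versus older journals (30-185 years, n = 103).
younger = {"N-P-P": 3, "P-P-N": 11, "P-P-P": 16, "N-P-N": 44,
           "N-N-N": 17, "P-N-N": 10, "P-N-P/N-N-P": 5}
older   = {"N-P-P": 6, "P-P-N": 11, "P-P-P": 13, "N-P-N": 43,
           "N-N-N": 13, "P-N-N": 12, "P-N-P/N-N-P": 5}

def percentages(counts):
    """Convert a dict of counts to percentages of the group total."""
    total = sum(counts.values())
    return {t: round(100 * n / total, 1) for t, n in counts.items()}

for t in younger:  # the two columns track each other closely
    print(t, percentages(younger)[t], percentages(older)[t])
```

For the modal type N-P-N the two groups differ by well under one percentage point (41.5% versus 41.7%), consistent with the 'virtually identical' description in the text.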

Discussion
The results of this study indicate that the typology may have face validity. A
next step would be determining how typologic assessment should or can be
done. In the present study, it was done by authors who simply indicated
with a check if an element were ‘new’ or ‘previously reported’ (Appendix,
item 2). This could also be done by peers. Indeed, studies could be done to
see if typologic assessment would be useful as part of peer review. For
example, using a journal’s editorial files, reviews of submissions with and
without the use of typologic assessment could be compared. A better
method might be to require a brief description of what made the element
new or not. For example, during review of the present Note, one referee
assessed it spontaneously with the typology, and used brief descriptions as
follows: P (whether originality can be measured)-N (typology)-N; the
study could also be assessed as N (whether originality can be represented
as a typology)-P (survey)-N. This illustrates the different ways in which a
study can be interpreted, and how the typology may provide a way of
articulating those differences. Another possible study of the typology, using
papers submitted to journals, would be to ask authors to provide their own
typologic assessments of their papers (with brief descriptions), and then to
compare those with referees’ typologic assessments.

An alternative method of typologic assessment would be through content analysis by non-experts. That is, a scientific paper must state –
explicitly – how it relates to what has been previously published in the
scientific literature,18 and those statements could be used to determine the
originality type. This may appear to be a ‘citation analysis’; but the
information of interest is in the claims per se about what is or is not new.
Such a typologic content analysis has been carried out on five papers that
were submitted several times to journals, and never published.19 The
pertinent statements were extracted by a non-expert and compared with
authors’ assessments. The statements did seem to reflect the authors’
assessments, but the sample was too small to draw any definite conclu-
sions. Also, citations were not always used in conjunction with the state-
ments. An advantage of typologic assessment over citation analysis, and its
difference from that method, is that the typology shifts the focus from
scientists to the science itself. Also, the typology may provide a means of
articulating how transfer across disciplines occurs, and how disciplines or
subdisciplines develop around modifications in the elements of research.
Despite possible concerns about the reliability of using claims that
authors make in papers, ‘whatever the researcher’s initial ignorance of the
literature may be, it should have been corrected by colleagues and expert
reviewers by the time [the] paper is published, and probably much of it is
corrected by these contacts’.20 Judging from the quotation by the Nobel
Laureate I used earlier, however, a different problem occurs with new
findings, which constitute the information of greatest interest: if there is
an error, it will likely be a conservative one – and this error will affect any
typologic assessment based solely on content analysis by a non-author.
In the present study, authors had difficulty assessing those papers
characterized as ‘reviews’, or as ‘theory’ or ‘method’ papers (9%, n = 21).
This may indicate how closely science is identified with the controlled
experiment. A review, however, is itself an established method in science –
an experiment in synthesis and argumentation, if you will. Thus another
area of study with the typology could be in classifying types of scientific
work. For example, patterns such as P-N-P and N-N-P (that is, patterns
with a new method but previously-reported results), which seem contra-
dictory, could represent studies such as computer modelling, where pre-
viously-reported results are used to prove the accuracy of a new model.
Indeed, the typology may be flexible enough to capture the variety of
scientific work,21 and may help to mitigate tension between preferences
(for example, for theory or methods) by showing how each contributes to
progress. The typology may also fulfil the requirement for multidimension-
ality of a classification system: arbitrariness and ambiguity are low, the
elements are arranged as mutually interacting, and a continuous scale is
converted into a discrete one.22
If ‘theory’, as reported in the Discussion section of a paper, were
added to the typology as a fourth element, this might prevent the problems
that some authors of reviews had in assessing their papers. With the
additional element, they may have considered their reviews to be a type
such as P-P-P-N. Further, adding theory/Discussion as an element to the typology provides a way of distinguishing between ‘theory as hypothesis’
and ‘theory as a general set of principles’, which may be critical to
developing ‘a theory of scientific progress that is historically sound or
philosophically adequate’.23 A practical benefit of distinguishing between
hypothesis and theory is that a hypothesis is very concrete, and its relative
newness would be much easier to confirm than that of a complex theoret-
ical system. Therefore, at least in initial studies, simply using the hypo-
thesis as articulated in a paper’s Introduction, and disregarding the theo-
retical relations described in its Discussion, would likely make studies with
the typology more reliable and easier to perform.
The structural analysis of papers can result in objective supporting
evidence for claims put forward in the philosophy and sociology of
science, but only if we expand our currently limited vision of what
constitutes a structural analysis of scientific papers.24

Typologic assessment is based on the ‘IMRAD’ structure of the scientific paper (Introduction, Methods, Results and Discussion), which very successfully forces the highly variable experience that is scientific work into a
tightly controlled presentation that creates a link between old and new
information.25 While this seemingly unnatural moulding of scientific work
by the scientific paper has moved some to consider it fraudulent,26 it has
also been characterized as the essence of science:
The technique of the scientific paper, though simple and probably acci-
dental in its origin, was revolutionary in its effects. The paper became not
just a means of communicating a discovery, but, in quite a strong sense, it
was the discovery itself.27

An alternative theoretical perspective places the scientific paper somewhere between those two extremes, as an essential step in a process that
evolved out of the need for individuals, because of problems in human
perception, cognition, and expression,28 to reach a consensus about what
they perceive and how they interpret it. Scientists, who are in a continual
loop of thinking and observing, would be extremely vulnerable to those
problems, and would need some way to resolve them. From this per-
spective, science is a process of continuous feedback that rigidly constrains
both the form and the content of information. Content is constrained
through the discipline of specialization. Form is constrained through the
following prescribed sequence: experimentation by the scientific method
(feedback between environment and individual); documentation of that
experiment in the scientific paper (self-feedback); evaluation of the in-
formation through journal peer review (feedback between originator and
informed critics); and then, hopefully, publication and citation (feedback
between individual and community). Interestingly, peer review may be the
least structured step in the sequence, which may account for the problems
associated with it.
According to this theoretical perspective, the experiment and the
scientific paper are integrated parts of a complex system. Thirty years ago,
the process was described by an experienced natural scientist in these words:

One of the major purposes of the whole scientific enterprise is to draw from the confused, vague, inchoate ‘stuff of experience’ a few precise,
clearly defined, ‘objective’ (that is, if I may use the term, ‘consensible’)
concepts, principles, or observations. It is essential that scientific work
should be ‘written up’ in full, with all the details of technique, inter-
pretation and logical limitation necessary to persuade the reader of the
truth of the conclusion – or at least sufficient for [a scientist] to repeat the
experiment or calculation.29

In writing scientific papers, however, as described above and elsewhere, there occurs another complex process, one of interpretation and persuasion.30 Typologic assessment, by focusing on the elements of scientific
work, may provide a means of extracting from the scientific paper the
information that most faithfully represents the work on which it is based,
and how that work relates to the body of scientific knowledge.

Appendix
The text of the questionnaire used in this study is presented below. When a question presented multiple-choice answers, the options offered to the respondent are shown in parentheses. The present Note does not include responses from some questions that were designed to obtain data on whether originality types may vary by difficulty in getting accepted (items 3 to 9). Almost all these articles (90%) were published by the first journals to which they were submitted; therefore, this group of articles was probably not a good sample for studying difficulty in getting published. Those questions, however, may be worth exploring in a separate study. Bracketed text in italics represents explanatory notes; multiple-choice answers provided in parentheses were formatted with blanks for respondents to check the options they chose.

The Questionnaire
A research article can describe theory, methods, and results that vary by whether each is
established, modified, or new as defined in the following way:
established = previously reported in full and confirmed in the scientific literature.
modified = previously reported in part in the scientific literature.
new = not previously reported in full in the scientific literature.
1. From two perspectives, your own and that of the scientific journal peer review system as
you perceive it, please rate each of the following eight permutations of scientific
originality according to its value in contributing to science from not very valuable (1) to
very valuable (5). [The 8 permutations were then listed, and 2 blanks were provided for the
ratings.]

The remaining questions pertain to the following article of yours that has become a Citation
Classic: [Citation of the pertinent Citation Classic was provided.]

2. Indicate below with a check whether the hypothesis, methods, and results reported in this
Citation Classic article were established, modified – slightly or substantially – or new, as
defined above.
3. Please check the direction of the results in this article (positive, negative, other).
4. Please indicate how many different journals the manuscript was submitted to and check
whether the number is exact or approximate.

5. Please indicate with a check how the manuscript was accepted (accepted as is; minor
revision required; major revision required).
6. For each of the following types of rejection, please check if any are applicable to your
manuscript and, if possible, the name of the journal(s) associated with that form of
rejection (rejected by a journal after it requested revisions and you complied; rejected
with reviewers’ comments; rejected without reviewers’ comments).
7. Please indicate with a check the amount of revision the manuscript underwent from the
time it was first submitted until it was accepted (none; minor; substantive revision one
time, several times, innumerable times).
8. When you submitted the manuscript, did you start with (the most suitable journal; the
most prestigious journal; the journal most likely to accept the paper; other, for example,
a new journal)?
9. Characterize the relationship between the discipline most relevant to your Citation
Classic article and the discipline most relevant to the journal that published it (same
primary discipline; same subdiscipline; different primary disciplines; different
subdisciplines; other article/journal discipline relationship, for example, your article was
most pertinent to a primary discipline and the journal that published it represented a
subdiscipline of a different primary discipline [primary/different subdiscipline]).
10. Please use the space below to provide any other pertinent information, or to comment
about the questions, the survey, scientific originality, or journal peer review.
11. Do you think objective criteria for judging scientific originality as presented above would
help make journal peer review fairer? (yes; no; uncertain).
12. Were you the corresponding author for the article in question above? (yes; no).
13. Please indicate your current title/position (assist. prof.; assoc. prof.; professor; other).
14. Number of peer reviewed articles you have been primary author of.
15. Number of journals you subscribe to or read frequently.
16. Indicate your cumulative experience as a peer reviewer for scientific journals, as a board
member, and as an editor (number of cumulative years; number of journals).

Notes
I am grateful for the editor’s and reviewers’ critiques, which resulted in significant
improvements, and for the copy editor’s close scrutiny of the paper, which led to an
important correction. This work was inspired by the American Medical Association’s
Congresses on Peer Review. Material support was provided by the Department of
Anesthesiology, University of Florida College of Medicine. For their support and
comments, special thanks to Don Caton, Marilyn Southwick Fregly and Robert A. Hatch
(University of Florida), and Fred Grinnell (University of Texas). Invaluable assistance was
provided by Stephen Lock (Wellcome Institute) and each of the 206 scientists who
responded to my survey. This paper was presented in part in poster form at the Annual
Meeting, American Association for the Advancement of Science (Atlanta, GA, 16–21
February 1995), and at the Third International Congress on Peer Review, American
Medical Association (Prague, Czech Republic, 18–20 September 1997).

1. Robert K. Merton, ‘Reference Groups, Invisible Colleges, and Deviant Behavior in Science’, in Hubert J. O’Gorman (ed.), Surveying Social Life: Papers in Honor of Herbert H. Hyman (Middletown, CT: Wesleyan University Press, 1988), 174–89, quote at 182.
2. Sherri Bowen, quoting Stephen Lock, ‘Current Controversies in Editorial Peer Review’
(meeting report), Council of Biology Editors [CBE] Views, Vol. 16 (1994), 91–93, quote at
93. For wider discussion, see S. Lock, A Difficult Balance: Editorial Peer Review in
Medicine (London: Provincial Hospitals Trust, 1985).
3. David F. Horrobin, ‘The Philosophical Basis of Peer Review and the Suppression of
Innovation’, Journal of the American Medical Association, Vol. 263 (9 March 1990),
1438–41.
4. For relevant discussions, see: Alan Lightman and Owen Gingerich, ‘When Do
Anomalies Begin?’, Science, Vol. 255 (7 February 1992), 690–95; Juan Miguel
Campanario, ‘Consolation for the Scientist: Sometimes it Is Hard to Publish Papers
that Are Later Highly-Cited’, Social Studies of Science, Vol. 23, No. 2 (May 1993),
342–62; Campanario, ‘Peer Review for Journals As It Stands Today, Part 1’, Science
Communication, Vol. 19 (1998), 181–211; and the papers by Yalow, Boring and
Morowitz cited below.
5. Rosalyn S. Yalow, ‘Radioimmunoassay: A Probe for the Fine Structure of Biologic
Systems’, Science, Vol. 200 (16 June 1978), 1236–39, quote at 1237. The unedited
quote reads:

The original paper describing these findings was rejected by Science and
initially rejected by The Journal of Clinical Investigation. A compromise with
the editors eventually resulted in acceptance of the paper but only after we
omitted ‘insulin antibody’ from the title and documented our conclusion that
the binding globulin was indeed an antibody by showing how it met the
definition of antibody given in a standard textbook of bacteriology and
immunity.

6. Edwin G. Boring, ‘The Problem of Originality in Science’, American Journal of Psychology, Vol. 39 (1927), 70–90, at 84; Harold J. Morowitz, ‘Legacy’, Hospital Practice (January 1980), 139–40.
7. Nicholas C. Mullins, William Snizek and Kay Oehler, ‘The Structural Analysis of a
Scientific Paper’, in Anthony F.J. van Raan (ed.), Handbook of Quantitative Studies of
Science and Technology (New York: Elsevier Science, 1988), 81–105, quote at 82.
8. Lynn Dirk, ‘From Laboratory to Scientific Literature: The Life and Death of
Biomedical Research Results’, Science Communication, Vol. 18 (1996), 3–28.
9. Horrobin, op. cit. note 3; H.M. Collins, Changing Order: Replication and Induction in
Scientific Practice (Beverly Hills, CA: Sage, 1985), 1–50.
10. Richard Horton, ‘The Scientific Paper: Fraudulent or Formative?’, paper presented at
the Third International Congress on Peer Review, American Medical Association
(Prague, Czech Republic, 19 September 1997): http://www.ama-assn.org/public/peer/
thfo.htm (last visited 10 June 1999).
11. The explanatory text accompanying each ‘Citation Classic’ essay in Current Contents, a
publication of the Institute for Scientific Information, is as follows:

A ‘Citation Classic’ is a highly cited publication as identified by the Science
Citation Index . . . ‘Citation Classic’ authors are asked to write an abstract and
a commentary about the publication, emphasizing the human side of the
research – how the project was initiated, whether any obstacles were
encountered, and why the work was highly cited.

12. Don A. Dillman, ‘The Design and Administration of Mail Surveys’, in W. Richard
Scott (ed.), Annual Review of Sociology, Vol. 17 (Palo Alto, CA: Annual Reviews, 1991),
225–49.
13. Merton, op. cit. note 1; John C. Bailar III and Kay Patterson, ‘Journal Peer Review:
The Need for a Research Agenda’, New England Journal of Medicine, Vol. 312 (7 March
1985), 654–57.
14. John M. Ziman, ‘The Proliferation of Scientific Literature: A Natural Process’, Science,
Vol. 208 (25 April 1980), 369–71.
15. Eugene Garfield, personal communication (phone call, 7 March 1994).
16. Current Contents classified the journals as ‘life sciences’. High-circulation general
journals like Nature and Science were represented, as were general medical journals
such as the New England Journal of Medicine and the British Medical Journal, which
published several articles in the sample. Most of the journals, however, were highly
specialized journals such as American Naturalist, Biopolymers, Critical Care Medicine,
Neuroscience, Stain Technology and Virology, and had only published one article in the
sample.
17. The almost identical distribution of originality types by younger and older journals that
published the papers may indicate that the typology is a reliable method, even
considering that the papers were assessed by their own authors. It would be interesting to see if
the same pattern of distribution would occur in a more representative set of papers.
Further, if a set of papers published in recent issues of journals were used, it might be
possible to analyze originality type by the impact factor of the journals. Use of the impact
factor was not possible in the present study, because the dates when the articles were
published ranged from 1947 to 1987 and, as far as I know, journal impact factors have
not been calculated for those years.
18. G. Nigel Gilbert, ‘Referencing as Persuasion’, Social Studies of Science, Vol. 7, No. 1
(February 1977), 113–22; Fred Grinnell, The Scientific Attitude (New York: Guilford
Press, 2nd edn, 1992), 1–22, 69–82.
19. Dirk, op. cit. note 8.
20. Michael H. MacRoberts and Barbara R. MacRoberts, ‘Quantitative Measures of
Communication in Science: A Study of the Formal Level’, Social Studies of Science, Vol.
16, No. 1 (February 1986), 151–72, quote at 157.
21. W.I.B. Beveridge, The Art of Scientific Investigation (New York: Vintage Books reprint,
1950), 1–71, 142–76, 215–25; Melvin J. Fregly and Marilyn Southwick Fregly, ‘Role of
Chance in Discovery’, Journal of the Florida Medical Association, Vol. 66 (1979), 632–35.
22. Michael J. Moravcsik, ‘Some Contextual Problems of Science Indicators’, in van Raan
(ed.), op. cit. note 7, 1–30.
23. Laurens Laudan, ‘From Theories to Research Traditions’, in Baruch A. Brody and
Richard E. Grandy (eds), Readings in the Philosophy of Science (Englewood Cliffs, NJ:
Prentice-Hall, 1989), 368–79, quote at 369.
24. Mullins et al., op. cit. note 7, 82.
25. Beveridge, op. cit. note 21; Fregly & Fregly, op. cit. note 21; Charles Bazerman,
Shaping Written Knowledge: The Genre and Activity of the Experimental Article in Science
(Madison, WI: University of Wisconsin Press, 1988), 59–79; Grinnell, op. cit. note 18.
26. Horton, op. cit. note 10.
27. Derek de Solla Price, ‘The Development and Structure of the Biomedical Literature’,
in Kenneth S. Warren (ed.), Coping with the Biomedical Literature: A Primer for the
Scientist and the Clinician (New York: Praeger, 1981), 3–16, quote at 3.
28. Grinnell, op. cit. note 18; Bazerman, op. cit. note 25, 3–27; C. Daniel Salzman and
William T. Newsome, ‘Neural Mechanisms for Forming a Perceptual Decision’,
Science, Vol. 264 (8 April 1994), 231–32, 235–36; Jim Bogen and Jim Woodward,
‘Observations, Theories and the Evolution of the Human Spirit’, Philosophy of Science,
Vol. 59 (1992), 590–611; Hilary Farris and Russell Revlin, ‘The Discovery Process: A
Counterfactual Strategy’, Social Studies of Science, Vol. 19, No. 3 (August 1989),
497–513.
29. John M. Ziman, ‘Information, Communication, Knowledge’, Nature, Vol. 224 (25
October 1969), 318–24, quote at 320.
30. Gilbert, op. cit. note 18; MacRoberts & MacRoberts, op. cit. note 20; Bazerman, op.
cit. note 25, 128–50; Bogen & Woodward, op. cit. note 28.

Lynn Dirk is currently Coordinator of Research Services and Programs of the
Institutional Review Board of the University of Florida Health Services
Center, and is studying the correspondence of Henry Oldenburg in the
University’s History of Science Program. Previously, she served as a long-time
Editor in the Department of Anesthesiology of the University’s College of
Medicine, where she conducted the present study as part of her master’s
work in Mass Communication, specializing in science communication.

Address: Institutional Review Board, Office of Research Technology and
Graduate Education, University of Florida, PO Box 100173, Gainesville,
Florida 32610–0173, USA; fax: +1 352 846 1497; email:
ldirk@vpha.health.ufl.edu
