A Measure of Originality:
The Elements of Science
Lynn Dirk
Within science, ‘the gap between the enormous emphasis placed upon
original discovery and the great difficulty a good many scientists
experience in making one’1 is made wider by the lack of either a precise
definition or an objective measure of ‘originality’. Indeed, a long-time
medical editor who has closely studied journal peer review (which involves
evaluating originality) commented that there is not much ‘point of asking
[referees] to classify a paper’s originality or scientific credibility into one of
five categories unless the editor has first defined what each of these is’.2
publish work that is riskier (for example, work with two or three new
elements), as well as work with no new elements (replication).
The best judges of whether this originality typology accurately reflects
scientific work are the persons who are continuously involved in it:
scientists. Therefore, to determine if the typology has face validity, both the
willingness and the ability of scientists to work with the typology were
tested in an international survey. Highly experienced scientists were asked
to use the originality typology in two exercises, rating the eight originality
types, and assessing published papers with it. Their assessments of the
papers were also used to examine whether journals vary by the originality
type they publish.
Methods
To simultaneously obtain a sample of experienced scientists and of
published papers proven to have value to other scientists, I selected authors
and papers featured as ‘Citation Classics’ in Current Contents, Life Sciences
over a five-year period.11 The survey was approved by the Institutional
Review Board of the University of Florida, and was conducted in 1995.
The survey method used has been shown to produce a high response
rate;12 it included sending the scientists two reprints about peer review to
encourage them to participate, and to thank them.13 By means of a three-
page questionnaire (see Appendix, below), the scientists were asked to use
the originality typology in two exercises: to rate the types, and to assign one
to their papers.
The rating exercise was presented first, so as to familiarize the
scientists with each type before they assigned one to their papers. Each of the
eight types was rated for its scientific value on a scale from 1 (least
valuable) to 5. Rating was used instead of ranking because, ideally, each
type could be equally valuable in contributing to science. For example, the
type in which all three elements have been previously published would be
‘replication’, which lacks originality but is critical to science as its self-
correcting mechanism. For the second exercise (assigning an originality
type to his or her own paper), the scientists could indicate not only that an
element had been previously reported or newly reported but also, to make
the exercise more realistic, that it had been previously reported in part.
In analyzing the data, however, all responses of ‘previously reported in
part’ were treated as ‘previously reported’, to highlight originality.
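The typology’s combinatorial structure can be made concrete in a short sketch (illustrative only; the coding scheme follows the paper, but the code itself is not part of the study). Each of a paper’s three elements (hypothesis, methods, results) is coded as previously reported (P) or newly reported (N), yielding 2 × 2 × 2 = 8 types:

```python
from itertools import product

# The three elements of a paper, in the order used throughout the paper.
ELEMENTS = ("hypothesis", "methods", "results")

def originality_types():
    """Return all 8 originality types as strings such as 'N-P-N',
    where each position codes one element as previously reported ('P')
    or newly reported ('N')."""
    return ["-".join(t) for t in product("PN", repeat=len(ELEMENTS))]

types = originality_types()
print(len(types), types)
# 8 types, from 'P-P-P' (replication) to 'N-N-N' (all three elements new)
```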
To examine whether journals varied by the type of originality they
published, the age of journals at the time they published the papers was
recorded. Journal age was used because it is very objective and easily
determined; it may also, however, reflect the integral relationship between
journals and the development of disciplines over time.14 Journal age was
calculated from information in a university on-line library system. If a
journal’s title changed over time, the first year of the earliest title was
considered to be the first year of the journal. This was done because, even
though the title changed, the journal itself would be familiar to readers and
authors at that time; this would not be the case for a totally new journal.

768 Social Studies of Science 29/5
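A median split of this kind can be sketched as follows (a hypothetical illustration; the function name and the toy data are invented, not taken from the study):

```python
def split_by_journal_age(papers):
    """Divide (paper_id, journal_age) pairs into two equally sized groups,
    'younger' and 'older', by ranking on the age of the publishing journal.
    Journal age here is the year the paper appeared minus the first year
    of the journal's earliest title."""
    ranked = sorted(papers, key=lambda p: p[1])
    mid = len(ranked) // 2
    return ranked[:mid], ranked[mid:]

# Toy data: four papers in journals aged 3, 40, 12 and 75 years.
younger, older = split_by_journal_age(
    [("a", 3), ("b", 40), ("c", 12), ("d", 75)])
# younger holds the papers in journals aged 3 and 12;
# older holds those aged 40 and 75.
```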
At the end of the questionnaire, the scientists were asked: to report the
extent of their experience in science and in peer review; if they thought the
typology might be helpful in peer review; and to comment on any aspect of
the typology or scientific journal peer review.
The data were not subjected to parametric statistical testing because
the study was designed primarily to determine if the typology has face
validity. Also, as a result of the way in which the samples of scientists and
papers were selected, they could not be considered representative of even
highly-cited authors or papers: the papers featured in the ‘Citation Classic’
column were self-selected; they were featured because the authors could be
reached by the Current Contents editor, and were willing to write an essay
about their papers.15
Results
Of 301 authors surveyed, 68% (n = 206) responded, and some reported
on more than one paper (n = 230). The respondents were highly
experienced, not only in doing science but also in evaluating it: they were
primary authors of an average of 127 peer-reviewed papers, and were very
experienced as peer reviewers (Table 1). When asked whether the typology
might help to make journal peer review fairer, 92% (n = 189) responded:
16% (n = 33) said no, and the number who were uncertain (40%, n =
82) was similar to the number who said yes (36%, n = 74).

TABLE 1
Demographics of Experienced Scientists Sampled for a Survey on Scientific Originality
(Columns: Frequency in Sample, % and n; table body not reproduced)

TABLE 2
Ratings of 8 Types of Scientific Originality by Scientists Compared with the Frequency of
Originality Types Assigned by the Same Scientists to their Highly-Cited Papers

Type (a)  Rating: own view (b)  Rating: peer-review system (b)  Frequency: %  n
N-P-N     5                     3                               42            87
N-N-N     5                     5                               15            30
N-P-P     4                     5                               4             9
P-N-N     3                     4                               11            22
P-P-N     3                     2                               11            22
N-N-P     3                     3                               1             3
P-N-P     2                     3                               3             7
P-P-P     1                     1                               13            29

a. The hypothesis, methods, and results reported in a paper are characterized, respectively, as
previously reported in the scientific literature (P) or newly reported (N).
b. On a 5-point scale, 5 being most valuable; ratings were given from two perspectives, the
scientist’s own and that of the journal peer-review system as the scientist perceived it.
The eight originality types were rated by 84% of the respondents (173
of 206) (Table 2). Ratings were partially completed by 7% (n = 15), and
not provided at all by 9% (n = 17). Some scientists were not able to grasp
all the types, as indicated by question marks or comments. Others,
however, used the ‘grammar’ of the typology in their comments discussing
peer review, the following being the most extensive examples: ‘Established
hypotheses need new methods (at least) and new results to lead to
originality, but new methods can expand our understanding of established
hypothesis’; ‘New valid methods are always original and can lead to new
hypotheses/results’; ‘New results (if valid) are original and seminal to new
hypotheses’; ‘Established hypotheses by definition need some degree of
unoriginal confirmation’.
A complete originality type (all three elements included) was provided
for 91% of the papers (209 of 230). The most frequent originality type was
N-P-N (42%) (Table 2). Of those who did not provide a complete
originality type (n = 21), almost half (n = 12) were authors of ‘review’
papers, who thought that none of the types applied to reviews. The sample
of papers, however, contained 43 reviews, and the authors of most (72%,
n = 31) assigned a complete originality type to their reviews. Others who
had problems, similar to the review authors, often stated explicitly that the
paper was a ‘method’ or ‘theory’ paper.
When the papers were divided into two groups of equal number by
‘younger’ and ‘older’ age of the journals that published them,16 the
distribution of originality types was virtually identical for both groups
(Table 3).17
TABLE 3
Frequency of Originality Types in a Sample of Highly-Cited Papers Divided into Two
Equally Sized Groups by Age of Journal Publishing the Papers

Type (a,b)       Younger journals: %   n          Older journals: %   n
N-P-P            3                     3          6                   6
P-P-N            10                    11         10.5                11
P-P-P            15                    16         12.5                13
N-P-N            42                    44         42                  43
N-N-N            16                    17         12.5                13
P-N-N            9                     10         11.5                12
P-N-P + N-N-P    5                     5 (4 + 1)  5                   5 (3 + 2)

a. The hypothesis, methods, and results reported in a paper are characterized, respectively, as
previously reported in the scientific literature (P) or newly reported (N). The originality types
were assigned to the papers by their authors.
b. The distribution of the originality types is arranged as a type of curve, types with previously
reported methods and newly reported methods falling on either side of the most frequent
type.
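A tabulation like that in Table 3 amounts to tallying the author-assigned types within each journal-age group and converting counts to percentages. A minimal sketch (the function name and the data below are invented for illustration):

```python
from collections import Counter

def type_distribution(assigned_types):
    """Map each originality type to (count, percentage of group)."""
    counts = Counter(assigned_types)
    total = sum(counts.values())
    return {t: (n, round(100 * n / total)) for t, n in counts.items()}

group = ["N-P-N", "N-P-N", "P-P-P", "N-N-N"]
print(type_distribution(group))
# {'N-P-N': (2, 50), 'P-P-P': (1, 25), 'N-N-N': (1, 25)}
```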
Discussion
The results of this study indicate that the typology may have face validity. A
next step would be determining how typologic assessment should or can be
done. In the present study, it was done by authors who simply indicated
with a check whether an element was ‘new’ or ‘previously reported’ (Appendix,
item 2). This could also be done by peers. Indeed, studies could be done to
see if typologic assessment would be useful as part of peer review. For
example, using a journal’s editorial files, reviews of submissions with and
without the use of typologic assessment could be compared. A better
method might be to require a brief description of what made the element
new or not. For example, during review of the present Note, one referee
assessed it spontaneously with the typology, and used brief descriptions as
follows: P (whether originality can be measured)-N (typology)-N; the
study could also be assessed as N (whether originality can be represented
as a typology)-P (survey)-N. This illustrates the different ways in which a
study can be interpreted, and how the typology may provide a way of
articulating those differences. Another possible study of the typology, using
papers submitted to journals, would be to ask authors to provide their own
typologic assessments of their papers (with brief descriptions), and then to
compare those with referees’ typologic assessments.
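A comparison of an author’s and a referee’s typologic assessments of the same paper could be scored element by element. A hypothetical sketch (the function name is invented; the two assessments are the ones quoted above for the present Note):

```python
def element_agreement(author_type, referee_type):
    """Fraction of the three elements (hypothesis, methods, results) on
    which two typologic assessments of the same paper agree."""
    a = author_type.split("-")
    r = referee_type.split("-")
    return sum(x == y for x, y in zip(a, r)) / len(a)

# The two assessments of the present Note quoted in the text:
score = element_agreement("P-N-N", "N-P-N")
# the two assessments agree only on the final (results) element
```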
Appendix
The text of the questionnaire used in this study is presented below. When a question
presented multiple-choice answers, the options given to the respondent are shown in
parentheses. The present Note does not include responses to some questions that were
designed to obtain data on whether originality types may vary by difficulty in getting accepted
(items 3 to 9). Almost all these articles (90%) were published by the first journals they were
submitted to; therefore, this group of articles was probably not a good sample for studying
difficulty in getting published. Those questions, however, may be worth exploring in a
separate study. Bracketed text in italics represents explanatory notes; the multiple-choice
answers provided in parentheses were formatted with blanks for respondents to check the
options they chose.
The Questionnaire
A research article can describe theory, methods, and results that vary by whether each is
established, modified, or new as defined in the following way:
established = previously reported in full and confirmed in the scientific literature.
modified = previously reported in part in the scientific literature.
new = not previously reported in full in the scientific literature.
1. From two perspectives, your own and that of the scientific journal peer review system as
you perceive it, please rate each of the following eight permutations of scientific
originality according to its value in contributing to science from not very valuable (1) to
very valuable (5). [The 8 permutations were then listed, and 2 blanks were provided for the
ratings.]
The remaining questions pertain to the following article of yours that has become a Citation
Classic: [Citation of the pertinent Citation Classic was provided.]
2. Indicate below with a check whether the hypothesis, methods, and results reported in this
Citation Classic article were established, modified – slightly or substantially – or new, as
defined above.
3. Please check the direction of the results in this article (positive, negative, other).
4. Please indicate how many different journals the manuscript was submitted to and check
whether the number is exact or approximate.
5. Please indicate with a check how the manuscript was accepted (accepted as is; minor
revision required; major revision required).
6. For each of the following types of rejection, please check if it is applicable to your
manuscript and, if possible, give the name of the journal(s) associated with that form of
rejection (rejected by a journal after it requested revisions and you complied; rejected
with reviewers’ comments; rejected without reviewers’ comments).
7. Please indicate with a check the amount of revision the manuscript underwent from the
time it was first submitted until it was accepted (none; minor; substantive revision one
time, several times, innumerable times).
8. When you submitted the manuscript, did you start with (the most suitable journal; the
most prestigious journal; the journal most likely to accept the paper; other, for example,
a new journal)?
9. Characterize the relationship between the discipline most relevant to your Citation
Classic article and the discipline most relevant to the journal that published it (same
primary discipline; same subdiscipline; different primary disciplines; different
subdisciplines; other article/journal discipline relationship, for example, your article was
most pertinent to a primary discipline and the journal that published it represented a
subdiscipline of a different primary discipline [primary/different subdiscipline]).
10. Please use the space below to provide any other pertinent information, or to comment
about the questions, the survey, scientific originality, or journal peer review.
11. Do you think objective criteria for judging scientific originality as presented above would
help make journal peer review fairer? (yes; no; uncertain).
12. Were you the corresponding author for the article in question above? (yes; no).
13. Please indicate your current title/position (assist. prof.; assoc. prof.; professor; other).
14. Number of peer reviewed articles you have been primary author of.
15. Number of journals you subscribe to or read frequently.
16. Indicate your cumulative experience as a peer reviewer for scientific journals, as a board
member, and as an editor (number of cumulative years; number of journals).
Notes
I am grateful for the editor’s and reviewers’ critiques, which resulted in significant
improvements, and for the copy editor’s close scrutiny of the paper, which led to an
important correction. This work was inspired by the American Medical Association’s
Congresses on Peer Review. Material support was provided by the Department of
Anesthesiology, University of Florida College of Medicine. For their support and
comments, special thanks to Don Caton, Marilyn Southwick Fregly and Robert A. Hatch
(University of Florida), and Fred Grinnell (University of Texas). Invaluable assistance was
provided by Stephen Lock (Wellcome Institute) and each of the 206 scientists who
responded to my survey. This paper was presented in part in poster form at the Annual
Meeting, American Association for the Advancement of Science (Atlanta, GA, 16–21
February 1995), and at the Third International Congress on Peer Review, American
Medical Association (Prague, Czech Republic, 18–20 September 1997).
that Are Later Highly-Cited’, Social Studies of Science, Vol. 23, No. 2 (May 1993),
342–62; Campanario, ‘Peer Review for Journals As It Stands Today, Part 1’, Science
Communication, Vol. 19 (1998), 181–211; and the papers by Yalow, Boring and
Morowitz cited below.
5. Rosalyn S. Yalow, ‘Radioimmunoassay: A Probe for the Fine Structure of Biologic
Systems’, Science, Vol. 200 (16 June 1978), 1236–39, quote at 1237. The unedited
quote reads:
The original paper describing these findings was rejected by Science and
initially rejected by The Journal of Clinical Investigation. A compromise with
the editors eventually resulted in acceptance of the paper but only after we
omitted ‘insulin antibody’ from the title and documented our conclusion that
the binding globulin was indeed an antibody by showing how it met the
definition of antibody given in a standard textbook of bacteriology and
immunity.
12. Don A. Dillman, ‘The Design and Administration of Mail Surveys’, in W. Richard
Scott (ed.), Annual Review of Sociology, Vol. 17 (Palo Alto, CA: Annual Reviews, 1991),
225–49.
13. Merton, op. cit. note 1; John C. Bailar III and Kay Patterson, ‘Journal Peer Review:
The Need for a Research Agenda’, New England Journal of Medicine, Vol. 312 (7 March
1985), 654–57.
14. John M. Ziman, ‘The Proliferation of Scientific Literature: A Natural Process’, Science,
Vol. 208 (25 April 1980), 369–71.
15. Eugene Garfield, personal communication (phone call, 7 March 1994).
16. Current Contents classified the journals as ‘life sciences’. High-circulation general
journals like Nature and Science were represented, as were general medical journals
such as the New England Journal of Medicine and the British Medical Journal, which
published several articles in the sample. Most of the journals, however, were highly
specialized journals such as American Naturalist, Biopolymers, Critical Care Medicine,
Neuroscience, Stain Technology and Virology, and had only published one article in the
sample.
17. The almost identical distribution of originality types by younger and older journals that
published the papers may indicate that the typology is a reliable method, even
considering that they were assessed by their authors. It would be interesting to see if
the same pattern of distribution would occur in a more representative set of papers.
Further, if a set of papers published in recent issues of journals were used, it may be
possible to analyze originality type by impact factor of the journals. Use of the impact
factor was not possible in the present study, because the dates when the articles were
published ranged from 1947 to 1987 and, as far as I know, journal impact factors have
not been calculated for those years.
18. G. Nigel Gilbert, ‘Referencing as Persuasion’, Social Studies of Science, Vol. 7, No. 1
(February 1977), 113–22; Fred Grinnell, The Scientific Attitude (New York: Guilford
Press, 2nd edn, 1992), 1–22, 69–82.
19. Dirk, op. cit. note 8.
20. Michael H. MacRoberts and Barbara R. MacRoberts, ‘Quantitative Measures of
Communication in Science: A Study of the Formal Level’, Social Studies of Science, Vol.
16, No. 1 (February 1986), 151–72, quote at 157.
21. W.I.B. Beveridge, The Art of Scientific Investigation (New York: Vintage Books reprint,
1950), 1–71, 142–76, 215–25; Melvin J. Fregly and Marilyn Southwick Fregly, ‘Role of
Chance in Discovery’, Journal of the Florida Medical Association, Vol. 66 (1979), 632–35.
22. Michael J. Moravcsik, ‘Some Contextual Problems of Science Indicators’, in van Raan
(ed.), op. cit. note 7, 1–30.
23. Laurens Laudan, ‘From Theories to Research Traditions’, in Baruch A. Brody and
Richard E. Grandy (eds), Readings in the Philosophy of Science (Englewood Cliffs, NJ:
Prentice-Hall, 1989), 368–79, quote at 369.
24. Mullins et al., op. cit. note 7, 82.
25. Beveridge, op. cit. note 21; Fregly & Fregly, op. cit. note 21; Charles Bazerman,
Shaping Written Knowledge: The Genre and Activity of the Experimental Article in Science
(Madison, WI: University of Wisconsin Press, 1988), 59–79; Grinnell, op. cit. note 18.
26. Horton, op. cit. note 10.
27. Derek de Solla Price, ‘The Development and Structure of the Biomedical Literature’,
in Kenneth S. Warren (ed.), Coping with the Biomedical Literature: A Primer for the
Scientist and the Clinician (New York: Praeger, 1981), 3–16, quote at 3.
28. Grinnell, op. cit. note 18; Bazerman, op. cit. note 25, 3–27; C. Daniel Salzman and
William T. Newsome, ‘Neural Mechanisms For Forming A Perceptual Decision’,
Science, Vol. 264 (8 April 1994), 231–32, 235–36; Jim Bogen and Jim Woodward,
‘Observations, Theories and the Evolution of the Human Spirit’, Philosophy of Science,
Vol. 59 (1992), 590–611; Hilary Farris and Russell Revlin, ‘The Discovery Process: A
Counterfactual Strategy’, Social Studies of Science, Vol. 19, No. 3 (August 1989),
497–513.
29. John M. Ziman, ‘Information, Communication, Knowledge’, Nature, Vol. 224 (25
October 1969), 318–24, quote at 320.
30. Gilbert, op. cit. note 18; MacRoberts & MacRoberts, op. cit. note 20; Bazerman, op.
cit. note 25, 128–50; Bogen & Woodward, op. cit. note 28.