
Journal of Informetrics 18 (2024) 101481

Contents lists available at ScienceDirect

Journal of Informetrics
journal homepage: www.elsevier.com/locate/joi

Exploring the scientific impact of negative results


Dan Tian a, Xiao Hu b, Yuchen Qian a, Jiang Li a,*

a School of Information Management, Nanjing University, Nanjing 210032, China
b Faculty of Education, The University of Hong Kong, Hong Kong 999077, China

A R T I C L E I N F O

Keywords: Negative results; Scientific impact; Citations; Citation context

A B S T R A C T

Negative results are a routine part of the scientific research journey, yet they often receive insufficient attention in scientific publications. In this study, we investigate the scientific impact of negative results by comparing the citations and citation context of negative and positive results. Specifically, we compared 159 negative result papers from three journals: Journal of Negative Results in BioMedicine, PLoS One, and BMC Research Notes, with 1,058 matched positive result papers authored by the same first and corresponding authors. The citation context was categorized along three dimensions: citation aspect, citation purpose, and citation polarity. The first two were automatically provided by Citation Opinion Retrieval and Analysis (CORA), while citation polarity was manually annotated. Our analysis revealed several key findings. Firstly, negative results received 38.6 % fewer citations than positive results, even after controlling for bibliographic factors. Secondly, negative results were associated with a significantly higher proportion of negative citations than positive results. Lastly, a higher proportion of negative results were negatively cited in the methods section.

1. Introduction

The scientific community has developed a culture that favors significant positive results, leading researchers to pursue the publication of positive results while neglecting failed attempts (Bakker et al., 2012; Fanelli, 2010). This neglect of negative results is prevalent in any experiment-based discipline (Herbet et al., 2022). Negative results typically refer to research results that lack statistical significance, i.e., those that support the null hypothesis (Matosin et al., 2014). In the biomedical field, Knight (2003) identified three typical negative
results: (1) a specific genetic marker not related to a specific inherited disease, (2) no significant difference between the group of mice
treated with the experimental drug and the control group, and (3) findings that contradict a commonly accepted belief, as when the Michelson-Morley experiment overturned the hypothesis of the existence of the ether (Sayao et al., 2021). Negative results in the biomedical field can arise for three main reasons: incorrect initial hypotheses, failure to confirm findings in existing publications, and technical problems, such as reagent errors, poor research design, and insufficient statistical power. Theoretically, except for those caused by technical problems, negative results meet publication requirements (Bespalov et al., 2019).
Although negative results are crucial to scientific research, they are often overlooked in scientific publications, owing to resistance from scientists and publishers and to market pressures, and they are difficult to find in publications across most disciplines (Fanelli, 2012a; Franco et al., 2014; Gumpenberger et al., 2013; Sayao et al., 2021). However, negative results are vital for science and contribute to
avoiding duplication of scientific work, saving public funds, and promoting scientific communication (Arechavala-Gomeza &

* Corresponding author.
E-mail address: lijiang@nju.edu.cn (J. Li).

https://doi.org/10.1016/j.joi.2023.101481
Received 24 April 2023; Received in revised form 30 November 2023; Accepted 4 December 2023
Available online 8 December 2023
1751-1577/© 2023 Elsevier Ltd. All rights reserved.

Aartsma-Rus, 2021; Gumpenberger et al., 2013). In response, a few publishers have established journals to exclusively publish
negative results, such as Journal of Negative Results in BioMedicine, New Negatives in Plant Science, Journal of Pharmaceutical Negative
Results, and Journal of Articles in Support of the Null Hypothesis. Additionally, PLoS One and BMC Research Notes launched special collections to publish negative results (BMC Research Notes, 2019; PLOS, 2015).
While negative results undergo the same peer-review process as positive results and are equally reproducible, they suffer from lower acceptance rates and are not as fully recognized as positive ones (Fanelli, 2012a; Matosin et al., 2014; Sayao et al., 2021). Despite the
importance of negative results in scientific research, few studies have explored their impact on science. To address this gap, this study
aims to investigate the scientific impact of negative results by comparing how negative and positive results are cited in subsequent
works. Specifically, we collected articles with negative results from the Journal of Negative Results in BioMedicine, PLoS One, and BMC
Research Notes, and, for each negative result paper, matched positive result papers written by the same first and corresponding authors. We then employed linear regression models to investigate the association between negative/positive results and citations,
and explored the differences in citation context between negative and positive results.

2. Related works

The scientific community is confronted with challenges stemming from a cultural preference for positive results in scientific
research. One issue is that failed attempts are often replicated in different research teams due to a lack of communication, resulting in a
waste of time, energy, and delay in new scientific discoveries (Rzhetsky et al., 2015). Negative results are commonly discussed within
research teams but infrequently shared across them, which leads to inadequate communication. In fact, in a survey on the fate of
negative results, nearly 30 % of respondents reported being informed that another laboratory had performed the same experiments
with negative results, indicating a lack of effective communication (Herbet et al., 2022). Pursuing positive results can also lead to
questionable research practices, such as p-hacking, which has led to replication crises and hindered scientific development (Fraser
et al., 2018; Head et al., 2015; Makel & Plucker, 2014; Nosek et al., 2012; Tackett et al., 2019). Additionally, a low publication rate of
negative results leads to survivorship bias towards positive results, which may mislead researchers and policymakers in
decision-making (Sharma & Verma, 2019; van Aert et al., 2019).
Although scientists believe negative results are vital to scientific advancement and are willing to publish them (Herbet et al., 2022), the reality falls short. A recent survey found that 81 % of participating researchers had encountered negative results in their research; of those, 75 % said they were willing to publish their negative results, but only 12.5 % had the opportunity to do so (Herbet et al., 2022). Accordingly, many negative results that could have contributed to scientific
progress were left in dusty drawers forever, known as the file-drawer effect (Sayao et al., 2021). Moreover, another survey suggested
that researchers considered negative results necessary for scientific development, but they rarely published negative results, mainly
because they did not have enough time and believed that negative results were less cited than positive ones (Echevarría et al., 2020). As
a result, many negative results were not recorded by scientists (Franco et al., 2014).
The scientific impact of negative results is an area of research interest (Fanelli, 2012b; Gumpenberger et al., 2013). Gumpenberger
et al. (2013) found that most negative results published in the Journal of Negative Results in BioMedicine were cited by different journals,
and only a third were uncited, indicating the importance of negative results. However, the Journal of Negative Results in BioMedicine still
suffers from a low impact factor. Jannot et al. (2013) found that statistically significant studies were cited at twice the rate of insignificant ones. Furthermore, Fanelli (2012b) compared the total citations of negative and positive results published between 2000 and 2007 and found that positive results received significantly more citations than negative ones only in Neuroscience & Behaviour, Molecular Biology & Genetics, Clinical Medicine, and Plant and Animal Science. In contrast, no such correlation was found in other ESI disciplines, such as Economics & Business. Accordingly, whether negative results receive fewer citations than positive ones varies across disciplines.
Although attempts have been made to investigate the scientific impact of negative results, these studies have limitations. Firstly, previous works examined the scientific impact of negative results based solely on citation counts. The citation count is
used to evaluate the scientific impact of a paper on the premise that citation implies acknowledgment (Abramo et al., 2021; Lyu et al.,
2022; Tokmachev, 2023). However, citations not only acknowledge previous works; they can also point out the shortcomings of those works, as negative citations do (Catalini et al., 2015). Accordingly, citation counts cannot fully capture the
importance of a study, and the citation context is critical to understanding the negative results’ scientific impact. In contrast to prior
research, our study investigates the impact of negative results in scientific research through the analysis of citations and citation
context. Additionally, we address authors’ research proficiency by conducting a fixed-window citation comparison between negative
and positive results of the same first and corresponding authors. This approach is a unique feature of our study.

3. Research method

3.1. Data collection

Journal of Negative Results in BioMedicine (JNRB) is committed to publishing research that reports negative results (Gumpenberger
et al., 2013). Although PLoS One and BMC Research Notes do not primarily publish negative result papers, they have collections of such studies. In this study, we used a sample of 200 articles published in JNRB between 2002 and 2017, 39 articles published in PLoS One between 2008 and 2021, and 38 articles published in BMC Research Notes between 2019 and 2021. We then matched each
negative result paper with one or more positive result articles published by the same first author and corresponding author to control


for authors’ influence on the papers. Scopus indexes JNRB, PLoS One, and BMC Research Notes and provides a unique identifier for each author of the indexed papers, which facilitates the retrieval of the positive counterparts of the negative result articles. As a result, 126 negative result articles in JNRB, 19 in PLoS One, and 14 in BMC Research Notes were matched with 891, 111, and 56 articles, respectively, published in non-negative journals by the same first and corresponding authors. After
manually checking 100 papers selected randomly from the 1058 non-negative journal articles, we found that 95 % had positive results.
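The matching step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the field names (id, first_author_id, corresponding_author_id) are hypothetical stand-ins for Scopus document and author identifiers.

```python
def match_positive_papers(negative_papers, candidate_papers):
    """For each negative result paper, collect candidate papers that share
    the same first-author and corresponding-author identifiers.
    Field names are illustrative; in practice these would be Scopus IDs."""
    matches = {}
    for neg in negative_papers:
        key = (neg["first_author_id"], neg["corresponding_author_id"])
        matches[neg["id"]] = [
            c["id"]
            for c in candidate_papers
            if (c["first_author_id"], c["corresponding_author_id"]) == key
        ]
    return matches
```

A negative result paper with no candidates sharing both author identifiers simply maps to an empty list and would be dropped from the matched sample.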

3.2. Annotation scheme

This study measures the scientific impact of negative results using citation counts and citation context. We applied a fixed citation window, i.e., five-year citations, to compare citation differences between negative and positive results. Citation context provides details of the relationship between the citing and the cited work, explains the intent of the citation, and reveals the author's attitude toward the cited work. We used CORA (Citation Opinion Retrieval and Analysis), a platform for automated citation context retrieval
and analysis developed by Yu’s team (2017), to conduct our research. The citation context has been categorized into three dimensions:
citation aspect, purpose, and tone, based on Yu’s work (2015). Specifically, the citation aspect pertains to which part of the cited work
is referenced in the citing literature, including five elements: the goal-problem-challenge, method, data, claim, and general back­
ground (Yu & Zhang, 2015). The citation purpose is to explain the purpose of citing the cited work in a given context, which is divided
into four categories: comparison, critique, use, and information (Yu & Zhang, 2015). The citation tone is related to the citation aspect
and purpose. For instance, when the citing paper cites the claim of the cited work for comparison purposes, the citation tone can be
accordant or discordant; if the citing paper critiques the methods or data in cited work, the citation tone can be positive, negative, or
neutral (Yu & Zhang, 2015).
We manually examined the accuracy of CORA annotations. Firstly, we randomly chose 30 papers with negative results, along with
the citation contexts from 201 publications that cite the 30 papers. Two annotators, trained in information science and biomedicine,
independently annotated these citing contexts according to CORA's citation aspect and purpose schemes. Tables 1 and 2 show CORA's citation aspect and purpose annotation schemes. Kappa coefficients for citation purpose and citation aspect were 0.726 and
0.789, respectively, indicating high consistency between the two annotators. We then randomly selected 30 papers with positive
results, along with their citation contexts from 251 publications. One of the annotators annotated the contexts since previous tasks
showed consistency between the two annotators. We then compared our manually annotated results with those provided by CORA.
The consistencies between these two annotations for citation purpose and aspect were 79 % and 92 %, respectively.
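The agreement statistics above follow the standard Cohen's kappa formulation: observed agreement corrected for the agreement expected by chance. A minimal pure-Python sketch (not the authors' code):

```python
from collections import Counter


def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators' label sequences of equal length:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Proportion of items on which the two annotators agree.
    observed = sum(x == y for x, y in zip(labels_a, labels_b)) / n
    # Chance agreement from each annotator's marginal label distribution.
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    chance = sum(count_a[c] * count_b[c] for c in set(labels_a) | set(labels_b)) / (n * n)
    return (observed - chance) / (1 - chance)
```

Kappa values above roughly 0.7, such as the 0.726 and 0.789 reported here, are conventionally read as substantial agreement.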
CORA's citation tone scheme depends on the citation aspect and purpose, so the annotation accuracy of those two dimensions affects the accuracy of the citation tone; if the first two annotations are inaccurate, the tone annotation is also unreliable. Moreover, CORA's citation tone description differs from our research goal, which focuses on the author's attitude toward the referenced work. We therefore divided citation polarity into positive, negative, and neutral and annotated it manually according to previous studies (Hernández-Alvarez & Gómez, 2015; Huang et al., 2022; Jha et al., 2017). Table 3 provides the annotation scheme used in this study, with descriptions for each category. As in the manual annotation of citation purpose and aspect, the two annotators independently annotated the 201 citation contexts of the 30 randomly selected negative result papers and obtained a Kappa coefficient of 0.931, indicating high consistency. One annotator then annotated all remaining citation contexts in the dataset.

3.3. Regression model

We established Model (1) to examine the difference in citation counts between papers with negative results and those with positive
results. The likelihood of a paper receiving citations can be influenced by various factors beyond its results. These factors can be
categorized into three distinct groups: author-related variables, journal-related variables, and paper-related variables (Tahamtan et al.,
2016). To ensure rigorous control in our analysis, we incorporated these three groups of variables into our regression model.
Initially, we rigorously controlled for author-related variables by implementing a matching strategy. This strategy ensured that
papers with negative and positive results shared identical first and corresponding authors, effectively accounting for diverse author-
related factors, including author reputation, academic performance, gender, age, ethnicity, institutional attributes, and geographic
location. We also controlled the number of authors. Subsequently, we addressed the potential influence of journals on paper citations

Table 1
The annotation scheme for the citation aspect.
Category Description

Goal-problem-challenge Refers to the cited work's research purpose and problems.
Method Refers to the research methods used in a cited work, such as the sampling method used to collect data, the procedure used in experiments, and the statistical methods applied in data analysis.
Data Means the dataset used in a cited work.
Claim Refers to the research result or finding of a cited work. Additionally, the claim can also be the arguments and positions of the cited work.
General background Applies when the referenced section of a cited work is not very specific.

Notes: The category description of the citation aspect is based on CORA.


Table 2
The annotation scheme for the citation purpose.
Category Description

Critique The authors cite another work to provide a clear and definitive viewpoint.
Comparison The authors cite other works for comparison, which can be applied to compare certain aspects of their work with other research or to compare the
cited works.
Use The authors use a specific aspect of the cited work.
Information The authors refer to another work to offer the reader additional information.

Notes: The category description of the citation purpose is based on CORA.

Table 3
The annotation scheme for the citation polarity.
Category Description

Positive Include three situations: the authors acknowledge the advantages of the cited work, express that the cited work is better than other similar works, or
agree with the opinion expressed in the cited work.
Negative Include three situations: the authors point out the shortcomings or limitations of the cited work, express that the cited work is worse than other similar
works, or oppose the opinion stated in the cited work.
Neutral The citations do not meet the criteria of positive or negative citations.

Notes: The category description of the citation polarity is based on the work of H. Huang et al. (2022).

by incorporating the journal’s CiteScore into our model. Finally, to mitigate the impact of paper-related variables on citation counts,
we implemented controls that encompassed various factors, including the number of references, title length, research field, and paper
accessibility. The regression model is presented in Model (1).

ln (Citationsikt + 1) = β1 Negative resultikt + ϕX′ikt + ωk + δt + εikt , (1)

where ln (Citationikt + 1) is the natural logarithm of the citations (we added one to include zero-citation papers) of article i written by
author k in year t. Negative resultikt is a dummy variable that indicates whether article i is a paper with negative results (yes=1 and
no=0). δt represents the fixed effect of publication year, controlling the differences in citations caused by different publication years.
ωk represents the fixed effect of the primary authors and controls for author-related factors. X′ikt represents a series of control variables, including the number of authors (#author), the CiteScore of journals (citescore), the number of references (#reference), title length (title_length), research field (research_field), and accessibility (is_openaccess, yes=1 and no=0). We acquired the research fields of the sampled papers using the Application Programming Interface (API) offered by Semantic Scholar, an open knowledge discovery platform indexing more than 207 million academic papers (Fricke, 2018). This platform categorizes the indexed papers into 23 distinct fields based on their content (MacMillan & Feldman, 2022).
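Model (1) is an OLS regression with author and publication-year fixed effects. One standard way to estimate such a model is to expand the fixed effects into dummy variables and solve by least squares; the sketch below does this on synthetic data and is an illustration of the estimation approach, not the authors' code or dataset.

```python
import numpy as np


def fe_ols(log_citations, negative, authors, years):
    """OLS of ln(citations + 1) on a negative-result dummy with author and
    year fixed effects, via explicit dummy-variable expansion."""
    authors = np.asarray(authors)
    years = np.asarray(years)
    cols = [np.asarray(negative, dtype=float)]  # variable of interest
    # Full set of author dummies absorbs the intercept (omega_k).
    for a in np.unique(authors):
        cols.append((authors == a).astype(float))
    # Year dummies (delta_t), dropping one to avoid collinearity.
    for t in np.unique(years)[1:]:
        cols.append((years == t).astype(float))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, np.asarray(log_citations, dtype=float), rcond=None)
    return beta[0]  # coefficient on the negative-result dummy
```

On noiseless synthetic data where each author has a fixed citation level and negative results lose 0.4 log points, the function recovers the coefficient −0.4 exactly; in the paper the analogous estimate is −0.386 with additional paper- and journal-level controls.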
We established Model (2) to compare the citation context differences between negative and positive results.

Proportionikt = β1 Negative resultikt + ϕX′ikt + ωk + δt + εikt , (2)

where Proportionikt represents a set of dependent variables, i.e., the proportions of the method, data, claim, and background in the
citation aspect; the proportions of the use, critique, comparison, and information in the citation purpose; the proportions of the
positive, negative, and neutral tones in the citation polarity. For instance, for paper i written by author k and published in year t, the method proportion equals the proportion of method citations among all its citations. Negative resultikt is a dummy variable that indicates whether article i has negative results (yes=1, no=0). X′ikt represents a series of control variables, including citation counts (five-year citations) and the CiteScore of journals (Citescore). ωk represents the fixed effect of authors. δt indicates the fixed effect of publication year.
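The dependent variables in Model (2) are per-paper proportions over annotated citation contexts. A minimal sketch of that tabulation, here for citation polarity (the category labels follow the annotation scheme in this paper; the input format is illustrative):

```python
from collections import Counter


def context_proportions(labels, categories=("positive", "negative", "neutral")):
    """Share of each category among one paper's annotated citation contexts.

    `labels` is the list of polarity labels assigned to every citation
    context of a single cited paper; categories absent from the list get 0.
    """
    counts = Counter(labels)
    total = len(labels)
    return {c: counts.get(c, 0) / total for c in categories}
```

The same tabulation applies to the citation aspect and purpose dimensions by swapping in their category sets.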

Table 4
Descriptive statistics of the variables.
Obs. Mean Std. Dev. Min 25 % 50 % 75 % Max

Five-year citations 1035 14.85 21.27 0 3 8 18 319


Negative_result 1035 0.13 0.33 0 0 0 0 1
Citescore 1035 5.70 5.29 0 2.70 4.50 6.30 80.60
#author 1035 5.46 3.26 1 3 5 7 25
Title_length 1035 14.85 5.55 3 11 14 18 39
#reference 1035 31.10 22.14 1 17 27 40 199


4. Results

4.1. Negative results received significantly fewer citations than positive results

Table 4 presents the descriptive statistics of the variables used in the regression model. There are 182 papers in the dataset with a publication period of less than five years up to 2022, making it impossible to calculate their five-year citations; consequently, they were excluded from the regression analysis. The dependent
variable is Five-year citations, which ranges from 0 to 319, with a mean of 14.85 and a standard deviation of 21.27. The mean of
Negative_result is 0.13, indicating that 13 % of the analyzed papers reported negative results. We also presented statistics of
control variables that are relevant to citations.
Table 5 displays the estimations of the regression models on five-year citations. Column (1) only includes the Negative_result variable,
and the coefficient of this variable was negative and statistically significant at the 0.01 level, indicating that papers with negative
results receive fewer citations than studies with positive results. In columns (2) to (4), we introduced author-related factors, journal-
related factors, and paper-related factors, respectively. In Column (2), we controlled for the primary author fixed effects and the
number of authors. The coefficient for Negative_result is significantly negative, indicating that, after controlling for factors related to the
authors, papers with negative results have significantly fewer citations than papers with positive results. In Column (3), we introduced
Citescore, and after controlling for factors related to the journal, the coefficient for Negative_result remains significantly negative. In
Column (4), we further controlled for paper-related factors; the coefficient for Negative_result is − 0.386 (p < 0.01), indicating that, under otherwise identical conditions, papers with negative results receive 38.6 % fewer citations than papers with positive results.
To check the robustness of the results, we first considered whether our results were robust over different citation windows. We replaced the five-year citations in the dependent variable with three-year citations. The results are robust, as shown in Column (1) (N
= 1180) of Table 6. The coefficient of Negative_result is negative and statistically significant, suggesting that articles with negative
results receive fewer citations than studies with positive ones. Second, we used different criteria to select studies with positive results
for the control group. We restricted the publication year difference between the negative and corresponding positive results to no more
than five and three years to eliminate changes in the author’s research capacity and interest. Upon narrowing the sample range, the
regression results in columns (2) and (3) of Table 6 (N = 676 and 542) align with those in Table 5 (N = 1035). These results highlight
that the papers reporting negative results received significantly fewer citations when compared to papers with positive ones.
To further investigate the reasons behind the lower citation counts of negative result papers, we conducted an analysis of the
"attention" garnered by the sampled papers. Specifically, we measured attention of the papers by the numbers of abstract views,
readers, and exports/saves counts. We employed Scopus API services to retrieve all the papers’ abstract views, readership, and exports/
saves, as provided by PlumX. PlumX Metrics offer valuable insights into how people interact with various types of research, including
articles, conference proceedings, book chapters, and more, within the online environment. These metrics are categorized into five
groups: Citations, Usage, Captures, Mentions, and Social Media (Elsevier, 2023).
Abstract views fall under the Usage metrics category, while readership and exports/saves are categorized as Capture metrics which
serve as leading indicators of future citations. Abstract views represent the number of times a study’s abstract has been accessed and
viewed. Readership indicates the number of individuals who have added the artifact to their personal library or briefcase. Exports/
Saves encompass the number of instances in which a paper’s citation has been directly exported to bibliographic management tools or
downloaded as files, as well as the number of times an artifact’s citation/abstract and HTML full text (if available) have been saved,
emailed, or printed. We chose the above three indicators primarily because they have relatively low rates of missing values within the sample.
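Retrieving such PlumX counts programmatically involves parsing the metrics record returned by the Scopus/PlumX API. The nested JSON layout assumed below (count_categories → count_types → name/total) and the count-type names are an illustrative guess at the response shape, not the documented schema:

```python
def extract_plumx_counts(record):
    """Pull abstract views, reader counts, and exports/saves totals from a
    PlumX-style metrics record.

    The structure assumed here (count_categories -> count_types with
    name/total fields) is hypothetical; consult the actual API response
    before relying on these paths."""
    wanted = {
        "ABSTRACT_VIEWS": "abstract_views",   # Usage category
        "READER_COUNT": "readers",            # Capture category
        "EXPORTS_SAVES": "exports_saves",     # Capture category
    }
    out = dict.fromkeys(wanted.values(), 0)
    for category in record.get("count_categories", []):
        for item in category.get("count_types", []):
            if item.get("name") in wanted:
                out[wanted[item["name"]]] = item.get("total", 0)
    return out
```

Missing metrics default to zero here; in the study, papers with absent indicators instead drop out of the respective regressions, which is why the observation counts in Table 7 differ across columns.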

Table 5
The citation difference between negative and positive results.
(1) (2) (3) (4)
Five-year citations

Negative_result − 0.409*** − 0.413*** − 0.305*** − 0.386***


(0.095) (0.089) (0.090) (0.103)
#author 0.070*** 0.058*** 0.050***
(0.015) (0.015) (0.015)
Citescore 0.035*** 0.030***
(0.008) (0.008)
Title_length − 0.007
(0.007)
#reference 0.010***
(0.002)
Is_openaccess √
Research_field √
Primary author fixed effects Yes Yes Yes Yes
Publication year fixed effects No No No Yes
Observations 1035 1035 1035 1035
R2 0.016 0.370 0.390 0.511
Adjusted R2 0.015 0.259 0.283 0.394

Notes: Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1.


Table 6
Robustness of citation difference between negative and positive results.
(1) Three-year citations, overall
(2) Five-year citations, publication year difference between negative and positive results <= 5 years
(3) Five-year citations, publication year difference between negative and positive results <= 3 years

Negative_result − 0.284*** − 0.359*** − 0.350***


(0.090) (0.123) (0.116)
#author 0.046*** 0.054*** 0.066***
(0.013) (0.018) (0.021)
Citescore 0.035*** 0.036* 0.071***
(0.007) (0.020) (0.013)
Title_length − 0.011* − 0.010 − 0.011
(0.006) (0.010) (0.010)
#reference 0.009*** 0.009*** 0.005
(0.002) (0.003) (0.003)
Is_openaccess √ √ √
Research_field √ √ √
Primary author fixed Yes Yes Yes
effects
Publication year fixed Yes Yes Yes
effects
Observations 1180 676 541
R2 0.506 0.561 0.612
Adjusted R2 0.400 0.407 0.451

Notes: Standard errors are in parentheses. *** p < 0.01, ** p < 0.05, * p < 0.1.

We used regression analysis to compare the differences between negative and positive result papers on the three aforementioned indicators. The regression results are reported in Table 7. The coefficients for Negative_result in all three models are negative and statistically significant, consistently indicating that negative result papers have significantly lower counts of abstract views, readers, and exports/saves than positive result papers. We hence speculate that less attention leads to fewer citations for negative result papers.

4.2. Negative results received a significantly higher proportion of negative citations

The proportions of each category under citation aspect, purpose, and polarity are close for negative and positive results, as shown in Fig. 1. In Fig. 1a (citation aspect), "claim" is the most frequently cited aspect for both negative and positive results, followed by method, while the proportions of background, data, and goal-problem-challenge are low. In Fig. 1b (citation purpose), for both negative and positive results, more than half of the citations provide further information to the readers, while citations for critique purposes are rare. In Fig. 1c (citation polarity), most citations of negative (93.0 %) and positive results (96.7 %) are neutral, and less than 10 % are either positive or negative.
Table 8 reports the regression results of five dependent variables of the citation aspect, i.e., the citation proportions of goal-
problem-challenge, method, data, claim, and background. The main effects in Columns (1)-(5) are not statistically significant.

Table 7
The difference in attention between negative result papers and positive result papers.
(1) (2) (3)
ln(abstract_views) ln(reader_count) ln(exports_saves)

Negative_result − 1.260*** − 0.144** − 1.027***


(0.144) (0.071) (0.209)
#author 0.041* 0.042*** 0.000
(0.023) (0.010) (0.031)
Citescore − 0.006 0.015*** − 0.007
(0.011) (0.004) (0.017)
Title_length 0.015 0.001 0.035***
(0.011) (0.004) (0.013)
#reference 0.005* 0.011*** 0.005
(0.003) (0.001) (0.004)
Is_openaccess √ √ √
Research_field √ √ √
Primary author fixed effects Yes Yes Yes
Publication year fixed effects Yes Yes Yes
Observations 1076 1214 478
R2 0.551 0.673 0.573
Adjusted R2 0.453 0.602 0.445

Notes: Standard errors are in parentheses. *** p<0.01, ** p<0.05, * p<0.1.


Fig. 1. Differences in citation aspect (a), citation purpose (b), and citation polarity (c) between negative and positive results.

Table 8
Differences in citation aspect between negative and positive results.
(1) (2) (3) (4) (5)
Goal proportion Method proportion Data proportion Claim proportion Background proportion

Negative_result − 0.001 0.063 − 0.016 − 0.041 − 0.006


(0.005) (0.045) (0.010) (0.048) (0.020)
Five-year citation − 0.000 0.001 − 0.000 − 0.001 0.000
(0.000) (0.001) (0.000) (0.001) (0.000)
Citescore 0.000 − 0.002 0.001 0.001 − 0.001
(0.000) (0.002) (0.001) (0.003) (0.001)
Primary author fixed effect Yes Yes Yes Yes Yes
Publication year fixed effect Yes Yes Yes Yes Yes
Constant − 0.008 0.204 − 0.041 0.565* 0.280**
(0.010) (0.319) (0.050) (0.291) (0.131)
Observations 500 500 500 500 500
R2 0.164 0.275 0.215 0.262 0.309
Adjusted R2 − 0.089 0.056 − 0.023 0.038 0.100

Notes: Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1.

Table 9 reports the regression results of four dependent variables of citation purpose, i.e., the citation proportions of use, critique,
comparison, and information. The main effect in Column (2) is significantly positive, indicating that the proportions of citations of
negative results for critiquing were higher than those of positive results. In Column (4), the coefficient of Negative_result is significantly
negative (β = − 0.069, SE = 0.041, p < 0.1), indicating that the proportion of citations of negative results made to provide further information is 6.9 percentage points lower than that of positive results.
Table 10 reports the regression results of three dependent variables of citation polarity, i.e., the proportions of positive, negative,
and neutral citations. The main effects are significant in Columns (2) and (3). The proportion of negative citations received by negative results is 5.0 percentage points larger than that received by positive results, whereas the proportion of neutral citations of negative results is lower than that of positive results.

Table 9
Differences in citation purpose between negative and positive results.

                                 (1)          (2)          (3)          (4)
                                 Use          Critique     Comparison   Information
                                 proportion   proportion   proportion   proportion
Negative_result                  −0.008       0.033**      0.044        −0.069*
                                 (0.029)      (0.014)      (0.034)      (0.041)
Five-year citation               0.001        0.000        0.000        −0.001
                                 (0.001)      (0.000)      (0.001)      (0.001)
Citescore                        0.000        −0.000       −0.004       0.004
                                 (0.002)      (0.000)      (0.002)      (0.003)
Primary author fixed effect      Yes          Yes          Yes          Yes
Publication year fixed effect    Yes          Yes          Yes          Yes
Constant                         0.080        −0.028       0.071        0.876***
                                 (0.281)      (0.032)      (0.140)      (0.275)
Observations                     500          500          500          500
R2                               0.274        0.349        0.280        0.277
Adjusted R2                      0.055        0.152        0.062        0.058

Notes: Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1.


Table 10
Differences in citation polarity between negative and positive results.

                                 (1)          (2)          (3)
                                 Positive     Negative     Neutral
                                 proportion   proportion   proportion
Negative_result                  −0.003       0.050***     −0.046**
                                 (0.005)      (0.018)      (0.019)
Five-year citation               0.000        0.000*       −0.000**
                                 (0.000)      (0.000)      (0.000)
Citescore                        −0.000       −0.001       0.001
                                 (0.000)      (0.001)      (0.001)
Primary author fixed effect      Yes          Yes          Yes
Publication year fixed effect    Yes          Yes          Yes
Constant                         −0.047       −0.047       1.094***
                                 (0.038)      (0.034)      (0.048)
Observations                     500          500          500
R2                               0.288        0.340        0.335
Adjusted R2                      0.073        0.140        0.134

Notes: Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1.

We conducted robustness checks similar to those in Section 4.1. We first replaced five-year citations with three-year citations to re-examine the differences in citation context between negative and positive results regarding citation aspect, purpose, and polarity. We then limited the publication year of the matched positive result papers to five/three years before or after the publication year of the negative result papers, respectively. The regression results are reported in Tables S1-S9 in the Supplementary materials. The results on citation polarity are robust across different samples, i.e., compared with positive results, negative results have a higher proportion of negative citations and a smaller proportion of neutral citations.
Given that negative result papers receive a significantly higher proportion of negative citations, an intuitive question arises: which aspects or purposes of a negative result are cited negatively? To address this question, we combined citation aspect and citation polarity and calculated the corresponding proportions of citations. We likewise combined citation purpose and polarity to identify the purposes associated with negative citations. We then compared the proportions between negative and positive result papers using t-tests. As shown in Fig. 2, negative result papers were negatively cited in the method aspect at a significantly higher proportion than papers with positive results. As shown in Fig. 3, papers with negative results were more likely to be cited negatively for critique purposes than those with positive results.
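The group comparison behind Figs. 2 and 3 can be sketched as a two-sample t-test on per-paper proportions. The arrays below are simulated stand-ins: the sample sizes (159 and 1,058) match the paper's corpus, but the means and spreads are invented for illustration, and Welch's variant is our assumed default rather than the authors' stated choice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Per-paper proportion of citations that are both "method"-aspect and negative
neg_papers = np.clip(rng.normal(0.08, 0.04, 159), 0, 1)   # negative result papers
pos_papers = np.clip(rng.normal(0.03, 0.03, 1058), 0, 1)  # matched positive result papers

# Welch's t-test (equal_var=False), a reasonable default when group
# sizes and variances differ
t_stat, p_value = stats.ttest_ind(neg_papers, pos_papers, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
```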

5. Discussion and conclusions

In this study, we explored the scientific impact of negative result papers by comparing their citations and citation context with those of their positive counterparts. We deliberately matched each negative result paper with positive result papers written by the same first and corresponding authors. The main contribution of this study is that we examine the scientific impact of negative results from a more granular perspective. Specifically, we examined the differences between papers with negative and positive results not only in citation counts but also in citation context, including citation aspect, purpose, and polarity. The three main findings are as follows: negative results receive significantly fewer citations than positive ones; the proportion of negative citations of negative results is significantly higher than that of positive results; and negative results are significantly more often cited in negative tones in the method aspect and for critique purposes, as compared with positive results.
The findings of this study suggest that negative result papers may face not only publication barriers but also lower citation rates, which may under-represent their scientific value. Our results show that negative result papers receive 38.6 % fewer citations than positive result papers written by the same authors (Table 5). An outlier analysis of the negative results uncovered some highly cited negative result papers, which received more positive citations than their positive counterparts (see the Supplementary Information for details). It may therefore be insufficient to assess the scientific value of negative results based on citation counts alone (Gerow et al., 2018). In this study, we propose considering citation context when evaluating the scientific influence of negative results. For both negative and positive results, claims (findings, arguments, and positions) and methods are the two most frequently cited aspects in subsequent studies (Fig. 1), indicating that negative and positive results contribute to new knowledge in similar ways. Given the limitations of the citation counts on which many academic metrics are based, negative results may need to be recognized through more comprehensive approaches, for example, by establishing awards that acknowledge the contribution of negative results to scientific advancement. As Nature advocates, "Rewarding negative results keeps science on track" (Nature Editorial, 2017).
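As a reading aid, and assuming that Table 5 (not reproduced in this excerpt) uses a log-transformed citation count as the dependent variable, a common specification in such models, the 38.6 % gap can be mapped back to the regression coefficient β on the negative-result dummy as:

```latex
\%\Delta\,\text{citations} = \left(e^{\beta} - 1\right)\times 100\,\%,
\qquad
e^{\beta} = 1 - 0.386 = 0.614
\;\Rightarrow\;
\beta = \ln(0.614) \approx -0.488 .
```

If Table 5 instead reports a different functional form, the mapping between the coefficient and the percentage difference changes accordingly.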
Our findings revealed that the abstract views, readership, and export/save counts of negative result papers were significantly lower than those of positive result papers (Table 7). In essence, negative result papers attract less attention, potentially accounting for their lower citation counts. Several further explanations for the lower citation rates of negative results are plausible. Firstly, negative results are frequently, and erroneously, linked with subpar research design, even though there is no inherent logical connection (Matosin et al., 2014). Secondly, researchers may perceive positive results as more substantial and, consequently, more deserving of citation. Thirdly, researchers might read and learn from negative result papers without citing them. For example, a negative result might indicate that "this way is blocked," which stops subsequent scientists from spending time in the same direction, so the paper shapes research while receiving fewer citations.


Fig. 2. Difference between the negative and positive results based on the combination of citation aspect and citation polarity.

The proportion of neutral citations is relatively high for both negative and positive results, while the proportion of negative citations is relatively low for both (Fig. 1). Nevertheless, we found that negative results had a higher proportion of negative citations and a lower proportion of neutral citations. These findings are consistent across different time windows and samples. Combining citation polarity and citation aspect, we found that the methods of negative results are more likely to be negatively cited than those of positive results (Fig. 2). In addition, compared with positive results, a higher proportion of the negative citations of negative results served a critique purpose (Fig. 3). Although negative results are more likely to be cited negatively, this does not necessarily suggest that their quality is poor. Previous research has indicated that a paper tends to attract negative citations only once it receives sufficient attention, and that negatively cited studies are typically of high quality (Catalini et al., 2015).
Regarding citation aspect and purpose, the claims of negative and positive results are the most frequently cited aspect, with less than one-third of citations addressing the methods, background, data, and goal-problem-challenge aspects for both negative and positive results (Fig. 1). Subsequent studies most commonly cite negative and positive results for the purpose of providing further information to the reader (Fig. 1). These results indicate that negative and positive results provide a similar knowledge foundation for subsequent research. Furthermore, according to the regression results, there is insufficient evidence of significant differences in how negative and positive results are cited in terms of citation aspect (i.e., goal-problem-challenge, method, data, claim, and background) or citation purpose (i.e., use, critique, comparison, and information).
Journals dedicated to negative results face severe survival problems, as their scientific impact is measured only by citation counts. Journals with low citation rates are less likely to be indexed by the Web of Science and hence suffer low visibility and recognition (Huang et al., 2017). The Journal of Negative Results in BioMedicine was founded in 2002 and ceased publication in 2017, having published 200 articles. Similarly, New Negatives in Plant Science was founded in 2015 and discontinued in 2016. The short lives of both journals reveal the survival reality of negative results journals. Perhaps a better approach is not to launch dedicated negative results journals but to encourage more journals to accept papers that report negative results from rigorous research designs, which would aid the dissemination of such research. Several journals, such as PLoS One and BMC Research Notes, have launched negative results collections. Furthermore, the rise of the open science movement has provided new avenues for publishing negative results (Chalmers et al., 2013; Elliott & Resnik, 2019). Preregistration is one of the critical initiatives of this movement: a study is accepted on the basis of a review of its methods, guaranteeing publication regardless of whether its results are positive or negative.

Fig. 3. Difference between the negative and positive results based on the combination of citation purpose and citation polarity.

CRediT authorship contribution statement

Dan Tian: Data curation, Formal analysis, Investigation, Writing – original draft. Xiao Hu: Conceptualization, Validation. Yuchen
Qian: Data curation, Formal analysis. Jiang Li: Conceptualization, Methodology, Supervision, Writing – review & editing.

Acknowledgments

The manuscript is a new and extended version of our previous work which was accepted by the 26th International Conference on
Science, Technology and Innovation Indicators (STI 2022) (Tian et al., 2022). This work was supported by the Postgraduate Research
& Practice Innovation Program of Jiangsu Province (KYCX23_0083).

Supplementary materials

Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.joi.2023.101481.

References

Abramo, G., D’Angelo, C. A., & Grilli, L. (2021). The effects of citation-based research evaluation schemes on self-citation behavior. Journal of Informetrics, 15(4),
Article 101204. https://doi.org/10.1016/j.joi.2021.101204
Arechavala-Gomeza, V., & Aartsma-Rus, A. (2021). Sharing "negative" results in neuromuscular research: A positive experience. Journal of Neuromuscular Diseases, 8
(5), 765–767. https://doi.org/10.3233/JND-219007


Bakker, M., van Dijk, A., & Wicherts, J. M. (2012). The rules of the game called psychological science. Perspectives on Psychological Science, 7(6), 543–554. https://doi.
org/10.1177/1745691612459060
Bespalov, A., Steckler, T., & Skolnick, P. (2019). Be positive about negatives–recommendations for the publication of negative (or null) results. European
Neuropsychopharmacology, 29(12), 1312–1320. https://doi.org/10.1016/j.euroneuro.2019.10.007
BMC Research Notes (2019). BMC Research Notes | Negative results. Retrieved December 22, 2022 from https://bmcresnotes.biomedcentral.com/articles/
collections/negative-results.
Catalini, C., Lacetera, N., & Oettl, A. (2015). The incidence and role of negative citations in science. Proceedings of the National Academy of Sciences, 112(45),
13823–13826. https://doi.org/10.1073/pnas.1502280112
Chalmers, I., Glasziou, P., & Godlee, F. (2013). All trials must be registered and the results published. British Medical Journal, 346(9), f105. https://doi.org/10.1136/
bmj.f105
Echevarría, L., Malerba, A., & Arechavala-Gomeza, V. (2020). Researcher’s perceptions on publishing “negative” results and open access. Nucleic Acid Therapeutics, 31
(3), 185–189. https://doi.org/10.1089/nat.2020.0865
Elsevier (2023). About PlumX Metrics. Retrieved October 4th, 2023 from https://plumanalytics.com/learn/about-metrics/.
Elliott, K. C., & Resnik, D. B. (2019). Making open science work for science and society. Environmental Health Perspectives, 127(7), 75002. https://doi.org/10.1289/
ehp4808
Fanelli, D. (2010). Do pressures to publish increase scientists’ bias? An empirical support from US states data. PLoS One, 5(4), e10271. https://doi.org/10.1371/
journal.pone.0010271
Fanelli, D. (2012a). Negative results are disappearing from most disciplines and countries. Scientometrics, 90(3), 891–904. https://doi.org/10.1007/s11192-011-0494-
7
Fanelli, D. (2012b). Positive results receive more citations, but only in some disciplines. Scientometrics, 94(2), 701–709. https://doi.org/10.1007/s11192-012-0757-y
Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502–1505. https://doi.org/10.1126/science.1255484
Fraser, H., Parker, T., Nakagawa, S., Barnett, A., & Fidler, F. (2018). Questionable research practices in ecology and evolution. PLoS One, 13(7), Article e0200303.
https://doi.org/10.1371/journal.pone.0200303
Fricke, S. (2018). Semantic Scholar. Journal of the Medical Library Association: JMLA, 106(1), 145–147.
Gerow, A., Hu, Y., Boyd-Graber, J., Blei, D. M., & Evans, J. A. (2018). Measuring discursive influence across scholarship. Proceedings of the National Academy of
Sciences, 115(13), 3308–3313. https://doi.org/10.1073/pnas.1719792115
Gumpenberger, C., Gorraiz, J., Wieland, M., Roche, I., Schiebel, E., Besagni, D., et al. (2013). Exploring the bibliometric and semantic nature of negative results.
Scientometrics, 95(1), 277–297. https://doi.org/10.1007/s11192-012-0829-z
Head, M. L., Holman, L., Lanfear, R., Kahn, A. T., & Jennions, M. D. (2015). The extent and consequences of p-hacking in science. PLoS Biology, 13(3), e1002106. https://doi.org/10.1371/journal.pbio.1002106
Herbet, M. E., Leonard, J., Santangelo, M. G., & Albaret, L. (2022). Dissimulate or disseminate? A survey on the fate of negative results. Learned Publishing, 35(1),
16–29. https://doi.org/10.1002/leap.1438
Hernández-Alvarez, M., & Gómez, J. M. (2015). Citation impact categorization: For scientific literature. In Proceedings of the 2015 IEEE 18th International Conference on Computational Science and Engineering.
Huang, H., Zhu, D. H., & Wang, X. F. (2022). Evaluating scientific impact of publications: Combining citation polarity and purpose. Scientometrics, 127(9), 5257–5281.
https://doi.org/10.1007/s11192-021-04183-8
Huang, Y., Zhu, D., Lv, Q., Porter, A. L., Robinson, D. K. R., & Wang, X. (2017). Early insights on the emerging sources citation index (ESCI): An overlay map-based
bibliometric study. Scientometrics, 111(3), 2041–2057. https://doi.org/10.1007/s11192-017-2349-3
Jannot, A. S., Agoritsas, T., Gayet-Ageron, A., & Perneger, T. V. (2013). Citation bias favoring statistically significant studies was present in medical research. Journal
of Clinical Epidemiology, 66(3), 296–301. https://doi.org/10.1016/j.jclinepi.2012.09.015
Jha, R., Jbara, A. A., Qazvinian, V., & Radev, D. R. (2017). NLP-driven citation analysis for scientometrics. Natural Language Engineering, 23(1), 93–130. https://doi.
org/10.1017/S1351324915000443
Knight, J. (2003). Null and void. Nature, 422(6932), 554–555. https://doi.org/10.1038/422554a
Lyu, H., Bu, Y., Zhao, Z., Zhang, J., & Li, J. (2022). Citation bias in measuring knowledge flow: Evidence from the web of science at the discipline level. Journal of
Informetrics, 16(4), Article 101338. https://doi.org/10.1016/j.joi.2022.101338
Makel, M. C., & Plucker, J. A. (2014). Facts are more important than novelty: Replication in the education sciences. Educational Researcher, 43(6), 304–316. https://doi.org/10.3102/0013189X14545513
Matosin, N., Frank, E., Engel, M., Lum, J. S., & Newell, K. A. (2014). Negativity towards negative results: A discussion of the disconnect between scientific worth and
scientific culture. Disease Models and Mechanisms, 7(2), 171–173. https://doi.org/10.1242/dmm.015123
Nature Editorial (2017). Rewarding negative results keeps science on track. Nature, 551(7681), 414. https://doi.org/10.1038/d41586-017-07325-2
Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on
Psychological Science, 7(6), 615–631. https://doi.org/10.1177/1745691612459058
MacMillan K., & Feldman S. (2022). Announcing S2FOS, an open source academic field of study classifier. Retrieved October 4th, 2023 from https://blog.allenai.org/
announcing-s2fos-an-open-source-academic-field-of-study-classifier-9d2f641949e5.
PLOS (2015). Positively Negative: A New PLOS ONE Collection focusing on Negative, Null and inconclusive results. Retrieved December 22, 2022 from https://
everyone.plos.org/2015/02/25/positively-negative-new-plos-one-collection-focusing-negative-null-inconclusive-results/.
Rzhetsky, A., Foster, J. G., Foster, I. T., & Evans, J. A. (2015). Choosing experiments to accelerate collective discovery. Proceedings of the National Academy of Sciences,
112(47), 14569–14574. https://doi.org/10.1073/pnas.1509757112
Sayão, L. F., Sales, L. F., & Felipe, C. B. M. (2021). Invisible science: Publication of negative research results. Transinformação, 33. https://doi.org/10.1590/2318-0889202133e200009
Sharma, H., & Verma, S. (2019). Is positive publication bias really a bias, or an intentionally created discrimination toward negative results? Saudi Journal Of
Anaesthesia, 13(4), 352–355. https://doi.org/10.4103/sja.SJA_124_19
Tackett, J. L., Brandes, C. M., King, K. M., & Markon, K. E. (2019). Psychology’s replication crisis and clinical psychological science. Annual Review of Clinical Psychology, 15(7), 579–604. https://doi.org/10.1146/annurev-clinpsy-050718-095710
Tahamtan, I., Afshar, A. S., & Ahamdzadeh, K. (2016). Factors affecting number of citations: A comprehensive review of the literature. Scientometrics, 107(3),
1195–1225. https://doi.org/10.1007/s11192-016-1889-2
Tian, D., Qian, Y., & Li, J. (2022). Contributions of negative results in scientific research: Evidence from Journal of Negative Results in BioMedicine. In Proceedings of the 26th International Conference on Science and Technology Indicators.
Tokmachev, A. M. (2023). Hidden scales in statistics of citation indicators. Journal of Informetrics, 17(1), Article 101356. https://doi.org/10.1016/j.joi.2022.101356
van Aert, R. C. M., Wicherts, J. M., & van Assen, M. (2019). Publication bias examined in meta-analyses from psychology and medicine: A meta-meta-analysis. PLoS
One, 14(4), Article e0215052. https://doi.org/10.1371/journal.pone.0215052
Yu, B., Hegde, Y., & Li, Y. (2017). CORA: A platform to support citation context analysis. In Proceedings of the iConference 2017.
Yu, B., & Zhang, F. (2015). Disciplinary difference in citation opinion expressions. In Proceedings of the iConference 2015.
