
Editorial

Writing and reviewing for us in AI times


The launch of ChatGPT in November 2022 caused a lot of excitement, but also trepidation, about the potential use or abuse of the tool, including in scientific and medical publishing. Would this artificial intelligence (AI) tool, and/or other applications based on large language models (LLMs), help researchers, peer reviewers, editors, and publishers deal with the ever-increasing need to write, assess, publish, and digest research? Would it mitigate some of the disadvantages for people whose first language is not English? Or would it fuel paper mills, plagiarism, copyright infringement, and fraud? There are no clear answers to these questions, or the answers keep changing as tools and usage evolve. Nevertheless, almost 1·5 years later, we can take stock of how LLMs have affected us so far.

LLMs can write scientific text, including abstracts, summaries, or even whole papers. Related tools can also generate figures or data. As such, the first question was whether LLMs could or should be authors. Eventually, most publishers who released guidelines prohibited LLMs from being an author (see information for authors). Authorship comes with responsibility and oversight for the content, which LLMs cannot provide. If authors use LLMs to draft or refine their articles, they are still responsible for what is written. In particular, LLMs (similar to humans) are prone to biases and blind spots, and sometimes make up content. For example, when we asked ChatGPT “who gets mpox?”, the reply was individuals in central and west Africa, people in close contact with infected animals, health-care workers, close contacts of infected individuals (family, caregiver, household), and travellers to endemic areas. There was no mention of men who have sex with men, nor of the particular risk of sexual contact. Another problematic issue is fictitious references generated by LLMs. Therefore, we require authors to carefully review and edit AI-generated text.

Furthermore, authors should declare this use in a statement at the end of the article, similar to how the involvement of medical writers should be declared. So far, we have seen only a few such declarations (potentially because authors do not know about this requirement). For example, we checked all submissions we received during February 2024: only 0·6% of articles declared the use of ChatGPT; 0·6% used Grammarly, an AI-powered grammar checker; and 1·8% used AI as a tool in their research (which is allowed and should be elaborated on in the methods). By contrast, 3·6% reported involvement of medical writers. We suspect the small numbers reporting AI use represent the tip of the iceberg and would like to encourage transparency. AI use is not a criterion for rejection, although a poorly written and inaccurate paper written by ChatGPT would be, as would a poorly written and inaccurate paper written by a human. Of note, we currently do not use tools to check for AI writing, as none are validated, and a high false-positive rate might disadvantage certain authors. We do check for plagiarism, which, together with copyright infringement, can be a problem for LLM-generated text, as exemplified by several lawsuits filed by, for example, the New York Times against OpenAI, the maker of ChatGPT.

Besides authorship, peer review is another area in which one could imagine using LLMs. However, similar to authorship, LLMs cannot take on the responsibility for the critical appraisal of an article. For example, one of our authors experimented with one of their own papers, generating a peer review report; although on the surface it looked like an actual report, ChatGPT made up statistical feedback and non-existent references (see Correspondence). Thus, we currently do not permit the use of AI for peer reviewing (see reviewer information). Furthermore, peer review is confidential, and privacy and proprietary rights cannot be guaranteed if reviewers upload parts of an article or their report to an LLM. We as editors also do not use AI to assess a manuscript during peer review or to conduct peer review. We do have a tool based on Scopus data to recommend peer reviewers (based on their publication history and their peer review history with Elsevier). Of course, we also invite peer reviewers from our network, but the tool can be helpful to broaden the diversity, expertise, and background of our peer reviewer pool.

To conclude, AI is shaking up scientific publishing, slowly and not in a linear fashion, but human expertise and judgement are, and will remain, more important than ever. ■ The Lancet Infectious Diseases

For our information for authors see https://www.thelancet.com/pb-assets/Lancet/authors/tlid-info-for-authors.pdf
For Elsevier’s reviewer information see https://www.elsevier.com/en-gb/reviewer/how-to-review
See Correspondence: Lancet Infect Dis 2023; 23: 781
