Science journals ban listing of ChatGPT as co-author on papers
Publishers of thousands of scientific journals have banned or restricted the use of an advanced AI-driven chatbot over fears it could flood the academic literature with erroneous and even fabricated research.
ChatGPT, a fluent but unreliable chatbot developed by OpenAI in California, has impressed or unsettled more than a million users by producing poetry, short stories, essays and even personal advice since its launch in November.
But while the chatbot has proved a huge source of fun – its take on how to free a peanut butter sandwich from a VCR, written in the style of the King James Bible, is one notable hit – the program can also produce fake scientific abstracts that are convincing enough to fool reviewers.
More legitimate uses of ChatGPT in the preparation of papers have already led to it being credited as a co-author on a handful of articles.
The sudden arrival of ChatGPT has prompted a scramble among publishers. On Thursday, Holden Thorp, editor-in-chief of the leading US journal Science, announced an updated editorial policy banning the use of text from ChatGPT and clarifying that the program cannot be listed as an author.
“Given the hype that has built up around this, we thought it would be useful to make clear that we will not allow ChatGPT to be an author or its text to be used in articles,” Thorp said.
Leading scientific journals require authors to sign a form declaring that they are accountable for their contribution to the work. Since ChatGPT cannot do that, it cannot be an author, Thorp says.
But even using ChatGPT in preparing an article is problematic, he believes. ChatGPT makes plenty of errors that could find their way into the literature, he says, and if scientists come to rely on AI programs to prepare literature reviews or summarise their findings, the proper context of the work and the deep scrutiny the results deserve could be lost. “That is the opposite direction of where we need to go,” he said.
Other publishers have made similar changes. On Tuesday, Springer Nature, which publishes about 3,000 journals, updated its guidelines to state that ChatGPT cannot be listed as an author. But the publisher has not banned ChatGPT outright: the tool, and others like it, can still be used in the preparation of papers, provided their use is fully disclosed in the manuscript.
“The particular development that we felt we had to respond to very quickly was the fact that, almost all of a sudden, the tool was appearing as a co-author,” said Magdalena Skipper, editor-in-chief of Nature.
Skipper believes that with the right safeguards, ChatGPT and similar AI tools could be beneficial for science, not least in levelling the playing field for non-native English speakers, who could use AI programs to make the language in their papers more fluent.
Elsevier, which publishes about 2,800 journals, including Cell and the Lancet, has taken a similar stance to Springer Nature. Its guidelines allow the use of AI tools “to improve the readability and language of a research article, but not to replace key tasks that should be done by the authors, such as interpreting data or drawing scientific conclusions,” said Elsevier’s Andrew Davis, adding that authors must declare whether they used AI tools, and how.
Michael Eisen, editor-in-chief of eLife, said that ChatGPT cannot be an author, but he sees its adoption as inevitable. “I think the better question is not whether to allow it, but how to manage the fact that it is being used,” he said. “The most important thing, at least for now, is that authors are completely transparent about its use and describe how it was used, and that it is clear to us that, in using the tool, they take responsibility for its output.”
Sandra Wachter, professor of technology and regulation at the University of Oxford, said: “It’s good to see publishers taking action. ChatGPT allows users to cut corners, and this is especially troubling if the generated content is not thoroughly cross-checked but simply assumed to be correct. This can lead to misinformation and pseudoscience. I think many other sectors, such as education, the arts and journalism, will have to consider similar steps, because they face similar challenges.”