ChatGPT, an Artificial Intelligence (AI) based dialogue system, has been trained on vast amounts of internet data and can reorganize information into texts that are almost indistinguishable from those written by humans. This poses significant new challenges for schools and universities. Scientific publishers also face new challenges, as AI systems are increasingly being used to assist in the research and writing of scientific papers. At least four studies have been published so far that list an AI as a co-author, as the scientific journal “Nature” has just reported in an article. The topic was also discussed by leaders at the World Economic Forum this week.
The Impact of AI systems like ChatGPT on the Scientific Publication Industry
The listing of ChatGPT as an author on a medRxiv study about the use of chatbots in medical education is one example that has sparked a debate among publishers and experts over the question of authorship. The team behind the preprint server and its sister site bioRxiv is currently discussing whether this is acceptable. Co-founder Richard Sever, also the deputy publisher of Cold Spring Harbor Laboratory Press, points out that only humans can take legal responsibility as authors. AI chatbots slipping into the author list can happen, similar to pets and fictional people in the past, Sever said. But this is more “a control problem than a fundamental one.”
Similarly, an editorial on using ChatGPT in nursing education listed the AI system as an author, but the responsible editors said this was a mistake. Nature has found two more studies that list ChatGPT in the author line (here and here).
Authors are responsible for the validity and integrity of their work – not AI
Editors of leading scientific journals have also weighed in on the issue. Magdalena Skipper, editor-in-chief of Nature in London, states that ChatGPT does not meet the standard for authorship. She believes that those who use systems like ChatGPT to create articles should document this, though not in the author line but in the methods or acknowledgments section. “We would not allow an AI to be listed as an author in a paper published by us,” adds Holden Thorp, editor-in-chief of the Science family of journals. The use of AI-generated text without proper citation may be considered plagiarism.
Taylor & Francis, a scientific publisher based in London, has so far received no submissions listing ChatGPT as an author. The publisher is currently discussing its fundamental approach to the phenomenon, says Sabina Alam, head of its ethics and integrity department. She also emphasizes that authors are responsible for the validity and integrity of their work, and that the use of AI systems should be transparently disclosed in the acknowledgments.
AI Systems listed as authors or not?
Many experts in the field agree that AI systems like ChatGPT should be acknowledged in the methods or acknowledgments section rather than listed as authors. Matt Hodgkinson, a researcher on scientific integrity in the UK, points out that co-authors must make a “significant scientific contribution” to an article, which software could arguably do, but they must also be able to take responsibility for the study, which software cannot. The idea of AI co-authorship founders on the fact that AI systems can neither agree to the terms of publication nor take responsibility for the work.
Overall, as AI systems like ChatGPT become more advanced and widely used in scientific research, the question of authorship will continue to be a topic of debate among publishers and experts. While AI systems can assist in the research and writing process, they do not meet the criteria for authorship. Therefore, the use of AI systems should be properly documented and acknowledged in the methods or acknowledgments section of scientific papers.