If you use ChatGPT, you must acknowledge it


Artificial Intelligence is here to stay

Whether you are a fan of AI or not, whether you believe it should be used as part of the research process or not, and whether you have experimented with these tools or not, one thing is certain: AI is here, and it is here to stay.

The question is, how do we deal with these new tools?

What are Artificial Intelligence's use cases in scientific publishing?

It’s generally accepted (for now, anyway) that ChatGPT (or, more generally, large language models – we will use the terms interchangeably) is not capable of producing insights, or new knowledge, that would warrant publication in a peer-reviewed scientific paper.


However, there are many ways that ChatGPT can help scholars, including (but certainly not limited to):

  1. Given the main text of your paper, it could write an abstract.
  2. Given an abstract, a Large Language Model can suggest a title.
  3. Given an abstract, an LLM can suggest the main headings you might consider for your paper.
  4. Given various parts of your paper, ChatGPT can write a conclusion section.
  5. Given some text outlining the points you want to make, an LLM can draft a section of the paper for you.
  6. If you are struggling with some code, you can ask ChatGPT to write the code for you.
  7. If you are not happy with your text, you can ask ChatGPT to summarize it in the form of a bulleted list.
  8. An LLM can suggest ideas to develop your paper further, which could then be used as the basis of a Future Work section.
  9. Ask ChatGPT to suggest the best way to analyze some data that you will use in the paper.
  10. You can ask the LLM to rephrase text so that the writing is more formal, and more suitable for a scientific journal.

There are many (many, many) videos on how you can use ChatGPT for your research. If you want to get lost in the myriad of material out there, the four videos below might be good starting points. We’ll see you in a few hours :-).

The author's responsibility when writing a scientific paper

It could be argued that each of the above use cases is the responsibility of the author. In fact, let’s be clear: they are the responsibility of the author, and let nobody tell you otherwise. You can delegate these tasks to AI, but the final paper remains the responsibility of the author(s).


Most journals now state that ChatGPT cannot be an author on a paper, with two commonly cited reasons:

  1. The AI engine cannot be held responsible for what is written
  2. Copyright in the text produced by the AI engine cannot be attributed to the AI; therefore, the author(s) hold the copyright.

Here are a couple of examples of authorship policies from well-known publishers: Elsevier (archived here) and Taylor & Francis (archived here).

Guidelines for author(s) who use a Large Language Model

There is nothing wrong with authors using an AI tool to help write their paper, but its use must be acknowledged in the paper.


We would go slightly further and suggest the following guidelines:

  1. The author(s) must acknowledge that they used an AI tool in the preparation of their paper.
  2. The author(s) should provide details about the AI tool that was used, for example, “ChatGPT version 3.5”.
  3. The author(s) should specify which parts of the paper were prepared with AI assistance.
  4. Brief details should be provided as to how the AI tool was used.
  5. Although it does not have to be provided as part of the paper, it would be useful for the author(s) to retain logs of their AI interactions, in the same way that they would keep a record in a research/lab notebook.
  6. If the paper has multiple authors, then it should be assumed, unless otherwise stated, that all the authors are aware that an AI tool was used.

Guidelines for publishers on the use of Large Language Models

Publishers/journals should ensure that their advice to authors is clear and unambiguous, and that it specifies what actions may be taken if that guidance is not followed.


We would suggest that the guidelines from the publishers follow a similar model to those suggested above for the authors.


IEEE has published guidelines (archived here), which state:

The use of artificial intelligence (AI)–generated text in an article shall be disclosed in the acknowledgements section of any paper submitted to an IEEE Conference or Periodical. The sections of the paper that use AI-generated text shall have a citation to the AI system used to generate the text.


We welcome this statement but would argue that it does not go far enough.

At a minimum, we would suggest adding a sentence stating that, if it is discovered that this guidance has not been followed and full disclosure about the use of AI has not been given, then the paper could be retracted and the authors banned from submitting to journals published by that publisher for (say) five years.

Call to Action

Whether we like it or not, Artificial Intelligence is here to stay. We cannot stop it; we have to learn to live with it and accept it as part of our lives.
 
From a scientific publishing point of view, we urge all publishers to provide guidelines that state how authors should acknowledge the use of AI tools and, importantly, what the consequences of violating those guidelines are.
 
As an author, you should acknowledge the use of AI tools and be honest in how they have contributed to the article.
