In November 2023, Times Higher Education published an article titled “Science journals overturn ban on ChatGPT-authored papers.” You can read the article here.
This goes against what most other journals are doing. They do not allow generative AI content “as-is”, and they also require an acknowledgement when large language models are used.
Our Thoughts
When we read this article, we shared our thoughts in a tweet.
- It seems a very significant change of direction. Whether that is good or bad, only time will tell.
- The key, for us, is that the use of Artificial Intelligence, Large Language Models, ChatGPT, etc. must be declared, and in a transparent way.
- The problem is that there are people out there who will not declare that they have used AI.
- Let’s assume an author does this and is later caught. What should happen?
- In our view, the paper should be retracted immediately: no questions asked, no long, drawn-out investigation. Retract the paper, with the reason stated as “The undeclared use of AI.”
The question remains, though: should authors be able to use ChatGPT (and other such tools) without changing the text and without having to declare it?
We recently tweeted about Som Biswas, who used ChatGPT to generate papers. We have quite a few thoughts on this (see the tweet), but the one question we would ask is:
If anybody can use ChatGPT, is there anything stopping anybody from “writing” a paper and publishing it, even if they have no knowledge of, or expertise in, the topic being written about?