Why (and how) would somebody write more than one letter every day to scientific journals?

Header image: four letters to scientific journals, and a screenshot from Scopus showing how many letters have been published.

Introduction

We have no evidence that anything is wrong here; we just want to seek the views of others. Please take a look at the details below and see what you think.

Letters written

So far this year (as at 16 Aug 2024, extracted from Scopus), Wiwanitkit, V. has published 379 ‘documents’. Of those, 345 are letters.

That is, in 2024 to date (as at 16 Aug, 229 days into the year), he has published about 1.51 letters each day. By way of comparison, in 2023 he published 414 letters, which is about 1.13 letters each day.
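For anyone who wants to check the arithmetic, the per-day figures follow directly from the counts quoted above; this short Python snippet simply restates those numbers (16 Aug is day 229 of 2024):

    # Reproduce the letters-per-day figures quoted above.
    from datetime import date

    letters_2024 = 345                                              # letters to 16 Aug 2024 (Scopus)
    days_2024 = (date(2024, 8, 16) - date(2024, 1, 1)).days + 1     # 229 days, inclusive

    letters_2023 = 414                                              # letters in 2023 (Scopus)
    days_2023 = 365

    print(f"2024 to date: {letters_2024 / days_2024:.2f} letters per day")  # ~1.51
    print(f"2023:         {letters_2023 / days_2023:.2f} letters per day")  # ~1.13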

Example Letters

By way of example, the header image shows four of the letters.

These were not chosen systematically; we simply took the first four returned by Scopus. Two of the letters acknowledge the use of an AI tool and two do not; at least, there is no acknowledgement.

Our questions

  1. What is the benefit of writing so many letters?

    Okay, it boosts the number of publications you have, but letters are not counted in the research returns we are familiar with. For example, we don’t think that QS
    recognizes letters when it looks at how many articles have been published by a given institution.

  2. What is the workflow for publishing letters?

    Does he look at the new articles published each day and write a letter about one of them? To us, that is a lot of time and commitment, but we can’t think of another way.

  3. The letters appear to follow a similar format.

    We have not looked at all the letters, only a small sample, but they follow a similar format: they say they are responding to the article, highlight one or two ‘flaws’ and suggest some other work that could be done.

  4. AI tools are used – sometimes.

    Two of the letters we looked at (see the green highlight) say “The author use language editing computational tool in preparation of the article.”

    We are not quite sure what this means. Does it mean that (something like) ChatGPT was used to correct a draft that had already been written? Or does it mean something else?

    We have previously suggested that an acknowledgement that a Large Language Model (LLM) has been used should be much more detailed (e.g. show the prompt and the generated text in a supplementary file). Moreover, if an LLM was not used, there should be an explicit statement to that effect.

  5. There is an alternative workflow.

    i) Get a daily (or weekly) list of relevant newly published articles.
    ii) Develop a set of LLM prompts that ask the model to analyze each article and write a letter that gives an introduction, suggests a few ‘flaws’ and proposes further work.
    iii) Wrap it all up as a letter and send it to the relevant journal. We still think this would take at least an hour, but with a team of people perhaps that is a good investment? A rough, purely illustrative sketch of how steps i)–iii) might look is given after this list.
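To make point 5 concrete, here is a minimal, purely illustrative Python sketch of steps i)–iii). Nothing in it is taken from the letters themselves: fetch_new_articles is a hypothetical stand-in for a daily alert feed, the prompt template is our guess at the kind of instruction that would produce letters in the format described in point 3, and the actual LLM call and journal submission are deliberately left as placeholders.

    # Purely illustrative sketch of the workflow speculated about in point 5.
    # fetch_new_articles() is a hypothetical stand-in for a daily alert feed;
    # the LLM call and the submission step are deliberately left out.

    PROMPT_TEMPLATE = """You are writing a letter to the editor about the article below.

    Title: {title}
    Abstract: {abstract}

    Write a short letter that (1) introduces the article, (2) points out one or
    two possible flaws or limitations, and (3) suggests further work."""


    def fetch_new_articles(day: str) -> list[dict]:
        """Hypothetical: return metadata for articles published on the given day."""
        return [
            {"journal": "Some Journal", "title": "An example article", "abstract": "..."},
        ]


    def draft_letters(day: str) -> list[dict]:
        """One prompt (and hence one candidate letter) per newly published article."""
        drafts = []
        for article in fetch_new_articles(day):
            prompt = PROMPT_TEMPLATE.format(title=article["title"], abstract=article["abstract"])
            # letter_text = call_some_llm(prompt)  # placeholder for whatever model might be used
            drafts.append({"journal": article["journal"], "prompt": prompt})
        return drafts


    if __name__ == "__main__":
        for draft in draft_letters("2024-08-16"):
            print(f"Draft prompt prepared for {draft['journal']}")

Even automated to this degree, each draft would still have to be checked, formatted for the journal’s submission system and submitted by hand, which is why we think step iii) would still take a meaningful amount of time per letter.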

Your thoughts/views

Please let us know what you think about what we say above.

Graham Kendall

I have been an academic for the past 20+ years. Prior to this I worked in the IT industry. As an academic I have held several senior positions, worked internationally and have (I believe) a strong publication record. See: Google Scholar | LinkedIn | ORCID
