Eleven papers are published every day with ChatGPT in the title

Graph showing how many papers with ChatGPT in the title

Last year, we regularly looked at how many papers had been published with ‘ChatGPT’ in the title. See, for example, https://x.com/fake_journals/status/1738469538969121100.

 

The graphic shows the current status as at 28 Jun 2024. It shows that four papers were published in 2022. In 2023, 2,660 papers were published and this year, so far, 2,127 papers have been published. There has even been one paper published in 2025.

 

It won’t be too long before the number of papers published in 2024 passes the number published in 2023, bearing in mind that we are only a little over halfway through 2024.

Some statistics

🔴In 2023, the 2,660 papers represent an average of 7.29 papers published each day.

 

🔴We are 180 days into 2024, so the 2,127 papers published so far represent 11.82 papers published each day.
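As a quick sanity check, the two daily rates can be reproduced with a couple of lines of arithmetic (assuming 365 days in 2023 and the 180 elapsed days of 2024 mentioned above):

```python
# Papers per day with 'ChatGPT' in the title, rounded to two decimal places
papers_2023, days_2023 = 2660, 365   # full year
papers_2024, days_2024 = 2127, 180   # as at 28 Jun 2024

print(round(papers_2023 / days_2023, 2))  # 7.29
print(round(papers_2024 / days_2024, 2))  # 11.82
```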

Simply declaring the use of ChatGPT is not enough

Two requirements when authors use ChatGPT

Our tweet

In one of our tweets we said that merely acknowledging the use of ChatGPT (and, by extension, other Large Language Models) is not enough.

The issue

Most journals, if not all, now require authors to declare the use of Large Language Models such as ChatGPT, Bard etc. This is a good start, but does it go far enough? We have come to the conclusion that it does not.

 

Let us take an extreme example. Actually, perhaps it is not that extreme – perhaps this is the norm. An author generates an entire paper using ChatGPT, then they simply acknowledge the use of this large language model.

 

Now consider the case where an author generates one paragraph using ChatGPT – say 5% of the paper – and then heavily edits that paragraph. They do the right thing and acknowledge the assistance of ChatGPT.

 

These two scenarios are obviously very different, and yet the acknowledgement could read exactly the same: something bland such as “ChatGPT 3.5 was used to assist in the writing of this paper.”

 

With such a bland acknowledgement, the reader will not know how much of the paper was written with ChatGPT assistance, or which parts of the paper had received help.

 

We would suggest that something stronger is required. As an opening gambit, we present our suggestion here.

The solution

We suggest that something above and beyond a simple acknowledgement is required.

The main thrust of the suggestion is shown in the header image but can be stated as:

🔴An acknowledgement is still required.

 

🔴In addition, the author(s) are required to supply the ChatGPT prompts that were used as well as the text that this generated. This enables the reviewers to know which parts of the article had help from a large language model and also to see how much that generated text had been edited.

 

🔴We also suggest that the prompts and generated text are uploaded as supplementary files. These can be made available to the reviewers and also the readers, should the paper be accepted.

These files might be useful to future researchers studying the use of large language models and how the area has developed. That is, they might help researchers who are not interested in the paper’s own topic but who work in other disciplines.

 

Your view?

We would welcome your views on this suggestion, either via our Twitter (X) account or via email at admin@predatory-publishing.com.

Can anybody now write a scientific paper?

An image showing four elements of a paper written using ChatGPT

The paper shown in the header image was generated with ChatGPT.

🔴The author openly acknowledges this and also adds some commentary of his own.

🔴The paper contains no references.

🔴Should this type of paper be in the scientific literature?

Here is some further information and some thoughts we had.

1️⃣The paper we are looking at (see 1 in the image) is “ChatGPT and the Future of Medical Writing” by Som Biswas. You can access the paper here: doi.org/10.1148/radiol.

 

2️⃣We chose this paper after seeing a tweet by @MishaTeplitskiy. Unfortunately, his tweet has since been deleted, so we cannot link to it.

3️⃣Essentially the tweet says that an author (Som Biswas) has published a whole load of papers in 2023 that have a similar theme. That is, “ChatGPT and the [insert domain here]”.

4️⃣You can see the Google Scholar profile of the author (see 2 on the image). This profile appears to have been deleted but we archived it on Wayback Machine: web.archive.org/web/2023121609

5️⃣If you look at the paper, the author acknowledges that he used ChatGPT to write the paper (see 3 on the image). Not only is this acknowledged in the acknowledgments section but it is also stated in the paper itself. This is a credit to the author.

6️⃣The paper contains zero references. There is a section called “Fabricated References” (see 4 on the image).

The text in the paper says “Three fabricated references that are not cited in the body of this article:” We are not quite sure what this means. Does it mean that ChatGPT generated these references but the author just noted them, but did not use them in the body of the article?

Just to note, the references are valid papers.

7️⃣This paper has been cited 263 times (see 2 in the image). This seems a lot for a paper published in 2023 (and this figure was captured on 16 Dec 2023). We are not suggesting anything is wrong but it does suggest that a deeper investigation might be worthwhile.

8️⃣This paper does acknowledge the use of ChatGPT in writing the paper but

i) do all his similar papers do this and

ii) if the author says he edited the ChatGPT-generated text, is there any way of knowing how much editing has been done?

So what?

So, why are we raising this? Here are a few things you might want to think about.

The point of the original tweet was to raise the fact that this is a prolific author, especially in 2023.

 

Many of the papers are around ChatGPT and how it relates to a given domain. They are all (we think – we need to do a full investigation) generated by ChatGPT.

 

Our question is “What is to stop anybody generating papers using ChatGPT, on any subject, and publishing them?”

 

Are we entering a time when anybody can publish in the scientific literature just by using a large language model?

 

This is especially a concern when there is no literature review/related work section and the paper contains no references.

Final Note: This article is based on one of our previous tweets.

Some journals now allow ChatGPT-authored content

Computer screen showing an image of ChatGPT

In November 2023, Times Higher Education published an article titled “Science journals overturn ban on ChatGPT-authored papers.” You can read the article here.

 

This goes against what most other journals are doing. They do not allow generative AI content “as-is” and also require an acknowledgement when large language models are used.

Our Thoughts

In a tweet, we presented our thoughts on reading this article.

  1. It seems a very significant change of direction. Whether that is good or bad, only time will tell.
  2. The key, for us, is that the use of Artificial Intelligence, Large Language Models, ChatGPT etc. must be declared, and in a transparent way.
  3. The problem is that there are people out there who will not declare that they have used AI.
  4. Let’s assume that this is done and they later get caught. What should happen?
  5. In our view, the paper should be retracted immediately, no questions asked, no long drawn-out investigation. Retract the paper, with the reason being stated as “The undeclared use of AI.”

The question remains, though: should authors be able to use ChatGPT (and other such tools) without changing the text and without having to declare it?

We recently tweeted about Som Biswas, who used ChatGPT to generate papers. We have quite a few thoughts on this (see the tweet) but the one question we would ask is:

 

If anybody can use ChatGPT, is there anything stopping anybody “writing” a paper and publishing it, even if they have no knowledge, or expertise, on the topic being written about?

Detecting AI generated papers: Two recent articles

Two papers that address the topic of detecting AI generated text
Introduction
As well as using generative AI (Artificial Intelligence) to help write papers, there is also active research which looks at ways to detect papers that have been written using AI tools. This article highlights two such papers.
 
ChatGPT and Large Language Models (LLMs)
ChatGPT, and other LLM tools, are already having a massive impact on the world at large and the scientific community is, if anything, more affected than many other sectors.
 
Detecting papers written with AI Tools
There is no doubt that AI tools can be of benefit to those writing scientific papers, but there is a growing need to detect when these tools have been used.
 
Why? There are many reasons, but one is being able to check papers where authors have not declared that they used generative AI to assist in writing.
 
Many journals ask that the use of AI is declared. Just as plagiarism-detection tools exist, it would be useful to have a tool to check whether a paper has, even partly, been written using one of the multitude of AI tools that are now available.
Detecting AI-generated text is not an easy task, however, which has given rise to research on this topic.

 

 
Two recent papers
The two papers below are just a couple of recent examples of work addressing this topic.
 
1) Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text (DOI: https://doi.org/10.1007/s40979-023-00140-5)
 
Abstract: The proliferation of artificial intelligence (AI)-generated content, particularly from models like ChatGPT, presents potential challenges to academic integrity and raises concerns about plagiarism. This study investigates the capabilities of various AI content detection tools in discerning human and AI-authored content. Fifteen paragraphs each from ChatGPT Models 3.5 and 4 on the topic of cooling towers in the engineering process and five human-written control responses were generated for evaluation. AI content detection tools developed by OpenAI, Writer, Copyleaks, GPTZero, and CrossPlag were used to evaluate these paragraphs. Findings reveal that the AI detection tools were more accurate in identifying content generated by GPT 3.5 than GPT 4. However, when applied to human-written control responses, the tools exhibited inconsistencies, producing false positives and uncertain classifications. This study underscores the need for further development and refinement of AI content detection tools as AI-generated content becomes more sophisticated and harder to distinguish from human-written text.
 
 
2) (Mis‑)Classification of 17,721 Journals by an Artificial Intelligence Predatory Journal Detector (DOI: https://doi.org/10.1007/s12109-023-09956-y)
 
Abstract: The SCImago Journal Rank (SJR) ranks journals into four quartiles (Q1–Q4). SJR serves as a safelist for journal selection when trying to avoid predatory journals, as journals that have been indexed by SJR are seen as having stringent publishing standards. An AI-based tool, the Academic Journal Predatory Checking System (AJPC), claims to be able to differentiate suspected predatory journals from normal journals. In this study, we asked (25 March 2023) AJPC to classify the top 2500 journals (based on SJR ranking). We found that 65.64% of these journals were classified as “suspected predatory”, 33.28% were classified as “normal”, while 1.08% of the journals could not be classified. A follow-up run (30 March 2023) returned different results (89.20%, 10.16% and 0.64%). This set of results is worrying, not only because of the number of SJR journals that are thought to be “predatory”, but also because of the difference in results when using the same tool a matter of days apart. We extended our analysis and looked at seven stand-alone publishers (Elsevier, Frontiers, MDPI, OMICS, Springer Nature (incl. Nature Portfolio), Taylor & Francis and Wiley). In total, we asked AJPC to classify 17,721 journals. The results confirm that we should be concerned about the classifications provided by AJPC. For example, 100% (4756) of Elsevier journals were classified as “suspected predatory”. Even Springer Nature (which includes the journal that published the AJPC paper) has journals which were classified as “suspected predatory”. We thus urge caution against relying on AJPC at present. We also urge the AJPC’s authors to provide an explanation as to why a journal is classified in a particular way, as part of their user interface. We are willing to conduct further analyses should AJPC be revised and updated, as it could provide an important service to the scholarly community.
 
Finally
If you are interested in this area of research, these two papers might be good starting points, especially if you also access the papers that they cite.
 
 

1,000 ChatGPT papers have been published this year, so far

Image of the most highly cited ChatGPT paper as at 22 Aug 2023

Almost 1,000 papers have been published THIS YEAR that have ChatGPT in the title.  As at 22 August 2023, the actual number of papers published is 985.

Since ChatGPT came to the general public’s attention in November 2022, it has been subject to scrutiny from almost all walks of life. The research community has not been slow to investigate this AI tool. 

 

Looking at Scopus and ONLY searching for ChatGPT in article titles (so NOT looking at the abstracts and keywords), the search returned 985 papers. Moreover, only three papers were published in 2022, so 982 papers have been published this year (and counting – this data was extracted on 22 Aug 2023).

 

Figure 1 shows the top five papers with regard to the number of times they have been cited. As can be seen, they are receiving a lot of attention. It is interesting to note that four of the papers were published in Nature and the other paper (the top cited paper) was published in Science.

The top five ChatGPT papers in 2023 (as at 22 Aug 2023)
Figure 1: Top five ChatGPT papers in 2023, as at 22 Aug 2023

The top five papers

Here are links to the five papers listed above.

  1. ChatGPT is fun, but not an author: https://dx.doi.org/10.1126/science.adg7879
  2. ChatGPT: five priorities for research: https://dx.doi.org/10.1038/d41586-023-00288-7
  3. ChatGPT listed as author on research papers: many scientists disapprove: https://dx.doi.org/10.1038/d41586-023-00107-z
  4. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use: https://dx.doi.org/10.1038/d41586-023-00191-1
  5. Abstracts written by ChatGPT fool scientists: https://dx.doi.org/10.1038/d41586-023-00056-7

Do you want to see all the papers?

We are testing a new service, where you can request a full list of the ChatGPT papers that have been published in 2023. If you would like to receive this list, please go to: predatory-publishing.com/request-more-i

 

… we have to say that this is the first time we have done this, so please bear with us and we apologise if there are teething problems.

How can I check if a journal is predatory? Advice from ChatGPT

ChatGPT logo and a bullet list

Ask ChatGPT if a journal is predatory

Out of interest, we asked ChatGPT what advice it would give in order to identify a predatory journal.

 

The exact question we asked was “How can I check if a journal is predatory?”

 

We asked the question on 23 June 2023, using ChatGPT 3.5.

 

The response we got is shown below.

The response from ChatGPT when asked how to check if a journal is predatory

To determine if a journal is predatory, you can consider the following steps:

  1. Research the Journal: Look for information about the journal’s reputation, history, and editorial board. Search for their website and explore the content available.

  2. Examine the Journal’s Website: Assess the overall appearance and functionality of the journal’s website. Predatory journals often have poorly designed websites, numerous grammatical errors, and lack transparency in terms of contact information or journal policies.

  3. Review the Scope and Focus: Evaluate the journal’s scope and the topics it claims to cover. Predatory journals may have an overly broad or unrelated scope, aiming to attract submissions from a wide range of subjects without specific expertise.

  4. Editorial Board: Check the composition of the editorial board. Look for well-known researchers or experts in the field. If the journal lists individuals without their consent or fails to provide adequate information about the board members, it may raise concerns.

  5. Publication Fees: Predatory journals are often profit-driven and may charge high publication fees without providing adequate services in return. Excessive fees combined with low-quality publishing practices can be indicative of a predatory journal.

  6. Peer Review Process: Investigate the journal’s peer review process. Reputable journals have a rigorous peer review system to ensure the quality of published articles. Predatory journals may claim to have peer review but have a superficial or nonexistent review process.

  7. Indexing and Impact Factor: Check if the journal is indexed in reputable databases such as PubMed, Scopus, or Web of Science. Predatory journals often falsely claim indexing in well-known databases or mention irrelevant indexing services.

  8. Solicitation Emails: Be cautious if you receive unsolicited emails inviting you to submit your work to a journal you have never heard of. Predatory journals often use aggressive email campaigns to attract authors.

  9. Research the Journal’s Reputation: Look for discussions or reviews about the journal from researchers in your field. Check reputable forums, social media groups, or online communities where academics share their experiences with journals.

  10. Consult Colleagues and Mentors: Seek advice from trusted colleagues, mentors, or faculty members who are familiar with publishing practices in your field. They may have valuable insights or recommendations regarding the journal you are evaluating.

By following these steps, you can gather information and make an informed judgment about the legitimacy of a journal. It’s crucial to be vigilant and conduct thorough research before submitting your work to any publication.

Our Comments

All this makes sense to us and it is good advice.

 

An alternative is to look through the scientific literature and find the many checklists that have been published. In our view, the suggestions made by ChatGPT are pretty representative of many of the checklists that you will find.

If you use ChatGPT, you must acknowledge it

A chimps face with a slogan next to it

Artificial Intelligence is here to stay

Whether you are a fan of AI or not, whether you believe it should be used as part of the research process or not, and whether you have experimented with these tools or not, there is one certainty: AI is here and it is here to stay.

The question is, how do we deal with these new tools?

What are Artificial Intelligence's use cases in scientific publishing?

It’s generally accepted (for now anyway) that ChatGPT (or more generally large language models – we will use the terms interchangeably) is not capable of producing insights, or new knowledge, that would warrant publication in a peer reviewed scientific paper.

 

However, there are many ways that ChatGPT can help scholars, including (but certainly not limited to):

  1. Given the main text of your paper, it could write an abstract.
  2. Given an abstract, a Large Language Model can suggest a title.
  3. Given an abstract, an LLM can suggest the main headings you might consider for your paper.
  4. Given various parts of your paper, ChatGPT can write a conclusion section.
  5. Given some text outlining the points you want to make, an LLM can draft a section of the paper for you.
  6. If you are struggling with some code, you can ask ChatGPT to write the code for you.
  7. If you are not happy with your text, you can ask ChatGPT to summarize it in the form of a bulleted list.
  8. An LLM can suggest ideas to develop your paper further, which could then be used as the basis of a Future Work section.
  9. Ask ChatGPT to suggest the best way to analyze some data that you will use in the paper.
  10. You can ask the LLM to rephrase text so that the writing is more formal, and more suitable for a scientific journal.

There are many (many, many) videos on how you can use ChatGPT for your research. If you want to get lost in the myriad of material that is out there, the four videos below might be good starting points. We’ll see you in a few hours :-).

The author's responsibility when writing a scientific paper

For each of the above use cases it could be argued that these are the responsibility of the author. In fact, let’s be clear, they are the responsibility of the author, and let nobody tell you otherwise. You can delegate these tasks to AI but the final paper is the responsibility of the author(s).

 

Most journals now state that ChatGPT cannot be an author on a paper, with two often-used reasons being:

  1. The AI engine cannot be held responsible for what is written
  2. The copyright of the text from the AI engine cannot be attributed to the AI; therefore, the author(s) hold the copyright.

Here are a couple of examples about authorship from well-known publishers: Elsevier (archived here) and Taylor & Francis (archived here).

Guidelines for author(s) who use a Large Language Model

If authors use an AI tool to help write their paper, there is nothing wrong with that, but its use must be acknowledged in the paper.

 

We would go slightly further and suggest the following guidelines:

  1. The author(s) must acknowledge that they used an AI tool in the preparation of their paper.
  2. The author(s) should provide details about the AI tool that was used. For example “ChatGPT version 3.5”
  3. The author(s) should specify which parts of the paper were prepared with AI assistance.
  4. Brief details should be provided as to how the AI tool was used.
  5. Although it does not have to be provided as part of the paper, it would be useful for the author(s) to retain logs of the AI tool, in the same way that one would make a record in a research/lab notebook.
  6. If the paper has multiple authors, then it should be assumed, unless otherwise stated, that all the authors are aware that an AI tool was used.
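To illustrate point 5, one lightweight way an author might keep such a record is a structured log with one entry per AI interaction. This is only a sketch of our own devising; the field names are our suggestion, not a requirement of any journal or publisher:

```python
import json
from datetime import datetime, timezone

# A hypothetical log record for one AI interaction; every field name here
# is our own suggestion, not a standard required by any publisher.
entry = {
    "timestamp": datetime(2024, 6, 28, 10, 30, tzinfo=timezone.utc).isoformat(),
    "tool": "ChatGPT 3.5",                        # point 2: which tool and version
    "prompt": "Suggest a title for the attached abstract.",
    "response": "(generated text pasted here)",
    "paper_section": "Title",                     # point 3: which part of the paper
    "edited_afterwards": True,                    # was the output manually revised?
}

# One JSON line per interaction makes the log easy to append to, and easy to
# share later as a supplementary file should a journal ask for it.
print(json.dumps(entry))
```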

Guidelines for publishers on the use of Large Language Models

Publishers/journals should ensure that their advice to authors is clear and unambiguous, and that it states what actions may be taken if that guidance is not followed.

 

We would suggest that the guidelines from the publishers follow a similar model to those suggested above for the authors.

 

IEEE have some guidelines (archived here), which state:

The use of artificial intelligence (AI)–generated text in an article shall be disclosed in the acknowledgements section of any paper submitted to an IEEE Conference or Periodical. The sections of the paper that use AI-generated text shall have a citation to the AI system used to generate the text.

 

We welcome this statement but would argue that it does not go far enough.

At a minimum, we would suggest adding a sentence stating that if it is discovered that this guidance has not been followed, and full disclosure about the use of AI has not been given, then the paper could be retracted and the authors banned from submitting to journals published by that publisher for (say) five years.

Call to Action

Whether we like it or not, Artificial Intelligence is here to stay. We cannot stop it and we have to learn to live with it and accept it as now part of our lives.
 
From a scientific publishing point of view, we urge all publishers to provide guidelines that state how authors should acknowledge the use of AI tools and, importantly, what are the consequences of violating those guidelines.
 
As an author, you should acknowledge the use of AI tools and be honest in how they have contributed to the article.

How do you spot a ChatGPT paper? Here is one easy way

Header image for article 152

Unless you have been living on Mars, you cannot have avoided the discussion over ChatGPT (and other large language models) that has been going on for the past few months.

 

The discussion is not only about the uses that ChatGPT can be put to, but also about how content generated with these types of tools can be identified, which is important in the context of scientific publishing.

 

In this article, we look at a journal which has published a paper generated by ChatGPT. We have to say that this is an obvious example, due to the naivety of the author and the lack of peer review. But it should serve as a warning that journals/authors are already publishing papers written by large language models, and they will not all be so easy to spot.

Are we allowed to use ChatGPT?

The short answer is no, or at least AI tools do not meet the conditions to be an author on a paper. This is stated by many organizations, such as COPE (Committee on Publication Ethics), which says:

 

“AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work. As non-legal entities, they cannot assert the presence or absence of conflicts of interest nor manage copyright and license agreements.” [See here]

 

The same COPE web page goes on to say:

 

“Authors who use AI tools in the writing of a manuscript, production of images or graphical elements of the paper, or in the collection and analysis of data, must be transparent in disclosing in the Materials and Methods (or similar section) of the paper how the AI tool was used and which tool was used. Authors are fully responsible for the content of their manuscript, even those parts produced by an AI tool, and are thus liable for any breach of publication ethics.”

 

Many other publishers and organizations have also come out with similar statements.

How can you spot a ChatGPT paper?

There is a lot of work being done on how to detect AI written papers. One of the world’s leading plagiarism detectors (Turnitin) is looking to upgrade its tool to spot AI generated content.

 

This is to be welcomed, although it is likely to lead to an arms race, with as much work being done on how to avoid detection by tools such as Turnitin as on developing the tools that detect this sort of content. You only need to search for how to avoid plagiarism detectors to see that this has been going on for many years.

 

However, there are times when we do not need these tools: you just need to look at the paper itself and apply some common sense.

 

But, first, let’s take a look at a specific journal.

Journal of Internet Banking and Commerce

The focus of this article is the Journal of Internet Banking and Commerce (ISSN: 1204-5357), as well as an article it published.

 

The journal is well established, having first been published in 1996.

Google Scholar

If you look at the journal’s Google Scholar page, there are articles going back to 1970 (see Figure 1). We note, though, that the papers marked as 1970 have volume numbers 12 and 13, which correspond to 2006 and 2007; those papers also appear in those years, so it looks like a Google Scholar metadata issue.

We also note that there are no papers on their Google Scholar profile beyond 2021 (see Figure 2).

Google Scholar page for the Journal of Internet Banking and Commerce
Figure 1: Google Scholar page for the Journal of Internet Banking and Commerce showing that some papers date back to 1970, even though it did not start publishing until 1996
Figure 2: Google Scholar page for the Journal of Internet Banking and Commerce showing that no papers are indexed after 2021

Indexing in Scopus

The Journal of Internet Banking and Commerce was indexed in Scopus from 2009 to 2016, but was then discontinued (see Figure 3).

Journal of Internet Banking and Commerce: Scopus Coverage
Figure 3: Journal of Internet Banking and Commerce: Scopus Coverage

Article Processing Fees

The Article Processing Charge (APC) for the Journal of Internet Banking and Commerce is USD 2,019, and the journal also participates in the Fast Editorial Execution and Review Process: for an extra USD 99 (payable at submission time) it provides a review in 3-5 days and publication two days later.

We have archived the journal’s APC page here.

H-index

We mention in passing, as this always frustrates us, that the journal says (see archived page here) that:

Articles published in Journal of Internet Banking and Commerce have been cited by esteemed scholars and scientists all around the world. Journal of Internet Banking and Commerce has got h-index 35, which means every article in Journal of Internet Banking and Commerce has got 35 average citations.

This is simply the wrong definition of the h-index.
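For clarity: an h-index of 35 means the journal has published 35 articles that have each been cited at least 35 times; it says nothing about the average number of citations. A minimal sketch of the correct computation (the citation counts are illustrative only):

```python
def h_index(citations):
    """h-index: the largest h such that h items each have >= h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Five illustrative papers: the average is 31.2 citations per paper,
# yet the h-index is only 3 -- the two figures measure different things.
print(h_index([100, 50, 3, 2, 1]))  # 3
```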

The Article

The article that we are interested in is titled “Stock Price Prediction based on Gradient Descent using a Back Propagation Neutral Network”. The citation is:

 

Selvamuthu D. (2023) Stock price prediction based on gradient descent using a back propagation neutral network. Journal of Internet Banking and Commerce 28(1).

 

You can access the article here. We have also archived the web page here. The PDF, as well as being available via the journal’s web site, is also available here.

Stock Price Prediction based on Gradient Descent using a Back Propagation Neutral Network
Figure 4: Screenshot of the journal article, highlighting that it was generated with ChatGPT (or similar)
Figure 4 shows a screenshot of the article. The part we draw your attention to is the text highlighted in yellow, which says:
 
As an AI language model, I can give you some information …
 
It is obvious that this text has been generated by ChatGPT (or some other language model). Looking at the rest of the paper we would guess that the entire paper has been generated by ChatGPT (or similar).
 

Our Comments/Thoughts

We have the following comments about this journal/paper.

  1. The paper cannot have undergone any peer review. Surely, any cursory read (by a reviewer, for example) would have highlighted the fact that the paper had been written by ChatGPT (or similar).
  2. This area (Artificial Neural Networks – ANN) is something we know a little about (otherwise we would not comment on the technical content of the paper). The paper is very naive and could have been written in the 1980s, when interest in ANNs started to gain traction. Even so, the paper is really nothing more than a general introduction to ANNs. There is certainly nothing that is reproducible.
  3. The references are all quite old, with only one from 2021.
  4. If the author, their institution or another stakeholder paid over USD 2,000 to have this published, then they wasted their own money or, perhaps more likely, the money of taxpayers. Of course, the journal will be very happy to have received this money. We can’t help feeling, though, that the money could have been used for much better things.

Finally

If the journal, publisher or author feel that we are wrong in our analysis, or are being unfair, then we would be delighted to enter into a conversation and withdraw or update this article as a (possible) outcome.

Acknowledgements

We were motivated to write this article after seeing a LinkedIn post by Matt Hodgkinson. You can see the post here (assuming you have access to LinkedIn). Matt’s post not only features the paper we mention above but also has other examples which, if not so worrying, would be funny.

What does ChatGPT say about Frontiers Media?

A recent tweet by Mark A. Hanson posed the question “What is your opinion of Frontiers Media as a publisher?” The survey allowed one of four answers:

  1. Very reputable publisher
  2. Reputable publisher
  3. Quasi-predatory publisher
  4. Predatory publisher

The survey was posted on the 27 Jan 2023, and ran for (we believe) seven days.

 

Before showing you the results of the survey, we thought it would be an interesting exercise to see what ChatGPT said about this publisher, to see whether there is any correlation between the Twitter survey and ChatGPT.

What is your opinion of Frontiers Media as a publisher?

This was the question asked in the Twitter survey. We thought we would ask ChatGPT the same question. The question was asked on 31 Jan 2023. The response from ChatGPT was as follows.

 

As a language model AI, I don’t have personal opinions or biases. However, Frontiers Media is a large open-access publisher that has been in operation for over a decade, and has published many high-quality research articles across a wide range of academic disciplines. Some researchers and academics consider it to be a reputable publisher, while others have raised concerns about its business model and the peer review process. Ultimately, the reputation of a publisher, including Frontiers Media, depends on the individual perception of the research community and the quality of the articles it publishes.

 

We should note that this question was posed in a new ChatGPT session. That is, ChatGPT was not given access to its previous answers, so it did not respond in the context of earlier answers.

Is Frontiers Media a predatory publisher?

After starting a new chat, we posed the question above (31 Jan 2023). The response was as follows:

 

Frontiers Media has received some criticism and allegations of being a predatory publisher, however, the company has denied these claims and states that it follows rigorous peer-review processes. The definition of a predatory publisher is still subject to debate and it’s important to consider multiple sources before making a conclusion.

Would you have any advice for a researcher who is thinking of submitting a research paper to Frontiers Media?

Without starting a new chat, we posed the question above. ChatGPT responded as follows:

Here are a few tips for a researcher considering submitting a research paper to Frontiers Media:

  1. Check the reputation and publication standards of the specific journal within Frontiers Media you are considering submitting to, as the quality of journals can vary within the same publisher.

  2. Look for information about the peer-review process, editorial board, and publication fees.

  3. Check for any warnings or red flags on websites like ThinkCheckSubmit or Cabells blacklist.

  4. Consider the impact factor and citations of the journal and previous articles published in the same journal.

  5. Consult with your colleagues and mentors to see if they have any experience or insights on the journal.

  6. Read the author guidelines carefully, including information on open-access publishing fees and copyright policies.

  7. Make an informed decision based on your own research and professional judgement.

 

What did the Twitter survey say?

After the Twitter survey had closed, we took a look at the results (see figure below).

 

Results of a Twitter survey which asked "What is your opinion of Frontiers Media as a publisher?"

We should note that the survey has a small sample size (42 responses) and the results must be interpreted in that context.
The figures in red are the actual numbers of responses, which we give in addition to the percentages reported.

Other information sources

We don’t often cite Wikipedia (for all the well-known reasons: it is not peer reviewed, it is subject to change, etc.) but it is a go-to place for many people, so the Wikipedia Frontiers Media page might be worth a look, with all the caution that resource demands.

 

Kendall (2021) mentioned Frontiers in his paper on Beall’s legacy. That paper drew on Schneider (2017), which itself drew on Basken (2017). The evidence presented suggests that Beall’s addition of Frontiers Media to his list ultimately led to him taking the list down.

For the interested reader, the three papers above are worth a look. Frontiers Media has been subject to scrutiny before and has always strongly defended its position.

Some of the papers that have mentioned Jeffrey Beall and Frontiers Media

Conclusion

It is interesting to see that the Twitter survey broadly agrees with the views given by ChatGPT. Of course, this all has to be taken with a pinch of salt. The survey sample is relatively small and could be biased in a number of ways (the audience likely to see it, anonymous responses that allow conflicted parties to take part, etc.).

The ChatGPT comments should also be treated with caution. Where ChatGPT gets its information is unknown, there are no references to support its views, and ChatGPT has no information more recent than 2021.

 

Our advice: We cannot do better than ChatGPT, which says “Make an informed decision based on your own research and professional judgement.” It is important that, as researchers, we learn how to find journals and publishers that we are comfortable with. There is plenty of information and advice out there, as well as colleagues to consult, and it is incumbent on us to draw on those resources to decide which journals deserve our research.