A traffic light system for predatory publishers?

Introduction

One of the challenges our community faces is identifying predatory publishers, in a way that everybody agrees upon. Is it possible to come up with a way where we can definitively state whether a given journal or publisher is predatory? Moreover, would everybody agree with the classification of a given journal/publisher?

In this article, we discuss some of these issues and ask if there is an alternative way of classifying a predatory publisher/journal and, at least, start a discussion as to how these ideas could be developed.

Binary classification of predatory journals by Beall

If you know about Beall’s List (and if not, take a look at our article), especially the reason it was taken offline so abruptly, you will know that part of the reason is that Beall used a binary classification. Either a journal/publisher was predatory (and was on his list) or it was not (and was not on his list).

The fact that Beall was the only person deciding whether a journal/publisher was predatory was, in hindsight, unhelpful. He was a lone voice, so when somebody disputed whether a journal/publisher should be classified as predatory, the classification was difficult to defend.

If Beall had had a support team around him, perhaps his list would have lasted longer than it did; being a lone voice and using a binary classification both contributed to its ultimate demise.

Do we need to be definitive about whether a journal is predatory?

Do we really need to be definitive about whether a journal/publisher is predatory or not?

Or do we just need to be able to say “buyer beware” and suggest that others carry out their own checks and balances before deciding whether to submit to a given journal?

Indeed, you may also want to check before citing a paper from a possible predatory journal, but that is digressing from the main points we want to discuss in this article.

So, it could be argued that we do not need to be absolutely certain whether a journal is predatory or not. An element of doubt is enough: we can simply look for another journal in which we have more confidence, or carry out our own checks if we are really attracted to that journal.

Multiple points of view

One of the issues with Beall’s List was that he was the only person deciding whether a journal/publisher was predatory or not.

It would be more helpful if more people were involved in that decision, if nothing else so that the classification of a journal reflects the view of many people, which is arguably a better indicator than one person’s judgment.

There are a few ways this could be done, for example:

  • Wisdom of the crowds: This asks the opinion of as many people as possible, drawn from different sectors (such as authors, readers, editors, publishers etc.). The main idea is to gather a range of views; as the number of contributors increases, the aggregate judgment tends to become more reliable. If you have never heard of the wisdom of the crowds (or crowdsourcing), take a look at this article or this book (The Wisdom of Crowds). You might also want to read where it all started, with Francis Galton visiting a country fair in 1906.
  • Surveys on social media: Many platforms now support surveys, which are a form of crowdsourcing, with Twitter, LinkedIn and Facebook being the obvious ones.
  • Peer-reviewed articles: This is not something that has been done very much, but we would like to see it done more often. That is, a review of a publisher or journal that is itself subject to peer review and becomes part of the scientific archive.
    This has the benefit that more than one person has looked at the evidence and the conclusions drawn, and it provides a data point in the scientific literature that may be useful to other researchers.

Traffic light system for predatory publishers

Two of the main points we mention above are:

  1. A suggestion that we do not require a binary decision as to whether a publisher/journal is predatory. Rather, we just need some warning signs that it may be, so that further action can be taken.
  2. The classification should not be down to just one person but should involve as many people as possible.

We expand on each of these points below.

Classification

Rather than classifying publishers/journals with a binary classification (is it predatory, yes or no?), why not use a system with more categories, say a traffic light system where green says a journal is not predatory, red says that it definitely is, and amber says that the jury is out and further investigation is necessary?

Of course, even that is not ideal. The main problem is that the red (definitely predatory) and green (definitely not predatory) classifications will still not be agreed by everybody, so there will be a tendency to push more journals towards amber than one might like. Extending the range of options (say to a five-category system) gives a little more latitude, but the end points suffer from the same issue.

However, we still argue that a non-binary system is better than a binary classification.
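To make the idea concrete, here is a minimal sketch of how such a non-binary classification might work in practice, assuming ratings were collected from many reviewers (for example via a survey). The 0–10 scale, the vote threshold and the cut-off values are all invented for illustration; they are not part of any proposed standard.

```python
def traffic_light(scores):
    """Map a list of 0-10 'predatoriness' ratings to a traffic-light label.

    0 means certainly legitimate, 10 means certainly predatory.
    With too few votes, or a middling average, we stay at amber --
    reflecting the point above that the end categories (definitely
    predatory / definitely not) are the hardest to agree on.
    """
    if len(scores) < 5:  # too few opinions to call it either way
        return "amber"
    avg = sum(scores) / len(scores)
    if avg <= 2.0:
        return "green"
    if avg >= 8.0:
        return "red"
    return "amber"

print(traffic_light([1, 0, 2, 1, 1]))   # green
print(traffic_light([9, 8, 10, 9, 8]))  # red
print(traffic_light([3, 7, 5, 6, 4]))   # amber
```

The deliberately wide amber band is the design choice that matters: it encodes “buyer beware, do your own checks” as the default outcome, with red and green reserved for cases where many reviewers broadly agree.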

More than one viewpoint

As already mentioned, Beall was the only person deciding whether a publisher/journal was predatory or not.

We have already suggested how this could be expanded (wisdom of the crowds, surveys and peer review).

Of the three, we favor peer review. This may not gather as many viewpoints as a survey, but at least we know who wrote the article and (at least the editors know) who reviewed it. The evidence that supports the conclusions drawn is also part of the scientific archive and can be referenced by others.

Leave it with us

We would like to say “Leave this to us, we’ll implement something that enables a traffic light system to be assigned to each publisher/journal.”

However, we are realistic enough to recognize (at least) two issues with this.

  1. If we try to do something like crowdsourcing (wisdom of the crowds), we are not drawing from a diverse enough population (the people who follow our Twitter feed and read our blog are probably biased; not in a bad way, but they are likely to think as we do). We may also struggle to get a large enough sample size.
    Our preferred option is peer-reviewed articles, but this is not something we can do on our own, so we would encourage you to join us in this ambition.
  2. Given the large number of predatory publishers/journals there are (Cabells say (23 Nov 2022) “The searchable Journalytics database includes 18 academic disciplines from more than thirteen thousand international scholarly publications.“), we do not have the resources to look at all possible journals within realistic timescales.

Call to action

If you agree (or even disagree) with the points we make in this article, we would welcome your thoughts. We would also encourage you to consider how the classification of predatory publishers/journals could be done in a way that improves on current methodologies.

How can you help us?

We would welcome comments on this article (in fact any article) via our Twitter accounts.

You may have noticed that we do not enable comments on our blog posts. This is due to the spam they attract and the time it would take to moderate them. We also know from personal experience that authors would like their comments to appear instantly and can be frustrated when they do not.

You can email us at admin@predatory-publishing.com. We don’t monitor that account on a daily basis, but we do read everything that is sent, even if we do not respond.

We would also ask you to consider supporting us as a patron. It would really help us to continue, and develop, the work that we do.