Posted Wednesday, 8 Apr 2026 by Marit Moe-Pryce
The academic publishing system is coming apart at the seams under the flood of low-quality or simply fake articles currently in circulation. Over the past couple of years, since AI really took hold, the debate about the problem has grown increasingly intense worldwide.
Many serious academic journals are currently experiencing a marked increase in the number of articles being submitted; several have reported annual increases of 40 percent or more.
The situation is critical. The academic publishing system is in need of urgent life support, and a collective effort from the entire scholarly community is paramount.
The surge in AI-generated, or so-called ‘hallucinated’, references means that many academics now find themselves cited as the authors of articles and books they have never written (but perhaps should have written). Others receive peer review reports that seem to have been authored by AI.
This development has led researchers, research institutions – and those of us working in academic publishing – to pose an urgent question: Who is responsible for stopping this kind of ‘academic’ slop from getting published and thus gaining legitimacy?
In social media debates, many point to the obvious candidates: editors and peer reviewers. That’s all well and good, but not enough. What about the authors themselves and the publishers?
I believe we must urgently step up the debate on how this responsibility is distributed.
First and foremost, authors have an obvious responsibility when they turn to AI. They must be deliberate about how they use it. Not least, being transparent about which programmes they have used, and how, is essential. They must also be clear about what the consequences of such AI assistance are for the article they are sending out into the world.
In addition, peer reviewers must own their share of responsibility and contribute (even) more. As well as assessing an article’s academic quality, they must ensure that its references are correct and adequate: making spot checks of entries in the bibliography and flagging linguistic and other features that could indicate fraudulent use of AI. And since more than half of reviewers allegedly now use AI themselves for peer review, they must be extra critical of their own use of AI in this work.
Journal editors must nevertheless shoulder the lion’s share of the responsibility for weeding out dubious and fake articles before they are sent on for peer review. This is the only way to prevent peer-review fatigue, already a serious problem, from becoming endemic. Furthermore, there must be consequences for sloppiness and fraud. Journals need some form of information-sharing about sloppy articles, whether the result of carelessness or deliberate deceit, to prevent them from simply being resubmitted elsewhere immediately after rejection.
Editors must also keep up to date on how the so-called paper mills operate, adapt, and utilise AI to continue undermining research with their industrial-scale production of false ‘research articles’. They must guard against fraud by continually monitoring and updating their own routines, and they must help ensure that their respective academic communities are aware of the challenges we face.
Publishers have escaped criticism remarkably easily for their lack of engagement with poor and dishonest use of AI. Their business models have pivoted sharply from a focus on quality to one on quantity, largely as a direct consequence of political decisions to prioritise open access publishing. This has incentivised publishers to publish more, faster, and more cost-effectively.
Copy editing and proofreading in the traditional sense were perhaps the first expendable items to be cut by publishers in the name of profitability. Old-school copy editors, who cross-checked citations and references for authenticity, have largely disappeared. And so, in practice, all the work involved in quality assurance has shifted onto editors.
In my position, I have been a close observer of publishers’ copy editing over the years. It has become largely mechanical and is in no way adequate to uncover the use of AI.
Things were tolerable for a while, but the emergence of paper mills and AI has enabled shortcuts and dishonesty on an industrial scale. Academic communities can no longer hold the fort on their own.
Publishers, who continue to make a lot of profit from academic output, must be held accountable. Academic communities need to demand higher standards from publishers to ensure that the final technical product meets a standard that guarantees quality and integrity. Copy editing must include quality checks of the content.
So simple and yet so difficult.
Looking into the crystal ball, I predict that hallucinated references will soon be a thing of the past. AI will improve, and both honest researchers and fraudsters will soon know to weed out any such hallucinations before submitting their articles to journal editors.
However, this is no cause for celebration because, as a consequence, it will become even more difficult for editors and peer reviewers to uncover fraudulent and careless uses of AI.
Previously, I’ve argued that the only way to stop the paper mills is to eliminate the market in which they operate: the part of the publishing system often acerbically referred to as ‘publish or perish’.
In this respect, it is perhaps good news that AI may, ironically, contribute to the implosion of the ‘publish or perish’ market. The unsustainable flood of fraudulent and low-quality articles overwhelming editorial offices and the peer-review system is also negating the very logic of ‘publish or perish’.
After all, what is the significance of a journal having a 10 percent acceptance rate if 50 percent of the submissions were junk to start with? What is the validity of impact factors when journals are pressured to publish more to satiate publishers’ desire for income? How can one know which journals are leading or legitimate in an ever-expanding sea of journals, when even national evaluators of academic publishing are unsure?
What’s the point of enumerating citations or publications on a CV when, thanks to AI, many academics now boast three- or four-digit counts?
It appears inevitable that these metrics will lose much of their importance in assessments relating to academic recruitment and advancement. In essence, we have no way of knowing whether academic output is a mark of excellence. If the logic of ‘publish or perish’ breaks, publishers’ business models will have to pivot once again, hopefully this time back towards prioritising quality over quantity.
While we wait for AI to help take down the flawed parts of academic publishing, editors, authors, peer reviewers, and publishers must work together and provide immediate life support to those parts of the publishing system we actually do want to keep.
We must work collectively to ensure that publications in today’s serious academic journals pass the litmus test for quality. In the security sector, people often talk about the Swiss cheese model in connection with managing complex risk scenarios where there is no single solution. By aggregating many different measures and layers, the security gaps in the academic publishing system can be closed.
The responsibility for saving the academic publishing system and the quality of academic publishing cannot be placed with editors alone. We need to build an academic ‘Swiss cheese’ model: a community effort in which everyone, including (and perhaps particularly) the publishers, must take an active role if we are to assure quality and put an end to the paper mills. The flood of fraudulent and sloppy ‘work’ is currently bringing the whole publishing system to its knees.