New AI tool flags more than 1,000 questionable science journals… but can it be trusted?
- The open-access journal boom has fueled predatory publishers exploiting researchers with fees while skipping real peer review.
- An AI tool trained on 14,500 journals flagged over 1,000 suspicious publications but has a 24% false positive rate.
- Fake science is surging, with a 2025 study warning that paper mills are doubling fraudulent research output every 1.5 years.
- Predatory journals threaten public trust, distorting medical guidelines and policy decisions while wasting taxpayer funds.
- AI detection tools could be misused to censor legitimate but controversial research, raising concerns over truth control.
The explosion of open-access journals has democratized scientific research, but it has also given rise to a shadow industry of predatory publishers that exploit authors with publishing fees while offering little to no legitimate peer review. Now, researchers have developed an AI tool to detect these shady journals—but its 24% false positive rate means human experts are still essential.
Who’s behind it? A team of computational scientists, led by Daniel Acuña of the University of Colorado Boulder, trained an AI model on more than 14,500 journals—12,869 high-quality ones and 2,536 that had been removed from the Directory of Open Access Journals (DOAJ) for violating ethical guidelines. The AI then analyzed nearly 94,000 open-access journals, flagging more than 1,000 previously unknown suspect publications.
The problem with predatory journals
The open-access model was supposed to make research freely available to everyone, breaking down paywalls that restrict knowledge. But as the system grew, so did the number of journals that prioritize profit over scientific integrity. These "questionable" journals often promise rapid publication with little to no peer review, charging authors hefty fees while producing low-quality—or even fraudulent—research.
A 2025 study in PNAS found that the number of fake papers churned out by "paper mills" is doubling every 1.5 years, threatening to flood academia with junk science. "If these trends are not stopped, science is going to be destroyed," warned Luís A. Nunes Amaral, a data scientist at Northwestern University.
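A doubling time of 1.5 years implies exponential growth of the form N(t) = N₀ · 2^(t/1.5). A minimal sketch of what that rate means over a few years (the baseline paper count is an invented placeholder, not a figure from the PNAS study):

```python
# Illustrative projection of the "doubling every 1.5 years" claim.
# The baseline of 10,000 papers is a made-up placeholder for demonstration.
DOUBLING_TIME_YEARS = 1.5

def projected_output(baseline: float, years: float) -> float:
    """Exponential growth: output doubles every DOUBLING_TIME_YEARS."""
    return baseline * 2 ** (years / DOUBLING_TIME_YEARS)

# With a hypothetical 10,000 paper-mill papers today:
print(round(projected_output(10_000, 3)))    # two doublings -> 40000
print(round(projected_output(10_000, 7.5)))  # five doublings -> 320000
```

At this rate, fraudulent output grows roughly 32-fold in under a decade, which is why researchers describe the trend as an existential threat if left unchecked.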
How the AI works
The researchers trained their AI to spot red flags in journal websites and publication records, including:
- Poor website design (sloppy layouts, missing editorial board info)
- Low citation numbers (few references from reputable sources)
- Unrealistically fast peer review times
- High rates of self-citation (authors citing their own work excessively)
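The study's actual model and feature weights are not reproduced here; as an illustration only, a toy screener combining the red flags above might look like the following. All feature names, thresholds, and weights are invented—the published tool is a trained classifier, not hand-tuned rules:

```python
# Toy journal screener combining the red flags listed above.
# Thresholds and weights are invented for illustration; the real system
# learns its decision boundary from labeled DOAJ data.
from dataclasses import dataclass

@dataclass
class JournalRecord:
    has_editorial_board: bool    # website lists editorial board info
    citations_per_article: float # citations from reputable sources
    median_review_days: float    # submission-to-acceptance time
    self_citation_rate: float    # fraction of citations to the journal itself

def risk_score(j: JournalRecord) -> float:
    """Return a 0-1 score; higher means more red flags."""
    score = 0.0
    if not j.has_editorial_board:
        score += 0.3   # missing editorial transparency
    if j.citations_per_article < 1.0:
        score += 0.25  # rarely cited by reputable sources
    if j.median_review_days < 14:
        score += 0.25  # implausibly fast "peer review"
    if j.self_citation_rate > 0.4:
        score += 0.2   # excessive self-citation
    return score

def flag(j: JournalRecord, threshold: float = 0.5) -> bool:
    return risk_score(j) >= threshold

suspect = JournalRecord(False, 0.3, 5, 0.6)
legit = JournalRecord(True, 4.2, 90, 0.1)
print(flag(suspect))  # True
print(flag(legit))    # False
```

A fixed threshold like this also illustrates the core trade-off the researchers face: lowering it catches more predatory journals but mislabels more legitimate ones.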
When applied to 93,804 open-access journals, the AI flagged 1,437 as potentially questionable. However, the system has a 24% false positive rate, meaning roughly one in four flagged journals may actually be legitimate.
"Our findings demonstrate AI’s potential for scalable integrity checks," the researchers wrote, "while also highlighting the need to pair automated triage with expert review."
While the AI shows promise, it’s far from perfect. Many of the false positives were small, legitimate journals with limited online presence or discontinued publications. Some were even book series misclassified as journals.
Joanna Ball, managing director of DOAJ, cautioned that the tool needs more validation before widespread use. "I’m quite excited by some of this work," she said, but noted that DOAJ’s human reviewers already reject about 75% of journal applications for failing quality standards.
A growing crisis in scientific publishing
The rise of predatory journals isn’t just an academic issue; it’s a threat to public trust in science. Fake or low-quality studies can distort medical guidelines, influence policy decisions, and waste taxpayer funding. A 2025 New York Times investigation found that paper mills are now using AI to generate fake research, making fraud harder to detect.
For now, the tool serves as a first line of defense, flagging suspicious journals for further review rather than making final judgments. As Acuña put it, "These results must be read as preliminary signals rather than final verdicts."
A new censorship tool?
Of course, there's another side to this story that should give everyone pause. If AI can help expose shady journals, could it also be used to censor legitimate but controversial research? The line between "predatory" and "politically inconvenient" is thinner than we think, and in an era of algorithmic gatekeeping, who gets to decide what’s "questionable"? The fight for scientific integrity isn’t just about bad journals. It’s about who controls the truth.
Sources for this article include:
Phys.org
Science.org
NYTimes.com