AI-Generated Text Is Flooding Institutions and Triggering a No-Win Arms Race with AI Detectors
The Rise of AI Detectors

In early 2023, something strange happened in the world of science fiction publishing. Clarkesworld, one of the most respected literary magazines in the genre, temporarily closed submissions. Not because of a lack of interest, but because of too much of it: editors were drowning in submissions, many of which appeared to be generated by artificial intelligence. Authors, it seemed, were pasting the submission guidelines straight into an AI system and hitting “generate,” unleashing a tidal wave of passable but soulless stories.
That moment was not an anomaly. It was a warning flare.
What Clarkesworld experienced is now happening everywhere. From courtrooms to classrooms, newsrooms to hiring pipelines, institutions built for human-scale effort are being overwhelmed by machine-scale output. The result is a sprawling, exhausting arms race: humans deploy AI to generate content, institutions respond with AI to detect it, and both sides keep escalating—faster, cheaper, and with diminishing returns.
Welcome to the age of infinite text.
When Scarcity Disappears, Systems Break
For centuries, writing acted as a natural bottleneck. Producing text required time, effort, and cognitive labor. That scarcity quietly regulated institutions. Editors could read submissions. Peer reviewers could assess papers. Legislators could plausibly believe that letters from constituents represented real human effort.
Generative AI shattered that equilibrium.
Now, newspapers are inundated with AI-written letters to the editor. Academic journals are flooded with machine-generated manuscripts. Courts are overwhelmed by AI-assisted filings, especially from self-represented litigants. Lawmakers face torrents of AI-generated public comments. Employers receive thousands of polished, near-identical résumés. Social media feeds are saturated with synthetic voices talking over one another.
The problem isn’t just that AI can write—it’s that it can write endlessly. Institutions designed for hundreds now face tens of thousands. Human review simply does not scale.
The Rise of the AI-on-AI Battlefield
Faced with this deluge, institutions are responding in predictable ways. If AI is the weapon, AI becomes the shield.
Academic reviewers increasingly rely on AI to flag suspicious papers. Social platforms deploy AI moderators to filter content created by other AI systems. Courts use algorithmic triage to manage ballooning caseloads. Employers turn to automated screening tools to sort AI-enhanced applications. Educators use AI to detect AI, grade AI-assisted assignments, and even give feedback generated by—yes—AI.
This is the textbook definition of an arms race: rapid, adversarial iteration using the same underlying technology for opposing goals.
And like most arms races, it produces collateral damage.
The Real Cost of the AI Arms Race
When AI makes fraud cheap, the consequences ripple outward. Courts clogged with frivolous filings delay justice for real people. Academic systems that reward publication volume risk privileging those most willing to submit AI-generated work over those with genuinely novel ideas. Trust erodes—not just in content, but in the institutions themselves.
Worse still, today’s AI text detectors are unreliable. False positives punish honest users. False negatives let fraud slip through. As models improve, detection becomes even harder. The dream of a perfect “AI detector” is just that—a dream.
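To see why this is structural rather than a bug waiting to be patched, consider a toy model. Most detectors reduce a text to a single score (perplexity under a language model is one common example) and flag anything below a threshold. The sketch below uses invented Gaussian score distributions, purely illustrative numbers rather than measurements of any real detector, to show the bind: every threshold trades false positives against false negatives, and the trade-off collapses as AI text becomes statistically closer to human text.

```python
import random

random.seed(0)

def sample_scores(mean, spread, n=10_000):
    """Draw n illustrative 'detector scores' from a Gaussian.
    (Hypothetical distributions, not real detector output.)"""
    return [random.gauss(mean, spread) for _ in range(n)]

def error_rates(human, ai, threshold):
    """Texts scoring below the threshold are flagged as AI-written.
    Returns (false_positive_rate, false_negative_rate)."""
    fp = sum(s < threshold for s in human) / len(human)   # honest work flagged
    fn = sum(s >= threshold for s in ai) / len(ai)        # AI text slipping through
    return fp, fn

# Assumption for this sketch: human text scores higher on average than AI text.
human = sample_scores(mean=5.0, spread=1.0)

for ai_mean, label in [(2.0, "early models"), (4.5, "newer models")]:
    ai = sample_scores(mean=ai_mean, spread=1.0)
    print(f"\n{label} (AI score mean = {ai_mean}):")
    for threshold in (3.0, 4.0, 5.0):
        fp, fn = error_rates(human, ai, threshold)
        print(f"  threshold {threshold}: {fp:.1%} honest users flagged, "
              f"{fn:.1%} AI text missed")
```

With early models, the distributions barely overlap and a middling threshold catches most AI text while flagging few honest writers. With newer models, the distributions converge, and no threshold keeps both error rates low; the only remaining choice is which group to harm.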
Trying to win this race outright may be futile.
But that doesn’t mean the story is all dystopia.
The Hidden Upsides We’re Missing
AI is not merely a fraud engine; it is also a powerful equalizer.
In science, AI already plays a central role in literature review, data analysis, and coding. Used transparently, it can improve clarity, reduce language barriers, and lower costs. Before AI, wealthy researchers could hire editors and assistants. Today, high-quality writing support is available to everyone—including researchers for whom English is not a first language.
The same logic applies elsewhere. Job seekers using AI to polish résumés or cover letters are not cheating—they’re accessing tools the privileged have always had. Citizens using AI to articulate their views to lawmakers are not undermining democracy; in many cases, they’re finally able to participate in it.
The line is crossed when AI is used to deceive: fabricating credentials, impersonating individuals, or simulating mass public support through astroturf campaigns. That distinction—between assistance and fraud—is not technological. It’s ethical and political.
Power, Not Technology, Is the Real Issue
What separates beneficial AI use from harmful misuse is power.
The same system that helps a citizen express lived experience also allows corporations to flood legislators with synthetic outrage. One application distributes voice; the other concentrates influence. One strengthens democracy; the other corrodes it.
This is why blanket bans on AI are both unrealistic and undesirable. The technology cannot be turned off. Highly capable models already run on personal devices. Ethical guidelines help only those acting in good faith. The volume will continue to rise: more submissions, more comments, more applications, more everything.
The question is not how to stop AI—but how to live with it.
From Gatekeeping to Adaptation
Some institutions may choose to embrace transparency instead of detection. Fiction outlets, for example, might accept AI-assisted work under clear disclosure rules. Others may restrict submissions to trusted contributors, trading openness for integrity. Readers, voters, and users can then choose which ecosystems they trust.
What matters is honesty about the trade-offs.
AI defenses will never achieve permanent supremacy. But assistive AI—used to manage volume, surface quality, and reduce fraud—can help institutions survive the flood without abandoning their values.
Muddling Through the Flood
Clarkesworld eventually reopened submissions, claiming it had found ways to distinguish human-written stories from machine-generated ones. How long that will work is an open question. The arms race continues.
There is no final victory condition here—only ongoing negotiation between harm and benefit. AI is neither savior nor villain. It is a force multiplier, amplifying both our best intentions and our worst incentives.
As we navigate this landscape, the goal should not be purity, but resilience. Not perfect detection, but adaptive systems. We may not control the tide—but we can still decide how we build our ships.