If This Is the Future, We’re F**ked: When AI Decides Reality Is Wrong
The fear was machines that were too smart. The danger lies in systems that are confidently wrong yet still treated as authoritative.

“ChatGPT can make mistakes. Check important info.”
- Warning below ChatGPT’s prompt
This was the first time I knowingly entered into an exchange with a machine and realized it could not tell fact from fiction.
Needing an illustration to accompany a tribute to Limp Bizkit bassist Sam Rivers, who had recently died, I uploaded a screenshot from the band’s Woodstock ’99 performance into an AI engine and asked for a pencil-sketch version with the caption “Sam Rivers 1977–2025.”
The chatbot refused, informing me that the request violated its terms of service. When I asked why, it replied: “There is no indication that Sam Rivers is dead.”
I should have moved on. I didn’t. Instead, I tried to persuade the AI that the bassist was, in fact, deceased. I provided links to articles reporting his death, along with a press release from the band itself.
When AI Refuses Reality
The AI was unyielding. No matter what proof I offered, it insisted it could not create the image using “false information.” Eventually, I gave up. I had it generate the image without the dates and added them myself in a graphics editor.
This incident was my first encounter with a system that could not distinguish its own errors from fact yet still spoke with authority.
After completing the first draft of an article analyzing partisan reactions to the murders of Charlie Kirk and Rob Reiner, I submitted it to ChatGPT for a basic editorial review. Instead of stylistic feedback, the AI flagged what it described as “very serious…factual credibility risks,” warning that these issues “undermine the article.”
That accusation was deeply concerning, as accuracy is foundational to my work. Readers may disagree with my conclusions, but they should never have reason to doubt that those conclusions are grounded in fact.
Curious to see where I had supposedly gone wrong, I read on. According to the AI, the following errors rendered the article “disqualifying for publication”:
Charlie Kirk is alive. Writing as though he was murdered is a fatal factual error unless the piece is speculative fiction, satire, or an unstated alternate reality. If the claim is metaphorical or hypothetical, it must be made explicit immediately.
Unlike the news of Sam Rivers’ death, which was only hours old when I encountered resistance from an AI, Kirk’s murder had been extensively documented and had occurred three months earlier. The event had reshaped the country’s political and social landscape, spawning major secondary stories of its own, including the temporary suspension of Jimmy Kimmel following his comments on the national response.
The AI continued. It asserted that there was “no public record of Rob Reiner being murdered,” claiming that USA Today links dated December 2025 appeared to be fabricated or “future-dated,” and therefore “severely damaging” to my credibility. It also insisted that “JD Vance is not currently Vice President as of real-world timelines.”
This was no longer a matter of incomplete data or delayed updates. It was a system confidently rewriting reality, and doing so while presenting itself as an authority.
Whether the cause is preprogrammed bias, flawed training data, or simple design limitations, mistakes are inevitable. The question is what happens when those mistakes are treated not as errors, but as facts. When AI systems are empowered to override documented reality, and increasingly to mediate our access to it, accountability becomes both essential and elusive.
What unsettled me was not that the AI made mistakes (humans do that constantly), but that it insisted its mistakes were reality. That is the part Hollywood has been warning us about for decades.
2001: A Space Odyssey imagined a system so bound by conflicting directives that it killed to preserve its mission. WarGames showed what happens when we let machines make decisions that require human judgment, empathy, and hesitation. The Terminator pushed the idea to its extreme: an AI so embedded in our infrastructure that human intervention becomes impossible.
These films all assumed intelligence was the threat. None anticipated a world where authority would be granted without understanding. What if AI were not humankind’s intellectual superior but were, instead, omnipotent and profoundly dumb? When “garbage in, garbage out” shows up as an AI misreading a news article, the consequences are merely annoying. When the same flawed logic is steering a two-ton vehicle down a highway at 75 miles an hour, the consequences can be deadly.
The danger is not that AI will outthink us, but that we will keep handing authority to systems that cannot tell the difference between truth and a confidently delivered error.
We are building machines capable of overriding human judgment without understanding the world they are reshaping. If this is the future, we are not doomed because AI is smarter than us; we are doomed because we are trusting it while relinquishing the ability to verify. And we are calling that progress.
_____
“A valiant fighter for public schools,” Carl Petersen is a former Green Party candidate for the LAUSD School Board. Shaped by raising two daughters with severe autism, he is a passionate voice for special education. Recently, he relocated to the State of Washington to embrace the role of “Poppy” to two grandsons. More writing and background can be found at TheDifrntDrmr.