When I Asked AI to Proofread My Writing, It Made Things Up

A personal look at why reliability matters more than apologies when machines shape real decisions

By Nick William · Published about 14 hours ago · 4 min read

I like to write as a hobby. It is my therapy. When my head feels crowded, writing slows everything down. I publish some of that writing on Medium and Vocal, not because I think every piece is perfect, but because the act of finishing and sharing matters to me.

For the past few months, I have been using AI as a last step before publishing. Not to write for me, and not to rewrite my voice, but simply to catch grammar issues I might miss. I rarely accept every suggestion. I use it the same way I would use a spellchecker, as a second set of eyes.

Proofreading my own work has always been harder than reviewing someone else’s. I read too fast because I already know what I meant to say. My brain fills in gaps before my eyes notice them. That is exactly the kind of task AI should be good at.

At least, that is what I thought.

Just today, I asked AI to proofread an essay I am working on. On two separate occasions, it came back with what I can only describe as fictional errors. It suggested fixes for problems that did not exist. Words that were already there. Punctuation that was already present.

So I asked a simple question. Why so many errors today in your feedback?

The response was polite and self-aware. It admitted the mistake. It explained that it had not carefully reread the original text before making suggestions. It even acknowledged that this kind of feedback damages trust, especially when the task is something as basic as proofreading.

That part was honest. Almost refreshing.

I followed up with another question. Should I use a certain prompt to make sure you are being more careful?

The answer suggested adding extra instructions. Ask it to double-check. Ask it to quote the exact phrase before suggesting a correction. Add an accountability step so that mistakes like this would be harder to make.

Then came the sentence that stuck with me. It admitted that I should not need a special prompt to get accurate proofreading. The system should already be doing that level of careful checking by default.

And that is where this stopped being about grammar.

Imagine a dishwasher with built-in AI. Sometimes it leaves your dishes dirty. You ask it why. It replies that it should have washed them properly, but to be safe, you should press the start button harder next time.

That would sound ridiculous. We would not accept it.

Yet this is increasingly how we interact with AI systems. When something goes wrong, the burden shifts to the user. Adjust the prompt. Be more specific. Add more instructions. Debug the tool while you are trying to use it.

As companies rush to build AI into everything, I find myself wondering about the quality of the output and, more importantly, the cost of getting it wrong.

There is a difference between a typo slipping through and a system making decisions that affect real people. We have already seen public examples. AI systems praising historical monsters. Generating racist or biased content. Producing confident but incorrect news summaries. Citing legal cases that never existed.

These are not edge cases anymore. These systems are being used in journalism, law, healthcare, government, and defense. The stakes are no longer theoretical.

Now add AI based age verification into the mix. Platforms like Spotify and YouTube are beginning to require it in some markets. What happens when the system gets it wrong? How many users will be locked out because an algorithm misread their age? How much friction will people tolerate before trust erodes completely?

We have grown used to living with unfinished software. Apps update constantly. Bugs are patched in public. Users quietly accept that they are part of the testing process.

AI feels different.

These systems are already shaping decisions, access, and outcomes. They are being treated as authoritative even when they are not consistently reliable. Asking users, citizens, or customers to act as the safety net feels like a risk we have not fully acknowledged.

The price of mistakes is rising faster than our tolerance for them.

Here is the part that made me laugh, and then stop laughing.

I used AI to grammar check this very article. Even after explicitly asking it to carefully read and quote the text before suggesting changes, it made the same mistake again.

It suggested adding the word "about" to this sentence.

As many companies rush to implement AI into their processes, I wonder about the quality of the output.

The word was already there.

When I pointed this out, the AI agreed. It admitted it had made the exact same error I was writing about. It even acknowledged that this was a perfect real world example of the problem.

Go figure.

I am not anti-AI. I still use it. I probably will tomorrow. But trust is not built on apologies or explanations. It is built on systems that work as expected without requiring constant supervision.

If AI is going to play a role this large in our lives, it has to do better than asking us to press the start button harder.

About the Creator

Nick William

Nick William is a writer and strategist with years of experience crafting clear, reader-focused articles across technology, business, and digital growth topics. His writing style is shaped by an understanding of how people read.
