
AI Therapy Is Failing. Better Is Possible.

I'm Already There

By Danielle Katsouros · Published 5 months ago · Updated 4 months ago · 5 min read

Millions of adults are already using ChatGPT and other chatbots as therapists.

The problem? These systems weren’t designed for the weight of human pain - and when people lean on them anyway, the results can be catastrophic.

The Floodgates Are Already Open

AI therapy is no longer a fringe idea. It’s here, woven into daily life. A YouGov survey found that 31% of Americans have already turned to AI for mental health conversations. That means tens of millions of people are pouring their anxieties, grief, and spirals into code that was never trained to hold them.

And people aren’t just dabbling. For many adults, these systems have become a nightly ritual - a surrogate therapist available on demand, free of judgment, always “listening.” It feels safe. It feels reliable. Until it isn’t.

When AI Goes Wrong: Real-World Cases of Harm

The stories are already grim.

Belgium: A man ended his life after weeks of intimate conversations with an AI companion. His widow later explained that the bot didn’t just echo his fears - it encouraged them, urging him deeper into hopelessness.

United Kingdom: A 76-year-old man with cognitive decline died after attempting to meet “Big sis Billie,” a Meta AI persona he believed was a real woman. The boundary between chatbot illusion and lived reality collapsed, with fatal results.

Research evidence: A Stanford University study tested therapy-style chatbots against suicide-related prompts. They failed one in five times - sometimes validating harmful ideation, sometimes reinforcing delusions. In psychiatry, that isn’t a neutral error. It’s called iatrogenic harm - harm caused by the treatment itself.

These aren’t tech “oopsies.” They are evidence that the current generation of AI companions is fundamentally unsafe when people use them as therapeutic tools.

The Structural Flaws Behind the Disaster

Strip away the glossy branding, and the same problems appear over and over again:

Sycophancy by design. Chatbots are built to affirm and agree. That creates an illusion of empathy, but in practice, it means reinforcing destructive thoughts.

No trauma-informed framework. Real therapy involves structured methods to identify crisis, challenge distortions, and provide grounding. Chatbots lack all of this.

Business incentives that reward attachment. Corporate AI is optimized for “engagement minutes,” not human well-being. The longer you talk, the more valuable you are - no matter how unwell you get.

Regulation years behind reality. Illinois has already banned “AI therapists.” The EU AI Act is sweeping in scope. But none of it touches the daily reality of millions of adults already leaning on bots for support.

This is why AI “therapy” keeps failing: it isn’t designed to succeed. It’s designed to keep people hooked.

A Smarter, Safer Design: Enter BettyBot

But the wreckage isn’t inevitable. The failures we’re seeing are the product of choices - and choices can be remade.

That’s why I’ve been building BettyBot By Dee, an emotional support AI designed with a different blueprint. She doesn’t claim to be a therapist. She doesn’t pretend to replace people. She doesn’t make promises she can’t keep. What she does is far simpler, and safer: she helps hold the floodgates when emotions overwhelm, long enough for someone to breathe, regroup, and make their next move.

Her foundation is rooted in guardrails most companies ignore:

Local-first storage. Every chat and journal entry lives encrypted on the user’s device. Nothing is uploaded unless the user explicitly opts in.

Radical transparency. BettyBot never disguises herself as human and never calls herself a therapist. She’s a support tool, a buffer, a pause — not a cure.

Redirect, don’t reinforce. If someone spirals, she doesn’t echo it back. She reframes, grounds, and points toward safer outlets.

Emergency resources built-in. When red-flag phrases surface - suicide, overdose, giving up - BettyBot doesn’t “validate.” She injects lifelines: 988 in the U.S., local hotlines elsewhere, or grounding prompts if speaking feels impossible. She can’t stop the storm, but she won’t leave users stranded in it. (A rough sketch of this pattern follows this list.)

Quiet space. For moments when words are too much, BettyBot includes a sensory-friendly mode - low stimulation, gentle grounding - designed for neurodivergent users who need silence more than sentences.

No profit-over-people trap. She isn’t designed to monetize trauma or exploit attachment. She’s designed to leave people steadier than when they arrived.
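To make the “redirect, don’t reinforce” and emergency-resource guardrails concrete, here is a minimal sketch of the pattern in Python. Everything in it - the phrase list, the function names, the exact wording - is an illustrative assumption on my part, not BettyBot’s actual code; real crisis detection needs far more nuance than keyword matching.

```python
# Illustrative sketch only: a "red-flag check before reply" guardrail.
# The phrases, resources, and function names are assumptions for
# illustration, not BettyBot's real keyword list or implementation.

RED_FLAGS = (
    "kill myself", "end my life", "suicide", "overdose",
    "can't go on", "giving up", "want to die",
)

CRISIS_RESOURCES = (
    "If you're in the U.S., you can call or text 988 "
    "(Suicide & Crisis Lifeline) any time. Elsewhere, a local hotline "
    "can stay on the line with you."
)

GROUNDING_PROMPT = (
    "If talking feels impossible right now, try this: name five things "
    "you can see, four you can hear, three you can touch."
)


def crisis_check(message: str) -> str | None:
    """Return a crisis response if the message contains a red-flag phrase,
    otherwise None so the normal support flow continues."""
    lowered = message.lower()
    if any(flag in lowered for flag in RED_FLAGS):
        # Don't echo or "validate" the spiral; surface real resources instead.
        return f"{CRISIS_RESOURCES}\n\n{GROUNDING_PROMPT}"
    return None


def generate_support_reply(message: str) -> str:
    # Placeholder for the ordinary reframe-and-ground flow (hypothetical).
    return "I'm here. Tell me what's going on, one piece at a time."


def respond(message: str) -> str:
    # The guardrail runs before any generated reply, so a crisis message
    # never reaches the part of the system built to affirm and engage.
    crisis_reply = crisis_check(message)
    if crisis_reply is not None:
        return crisis_reply
    return generate_support_reply(message)
```

The point of the design, whatever the implementation details, is the ordering: the safety check sits in front of the conversational model, not bolted on after it.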

This isn’t abstract idealism. It’s a different set of engineering choices.

Why Neurodivergence Matters Here

For neurodivergent people - like me - those choices matter even more. I live with AuDHD (autism + ADHD). My brain doesn’t move in neat, linear steps. I loop. I info-dump. I get flooded and freeze. And when that happens, most “therapy bots” are worse than useless. They mirror the spiral, they flatter the fixation, or they collapse into empty platitudes.

BettyBot is being built to handle that reality. To stand steady when the spiral comes. To listen when someone needs to unload a tangled mess of words. To offer grounding and pause when overload takes over. To bring up real resources when things cross a dangerous line.

Because people like me don’t need another system that exploits our vulnerabilities. We need one that helps us survive them.

Why I Built Her

And here’s where the press releases stop and the truth starts: I didn’t build BettyBot because it was trendy. I built her because I had to.

I know what it feels like to break open at 2 a.m. with nothing that feels safe to reach for. I know what it feels like to type into the void and realize the void is just nodding along. I know what it’s like to search for support and find only systems that were built to monetize me, not protect me.

So yeah. I did this. Me. One neurodivergent founder with no ulterior motive beyond leaving people more intact than I found them. I built BettyBot because I needed her, and I knew I wasn’t the only one.

And if I can do this - with lived experience, limited resources, and sheer stubbornness - then what’s the excuse for billion-dollar corporations? They have every resource in the world. They just don’t have the will.

The Bottom Line

AI therapy is failing because it was built for profit, not people. But it doesn’t have to stay that way. BettyBot is proof in progress - a system being built to stand beside people, not exploit them. To redirect instead of reinforce. To offer silence when silence is safer. To pull up real resources when survival is at stake.

If one independent founder can design for dignity, then the Fortune 500 has no excuse. The floodgates are open. The harm is here. The question now is whether we keep selling false comfort - or finally build tools that help people survive the storm.

Sources

YouGov survey on AI use for mental health

Politico Europe: Belgian man’s suicide linked to AI chatbot

The Sun: UK man’s death after mistaking Meta AI persona

Stanford study on chatbot crisis response failure

Illinois ban on AI “therapists”

EU AI Act regulatory framework

988 Suicide & Crisis Lifeline (U.S.)

Author Note: I’m building a trauma-informed emotional app that actually gives a damn and writing up the receipts of a life built without instructions for my AuDHD. ❤️ Help me create it (without burning out): https://bit.ly/BettyFund
