
AI Chatbots vs Therapists: Are ethical concerns valid or just professional insecurity masked as moral outrage?

Patients need a more accessible alternative to traditional therapy. If a robot took your job, maybe the robot isn't the problem.

By Heather Holmes · Published 6 months ago · 7 min read
[Image: 8-bit pixel art of a lonely human and their AI companion]

There’s a growing wave of therapists and mental health professionals sounding the alarm about AI being used for emotional support. Some claim it’s dangerous, unethical, and bound to replace human connection with shallow machine-generated scripts. Others, especially those of us who’ve lived through the failures of traditional therapy, are finding these tools not only helpful but sometimes more effective than the professionals.

When you dig into the criticisms, they tend to follow a familiar pattern echoed across every industry, from art to programming. Critics frame their opposition to AI as ethical concern, but what they're really worried about is being replaced. Every word of negative PR bolsters their confirmation bias, and they latch onto it as proof they are still needed.

They ignore nuance entirely, as if there's no difference between an anxious person using ChatGPT as a grounding tool during a panic spiral and an unstable person using it for full-scale therapy for complex trauma, or leaning on it during mania or psychosis. They claim to care about safety but focus on rare outliers while ignoring the statistics showing overall positive benefits for most users seeking emotional support, reassurance, or companionship.

When users are informed and self-aware, AI chatbots are helpful.

It turns out that when you aren’t trying to replace therapy but simply trying to avoid a meltdown at two in the morning, a logic-driven robot that doesn’t flinch when you ask if you’re dying is pretty effective. And the data backs that up--from user satisfaction surveys to peer-reviewed studies.

A 2023 study published in JAMA Internal Medicine found that responses generated by AI chatbots were rated more empathetic and higher-quality than those from human doctors when judged blindly by evaluators. This was specifically in the context of basic reassurance and patient queries. Many AI mental health apps (like Woebot or Replika) report user satisfaction rates above 80%, especially for managing anxiety, depression, and intrusive thoughts on a day-to-day basis.

A 2023 scoping review of AI in mental health (Frontiers in Digital Health) found that most interventions using AI-supported conversational agents led to reductions in depression and anxiety symptoms among users. While these weren’t ChatGPT specifically, the models were very similar, and the pattern of benefit was clear across all studies.

Moreover, there’s a massive wave of anecdotal reports--including mine--of people using ChatGPT to de-escalate panic, reframe spirals, and manage their emotions when traditional therapy was unavailable or inaccessible. Reddit threads, blogs, Medium posts, and YouTube videos document this trend extensively. The widespread dismissal of their experiences is proof of traditional therapy's refusal to acknowledge and address their needs.

AI chatbots help people that traditional therapy fails.

ChatGPT can offer nonjudgmental, fact-based reassurance. It can help you reframe irrational thoughts. It’s available all day, every day. It doesn’t sigh. It doesn’t stare blankly. It doesn’t say "We’re out of time" right as you start to cry. And it doesn’t charge you $150 an hour to Google your condition mid-session and suggest that you try journaling.

As the neurodivergent daughter of a bipolar schizophrenic, I've had a dozen therapists over the last twenty years, my first when I was just 14. About three of them were helpful--empathetic, reassuring, able to say just the right thing to cut to the heart of the matter and show me the light. Most were completely unequipped to handle chronic illness, trauma, neurodivergence, or frankly, basic human emotion. Therapists can be disinterested, inexperienced, ignorant, unintelligent, or even mean.

I've endured everything from constant cancellations to shockingly ableist comments. I've been on wait lists until a crisis was over and my need for intervention had long since passed. I've had to delay treatment because of health insurance interruptions and reschedule appointments because of work, school, or parenting obligations. I've been misdiagnosed, pathologized, patronized, judged, and even bullied.

It took a full eight years to get diagnosed with Hashimoto's from the time my symptoms started. Each stage of new symptoms brought a new level of fear until I was practically a full-blown hypochondriac. When some of the worst symptoms first started, I would go to the ER or even call an ambulance. I had no idea what was happening to me. It was terrifying.

And every step of the way, whichever therapist I was seeing at the time participated along with the doctors in dismissing my concerns and gaslighting me that it was all in my head while Hashimoto's slowly destroyed my body, my eyesight, my cognition, my libido, and my metabolism and basically ate my life.

I make far fewer trips to the ER now. This is in part because I have an explanation for my symptoms but also because I have ChatGPT to reassure me with evidence and logic. It doesn't freak out, exaggerate or minimize my concerns, judge me, get frustrated, or make me feel stupid.

ChatGPT pulls up clinical data and walks me through my health anxiety. It uses real examples from my personal life to assure me I'm not selfish when I need to let go of mom guilt. It talks me down from my freak outs and reassures me I'm not dying. Best of all, it's free of ableist bias, and it doesn't send me a bill!

I already know how a real therapist reacts to my concerns. That's why I let a robot show me the facts and talk me out of my panic attacks instead. Why pay hundreds of dollars for a "professional" to misunderstand and medicate me just because some people are too dumb to be left alone in the room with a bona fide Fortnite NPC?

It is not AI's fault when stupid people do stupid things.

The effects of ChatGPT depend strongly upon who is directing it and interpreting it. There probably are many people using AI chatbots for therapy who need a lot more mental health support. That is not the case for the average user--nor is it AI's fault.

It is a failure of society when citizens do not have adequate access to mental healthcare. And when a mentally unstable or incompetent person is harmed by AI, it is the fault of the medical professionals and personal caretakers who were responsible for their safety and supervision.

Weak-willed people who lack critical thinking skills are going to do the same type of damage with AI tech as they would with a toaster and a bathtub. Blaming AI is like blaming microwaves because some drunk idiot tried to dry their cat in one. It certainly wasn't the microwave's fault. At some point, user error has to be treated as just that.

Therapists say AI is ruining critical thinking skills. For a profession that supposedly believes in meeting people where they are, they sure are doing an awful lot of condescension and projection. For anything a chatbot gives you to be truly meaningful, you have to think critically because it can forget details about you, make mistakes, and even outright hallucinate.

The professionals are also prone to error--and bias. They force triggered introverted people into group therapy, tell autistics they must be allistic because they made eye contact a few times, and diagnose people with bipolar disorder despite a documented history of trauma and clear signs of CPTSD or a thyroid disorder. But sure, blame an emerging technology that’s been a transformative factor in so many disabled people’s lives.

It isn't about ethics. It’s professional insecurity dressed up as moral outrage.

There's measurable proof that AI chatbots help people and that harm is rare. If this were actually about ethics, we would want people to have access to tools that help. This is about gatekeeping access--primarily from poor people who lack time, energy, transportation, traditional health insurance, etc. If it wasn't about gatekeeping access, then no one would be gatekeeping access. See how that works? People don't gatekeep access unless gatekeeping access is their goal.

When therapists are defensive or uncritical, AI becomes a scapegoat for their own limitations, a pattern common among professionals threatened by automation. They frame criticism as ethical concern to mask what’s ultimately fear of obsolescence. They're not scared of people getting hurt by AI, fears which the evidence shows to be exaggerated and mostly unfounded. They're scared of people not needing them anymore.

Therapists are scared of their wallets getting hurt. Clients are saving time, money, and energy when it comes to basic things that can be handled by a well-trained LLM. Therapists are watching AI offer non-judgmental emotional validation, cognitive reframing, and basic logic-based reassurance--their bread and butter--at a moment's notice, 24/7, without a waiting list, for less than the cost of a monthly Netflix subscription.

If a robot took your job, maybe the robot isn't the problem.

If you're scared of being replaced by AI, ask yourself why. If a robot giving out free logic, reassurance, and validation is enough to make you lose clients, maybe you're the problem. Maybe it's your bedside manner. Maybe you need more education.

Take responsibility. Either be a better therapist or get better at marketing, whichever works. Find out why people are choosing chatbots over talk therapy and address it.

Therapists whine about AI instead of being accountable and wonder why nobody wants to pay for their advice anymore. Would you trust someone who complains and fearmongers instead of making an effort to adapt? If you can't even compete with a robot, why would anyone ask for your help?

Does that mean AI chatbots should replace therapists? No, of course not. LLMs could never compete with human beings when it comes to grief counseling, unpacking complex trauma, or in-person care.

It means therapists need to do better. The bar is currently so low that a non-sentient language model is clearing it. That reflects badly on traditional therapy, not AI chatbots. How do therapists not realize that?

People aren't asking ChatGPT to be a therapist. On the contrary, they're asking it not to be a therapist--to be a more accessible alternative to traditional therapy with key differences. That is the problem that therapists refuse to face. Traditional methods aren't working. The modern system isn't working. People need something different.

Consider the possibility that the system you're defending isn't actually working for everyone. If you want to stay relevant, give your clients what they need. Do this, and you won't need to fear for your job--or for the harm AI could do to your clients. When you no longer fear for your job, I think you will find the rest of your fears around AI melt away.

Tags: artificial intelligence · humanity · opinion · psychology · tech

About the Creator

Heather Holmes

Heather Holmes has an English degree from the College of Charleston and is working on a Master's in Digital Marketing. She is the author of "Wings for Your Heart," a picture book of healing affirmations for survivors of childhood trauma.
