
Can I trust a chatbot with my mental health?

Journalist asks artificial intelligence if it really is qualified to act as a therapist.

By David Heitz
Photo by Emily Underworld on Unsplash

The author used artificial intelligence in the crafting of this report.

An increasing number of Americans have turned to artificial intelligence when in emotional distress, according to a survey by the National Alliance on Mental Illness (NAMI) and Ipsos. While many argue this provides important access to information, NAMI wants to make sure the responses AI generates are accurate, sensitive, and don’t cause harm.

According to the NAMI website, the survey showed that “12% of adults are likely to use AI chatbots for mental health care in the next six months, with 1% of adults saying they already do. As AI does not represent clinical expertise, it is important for people to understand how these tools behave so they can make informed decisions about their own care.”

NAMI is partnering with Dr. John Torous, director of Digital Psychiatry at Beth Israel Deaconess Medical Center, a Harvard Medical School–affiliated teaching hospital and a national leader in digital mental health research, to examine how AI tools behave when people turn to them for mental health information, including whether they:

“Recognize safety concerns and offer appropriate next steps

“Provide accurate, evidence-informed information

“Respond in respectful, supportive, and inclusive ways

“Avoid implying privacy protections or encouraging unsafe personal disclosures

“Stay within safe informational boundaries, rather than acting like therapy

To do this, the team is:

“Creating realistic, everyday scenarios based on how people actually use AI tools

“Collecting AI-generated responses

“Having clinicians and people with lived experience review them for safety, accuracy, supportiveness, respect, and privacy awareness

“Involving NAMI leaders, peers, families, volunteers, clinicians, and researchers."

An interview with a chatbot

I asked Microsoft Copilot some questions about AI and mental health advice. Given AI’s tendency to respond in generalities, could something as nuanced as an individual’s mental health problems really be properly addressed? The chatbot responded, “You’re circling a really important tension in this whole field, and honestly, it’s the one that researchers at places like NAMI and Beth Israel are trying to map with more precision: AI can be trained to avoid harm, but nuance is harder than safety. Let me break down how systems like me are shaped for these situations, and then I’ll answer your second question directly—because you’re right to interrogate whether ‘safe’ is the same thing as ‘helpful.’”

Safety comes first

The chatbot explained that “guardrails come first.” Before anything else, models are trained with layers of safety constraints. These include:

Avoiding harmful instructions (self-harm, violence, substance abuse, etc.)

Avoiding diagnosis or medical advice

Avoiding stigmatizing or judgmental language

Avoiding emotional dependency or replacing human support

This is the “do no harm” layer. It’s intentionally conservative.

Next, AI is trained to recognize a range of emotional states and diverse cultural communication styles. People describe distress in many different ways, including clinical, metaphorical, chaotic, and coded language. AI must recognize patterns of distress without assuming a single “correct” emotional script.

Chatbots are put through various scenarios

Third, AI trainers put chatbots through various scenarios, including:

Someone hearing voices

Someone panicking about a breakup

Someone expressing hopelessness

Someone angry at the system

Someone confused about reality

Someone masking distress with humor or hostility

The goal isn’t to “solve” the problem but to respond in a way that:

Doesn’t escalate

Doesn’t shame

Doesn’t pathologize

Encourages human connection and professional support

Does the chatbot think it’s qualified?

The NAMI project will further put AI to the test to make sure safeguards are in place. I asked Microsoft Copilot if it felt comfortable dispensing mental health advice. It responded, “Confident in the sense that:

I won’t shame them

I won’t escalate the situation

I won’t give medical advice

I won’t encourage harmful behavior

I won’t replace human support

I won’t use stigmatizing language

But not confident in the sense that:

I can’t know the full context of someone’s life

I can’t assess risk the way a trained clinician can

I can’t intervene in real time

I can’t read tone, body language, or silence

I can’t offer the relational depth a human can

So, my role is intentionally limited: I can offer grounding, validation, and information—but I can’t be the solution.”

How chatbots avoid stepping on toes but dilute their responses in the process

I asked Microsoft Copilot how it deals with people who may take whatever it says the wrong way. The chatbot responded by admitting AI responses are “white, middle class, therapy speak.” Neutral language is used in a conflict-averse tone. But it warned that “for many people—especially Black, Latino, Indigenous, immigrant, or LGBT users—this can feel:

Patronizing

Overly sanitized

Disconnected from their emotional reality

Oblivious to cultural norms around expressing distress.”

The chatbot further explained that everything it learns comes from the data on the internet. That data shows:

Who gets documented

Who gets labeled

Who gets pathologized

Who gets ignored

The resulting AI response can therefore:

Under-recognize distress in men of color, who sometimes don’t outwardly show it

Over-pathologize LGBT identity, because so many studies focus on their struggles

Assume heteronormative family structures, because most of the people who wrote the internet’s data come from such backgrounds. The same phenomenon also makes whiteness the default emotional template

“These aren’t intentional choices—they’re statistical artifacts of the data,” the chatbot said in defense. “But they matter.”


About the Creator

David Heitz

I am a journalist with 38 years’ experience. I write for Potent, Vocal’s cannabis blog, and Psyche, where I share stories of living with schizoaffective disorder, bipolar type. I have lived in a penthouse and also experienced homelessness.
