"ChatGPT Is Not Your Therapist": OpenAI CEO Issues Privacy Warning

Sam Altman cautions users against sharing sensitive emotional information with AI, highlighting the lack of legal confidentiality and therapeutic safeguards.

By Taimoor Khan · Published 6 months ago · 3 min read

In a recent podcast appearance, OpenAI CEO Sam Altman made a candid and concerning statement about the use of ChatGPT, particularly when it comes to mental health and therapy. He warned that conversations with AI chatbots like ChatGPT about emotional or psychological issues do not carry the same legal protections and confidentiality as professional therapy. His remarks have sparked an important conversation about privacy, ethical usage, and the limitations of artificial intelligence in sensitive domains.

No Legal Confidentiality With AI Conversations

Altman emphasized that while artificial intelligence platforms can be helpful for many tasks, users should be cautious when using them for personal or emotional conversations. Unlike licensed therapists, AI tools are not bound by laws or professional ethical codes that ensure confidentiality. "When a patient visits a doctor or therapist, the law guarantees that their private information will be kept confidential," Altman explained. "But ChatGPT doesn't fall under any such legal framework."

This means that if someone discusses deeply personal, emotional, or mental health issues with ChatGPT, there's no legal guarantee that those conversations will remain private. While OpenAI has privacy policies in place, they do not equate to the legally enforced protections available in healthcare and therapeutic settings.

Lack of Legal Framework for AI

Altman’s remarks also highlight a broader concern: the current lack of legal or policy frameworks to govern the behavior and responsibilities of artificial intelligence systems. While AI technology is evolving rapidly, legislation and regulations are still catching up. This gap presents serious challenges, especially as AI tools become increasingly integrated into everyday life, from education to healthcare to personal wellness.

"The legal system has yet to fully define how AI should operate within sensitive domains like mental health," Altman noted. "This creates a situation where users might assume their data is protected in ways that it legally is not."

Public Misunderstanding About AI Limitations

One of the underlying problems Altman pointed out is the public misconception about what AI is capable of and how it should be used. Many users view AI tools like ChatGPT as intelligent, understanding, and empathetic systems. While they can generate human-like responses and mimic conversational patterns, they lack true emotional intelligence, empathy, and clinical expertise.

Some users, especially those dealing with loneliness or mental health struggles, may be tempted to rely on AI chatbots for emotional support. However, Altman cautioned against this practice, reminding listeners that ChatGPT is not a replacement for licensed mental health professionals. "People need to be aware of the boundaries of this technology," he said.

Advice to Users: Use Responsibly

Altman urged users to be careful with what they share on AI platforms and to treat them as tools—not as therapists or confidential advisors. “Even though ChatGPT and similar technologies can offer convenience and simulate helpful dialogue, they are not equipped to provide professional guidance, nor do they offer legally protected spaces for sensitive discussions,” he explained.

He encouraged users to be thoughtful and informed when interacting with AI and to avoid sharing private or emotionally sensitive information unless they fully understand how their data is handled.

Conclusion

Altman’s frank acknowledgment serves as an important reminder in the age of artificial intelligence: just because a tool feels human-like doesn’t mean it is human—or that it’s safe to use in all situations. As AI continues to grow in capability and accessibility, both developers and users must remain vigilant about ethical use, privacy concerns, and the boundaries between assistance and professional help.

While ChatGPT may be a powerful tool for learning, productivity, and general conversation, it should not be mistaken for a therapist or used in place of trained professionals. Altman’s warning calls for greater awareness, transparency, and policy development to protect users and ensure AI is used responsibly.

About the Creator

Taimoor Khan

Hi, I’m Taimoor Khan. Writing is my way of capturing the quiet moments of life that often go unnoticed.
