ChatGPT can no longer offer medical, financial or legal guidance, says report
The change follows incidents where users reportedly suffered harm after relying on ChatGPT’s advice.

OpenAI’s flagship chatbot, ChatGPT, is reportedly undergoing a major shift in how it can be used. According to new reports, the platform will no longer provide direct medical, legal, or financial advice — categories that are considered high-risk because incorrect information can lead to serious real-world harm.
As reported by NEXTA and covered by News18, OpenAI has updated ChatGPT’s usage terms and added new internal guardrails that came into effect on October 29. Under these guidelines, ChatGPT has officially shifted from a tool that users could consult for personal recommendations, prescriptions, or strategic action plans to an “educational assistant” that may only outline general concepts, explain how systems work, or suggest that users speak to qualified human professionals.
In practical terms, this means the chatbot will no longer offer specifics such as medication names, possible drug dosages, legal templates for lawsuits, tax loophole strategies, or personalized investment tips, including whether someone should buy or sell a particular stock. Instead, it can still describe broad frameworks, such as how certain laws function, how financial markets generally operate, or how specific types of medical treatments typically work, but anything that resembles actionable advice tailored to a particular person is now restricted.
According to the report, OpenAI appears to have tightened these policies after multiple incidents highlighted how risky it can be when non-experts take AI-generated content as if it were professional guidance.
One of the most striking cases involved a 60-year-old man who reportedly replaced table salt with sodium bromide, a dangerous chemical compound, after allegedly receiving information from ChatGPT. The consequences were severe. As documented in the medical journal Annals of Internal Medicine, he was hospitalized for three weeks, developed hallucinations, paranoia, and other psychiatric symptoms, and was placed on an involuntary hold. The man later admitted that he had undertaken a personal “experiment” to eliminate salt from his diet after consulting ChatGPT about the potential health risks of sodium.
Another well-publicized case involved Warren Tierney, a 37-year-old man from Killarney in County Kerry, Ireland. Tierney experienced trouble swallowing and reportedly asked ChatGPT whether this could mean he had cancer. The chatbot replied that cancer was “highly unlikely”. Feeling reassured, Tierney delayed seeking medical care — only to later learn that he had stage-four oesophageal adenocarcinoma, a serious and advanced form of cancer.
Tierney later spoke publicly about the experience and said that while he takes responsibility for his decision, the AI’s confidently worded reassurance influenced his choice not to see a doctor sooner. “It sounded convincing and had all these good ideas,” he told The Mirror, “but ultimately I take full responsibility for what happened.”
Examples like these have fueled ongoing debate about what role AI should play in high-risk decision-making. While models like ChatGPT can explain complicated topics in simple language, the general public frequently treats their outputs as if they carried expert-level certainty. Many users assume that because the text sounds confident, it must be accurate, even though large language models do not actually “understand” the world or verify facts the way trained professionals do.
This misunderstanding is exactly what OpenAI’s new rules are intended to prevent.
The updated policy emphasizes that ChatGPT is not a doctor, lawyer, financial advisor, pharmacist, or therapist. It is not licensed, certified, or qualified to diagnose illnesses, design investment portfolios, or craft legal strategies. From now on, the platform will actively redirect users toward human professionals for any decisions that could affect their health, legal standing, or financial well-being.
Industry analysts note that increased regulation and scrutiny are emerging worldwide, not only because of safety and misinformation concerns but also because generative AI has begun intersecting with regulated industries such as healthcare, banking, and law. Companies therefore face potential liability if users rely on AI outputs as if they were authoritative guidance.
Although the core capabilities of ChatGPT are not disappearing, the fundamental expectation is shifting: it is a tool for explanation, not prescription. The goal is to help users better understand how things work, not to tell them what to do.