
OpenAI made it clear with the release of GPT-5 that ChatGPT would now be a place to get medical advice and assistance with health queries. However, a recent change to the chatbot's terms and conditions has left many users questioning whether this is still the case.
Likewise, ChatGPT users took to X in droves claiming the chatbot would no longer give legal advice, pointing to a line in ChatGPT's terms and conditions that prohibits: “Provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
The line appears in a section listing violations of the platform's usage policies, and many users took it to mean that ChatGPT would no longer be able to offer legal or medical advice.
After this rumour started circulating, Karan Singhal, head of Health AI at OpenAI, posted on X: “Not true. Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information.”
What has changed?

So if ChatGPT continues to offer advice in these areas, what does the change to the terms of service actually mean?
While ChatGPT will continue to offer this advice, the updated terms suggest that users shouldn't act on it in ways that could harm others without first consulting a legitimate professional.
In other words, because ChatGPT isn't a medical or legal professional itself, don't apply its advice to someone else who could be affected by the outcome.
This is likely aimed at limiting users who present themselves as lawyers or medical professionals while relying on ChatGPT as their source of information.
It is similar to the company’s previous usage policy, which said that users shouldn’t perform activities that “may significantly impair the safety, well-being, or rights of others”.
A cautious approach

While ChatGPT still offers medical advice, OpenAI is becoming more cautious about the advice it gives and the way the chatbot interacts with certain users.
Last week, OpenAI published a lengthy document detailing major changes to how ChatGPT responds to sensitive conversations. The company says it worked with more than 170 mental health experts to help ChatGPT more reliably recognize signs of distress.
This comes after Sam Altman, CEO of OpenAI, recently said that the company would be relaxing mental health guardrails to make the model more accessible to everyone. The mental health update re-routes sensitive conversations and suggests taking breaks if users seem distressed.
While this update is separate from the question of medical advice, it does detail the changes OpenAI has made to how the chatbot handles conversations around psychosis, mania, and other severe mental health symptoms.
