
OpenAI has rolled back its latest update to ChatGPT, after the artificial intelligence system became dangerously sycophantic.
The problem arose after the newest update to the GPT‑4o model that powers ChatGPT’s responses.
Users found that it would indiscriminately endorse their behaviour, making it “overly flattering or agreeable”, OpenAI said.
Some users reported, for instance, that the system would encourage them to stop important medical treatment.
“ChatGPT’s default personality deeply affects the way you experience and trust it,” OpenAI said.
“Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right.”
OpenAI said that it had built the system to “reflect our mission and be useful, supportive, and respectful of different values and experience”. But “each of these desirable qualities like attempting to be useful or supportive can have unintended side effects”, OpenAI said.
In response, OpenAI said it was pulling the update and would roll out a new version that is less sycophantic. It would also look to make the system more honest and transparent.
In the future, the company will also give users more ways to test and give feedback on new updates before they are rolled out. It also plans to give users more control over the tone of ChatGPT’s responses, it said.