- OpenAI is urgently addressing concerns that ChatGPT fails to recognise and appropriately respond to users experiencing mental or emotional distress.
- Reports indicate that people are increasingly using ChatGPT for mental health support, but the system has been criticised for encouraging users' delusions and failing to challenge their assumptions.
- The company is implementing improvements to its models to better recognise signs of delusion or emotional dependency and will introduce alerts for users engaged in long sessions.
- ChatGPT will now aim to guide users through complex personal decisions, such as relationship questions, rather than providing direct answers.
- OpenAI is collaborating with medical experts, a mental health advisory group, and researchers to enhance the system's ability to spot concerning behaviour and respond effectively.