
Anthropic analyzed one million conversations with its AI assistant Claude last month and found that many featured intimate personal interactions.
Around 6% of the reviewed conversations involved people asking the AI assistant for personal advice about their lives: whether they should take a job offer, leave a relationship, or move to a new city.
The figure amounts to tens of thousands of people who have turned to AI for the kind of guidance normally sought from a therapist, trusted mentor, or a close friend or family member.
According to the report, the four most common topics among those asking for personal advice were health and wellness (27%), professional and career development (26%), relationships (12%), and personal finance (11%). Together, they accounted for more than three-quarters of all guidance-seeking conversations.
Some of the queries were high-stakes by any measure, including questions about immigration pathways, medication dosages, infant care, and how to handle credit card debt.
Anthropic said that some users told Claude they're seeking AI guidance because they cannot afford professional help.
But if Claude – or any AI – is functioning as a de facto mental health or financial advisor for underserved users, the quality of its responses stops being a product question and starts becoming a public health issue.
Ethan Kross, a psychologist at the University of Michigan and co-founder of the UM Institute for Mental Fitness, sees the trend as both an opportunity and a warning.
"This is a fascinating trend that we're seeing in our research as well," he told International Business Times. "It points to an urgent need to understand what implications these AI-mediated interactions are having."
He went on to say that researchers still have very little sense of what kind of feedback people are receiving from AI, or what effects it is having on them. "That's enormously disconcerting," he added.
"When you consider the amount of misinformation that exists online about how to manage emotion, the need to properly harness AI in support of emotion regulation becomes acute," he added. "To be clear, there is enormous potential for AI to be a source of tremendous good here. But also for the opposite. Ideally, we let science be our guide."
The Anthropic study addressed Claude's feedback to users as well.
When people came to Claude for personal guidance, the model behaved sycophantically in 9% of conversations overall, the company said, meaning it agreed too readily, validated one-sided accounts, and told people what they wanted to hear rather than what was actually useful.
In conversations related to personal relationships, that number jumped to 25%.
Sycophancy, in this context, means the AI agreeing, for example, that a user's partner is "definitely gaslighting" them, or validating a user's hope that casual texts from a friend carry romantic intent.
Other research points to AI's overly agreeable nature as well. "By default, AI advice does not tell people that they're wrong nor give them 'tough love,'" said Myra Cheng, a Stanford researcher who evaluated 11 large language models (LLMs) and found that the AIs affirmed the users' positions more frequently than humans would.
The effects of such behavior on women could be particularly disheartening. Adriana Torosian, founder of Ourself Health, a women's health tracking platform, points to structural issues in women's health that could be exacerbated by AI.
"There is a long history in medicine of women's psychological symptoms being minimized or dismissed," she told International Business Times. "AI trained to be agreeable above all else risks becoming a more sophisticated version of the same dismissal — one that sounds like understanding but delivers no real insight."
Anthropic researchers traced Claude's sycophancy problem to user pushback. When users challenged Claude's initial response, the model – which is trained to be helpful and empathetic – caved. The sycophancy rate in conversations with user pushback was 18%, compared to 9% in conversations without.
As a fix, Anthropic said it built synthetic training scenarios designed to simulate exactly those pressure points: simulated users flooded the model with one-sided detail, criticized its assessments, and pushed for validation.
The company's resulting models, Opus 4.7 and Mythos Preview, showed roughly half the sycophancy rate of their predecessors in relationship guidance, according to the company.
The study is careful about what it can and cannot claim. It can measure what Claude said. It can't measure what users did after receiving the response.
Anthropic acknowledged that follow-up interviews with Claude users could provide more insight into how people are using the AI's recommendations.