
Mistral AI’s Le Chat is the least privacy-invasive generative artificial intelligence (AI) assistant, a new analysis of data privacy practices has found.
Incogni, a personal information removal service, used a set of 11 criteria to assess the privacy risks posed by popular large language model (LLM) platforms, including OpenAI’s ChatGPT, Meta AI, Google’s Gemini, Microsoft’s Copilot, xAI’s Grok, Anthropic’s Claude, Inflection AI’s Pi AI, and China-based DeepSeek.
Each platform was then scored on each criterion from zero (most privacy-friendly) to one (least privacy-friendly). The research aimed to identify how the models are trained, how transparent the companies are, and how user data is collected and shared.
Among the criteria, the study looked at the datasets used to train the models, whether user-generated prompts could be used for further training, and what data, if any, could be shared with third parties.
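Incogni has not published the exact formula it uses to combine the 11 criteria into an overall ranking, but a minimal sketch in Python, assuming equal weighting and placeholder scores (neither of which comes from the study itself), shows how a zero-to-one scale per criterion could produce such an ordering:

```python
# Illustrative sketch only: Incogni's actual aggregation method, criterion
# names, weights, and scores are not public. The values below are
# placeholder assumptions used to show how a 0-to-1 scale could be ranked.

# Hypothetical per-criterion scores: 0 = most privacy-friendly, 1 = least.
platforms = {
    "Le Chat": [0.1, 0.2, 0.1],   # placeholder values, not Incogni's data
    "ChatGPT": [0.2, 0.3, 0.2],
    "Meta AI": [0.8, 0.9, 0.7],
}

def overall_score(criterion_scores):
    """Average the per-criterion scores into a single 0-to-1 figure."""
    return sum(criterion_scores) / len(criterion_scores)

# Rank platforms from most privacy-friendly (lowest score) to least.
for name in sorted(platforms, key=lambda p: overall_score(platforms[p])):
    print(f"{name}: {overall_score(platforms[name]):.2f}")
```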
What sets Mistral AI apart?
The analysis showed that French company Mistral AI’s Le Chat is the least privacy-invasive platform in the study because it collects “limited” personal data and performs well on AI-specific privacy concerns.
Along with Pi AI, Le Chat is also one of the only AI assistant chatbots in the study that shares user-generated prompts solely with its service providers.
OpenAI’s ChatGPT came second in the overall ranking because the company has a “clear” privacy policy that explains to users exactly where their data is going. However, the researchers noted some concerns about how the models are trained and how user data “interacts with the platform’s offerings”.
Grok, operated by billionaire Elon Musk’s company xAI, came in third place, held back by transparency concerns and the amount of data it collects.
Meanwhile, Anthropic’s Claude model performed similarly to Grok but raised more concerns about how its models interact with user data, the study said.
At the bottom of the ranking was Meta AI, the most privacy-invasive platform, followed by Gemini and Copilot.
Many of the companies at the bottom of the ranking do not appear to let users opt out of having their prompts used to further train the models, the analysis said.