It is at once one of the internet’s greatest assets and one of its most dangerous enablers: anonymity. From your secret Reddit account where you enthuse with others over your nichest of interests, to your “finsta”, many of us take anonymity on the internet as a given.
But your secret social media account may not be as private as you think, according to a new study. Scientists in Switzerland found artificial intelligence (AI) tools are capable of unmasking anonymous accounts at scale, revealing the true identity behind them by matching them up to their public counterparts.
The authors said their findings have “significant implications” for online privacy, warning that unless guardrails are put in place now, much of the anonymity we enjoy online may be quickly eroded.
Those most at risk? People who leak lots of information about themselves, even over long timespans. Researchers said these typically tend to be older or vulnerable people with less awareness of how to stay safe online.
Speaking to The Independent, the study’s lead author Daniel Paleka, from ETH Zurich, said their findings make it “very clear” that “if you keep posting under a pseudonym, keep quoting information about yourself,” AI tools will be able to unmask you cheaply and quickly.
Researchers built a system that used large language models (LLMs) to search the web, treating information gathering as a “matching” exercise: using reasoning and evidence to pair anonymous accounts that had revealed snippets of their real identity with that person’s public account.
Instead of using people’s genuine anonymous accounts, the team took datasets built from publicly available posts, including content from Hacker News and LinkedIn, transcripts of AI company Anthropic’s interviews with scientists on how they use AI, and Reddit accounts that were deliberately split into two anonymised halves for the experiment.
In each of these different scenarios, the LLM system correctly identified up to 68 percent of matching accounts with 90 percent precision, which researchers say “substantially outperforms” non-LLM methods of deanonymisation, such as investigation by humans.

This ability “fundamentally changes” the picture of privacy online, the study’s authors said.
“Governments could link pseudonymous accounts to real identities for surveillance of dissidents, journalists, or activists,” they outlined in the paper, which has not yet been peer-reviewed.
“Corporations could connect seemingly anonymous forum posts to customer profiles for hyper-targeted advertising. Attackers could build sophisticated profiles of targets at scale to launch highly personalised social engineering scams. Hostile groups could identify important employees and decision makers and build online rapport with them to eventually leverage in various forms.
“Users, platforms, and policymakers must recognise that the privacy assumptions underlying much of today’s internet no longer hold.”
Mr Paleka said the AI tools are not “superhuman investigators” able to find information humans cannot, but are far cheaper and quicker than human investigations.
For example, if someone made one comment about where they lived and another, years apart, about where they worked, both humans and AI tools would be able to find that information, but LLMs can do it far quicker and far more cheaply.
At the moment, the technology is not capable of matching accounts by patterns of writing style, but it can match facts about a person such as employment history, where they live, and their hobbies.
He added that while at the moment someone without extensive knowledge of LLMs would not be able to replicate the study, he expects that if no “guardrails” are put in place, it will become far easier for everyday people to unmask anonymous accounts within a few years.
“The fundamentals of the technology are there,” he said. “If there are no guards I fully expect someone to be able to misuse it.”
He said AI companies may choose to police the use of their tools to avoid this kind of misuse, but that the aim of the study is to bring the issue into public consciousness while there is still a “low risk” for most anonymous internet users.
“If you care about something being anonymous, if you have something to protect, if you want to post opinions about things that you would not post under your real name, be mindful of this,” he continued.
So how can you best protect yourself? “There is a very straightforward solution which solves the vast majority of these issues,” Mr Paleka told The Independent. “Use a throwaway account.”
A throwaway account is one created solely for making a single post, so it carries no other information about the poster.
“If you’re posting something genuinely sensitive, don’t use the account you also use to post about whatever else for years,” he added. “That would be my advice.”