AI Makes It Easier for Hackers to Identify Anonymous Social Media Accounts, Study Finds

New research suggests that AI platforms like ChatGPT could make sophisticated privacy attacks much easier.
A study by researchers Simon Lermen and Daniel Paleka found that large language models (LLMs) can successfully match anonymous social media accounts to real identities based on the information users post.

In their experiments, LLMs were able to take details from anonymous accounts—like mentioning walking a dog in a specific park—and cross-reference that information to identify the user on other platforms with high confidence.

While this example was hypothetical, the researchers warned that AI could be used to surveil dissidents and activists, or by hackers to run “highly personalised” scams. AI now allows anyone with an internet connection and access to public LLMs to carry out attacks that previously required expert skills.

Experts highlighted several concerns:

Peter Bentley, professor of computer science at UCL, warned that commercial de-anonymisation tools could pose serious privacy risks, and noted that LLMs sometimes make mistakes, potentially implicating innocent people.

Prof. Marc Juárez, cybersecurity lecturer at the University of Edinburgh, noted that AI could link public data beyond social media—like hospital records or statistical releases—raising risks even when data is “anonymised.”

Prof. Marti Hearst, of UC Berkeley, emphasised that AI can only link accounts when users share the same identifiable information across platforms, a limit that means the technique is not infallible.

The study calls for a reassessment of online privacy practices. Lermen suggested steps such as restricting access to user data, detecting automated scraping, and limiting bulk exports. Users are also encouraged to be cautious about the personal details they share online.

As AI grows more capable, protecting anonymity may require both stronger platform safeguards and greater individual vigilance.