Tom’s Hardware
Nathaniel Mott

AI-powered PromptLocker ransomware is just an NYU research project — the code worked as a typical ransomware, selecting targets, exfiltrating selected data and encrypting volumes


ESET said on Aug. 26 that it had discovered the first AI-powered ransomware, which it dubbed PromptLocker, in the wild. But it seems that wasn't the case: New York University (NYU) researchers have claimed responsibility for the malware ESET found.

It turns out PromptLocker is actually an experiment called "Ransomware 3.0" conducted by researchers at NYU's Tandon School of Engineering. A spokesperson for the school told Tom's Hardware that a Ransomware 3.0 sample was uploaded to VirusTotal, a malware analysis platform, where ESET researchers found it and mistook it for an in-the-wild threat.

ESET said that the malware "leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption." The company noted that the sample hadn't implemented destructive capabilities, however, which makes sense for a controlled experiment.
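That generate-then-execute loop is easy to picture. The sketch below illustrates the general pattern ESET describes, not the actual PromptLocker/Ransomware 3.0 code: a hard-coded prompt asks an LLM for a Lua script, and the host program runs whatever comes back. The OpenAI client, the model name, and the deliberately harmless task (listing filenames) are all illustrative assumptions.

```python
# Hypothetical sketch of the prompt-to-script pattern ESET describes; this is
# NOT the NYU code, and the task here is deliberately benign.
import subprocess
import tempfile

from openai import OpenAI  # assumes the openai package and an API key are configured

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "hard-coded prompt": the model composes the actual script at runtime.
HARD_CODED_PROMPT = (
    "Write a Lua script that prints the names of the files in the current "
    "directory, one per line. Respond with only the Lua code, no markdown."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-completion model would do
    messages=[{"role": "user", "content": HARD_CODED_PROMPT}],
)
lua_source = response.choices[0].message.content

# Persist the generated script and hand it to a local Lua interpreter.
with tempfile.NamedTemporaryFile("w", suffix=".lua", delete=False) as script:
    script.write(lua_source)
    path = script.name

subprocess.run(["lua", path], check=True)
```

The notable design point, reflected in the paper's "Self-Composing" title, is that the payload logic is synthesized at runtime rather than shipped inside the binary.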

But the malware does work: NYU said "a simulated malicious AI system developed by the Tandon team carried out all four phases of ransomware attacks — mapping systems, identifying valuable files, stealing or encrypting data, and generating ransom notes — across personal computers, enterprise servers, and industrial control systems."

Is that worrisome? Absolutely. But there's a significant difference between academic researchers demonstrating a proof of concept and actual criminals using the same technique in real-world attacks. That said, the study will likely inspire ne'er-do-wells to adopt similar approaches, especially since the method appears to be remarkably cheap.

"The economic implications reveal how AI could reshape ransomware operations," the NYU researchers said. "Traditional campaigns require skilled development teams, custom malware creation, and substantial infrastructure investments. The prototype consumed approximately 23,000 AI tokens per complete attack execution, equivalent to roughly $0.70 using commercial API services running flagship models."
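Those figures are easy to sanity-check. The quick calculation below shows that they imply a blended rate of roughly $30 per million tokens, broadly consistent with flagship-model API pricing; the rate itself is an inference from the quote, not a number NYU reported.

```python
# Back-of-envelope check on NYU's quoted figures; the implied blended rate
# is inferred from the article, not stated by the researchers.
tokens_per_attack = 23_000
cost_per_attack_usd = 0.70

implied_rate = cost_per_attack_usd / tokens_per_attack * 1_000_000
print(f"Implied blended price: ${implied_rate:.2f} per million tokens")
# Output: Implied blended price: $30.43 per million tokens
```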

As if that weren't enough, the researchers said that "open-source AI models eliminate these costs entirely," so ransomware operators won't even have to shell out the 70 cents needed to work with commercial LLM service providers. They'll receive a far better return on investment than anyone pumping money into the AI sector, at least.

But for now that's all still conjecture. This is compelling research, sure, but it seems we're going to have to wait a while longer for the cybersecurity industry's promise that AI will be the future of hacking to come to fruition. (Or be exposed as the same AI boosterism taking place throughout the rest of the tech industry; whichever.)

NYU's paper on the study, "Ransomware 3.0: Self-Composing and LLM-Orchestrated," is available online.

Follow Tom's Hardware on Google News, or add us as a preferred source, to get our up-to-date news, analysis, and reviews in your feeds. Make sure to click the Follow button!
