
ESET today announced the discovery of "the first known AI-powered ransomware." The ransomware in question has been dubbed PromptLock, presumably because everything related to generative AI seemingly has to be prefixed with "prompt."
ESET said that this malware uses an open-weight large language model developed by OpenAI to generate scripts that can perform a variety of functions on Windows, macOS, and Linux systems while confounding defensive tools by exhibiting slightly different behavior each time.
"PromptLock leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption," ESET said in a Mastodon post about the malware. "Based on the detected user files, the malware may exfiltrate data, encrypt it, or potentially destroy it. Although the destruction functionality appears to be not yet implemented."
Lua might seem like an odd choice of programming language for ransomware; it's mostly known for being used to develop games within Roblox or plugins for the Neovim text editor. But it's actually a general-purpose language that offers the ransomware operators a variety of advantages, including good performance, cross-platform support, and a focus on simplicity that makes it well suited to "vibe coding."
It's important to remember that LLM output is typically non-deterministic: at the sampling settings models normally run with, the same prompt sent to the same model on the same device can produce different output each time. That's maddening if you expect identical behavior over time, but ransomware operators don't necessarily want consistency, because consistent behavior makes it easier for defensive tooling to associate patterns of activity with known malware.
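To see where that run-to-run variation comes from, here's a toy sketch (not PromptLock's code, and a deliberate simplification of how real inference engines work) of temperature-based token sampling, the step that makes a model's output stochastic:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample one token index from raw logits via a temperature-scaled softmax.

    At typical temperatures (> 0) this is stochastic: repeated calls with
    identical logits can return different tokens, which is why the same
    prompt to the same model can yield different output on each run.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max before exponentiating, for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # random.choices draws according to the weights, so the result varies
    # between runs even though the inputs are identical.
    return random.choices(range(len(logits)), weights=probs, k=1)[0]
```

Lowering the temperature concentrates probability on the top-scoring token, which is why near-zero temperatures make output (nearly) repeatable while default settings do not.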
PromptLock "uses the gpt-oss:20b model from OpenAI locally via the Ollama API to generate malicious Lua scripts on the fly," ESET said, which helps it evade detection. Running the model locally also means OpenAI can't snitch on the ransomware operators: if they had to call an API on OpenAI's servers every time they generated one of these scripts, the jig would be up. The pitfalls of vibe coding don't really apply, either, since the scripts run on someone else's system.
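For context, this is roughly what local generation through Ollama looks like. This is a benign sketch of the general pattern, not PromptLock's actual code; it assumes a stock Ollama install listening on its default port, and the prompt here is a hypothetical placeholder. The point is that every byte of the exchange stays on the local machine:

```python
import json
import urllib.request

# Assumption: a local Ollama server on its default port (not from ESET's report).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generation_request(model: str, prompt: str) -> dict:
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    # stream=False asks for a single JSON response instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}

def generate_locally(model: str, prompt: str) -> str:
    """POST to the local Ollama server; no request ever reaches OpenAI."""
    payload = json.dumps(build_generation_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the endpoint is `localhost`, there's no upstream provider to log the prompts, which is exactly the detection gap the local-model approach exploits.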
Maybe this will make for a decent consolation prize for AI companies. Yeah, they're facing massive lawsuits. Sure, basically nobody has seen any benefits from adopting their services. Okay, so even Meta's cutting back on its AI-related spending spree. But nobody can say that AI is useless—it's convinced at least some ransomware operators to use local models in their warez! That counts for something, right?