Google security researchers have identified what they say is the first known case of hackers using AI-powered malware in a real-world cyberattack, according to findings published Wednesday.
Why it matters: The discovery suggests adversarial hackers are moving closer to operationalizing generative AI to supercharge their attacks.
Driving the news: Researchers in Google's Threat Intelligence Group have discovered two new malware strains — PromptFlux and PromptSteal — that use large language models to change their behavior mid-attack.
- Both malware strains can "dynamically generate malicious scripts, obfuscate their own code to evade detection and leverage AI models to create malicious functions on demand," according to the report.
Zoom in: Google's team found PromptFlux while scanning uploads to VirusTotal, a popular malware-scanning tool, for any code that called back to Gemini.
- The malware appears to be in active development: Researchers observed the author uploading updated versions to VirusTotal, likely to test how well they evade detection. PromptFlux uses Gemini to rewrite its own source code, disguise its activity and attempt to move laterally to other connected systems.
- Meanwhile, Russian military hackers have used PromptSteal, another new AI-powered malware strain, in cyberattacks on Ukrainian entities, according to Google. The Ukrainian government first discovered the malware in July.
- Unlike conventional malware, PromptSteal lets hackers interact with it using prompts, much like querying an LLM. It's built around an open-source model hosted on Hugging Face and designed to move around a system and exfiltrate data as it goes.
Reality check: Both malware strains are still nascent, Google says. But they mark a major step toward the future that many security executives have feared.
Between the lines: PromptSteal's reliance on an open-source model is something Google's team is watching closely, Billy Leonard, tech lead at Google Threat Intelligence Group, told Axios.
- "What we're concerned about there is that with Gemini, we're able to add guardrails and safety features and security features to those to mitigate this activity," Leonard said. "But as (hackers) download these open-source models, are they able to turn down the guardrails?"
The big picture: The underground cybercrime market for AI tools has matured significantly in the past year, the report says.
- Researchers have seen advertisements for AI tools that could write convincing phishing emails, create deepfakes and identify software vulnerabilities.
- That makes it easier for even unskilled cybercriminals to launch attacks well beyond their own skill level.
Yes, but: Most attackers don't need AI to do damage and still rely overwhelmingly on common tactics like phishing emails and stolen credentials, incident responders have told Axios.
- "This isn't 'the sky is falling, end of the world,'" Leonard said. "They're adopting technologies and capabilities that we're also adopting."
Go deeper: AI is about to supercharge cyberattacks