Get all your news in one place.
100’s of premium titles.
One app.
Start reading
Tom’s Guide
Tom’s Guide
Technology
Scott Younker

Google warns of AI-infused malware that's harder to detect than normal viruses

Graphic screen displaying malware detection warning .

Google's Threat Intelligence Group (GTIG) is warning that bad guys are using artificial intelligence to create and deploy new malware that both utilizes and combats large language models (LLM) like Gemini when deployed.

The findings were laid out in a white paper released on Wednesday, November 5 by the GTIG. The group noted that adversaries are no longer leveraging artificial intelligence (AI) just for productivity gains, they are deploying "novel AI-enabled malware in active operations." They went on to label it a new "operational phase of AI abuse."

Malware families

(Image credit: Google)

Google is calling the new tools "just-in-time" AI used in at least two malware families: PromptFlux and PromptSteal, both of which use LLMs during deployment. They generate malicious scripts and obfuscate their code to avoid detection by antivirus programs. Additionally, the malware families use AI models to create malicious functions "on demand" rather than being built into the code.

Google says these tools are a nascent but significant step towards "autonomous and adaptive malware."

PromptFlux is an experimental VBScript dropper that utilizes Google Gemini to generate obfuscated VBScript variants. VBScript is mostly used for automation in Windows environments.

(Image credit: Google)

In this case, PromptFlux attempts to access your PC via Startup folder entries and then spreads through removable drives and mapped network shares.

"The most novel component of PROMPTFLUX is its 'Thinking Robot' module, designed to periodically query Gemini to obtain new code for evading antivirus software," GTIG says.

The researchers say that the code indicates the malware's makers are trying to create an evolving "metamorphic script."

According to Google, the Threat Intelligence researchers could not pinpoint who made PromptFlux, but did note that it appears to be used by a group for financial gain. Google also claims that it is in early development and can't yet inflict real damage.

The company says that it has disabled the malware's access to Gemini and deleted assets connected to it.

Google also highlighted a number of other malware that establish remote command-and control (FruitShell), capturing GitHub credentials (QuietVault), and one that steals and encrypts data on Windows, macOS and Linux devices (PromptLock). All of them utilize AI to work or in the case of FruitShell to bypass LLM-powered security.

Gemini abuse

(Image credit: Google)

Beyond malware, the paper also reports several cases where threat actors abused Gemini. In one case, a malicious actor posed as a "capture-the-flag" participant, basically acting as a students or researchers to convince Gemini to provide information that is supposed to be blocked.

Google specified a number of threats from Chinese, Iranian and North Korean threat groups that abused Gemini for phishing, data mining, increasing malware sophistication, crypto theft and creating deepfakes.

Again, Google says it has disabled the associated accounts in identified cases and reinforced its model safeguards. The company goes on to says that underground marketplaces for malicious AI-based tools is growing.

"Many underground forum advertisements mirrored language comparable to traditional marketing of legitimate AI models, citing the need to improve the efficiency of workflows and effort while simultaneously offering guidance for prospective customers interested in their offerings," the company wrote.

With AI getting more sophisticated, this seems to indicate a trend of replacing conventional malicious tools with new AI-based ones.

Google's AI approach

(Image credit: Freepik)

The paper wraps up by advocating that AI developers need to be "both bold and responsible" and that AI systems must be designed with "strong safety guardrails" to prevent these kinds of abuses.

Google says that it investigates signs of abuse in its products and uses the experience of combating bad actors to "improve safety and security for our AI models."

How to stay safe

(Image credit: Shutterstock)

The war against viruses and malware is ever evolving as tools on both sides become more sophisticated especially with the injection of AI.

There are ways to stay safe. As always, be wary of links and external content. If an AI tool is be used to summarize a web page, PDF, or email that content could be malicious or contain a hidden prompt to attack the AI.

Additionally, you should always limit AI access to sensitive accounts like bank accounts, email or documents that have sensitive information. Compromised AI could exploit that access.

Finally, unexpected behavior in an LLM or AI model should be treated as a red flag. If an AI model starts answerint questions strangely, reveals internal knowledge of your PC or worse, tries to perform unusual or unauthorized actions then you should stop that session.

Make sure you keep your software updated, including the best antivirus software and the LLM programs and applications you utilize. this ensures that you have the most recent and patched versions protecting you against known flaws.

Follow Tom's Guide on Google News and add us as a preferred source to get our up-to-date news, analysis, and reviews in your feeds.

More from Tom's Guide

Sign up to read this article
Read news from 100’s of titles, curated specifically for you.
Already a member? Sign in here
Related Stories
Top stories on inkl right now
Our Picks
Fourteen days free
Download the app
One app. One membership.
100+ trusted global sources.