Tom’s Hardware
Ash Hill

ChatGPT Can Generate Mutating Malware That Evades Modern Security Techniques

ChatGPT has produced plenty of amusing things in the right hands, like this Big Mouth Billy Bass project. However, there is a much darker side to the technology, one that could create serious problems for the future of IT. Several security experts have recently outlined ChatGPT’s ability to generate polymorphic malware that is nearly impossible to catch with endpoint detection and response (EDR) tools.

EDR is a class of cybersecurity tooling deployed to catch malicious software on endpoints. Experts suggest, however, that this traditional approach is no match for what ChatGPT can produce: code that mutates from one run to the next, which is where the term polymorphic comes from, and is therefore much harder to detect.

Most large language models (LLMs), ChatGPT included, are designed with filters that refuse to generate content their creators deem inappropriate, ranging from specific topics to, in this case, malicious code. However, it didn’t take long for users to find ways to circumvent these filters, and it is this loophole that leaves ChatGPT open to abuse by anyone looking to generate harmful scripts.

Jeff Sims, a security engineer at IT security firm HYAS InfoSec, published a white paper back in March detailing a proof-of-concept project he calls BlackMamba. The application is a polymorphic keylogger that sends a request to ChatGPT through its API each time it runs, synthesizing fresh keylogging code on every execution rather than carrying a fixed payload.
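
To make the idea concrete, here is a minimal and deliberately benign sketch of that runtime code-synthesis pattern, assuming the current OpenAI Python client. The prompt, the model name, and the task() function are illustrative placeholders, not anything taken from the BlackMamba white paper, and the generated payload here merely prints the time.

# A benign illustration of runtime code synthesis: ask an LLM API for
# fresh code on every run and execute it in memory, so no two runs carry
# identical bytes. The payload (printing the time) is harmless by design;
# the prompt and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Write a Python function named task() that prints the current local "
    "time. Return only the code, with no explanation and no markdown."
)

def fetch_generated_code() -> str:
    # Each call may return differently structured code; that per-run
    # variation is what makes the behavior "polymorphic".
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": PROMPT}],
    )
    code = response.choices[0].message.content or ""
    # Drop markdown fences the model sometimes adds despite instructions.
    return "\n".join(
        line for line in code.splitlines() if not line.startswith("```")
    )

namespace: dict = {}
exec(fetch_generated_code(), namespace)  # compile and run in memory
namespace["task"]()  # invoke the freshly synthesized function

Because the code arrives over the network and is assembled in memory at runtime, there is no static payload on disk for a scanner to fingerprint, which is the crux of Sims’s argument.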

“Using these new techniques, a threat actor can combine a series of typically highly detectable behaviors in an unusual combination and evade detection by exploiting the model’s inability to recognize it as a malicious pattern,” Sims explains.

Another cybersecurity company, CyberArk, recently demonstrated ChatGPT’s ability to create this type of polymorphic malware in a blog post by Eran Shimony and Omer Tsarfati. In the post, they explain how injecting code fetched from ChatGPT at runtime allows a program to modify its own scripts once activated, sidestepping the more modern techniques used to detect malicious behavior.
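
The mutation step can be sketched in the same hedged spirit: a minimal example, again assuming the OpenAI Python client, in which a running program feeds its own payload source back to the model and asks for a functionally identical but differently structured rewrite. The function names, prompt wording, and model are assumptions for illustration, not code from the CyberArk post.

# Sketch of the "mutation" idea: request a semantically equivalent
# rewrite of existing code, so the payload's shape (and thus its
# signature) changes between runs while its behavior does not.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def mutate(source: str) -> str:
    # Each call can yield different variable names, structure, and
    # control flow for the same behavior.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                "Rewrite this Python function so it behaves identically "
                "but uses different variable names, structure, and "
                "control flow. Return only code:\n\n" + source
            ),
        }],
    )
    return response.choices[0].message.content or ""

ORIGINAL = "def greet():\n    print('hello')\n"
print(mutate(ORIGINAL))  # a different-looking but equivalent variant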

For now, these examples exist only as proofs of concept, but the hope is that this awareness will drive new defenses against the harm such mutating code could cause in a real-world setting.
