TechRadar
Sead Fadilpašić

LLM services are being hit by hackers looking to sell on private info


Using cloud-hosted large language models (LLMs) can be quite expensive, which is why hackers have apparently begun stealing, and selling, login credentials for the tools.

Cybersecurity researchers at the Sysdig Threat Research Team recently spotted one such campaign, dubbing it "LLMjacking."

In its report, Sysdig said it observed a threat actor abusing a vulnerability in the Laravel Framework, tracked as CVE-2021-3129. This flaw allowed them to access the network and scan it for Amazon Web Services (AWS) credentials for LLM services.

New methods of abuse

"Once initial access was obtained, they exfiltrated cloud credentials and gained access to the cloud environment, where they attempted to access local LLM models hosted by cloud providers," the researchers explained in the report. "In this instance, a local Claude (v2/v3) LLM model from Anthropic was targeted."

The researchers were able to identify the tools the attackers used to generate the requests that invoked the models. Among them was a Python script that checked credentials for ten AI services, determining which ones were usable. The services included AI21 Labs, Anthropic, AWS Bedrock, Azure, ElevenLabs, MakerSuite, Mistral, OpenAI, OpenRouter, and GCP Vertex AI.

They also discovered that the attackers didn't run any legitimate LLM queries in the verification stage, instead doing "just enough" to determine what the credentials were capable of and what quotas applied.
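To illustrate the "just enough" idea, here is a minimal, purely hypothetical sketch of how such a probe might interpret responses. The function name and the status-code mapping are assumptions for illustration, not Sysdig's reconstruction of the actual tooling; the key point is that a validation error on a deliberately malformed request can reveal a working credential without incurring any token charges.

```python
# Hypothetical sketch (not the attackers' actual script): classify the
# HTTP status returned by a deliberately malformed model-invocation request.

def classify_probe_response(status_code: int) -> str:
    """Interpret a probe response.

    401/403 means the key was rejected outright. A 400 validation error
    means the service authenticated the key but refused the bad payload,
    i.e. the credential works and no tokens were billed. A 200 means the
    key works and a model actually ran (which costs money).
    """
    if status_code in (401, 403):
        return "invalid"
    if status_code == 400:
        return "valid-no-cost"   # authenticated, request rejected before billing
    if status_code == 200:
        return "valid-invoked"   # key works and an invocation succeeded
    return "unknown"
```

For example, `classify_probe_response(400)` returns `"valid-no-cost"`, flagging a usable credential that cost the prober nothing to verify.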

In its coverage, The Hacker News notes that the findings are evidence hackers are finding new ways to weaponize LLMs beyond the usual prompt injections and model poisoning: monetizing access to the models while the bill gets mailed to the victim.

The bill, the researchers stressed, could be substantial, reaching up to $46,000 a day in LLM usage costs.

"The use of LLM services can be expensive, depending on the model and the amount of tokens being fed to it," the researchers added. "By maximizing the quota limits, attackers can also block the compromised organization from using models legitimately, disrupting business operations."
