Claude AI and other systems could be vulnerable to worrying prompt injection attacks


- Security researchers tricked Anthropic’s Claude Computer Use into downloading and running malware
- They say other AI tools could be tricked with prompt injection, too
- GenAI can also be tricked into writing, compiling, and running malware
In mid-October 2024, Anthropic released Claude Computer Use, an Artificial Intelligence (AI) capability that lets the Claude model control a device – and researchers have already found a way to abuse it.
Cybersecurity researcher Johann Rehberger recently described how he was able to abuse Computer Use and get the AI to download and run malware, as well as make it communicate with C2 infrastructure, all through prompts.
While it sounds devastating, there are a few things worth mentioning. Claude Computer Use is still in beta, and Anthropic has published a disclaimer warning that it might not always behave as intended: “We suggest taking precautions to isolate Claude from sensitive data and actions to avoid risks related to prompt injection.” It is also worth noting that this is a prompt injection attack, a class of attack that is fairly common against AI tools.
“Countless ways” to abuse AI
Rehberger calls his exploit ZombAIs, and says he was able to get the tool to download Sliver, a legitimate open source command-and-control (C2) framework developed by Bishop Fox for red teaming and penetration testing, which is often misused by cybercriminals as malware.
Threat actors use Sliver to establish persistent access to compromised systems, execute commands, and manage attacks in a similar way to other C2 frameworks like Cobalt Strike.
Rehberger also stressed that this is not the only way to abuse generative AI tools and compromise endpoints via prompt injection.
“There are countless others, like another way is to have Claude write the malware from scratch and compile it,” he said. “Yes, it can write C code, compile and run it.”
“There are many other options.”
In its writeup, The Hacker News added that the DeepSeek AI chatbot was also found vulnerable to a prompt injection attack that could allow threat actors to take over victim computers. Furthermore, Large Language Models (LLMs) can output ANSI escape codes, which can be used to hijack system terminals via prompt injection, in an attack dubbed Terminal DiLLMa.
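To illustrate the underlying risk, here is a minimal, hedged sketch (not taken from the Terminal DiLLMa research): terminal emulators interpret raw ANSI/VT100 escape sequences, so untrusted model output should be sanitized before it is printed to a console. The regex pattern and function name below are illustrative assumptions, not part of any published exploit or vendor API.

```python
import re

# Matches common ANSI/VT100 escape sequences: CSI (e.g. colors, cursor moves),
# OSC (e.g. terminal title changes), and single-character escapes.
ANSI_ESCAPE = re.compile(
    r"\x1b(\[[0-?]*[ -/]*[@-~]|\][^\x07\x1b]*(\x07|\x1b\\)|[@-Z\\-_])"
)

def sanitize_llm_output(text: str) -> str:
    """Strip escape sequences from untrusted model output before it reaches a terminal."""
    return ANSI_ESCAPE.sub("", text)

# Example: output that tries to rewrite the terminal title via an OSC sequence.
untrusted = "Here is your summary.\x1b]0;owned\x07 All done."
print(sanitize_llm_output(untrusted))  # -> "Here is your summary. All done."
```

Stripping escape sequences is a blunt but simple mitigation; an alternative is to render model output only through interfaces that do not interpret terminal control codes at all.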