Microsoft challenges you to hack its LLM email service


- Microsoft is offering $10k prize for hackers who can exploit vulnerabilities in its LLM
- The challenge will focus on prompt injection defenses
- Software developers and hackers often work together to discover and fix flaws
Are you an experienced hacker looking to make a little extra money this Christmas? Well you might be in luck, as Microsoft is sponsoring a competition, alongside the Institute of Science, and Technology Australia, and ETH Zurich, in which contestants will try to break a simulated Large Language Model (LLM) integrated email client.
Winning teams for the LLMail-Inject challenge will be awarded a share of the $10,000 prize pool.
Participants will need to sign into the challenge using a GitHub account, and create a team. The teams will then be asked to evade prompt injection defenses in a simulated LLM-integrated email client. The LLmail service includes an assistant which can answer questions and perform actions on behalf of the user, and crucially includes defenses against indirect prompt injection tasks.
A mutually beneficial relationship
By bypassing the injection defenses, the hackers will be looking to prompt the LLM to do or reveal things it is not trained to. Through this, Microsoft is aiming to identify weaknesses in its current prompt injection defenses, and encourage the development of robust security measures.
The relationship between security researchers and software developers is often used this way, with Google often offering a ‘bug bounty’ for anyone who discovers and is able to exploit vulnerabilities in its Google Cloud Platform.
Similarly, Microsoft recently announced it was hosting its own Black Hat-esque hacking event, in which competitors would look for vulnerabilities in Microsoft AI, Azure, Identity, Dynamics 365, and M365.
Taking a proactive approach to addressing potential vulnerabilities allows software companies to mitigate the risks before they can be exploited by threat actors in real world scenarios. Slack’s AI assistant was on the receiving end of malicious prompt injections, which was luckily discovered by security researchers – but could have led to real security concerns.
Sign up to the TechRadar Pro newsletter to get all the top news, opinion, features and guidance your business needs to succeed!
Via The Register
You might also like
Microsoft is offering $10k prize for hackers who can exploit vulnerabilities in its LLM The challenge will focus on prompt injection defenses Software developers and hackers often work together to discover and fix flaws Are you an experienced hacker looking to make a little extra money this Christmas? Well you…
Recent Posts
- This surprisingly simple way to hide hardware security keys in mainstream flash memory could pave the way for ultra-secure storage very soon
- NYT Connections hints and answers for Sunday, July 6 (game #756)
- NYT Strands hints and answers for Sunday, July 6 (game #490)
- The 55 Best Deals From REI’s July 4 Outdoor Gear Sale (2025)
- Samsung is about to find out if Ultra is enough
Archives
- July 2025
- June 2025
- May 2025
- April 2025
- March 2025
- February 2025
- January 2025
- December 2024
- November 2024
- October 2024
- September 2024
- August 2024
- July 2024
- June 2024
- May 2024
- April 2024
- March 2024
- February 2024
- January 2024
- December 2023
- November 2023
- October 2023
- September 2023
- August 2023
- July 2023
- June 2023
- May 2023
- April 2023
- March 2023
- February 2023
- January 2023
- December 2022
- November 2022
- October 2022
- September 2022
- August 2022
- July 2022
- June 2022
- May 2022