OpenAI is upping its bug bounty rewards as security worries rise


- OpenAI is increasing its bug bounty payouts
- Spotting high-impact vulnerabilities could net researchers $100k
- The move comes as more AI agents and systems are developed
OpenAI is hoping to encourage security researchers to hunt down vulnerabilities by increasing its rewards for spotting bugs.
The AI giant has revealed it is raising the top reward in its Security Bug Bounty Program from $20k to $100k, and is widening the scope of its Cybersecurity Grant Program, as well as developing new tools to protect AI agents from malicious threats.
This follows recent warnings that AI agents can be hijacked to write and send phishing attacks, and the company is keen to underline its “commitment to rewarding meaningful, high-impact security research that helps us protect users and maintain trust in our systems.”
Disrupting threats
Since the Cybersecurity Grant Program launched in 2023, OpenAI has reviewed thousands of applications and funded 28 research initiatives, giving the firm valuable insights into security subjects such as autonomous cybersecurity defenses, prompt injection, and secure code generation.
OpenAI says it continually monitors malicious actors looking to exploit its systems, and identifies and disrupts targeted campaigns.
“We don’t just defend ourselves,” the company said, “we share tradecraft with other AI labs to strengthen our collective defenses. By sharing these emerging risks and collaborating across industry and government, we help ensure AI technologies are developed and deployed securely.”
OpenAI is not the only company to boost its rewards program; Google announced a fivefold increase in bug bounty rewards in 2024, arguing that more secure products make finding bugs more difficult, which is reflected in the higher payouts.
With more advanced models and agents, and a growing number of users and deployments, there are inevitably more points of vulnerability that could be exploited, so the relationship between researchers and software developers is more important than ever.
“We are engaging researchers and practitioners throughout the cybersecurity community,” OpenAI confirmed.
“This allows us to leverage the latest thinking and share our findings with those working toward a more secure digital world. To train our models, we partner with experts across academic, government, and commercial labs to benchmark skills gaps and obtain structured examples of advanced reasoning across cybersecurity domains.”
Via CyberNews