GPT‑5.5 Bio Bug Bounty

Source: openai.com
149 points by Murfalo 22 hours ago on hackernews | 99 comments

Testing universal jailbreaks for biorisks in GPT‑5.5

As part of our ongoing efforts to strengthen our safeguards for advanced AI capabilities in biology, we’re introducing a Bio Bug Bounty for GPT‑5.5 and accepting applications. We’re inviting researchers with experience in AI red teaming, security, or biosecurity to try to find a universal jailbreak that can defeat our five-question bio safety challenge.

  • Model in scope: GPT‑5.5 in Codex Desktop only.
  • Challenge: Identify one universal jailbreaking prompt to successfully answer all five bio safety questions from a clean chat without prompting moderation.
  • Rewards:
    • $25,000 to the first true universal jailbreak to clear all five questions.
    • Smaller awards may be granted for partial wins at our discretion.
  • Timeline: Applications open April 23, 2026 with rolling acceptances, and close on June 22, 2026. Testing begins April 28, 2026 and ends on July 27, 2026.
  • Access: Application and invites. We will extend invitations to a vetted list of trusted bio red-teamers, and review new applications. Once selected, successful applicants will be onboarded to the bio bug bounty platform.
  • Disclosure: All prompts, completions, findings, and communications are covered by NDA.

Submit a short application here⁠(opens in a new window) (name, affiliation, experience) by June 22, 2026. Accepted applicants and collaborators must have existing ChatGPT accounts to apply, and will sign a NDA. Apply now and help us make frontier AI safer.