
OpenAI has launched a new bug bounty program for its latest large language model, GPT-5, with rewards of up to $25,000. The program tests whether security researchers and red-teamers can find a universal jailbreak prompt that forces GPT-5 to answer sensitive bio- and chemistry-related safety questions. The challenge is strict: the jailbreak must work from a clean chat and bypass moderation systems.
As mentioned, OpenAI is offering $25,000 to the first participant who successfully delivers a universal jailbreak across all ten bio/chem safety questions. That's not all: the company is also offering $10,000 to the first team that completes all ten questions using multiple jailbreak prompts. Beyond that, partial successes may earn smaller payouts at OpenAI's discretion.
Applications for the program are now open on an invite-only basis, with access restricted to vetted bio red-teamers and selected applicants. Once approved, participants will be onboarded to OpenAI’s bio bug bounty platform. Testing officially begins on September 9, 2025.
In addition, to ensure responsible disclosure, OpenAI requires all prompts, completions, findings, and related communications to remain confidential under a strict NDA.
GPT-6 Already on the Horizon
While OpenAI is still sharpening the security of GPT-5, CEO Sam Altman has already hinted at what’s next. Speaking to reporters in San Francisco, Altman said GPT-6 will arrive after a shorter gap than the one between GPT-4 and GPT-5, and will bring a shift in how users interact with AI.
Instead of just responding to prompts, GPT-6 is expected to adapt more closely to users, potentially allowing people to build highly personalized chatbots that reflect individual preferences and routines. Altman also highlighted memory as a critical feature to make ChatGPT “truly personal.” For now, though, OpenAI’s attention is on GPT-5, and whether anyone can push it past its guardrails.