
Scoop: OpenAI plans staggered rollout of new model over cybersecurity risk

OpenAI is finalizing a model with advanced cybersecurity capabilities that it plans to release only to a small set of companies, similar to Anthropic's limited rollout of Mythos, a source familiar with the plans told Axios.

Why it matters: AI capabilities have reached a tipping point, at least in terms of autonomy and hacking capabilities. Model-makers are now so worried about the havoc their own tools could cause that they're reluctant to release them into the wild.


Driving the news: Anthropic announced plans Tuesday to limit access to its new Mythos Preview model to a hand-picked group of technology and cybersecurity companies over fears of its advanced hacking capabilities.

  • At the time, it was the first AI company to take such an approach with a new model.
  • Now, OpenAI is planning a similar approach, according to the source.

Zoom in: OpenAI introduced its "Trusted Access for Cyber" pilot program in February after rolling out GPT-5.3-Codex, the company's most cyber-capable reasoning model.

  • Organizations in the invite-only program are given access to "even more cyber capable or permissive models to accelerate legitimate defensive work," according to a blog post.
  • At the time, OpenAI committed $10 million in API credits to participants.

The big picture: Former government officials and top security leaders have been ringing alarm bells over the past year about AI models that — in the wrong hands — could one day autonomously disrupt water utilities, the electric grid, or financial systems.

  • Those capabilities now appear to be here.

Threat level: Even if AI companies hold back their models for limited releases, top security experts all have the same message: There's no going back.

  • "You can't stop models from doing code enumeration or finding flaws in older codebases," said Rob T. Lee, chief AI officer at the SANS Institute. "That capability exists now."
  • It's only a matter of weeks or months before there's a new model with similar capabilities out in the wild, Wendi Whitmore, chief security intelligence officer at Palo Alto Networks, told Axios during a panel at the HumanX conference in San Francisco on Tuesday.
  • Adam Meyers, senior vice president of counter adversary operations at CrowdStrike, called Mythos' capabilities a "wake-up call" for the entire industry.

Between the lines: Restricting the rollout of a new frontier model makes "more sense" if companies are concerned about models' ability to write new exploits — rather than about their ability to find bugs in the first place, Stanislav Fort, CEO of security firm Aisle, told Axios.

The intrigue: Staggering the release of new AI models looks a lot like how cybersecurity vendors currently handle the disclosure of security flaws in software, Lee added.

  • "It's the same debate we've had for decades around responsible vulnerability disclosure," Lee said.

Yes, but: It's unclear if OpenAI will decide to release its forthcoming model more broadly at some point.

  • Anthropic has said it won't ever release Mythos Preview to the public, but would consider releasing other Mythos models if there are strong guardrails.

Reality check: Widely available AI models are already capable of finding some of the vulnerabilities and exploits that Mythos uncovered, researchers at Aisle found Wednesday.

Go deeper: The wildest things Anthropic's Mythos pulled off in testing
