Tom’s Guide
Technology
Amanda Caswell

Anthropic just released Opus 4.7 — the 'civilian' version of the AI they said was too dangerous for us

Dario Amodei, Anthropic CEO.

Today, Anthropic officially released Claude Opus 4.7, the most powerful AI model available to the general public. On paper, it promises to be a beast: a notable leap in advanced software engineering, substantially improved vision for analysis tasks, and a new "self-verification" mode that lets it audit its own work before reporting back to the user.

But there is a shadow hanging over this launch. For the first time in the history of frontier AI, a company has admitted to purposely making a model dumber in order to protect the world from it. Let me explain.

Opus 4.7 is the 'civilian-safe' version of the Mythos model


To understand why the release of Opus 4.7 is such a milestone, you first have to understand the implications of Anthropic's Claude Mythos Preview. I'm mentioning it alongside today's launch mainly because Mythos remains the company's most powerful model, yet its release is strictly limited to cyber defenders and critical infrastructure partners. While Opus 4.7 is a "notable improvement" over previous versions, it is fundamentally a tier below Mythos.

In the release notes for Opus 4.7, Anthropic dropped a bombshell: during training, the team experimented with efforts to "differentially reduce" the model's cyber-offensive capabilities.

For you and me, that means the company intentionally nerfed the model’s ability to be used as a digital weapon.

Project Glasswing and the first real-world test


Opus 4.7 serves as the first live guinea pig for Project Glasswing, the security initiative Anthropic unveiled last week. This framework introduces automated safeguards that detect and block prohibited or high-risk cybersecurity requests in real time.

For the average developer, this means a more helpful assistant. For the security community, it means a gatekeeper.

If you are a professional researcher, you can no longer access these features anonymously. You must now apply for Anthropic’s new Cyber Verification Program. That move effectively puts "Frontier AI" behind a background check.

Opus 4.7 upgrades


Even with its wings clipped in cybersecurity, Opus 4.7 promises to be a massive upgrade for professional workflows. If you aren't trying to hack a mainframe, here is what you're getting:

  • Autonomous engineering: This new model makes it easier than ever to hand off your hardest coding work. Anthropic promises that tasks that previously required "close supervision" can now be delegated with confidence.
  • Self-verification: Opus 4.7 no longer just "guesses." It devises ways to verify its own outputs, running internal logical checks before reporting back. This is huge for hallucination reduction and fact-checking.
  • High-resolution vision: While image generation is still not part of Claude's features, the model can now see images in significantly greater resolution. This breakthrough could be useful for parsing complex technical diagrams, UI/UX mockups and even professional slides for your next presentation.
  • Creative "taste": Anthropic claims the model is more "tasteful" when generating professional documents, producing higher-quality interfaces and docs that feel less "AI-generated" and more human-refined. This is something I'm still eager to play around with, as "taste" is often cited as one of the hardest human qualities for AI to replicate.

The takeaway

Claude Opus 4.7 is a "safe" powerhouse that promises a massive 3x increase in production task completion and nearly perfect vision accuracy (98.5%), all with pricing unchanged from Opus 4.6: $5 per million input tokens and $25 per million output tokens.
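At those per-token rates, back-of-the-envelope cost math is simple. Here's a minimal Python sketch; only the $5/$25 per-million rates come from the article, while the token counts in the example are invented purely for illustration:

```python
# Illustrative cost math at the listed Opus 4.7 rates.
# Only the $5/$25 per-million-token prices come from the article;
# the example token counts below are hypothetical.
INPUT_PRICE_PER_M = 5.00    # dollars per 1M input tokens
OUTPUT_PRICE_PER_M = 25.00  # dollars per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single API call at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Hypothetical example: a 10,000-token prompt with a 2,000-token reply
print(f"${request_cost(10_000, 2_000):.2f}")  # → $0.10
```

Output tokens dominate the bill at a 5:1 price ratio, which is why long generated replies cost far more than long prompts.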

However, I'm only cautiously optimistic, because the real story here is that this is the "civilian" version of Anthropic's secret Mythos model, purposely limited in its hacking abilities to test a new era of gated, identity-verified AI. That's a genuine turning point, and I'll be watching (and reporting) closely.

Have you tried it yet? Let me know in the comments what you think.

