TechRadar
Craig Hale

One in five security breaches now thought to be caused by AI-written code

  • Vibe coding is widespread, but so are vulnerabilities in AI-generated code
  • No one really knows who’s ultimately responsible for AI code
  • AI and humans both have roles in development

New research has claimed more than two-thirds (69%) of organizations have found vulnerabilities in AI-generated code, even though 24% of production code is now written by AI globally.

The State of AI in Security & Development report from Aikido Security found that despite companies pushing AI agendas to improve efficiency and boost output, security teams (53%), developers (45%) and managers (42%) still get the blame when AI code goes wrong.

Aikido says this is creating confusion over the ownership of AI-caused vulnerabilities, which could ultimately make them harder to track down and remediate.

AI-generated code isn’t perfect

“Developers didn’t write the code, infosec didn’t get to review it and legal is unable to determine liability should something go wrong. It’s a real nightmare of risk,” Aikido CISO Mike Wilkes noted. “No one knows who’s accountable when AI-generated code causes a breach.”

In Europe, 20% of companies have had serious incidents, while their US counterparts have seen more than twice as many (43%). Aikido puts this down to two factors: US developers are more likely to bypass security controls (72% vs 61%), and Europe operates under stricter compliance regimes. Still, half (53%) of European companies admit to having had near misses.

AI tools might not be the enemy, but having an overly complicated ecosystem could be. The report reveals how 90% of those using six to eight tools experienced incidents, compared to 64% of those using just one or two tools.

Remediation time also gets prolonged for those using more tools (3.3 days for 1-2 tools vs 7.8 days for 5+ tools).

The outlook is more positive, though. Most respondents (96%) believe AI will be writing secure, reliable code within the next five years, and nearly as many (90%) expect AI to be able to handle penetration testing within 5.5 years.

Better still (for the workforce), only 21% think this will happen without human oversight, highlighting the importance of human workers in the development process.
