Latin Times
Technology
Matias Civita

Google Makes Bombshell Claim That Hackers Used AI to Create Zero-Day Flaw in Their System

Google said on Monday it disrupted what it described as the first known cyberattack in which hackers used artificial intelligence to find and build an exploit for a previously unknown software flaw.

The finding, published by Google Threat Intelligence Group, involved a prominent criminal threat actor that planned a mass exploitation campaign against a widely used open-source, web-based system administration tool. Google said the flaw allowed attackers to bypass two-factor authentication, though they still needed valid user credentials.

"For the first time, [Google Threat Intelligence Group] has identified a threat actor using a zero-day exploit that we believe was developed with AI. The criminal threat actor planned to use it in a mass exploitation event but our proactive counter discovery may have prevented its use," the company said.

Google did not name the affected tool, vendor, or hacking group, but said it worked with the vendor to "responsibly disclose this vulnerability and disrupt this threat activity." The case is significant because zero-day flaws are among the most valuable tools in hacking. They have that name because software makers have had zero days to fix them before attackers can use them. In this case, Google said its analysts found evidence that a large language model helped discover and weaponize the flaw.

The company said it did not believe its own Gemini model was used. John Hultquist, chief analyst at Google Threat Intelligence Group, told The Associated Press the episode shows that "the era of AI-driven vulnerability and exploitation is already here." He also warned Reuters that the case may be only the "tip of the iceberg" as criminals and state-backed hackers experiment with AI-powered cyber operations.

Google said the flaw was not a basic coding mistake but a higher-level logic error tied to a hardcoded trust assumption, the kind of weakness that AI systems may be increasingly good at spotting because they can read code in context and infer what developers intended.
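To illustrate what a "hardcoded trust assumption" can look like in practice, here is a minimal, purely hypothetical sketch. Google did not disclose the affected tool or the actual flaw; every name below is invented for illustration.

```python
# Hypothetical example of a hardcoded trust assumption, NOT the real
# vulnerability. A login path that skips two-factor authentication for
# requests from a hardcoded "trusted" address.

TRUSTED_INTERNAL_HOSTS = {"127.0.0.1"}  # hardcoded trust assumption

def requires_second_factor(password_ok: bool, remote_addr: str) -> bool:
    """Decide whether to prompt for a 2FA code after the password check."""
    if not password_ok:
        raise PermissionError("invalid credentials")
    # Logic flaw: any request that appears to come from a "trusted" host
    # skips 2FA entirely. An attacker who can spoof or proxy through such
    # an address needs only valid credentials, matching the scenario
    # Google described (2FA bypassed, but credentials still required).
    if remote_addr in TRUSTED_INTERNAL_HOSTS:
        return False
    return True
```

Nothing here is a memory-safety bug a fuzzer would catch; the code runs exactly as written. Spotting it requires understanding what the developer intended, which is the kind of contextual reasoning the article attributes to large language models.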

The report also warned that threat actors linked to China, North Korea, and Russia have "demonstrated significant interest in capitalizing on AI for vulnerability discovery." Google said some actors are using AI to generate malware, create decoy code, analyze targets, and automate parts of cyber operations, and that adversaries are moving from early experimentation to what it called "industrial-scale" use of generative models in hostile workflows.

Originally published on IBTimes
