
Google says hackers used AI to create zero-day security flaw for the first time

Cybercriminals recently used an artificial intelligence model to create a zero-day vulnerability that could be exploited widely across networks, Google announced Monday.

The announcement comes as major AI companies, including Anthropic and OpenAI, have begun testing newer models that can find and exploit critical software vulnerabilities better than most humans.

Google Threat Intelligence Group researchers detailed the development in a report released Monday. Zero-day exploits are considered the most serious type of security flaw because they target vulnerabilities unknown to vendors and defenders, meaning no patch exists when attacks begin.

The report noted that this was the first time Google had seen evidence of AI being used to develop these vulnerabilities — marking a major change in the cybersecurity landscape, as it suggests newer AI models could be used to create major exploits, not just find them.

Google concluded that Anthropic’s Claude Mythos model — which has already found thousands of vulnerabilities across every major operating system and web browser — was most likely not used to create the zero-day exploit.

Mythos, along with OpenAI’s newly announced GPT-5.5-Cyber model, is top of mind for the Trump administration, which is holding ongoing meetings with industry groups to discuss potential regulation and vetting of frontier models.

Google reported its findings to the unnamed firm affected by the vulnerability before releasing its report, and the firm then issued a patch to fix the issue.

John Hultquist, chief analyst at Google Threat Intelligence Group, said in a statement that the findings made clear that the race to use AI to find network vulnerabilities has “already begun.”

“For every zero-day we can trace back to AI, there are probably many more out there,” Hultquist said. “Threat actors are using AI to boost the speed, scale, and sophistication of their attacks.”

Threat researchers in recent months have observed hackers using AI more frequently to enhance their attacks. In November, Anthropic said Beijing-backed hackers used AI to fully automate their cyberattacks for the first time.

The Google Threat Intelligence Group report also details efforts by Russia-linked hacking groups to use AI models to target Ukrainian networks with malware, while North Korean government hacking group APT45 used AI technologies to refine and scale up its cyber methods.

The rollout of hyper-advanced AI models has heightened concerns that the technology could soon be co-opted by criminals and adversaries to find flaws and launch cyberattacks at an unprecedented scale. Anthropic and OpenAI have so far allowed only a small group of researchers, tech companies and government agencies to test their AI models.

“The staged release was actually to create what we call defenders’ advantage, and we believe that window is somewhere in the months timeframe — not years,” Rob Bair, head of cyber policy at Anthropic, said last week at the AI+Expo in Washington.

Posted on: 5/11/2026 1:36:33 PM

