Hackers Used AI to Develop First Known Zero-Day 2FA Bypass for Mass Exploitation
Google Discloses Zero-Day 2FA Bypass Exploit Likely Developed Using AI
TL;DR Google Threat Intelligence identified an unknown threat actor deploying a zero-day exploit to bypass two-factor authentication on an undisclosed open-source web administration tool, assessed with high confidence to have been developed using a large language model. The exploit script exhibits hallmarks of LLM-generated code, including educational docstrings and a fabricated CVSS score. The disclosure marks the first confirmed malicious use of AI for vulnerability discovery and weaponization in the wild, signalling a compressed attack timeline that defenders must adapt to.
What happened
Google Threat Intelligence Group (GTIG) disclosed a zero-day vulnerability exploited by an unknown threat actor in what the organisation characterises as a "mass vulnerability exploitation operation." The flaw resides in a popular open-source, web-based system administration tool—the vendor name remains undisclosed—and permits attackers to bypass two-factor authentication using a Python script.
GTIG assessed with high confidence that an artificial intelligence model was weaponised to generate the exploit code. The Python script contains multiple hallmarks typically associated with large language model output: extensive educational docstrings, a hallucinated CVSS score, structured Pythonic formatting, detailed help menus, and a clean ANSI colour class implementation—patterns characteristic of LLM training data.
The vulnerability requires valid user credentials to exploit. It stems from a high-level semantic logic flaw arising from a hard-coded trust assumption—a category of defect that LLMs are particularly effective at identifying in codebases.
Google worked with the affected vendor to responsibly disclose and patch the flaw. Although no evidence suggests Google's own Gemini AI was used in the attack, GTIG confirmed an unidentified AI model was weaponised for discovery and exploitation.
The disclosure comes alongside evidence of broader AI abuse across threat ecosystems. Google documented concurrent campaigns from multiple threat actors leveraging language models for vulnerability research, malware development, and operational infrastructure management.
Why it matters
This incident crystallises a structural shift in the attack surface. Threat actors now possess tooling that reduces the friction between vulnerability identification and weaponisation. The exploit demonstrates that LLMs can discover semantic logic flaws—the type of defect human researchers traditionally excel at spotting—and translate that discovery into functional exploit code without human intervention.
For defenders, this compresses response timelines. Patch windows shrink when vulnerability-to-exploit development cycles accelerate. The 2FA bypass specifically targets a control mechanism designed to mitigate credential compromise; its circumvention by automated means raises the bar for incident response protocols that assume time to detect and react.
The broader threat landscape compounds this pressure. Google documented six distinct threat clusters—originating from North Korea (APT45), China (UNC2814, APT27, UNC6201, UNC5673), and Russia—independently weaponising AI models for vulnerability research, polymorphic malware development, and autonomous agent operations. Each group demonstrates a different specialisation: some focus on firmware targets, others on API bypass techniques, and others on cycling accounts at scale to obscure malicious activity and resist attribution.
The grey market for shadow API relay access—Google identified 17 such services operating across Chinese marketplaces Taobao and Xianyu to tunnel access to Claude and Gemini models around regional restrictions—indicates this is not fringe capability. Threat actors have routinised access to premium-tier models through professionalized middleware and automated registration pipelines.
Affected systems and CVEs
- Open-source web-based system administration tool (vendor and product name not disclosed)
- Android malware PromptSpy (cited as a related AI-abuse capability demonstration, not an exploited system)
No CVE had been assigned at the time of publication; the advisory does not provide an identifier for the 2FA bypass vulnerability.
What to do
- Monitor for exploitation attempts against open-source web administration tools in your environment, particularly those with 2FA implementations. Request vulnerability and patch information directly from your vendor if public disclosure remains limited.
- Review authentication logs for anomalous 2FA bypass events or authentication flows that deviate from expected patterns, particularly those associated with valid credentials.
- If running the affected tool, coordinate with the vendor to obtain and deploy the security patch as soon as it becomes available.
- Audit Python scripts and third-party integrations for hallmarks of LLM-generated code—excessive docstrings, textbook-style Pythonic patterns, colourisation classes, and overly detailed help functionality—which may indicate compromised or malicious payloads.
- For Android-focused environments: inspect installed applications for suspicious capabilities around screen analysis, biometric capture, or uninstall prevention. PromptSpy does not appear on the Play Store but may arrive through sideloading or third-party app distribution channels.
- Implement runtime C2 infrastructure monitoring to detect dynamic credential rotation and API key changes, particularly if managing relay infrastructure or VNC services.
- Expand threat hunting to cover LLM query patterns consistent with vulnerability research: role-assumption prompts, recursive CVE analysis, PoC validation chains, and persona-driven jailbreaking attempts.
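As a starting point for the code-audit step above, some of the stylistic hallmarks GTIG describes (lengthy educational docstrings, ANSI colourisation, CVSS references) can be counted mechanically. The sketch below is a rough triage heuristic, not a detector: the indicators and the 40-word docstring threshold are arbitrary illustrative choices, and benign code can match them.

```python
# Heuristic triage for LLM-style hallmarks in Python source. Indicators
# and thresholds are illustrative assumptions, not GTIG's methodology.
import ast
import re

def llm_hallmark_score(source: str) -> dict:
    """Count stylistic indicators the advisory associates with LLM output.

    Returns raw counts; high counts warrant manual review, nothing more.
    """
    tree = ast.parse(source)
    long_docstrings = 0
    for node in ast.walk(tree):
        if isinstance(node, (ast.Module, ast.FunctionDef,
                             ast.AsyncFunctionDef, ast.ClassDef)):
            doc = ast.get_docstring(node)
            if doc and len(doc.split()) > 40:  # unusually "educational"
                long_docstrings += 1
    return {
        "long_docstrings": long_docstrings,
        # literal ANSI escape prefixes typed out in the source text
        "ansi_colour_defs": len(re.findall(r"\\033\[|\\x1b\[", source)),
        "cvss_mentions": len(re.findall(r"CVSS", source, re.IGNORECASE)),
    }
```

Running this over untrusted scripts and flagging files where several counters are non-zero gives analysts a cheap first-pass queue for manual review.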
Open questions
- What is the name and version of the affected open-source web administration tool?
- Which specific AI model was used to develop the exploit—was it Claude, Gemini, or an alternative platform?
- What is the CVE identifier for the 2FA bypass vulnerability?
- How many organisations or users were affected by this exploitation campaign?
- When was the exploit first developed and deployed operationally?
- What additional targets or capabilities were included in the "mass vulnerability exploitation operation" beyond the 2FA bypass?
- Which threat actors are attributed to the primary exploit development and deployment campaign?
- What is the full extent of the Python script's functionality beyond 2FA bypass?
Source
Hackers Used AI to Develop First Known Zero-Day 2FA Bypass for Mass Exploitation


