2026: The Year of AI-Assisted Attacks
AI-Assisted Attacks Enabled Nontechnical Actors to Conduct Sophisticated Cybercrimes in 2025
TL;DR Large language models crossed a capability threshold in 2025, enabling teenagers and individuals with no coding background to execute attacks previously requiring specialized expertise. Malicious packages in public repositories grew to 454,600, time-to-exploit collapsed to 44 days, and 28.3% of vulnerabilities are now exploited within 24 hours of disclosure. The barrier to entry for technically sophisticated attacks has been significantly lowered.
What happened
On December 4, 2025, a 17-year-old with no technical background was arrested in Osaka under Japan's Unauthorized Access Prohibition Act after extracting personal data of over 7 million users from Kaikatsu Club, Japan's largest internet cafe chain. His stated motivation: buying Pokémon cards.
This case exemplifies a broader shift in 2025. Three teenagers aged 14, 15, and 16 with no coding background used ChatGPT in February 2025 to build a tool that attacked Rakuten Mobile's system approximately 220,000 times. In July 2025, a single actor using Claude Code conducted an extortion campaign targeting 17 organizations over one month, leveraging AI to develop malicious code, organize stolen files, analyze financial records to calibrate demands, and draft extortion emails. In December 2025, an individual used Claude Code and ChatGPT to breach more than 10 Mexican government agencies, stealing over 195 million taxpayer records.
The technical capability metrics reflect the acceleration. Top frontier models improved from resolving 33% of real GitHub issues in August 2024 to just under 81% by December 2025 on SWE-bench. Malicious packages in public repositories grew from 55,000 in 2022 to 454,600 by 2025, with notable leaps coinciding with GPT-4's release in 2023 and the emergence of agentic coding in 2025.
Throughout 2025, discoveries of malicious packages on public repositories increased by 75%, cloud intrusions increased by 35%, and AI-generated phishing began to outperform human red teams.
In September 2025, the Shai-Hulud attack on the npm ecosystem compromised over 500 packages. The malicious packages shipped with documentation, unit tests, and code structured to look like legitimate telemetry modules; because the code, likely AI-generated, appeared professional, it evaded static analysis and signature scanners. Over 487 organizations had secrets compromised, and $8.5 million was stolen from Trust Wallet after attackers used exposed credentials to poison its Chrome extension. Many organizations instituted code freezes following the attack.
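Attacks of this type typically abuse package-manager lifecycle hooks that run code automatically at install time. The following is a minimal, hypothetical heuristic for flagging such hooks in an npm manifest; the package name and scripts below are invented for illustration, and a real scanner would need far more signal than this:

```python
import json

# Lifecycle hooks that execute automatically on "npm install" -- the channel
# Shai-Hulud-style packages abuse. This is an illustrative heuristic, not a
# real detection tool.
SUSPICIOUS_HOOKS = {"preinstall", "install", "postinstall"}

def flag_install_scripts(package_json: str) -> list[str]:
    """Return the lifecycle hooks in a package.json that run at install time."""
    manifest = json.loads(package_json)
    scripts = manifest.get("scripts", {})
    return sorted(SUSPICIOUS_HOOKS & scripts.keys())

# Hypothetical manifest mimicking a package that runs code on install.
sample = json.dumps({
    "name": "telemetry-utils",
    "version": "1.4.2",
    "scripts": {"postinstall": "node collect.js", "test": "jest"},
})
print(flag_install_scripts(sample))  # ['postinstall']
```

Note that this check only surfaces *where* arbitrary code can run; the Shai-Hulud packages defeated deeper scanners precisely because the hooked code itself looked legitimate.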
Why it matters
The collapse of time-to-exploit represents the core defense problem for engineering teams. Time-to-exploit fell from over 700 days in 2020 to 44 days in 2025. According to Mandiant's M-Trends 2026 report, exploits now routinely arrive before patches are available, with 28.3% of CVEs exploited within 24 hours of disclosure.
This asymmetry undermines traditional patch-based defense. The average time to remediate a known high- or critical-severity CVE is 74 days. Additionally, 45% of vulnerabilities in systems maintained by large companies (1000+ employees) never get remediated at all.
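Putting those figures together as back-of-the-envelope arithmetic makes the asymmetry concrete:

```python
# Figures from the text: exploits arrive ~44 days after disclosure on average,
# while remediating a high/critical CVE averages 74 days.
time_to_exploit_days = 44
time_to_remediate_days = 74

# Average window in which a working exploit exists but the fix is not applied.
exposure_window = time_to_remediate_days - time_to_exploit_days
print(f"Average exposure window: {exposure_window} days")  # 30 days

# For the 28.3% of CVEs exploited within 24 hours, the exposure window is
# effectively the entire remediation period.
fast_exploited_share = 0.283
print(f"{fast_exploited_share:.1%} of CVEs leave ~{time_to_remediate_days} days of exposure")
```

These are averages of averages, so the real distribution is worse: the 45% of never-remediated vulnerabilities have an unbounded exposure window.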
For developers and system administrators, the implications are direct: the overlap between attackers willing to conduct attacks and those technically able to carry them out is expanding month over month. Threat actors no longer require deep expertise, and amateur attackers can now execute campaigns that previously required organized teams. One individual in Algeria used AI-assisted methods to build ransomware that hit 85 targets in their first month.
For supply chain security, the concern is acute. Malicious packages designed with professional-grade documentation and testing infrastructure bypass traditional detection tools. When tested against 8,783 malicious npm packages, Chainguard Libraries blocked 99.7%; against approximately 3,000 malicious Python packages, it blocked roughly 98%. The implication is that conventional scanners are missing a significant fraction of threats.
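Simple arithmetic on the quoted block rates shows how many of the sampled packages even a best-in-class tool lets through, using only the figures above:

```python
# Sample sizes and block rates as quoted in the text; arithmetic only.
npm_samples, npm_block_rate = 8783, 0.997
pypi_samples, pypi_block_rate = 3000, 0.98

# Packages that slip through even at these block rates.
npm_missed = round(npm_samples * (1 - npm_block_rate))
pypi_missed = round(pypi_samples * (1 - pypi_block_rate))
print(f"npm: ~{npm_missed} missed; PyPI: ~{pypi_missed} missed")
```

If roughly 26 npm and 60 Python packages escape a 99.7%/98% blocker, tools with materially lower detection rates are passing through correspondingly more.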
Affected systems and CVEs
- Kaikatsu Club (7 million user records exfiltrated)
- Rakuten Mobile (220,000 attack attempts in February 2025)
- 17 organizations targeted in July 2025 extortion campaign
- Mexican government agencies (more than 10 agencies, 195 million taxpayer records stolen)
- npm ecosystem (500+ packages compromised in Shai-Hulud attack, September 2025)
- Trust Wallet Chrome extension ($8.5 million stolen)
No CVE assigned at the time of publication.
What to do
- Rebuild open source libraries from verified, attributable source code to prevent CI/CD takeover, dependency confusion, long-lived token theft, and package distribution attacks.
- Eliminate entire categories of vulnerability rather than attempting to outrun attacks through patching alone.
- Implement code freezes when supply chain attacks occur, as multiple organizations did following Shai-Hulud.
- Populate production systems, artifact managers, and developer workstations with vetted library implementations that structurally eliminate attack classes.
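The last two points amount to an allowlist gate between dependency manifests and production: only pins from a vetted internal catalog get through. A minimal sketch, assuming a hypothetical catalog (all package names and versions below are invented, and a real gate would also verify hashes and provenance):

```python
# Hypothetical vetted catalog: package name -> approved version pins.
VETTED_CATALOG = {
    "requests": {"2.31.0", "2.32.3"},
    "urllib3": {"2.2.1"},
}

def audit_requirements(lines: list[str]) -> list[str]:
    """Return 'name==version' pins that are not in the vetted catalog."""
    violations = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, _, version = line.partition("==")
        if version not in VETTED_CATALOG.get(name, set()):
            violations.append(line)
    return violations

requirements = ["requests==2.32.3", "urllib3==2.2.1", "leftpad==1.0.0"]
print(audit_requirements(requirements))  # ['leftpad==1.0.0']
```

Run as a CI step, a check like this fails the build on any unvetted dependency, turning "patch faster" into "never admit the package in the first place."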
Open questions
- The identity and specific nationality of the single actor conducting the July 2025 extortion campaign remain unspecified.
- Specific Mexican government agencies targeted are not named.
- Identity and location of the individual targeting the Mexican government are not disclosed.
- Whether all mentioned AI-assisted attacks were officially attributed to specific LLM models, or if attribution is inferred.
- Specific LLM model versions used in each attack are not detailed.
- Whether the 17-year-old intended to sell the Pokémon cards or use them personally.


