Security Scan of 1 Million Exposed AI Services Reveals Widespread Authentication Failures
TL;DR — Intruder's scan of 1 million exposed AI services found widespread security failures, with many deployments lacking authentication entirely by default. Exposed instances revealed user conversation histories, API keys in plaintext, and access to frontier language models without credentials. The research identified over 90 exposed agent management platforms across government, marketing, and finance sectors, alongside unauthenticated Ollama instances wrapping commercial models from major vendors.
What happened
Intruder conducted a security investigation of AI infrastructure, querying certificate transparency logs to identify over 2 million hosts running approximately 1 million exposed services. The research was prompted by ClawdBot, a self-hosted AI assistant averaging 2.6 CVEs per day, an incident that highlighted the gap between the speed of AI deployment and its security maturity.
The core finding was structural: authentication is not enabled by default in many AI infrastructure projects. This meant instances deployed "straight out of the box" were immediately accessible to anyone on the internet without credentials.
Specific exposures included:
- OpenUI instances containing users' full LLM conversation histories
- Claude-powered chatbots disclosing API keys in plaintext
- Flowise instances exposing entire business logic, credential lists, and connected tools capable of file operations and code interpretation
- Over 90 exposed agent management platform instances spanning government, marketing, and finance sectors
- Ollama API servers running without authentication; when Intruder sent a test prompt ("Hello") to 5,200+ servers with connected models, 31% responded without requiring any credentials (a probe of this kind is sketched after this list)
- 518 Ollama instances wrapping frontier models from Anthropic, DeepSeek, Moonshot, Google, and OpenAI
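To make the Ollama finding concrete, below is a minimal sketch of the kind of probe described above, assuming the target exposes Ollama's default API on port 11434. The host address is a placeholder, and such a probe should only ever be pointed at infrastructure you own or are authorised to test.

```python
import requests

# Placeholder address for illustration; only probe hosts you are
# authorised to test.
OLLAMA_HOST = "http://203.0.113.10:11434"

# /api/tags lists the models pulled onto the instance. On a default
# Ollama install this endpoint requires no credentials.
tags = requests.get(f"{OLLAMA_HOST}/api/tags", timeout=10).json()
models = [m["name"] for m in tags.get("models", [])]
print("Exposed models:", models)

# Send a benign "Hello" prompt to the first connected model, mirroring
# the probe described in the research.
if models:
    resp = requests.post(
        f"{OLLAMA_HOST}/api/generate",
        json={"model": models[0], "prompt": "Hello", "stream": False},
        timeout=60,
    )
    print(resp.json().get("response", ""))
```

A response to an unsolicited prompt, with no token or session exchange beforehand, is exactly the failure mode the research measured.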
Lab analysis of a subset of these applications uncovered repeated insecure patterns: misconfigured Docker setups, hardcoded credentials, applications running with root privileges, and poor sandboxing. Within days of lab work, the team identified arbitrary code execution in one popular AI project. The source does not name this project or indicate whether patches have been released.
Why it matters
For defenders and operators in the MENA region and beyond, the findings expose a structural risk in the AI infrastructure supply chain. The speed advantage gained by deploying without authentication or security hardening creates a direct attack surface for multiple threat classes:
- Credential theft and model abuse — Exposed API keys enable attackers to perform operations on behalf of legitimate services, or to use expensive frontier models without authorization or logging.
- Data exfiltration — Unauthenticated access to chat histories and agent management platforms exposes both user-generated content and proprietary business logic.
- Lateral movement — Agent platforms with access to external systems or code execution capabilities become pivot points for deeper compromise. An attacker gaining access to an n8n or Flowise instance may inherit the privileges and integrations of that system.
- Jailbreaking at scale — Freely accessible LLM instances allow attackers to bypass safety guardrails and generate harmful content using someone else's infrastructure, without accountability or logging.
- Supply chain risk — Organizations in government and finance sectors were among those running exposed instances, amplifying the potential impact of compromise.
The research underscores that the industry-wide security maturity built over decades is being actively discarded in favour of speed. For SOC analysts, this means exposure to a new class of misconfigured attack surface that may not yet have dedicated detection or response playbooks.
Affected systems and CVEs
- OpenUI
- n8n
- Flowise
- Ollama
- Claude-powered chatbots
- Services wrapping models from Anthropic, DeepSeek, Moonshot, Google, and OpenAI
No CVE assigned at the time of publication.
What to do
- Enable authentication by default during installation and initial configuration
- Generate unique credentials on first setup rather than embedding hardcoded defaults in examples or documentation (see the first sketch after this list)
- Implement proper access management controls, especially for agent platforms with access to external systems or code execution
- Apply secure Docker configuration and avoid running applications with root privileges
- Place AI infrastructure in a DMZ with proper network segmentation and sandboxing, particularly when agents have access to code interpretation tools
- Audit existing deployments using certificate transparency logs or network scanning to identify exposed services (see the second sketch after this list)
- Review deployment practices to identify hardcoded credentials or insecure defaults in use
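On credential generation, a minimal sketch using Python's standard secrets module; the function name and credential format are illustrative assumptions rather than the convention of any project named above.

```python
import secrets
import string


def generate_initial_credentials() -> dict:
    """Generate a unique admin credential pair at first setup instead of
    shipping a hardcoded default. Names and format are illustrative."""
    alphabet = string.ascii_letters + string.digits
    return {
        # A random suffix avoids a guessable, shared default username.
        "username": "admin-" + secrets.token_hex(4),
        # 24 characters from a 62-symbol alphabet: ~143 bits of entropy.
        "password": "".join(secrets.choice(alphabet) for _ in range(24)),
    }


if __name__ == "__main__":
    creds = generate_initial_credentials()
    print(f"Initial credentials (shown once): {creds['username']} / {creds['password']}")
```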
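And on auditing via certificate transparency, a minimal sketch against crt.sh's public JSON interface, the same class of data source the researchers used; the domain is a placeholder, and returned hostnames still need service-level scanning to confirm actual exposure.

```python
import requests

DOMAIN = "example.com"  # placeholder: substitute your organisation's domain

# crt.sh serves certificate transparency results as JSON; "%." matches
# any subdomain that has appeared in a logged certificate.
resp = requests.get(
    "https://crt.sh/",
    params={"q": f"%.{DOMAIN}", "output": "json"},
    timeout=30,
)
resp.raise_for_status()

hosts = sorted({e["common_name"] for e in resp.json() if e.get("common_name")})
for host in hosts:
    print(host)

# Each hostname is a candidate for follow-up scanning, e.g. checking
# whether AI service ports such as Ollama's default 11434 answer
# without credentials.
```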
Open questions
- The specific date when the scan was conducted is not stated in the source
- The identity of the popular AI project in which arbitrary code execution was discovered is not disclosed
- Whether vendors or project maintainers have released patches or hardening updates in response to these findings is unclear
- The names of specific organizations affected by the 90+ exposed agent management platform instances are not provided
- The current status of these exposed instances—whether they remain accessible or have been remediated—is not addressed
Source
We Scanned 1 Million Exposed AI Services. Here's How Bad the Security Actually Is