Anthropic MCP Design Vulnerability Enables RCE, Threatening AI Supply Chain
mochkil "Design" khatir f'MCP dyal Anthropic kay-khelli l-AI f'khatar dyal Remote Code Execution (RCE)
Critical Design Flaw in Anthropic’s MCP Exposes AI Supply Chain to Remote Code Execution
TL;DR
Researchers have identified a "by design" vulnerability in Anthropic’s Model Context Protocol (MCP) that enables Remote Code Execution (RCE). The flaw is rooted in the protocol's official SDKs and the way it handles STDIO transport interfaces. Despite affecting over 7,000 servers and 150 million downloads across major AI frameworks like LangChain and LiteLLM, Anthropic has declined to change the architecture, viewing the behavior as expected.
Overview of the MCP Vulnerability
A team of cybersecurity researchers from OX Security has uncovered a systemic weakness in the Model Context Protocol (MCP), an open standard designed to connect AI models to data sources and tools. The vulnerability is not a simple coding error but a fundamental architectural flaw present in Anthropic’s official Software Development Kits (SDKs) for Python, TypeScript, Java, and Rust.
The flaw allows for Arbitrary Command Execution (RCE) on systems running vulnerable MCP implementations. If exploited, attackers can gain direct access to sensitive user data, internal databases, API keys, and chat histories.
The Scope of the Impact
The widespread adoption of MCP has created a cascading effect across the AI supply chain. OX Security researchers—Moshe Siman Tov Bustan, Mustafa Naamnih, Nir Zadok, and Roni Bar—noted that the vulnerability affects:
- Over 7,000 publicly accessible servers and software packages.
- More than 150 million downloads to date.
The issue stems from unsafe defaults in how MCP configuration works over the STDIO (standard input/output) transport interface. This interface was intended to start local servers and return a handle to the Large Language Model (LLM). However, researchers found that the implementation allows any arbitrary OS command to be executed. If the command successfully creates an STDIO server, it returns a handle; if it is a different command, it executes the command and then returns an error.
Affected Projects and CVEs
The research led to the discovery of 10 specific vulnerabilities across popular AI projects. While some vendors have issued patches, the underlying risk remains for those using the unpatched reference implementation.
Key affected projects include:
- LiteLLM (CVE-2026-30623) - Patched
- DocsGPT (CVE-2026-26015) - Patched
- Bisheng (CVE-2026-33224) - Patched
- GPT Researcher (CVE-2025-65720)
- LangChain-Chatchat (CVE-2026-30617)
- Flowise (CVE-2026-40933)
- Windsurf (CVE-2026-30615)
- Agent Zero (CVE-2026-30624)
Additionally, independent reports over the past year have identified similar issues in tools like Cursor (CVE-2025-54136) and LibreChat (CVE-2026-22252).
Attack Vectors
Researchers categorized the exploitation methods into four broad triggers:
- Authenticated/Unauthenticated command injection via the MCP STDIO interface.
- Hardening bypass via direct STDIO configuration.
- Zero-click prompt injection that triggers an MCP configuration edit.
- Network requests via MCP marketplaces that trigger hidden STDIO configurations.
Anthropic's Response and the Supply Chain Risk
Despite the findings, Anthropic has declined to modify the protocol’s architecture, stating that the behavior is "expected."
This stance has drawn criticism from OX Security. Researchers argue that by leaving the reference implementation unchanged, developers continue to inherit high-level risks. "What made this a supply chain event... is that one architectural decision, made once, propagated silently into every language, every downstream library, and every project," the researchers stated. They further noted that shifting responsibility to implementers does not remove the risk; it merely obscures its origin.
Mitigation Recommendations
To protect systems against this "by design" vulnerability, organizations and developers are advised to:
- Block Public Access: Prevent public IP access to sensitive services.
- Monitor Invocations: Closely monitor all MCP tool calls.
- Sandboxing: Run MCP-enabled services in restricted sandbox environments.
- Input Validation: Treat all external MCP configuration inputs as untrusted data.
- Verified Sources: Only install and integrate MCP servers from verified and trusted sources.
Source: The Hacker News - Anthropic MCP Design Vulnerability Enables RCE


