Why Agentic AI Is Security's Next Blind Spot
فرق السيكيريتي ناقصاهم المعرفة الأساسية بـ Agentic AI لي ديجا خدام فبيئات الإنتاج
Security Teams Lack Foundational Understanding of Agentic AI Already Running in Production
TL;DR: Agentic AI systems are executing tasks and accessing sensitive data in production environments across organizations, often without security team involvement. Most security professionals do not yet understand these systems well enough to defend them effectively. The gap between deployment velocity and security competency is widening, creating organizational risk that hinges less on technology flaws than on insecure configuration decisions made without security input.
What happened
Agentic AI is no longer experimental. The technology is deployed and operational in production environments today, embedded in developer workflows, integrated with enterprise services through the Model Context Protocol (MCP), and being built by non-technical staff across business units using low-code or no-code frameworks.
The deployment pattern mirrors previous technology shifts: business units move forward because the capability is valuable, security is consulted late or not at all, and the organization discovers its exposure only after systems are live.
Three distinct categories of agentic AI are now in active use. The first comprises general-purpose coding and productivity agents such as Claude Code and GitHub Copilot, already embedded in developer workflows regardless of formal approval status. The second consists of vendor-built agents powered by MCP, an integration layer enabling agents to connect to external services—calendars, email systems, ticketing platforms—and act on their behalf. Nearly every major vendor either has an MCP server in production or is building one. The third category is custom agents built by individual users without security review. Agentic AI has removed a longstanding barrier: the ability to build functional tools with real system access no longer requires traditional programming skills. Anyone in an organization can now build automations, workflows, and agents with permissions to act across internal systems.
Most custom agents will not undergo security review before deployment.
Why it matters
The foundational principle of information security remains unchanged: genuine fluency in a technology must precede meaningful defense of it. When organizations skip this work—as they did with firewalls before understanding networking, and with cloud computing before developing cloud-native security practices—they end up with systems they cannot reason about.
Security teams that cannot speak the language of agentic AI get bypassed. They lose the ability to challenge design decisions, propose workable controls, or ask informed questions. Business units proceed without them. This is not malice; it is the inevitable consequence of a security team being unable to engage substantively with the technology.
The practical attack surface is concrete. An agent with access to both a terminal and an email inbox can be manipulated through either channel to act through the other—a lateral movement path attackers will exploit. A malicious calendar invite with hidden instructions in the event description is a real attack vector; the agent reads it, interprets the embedded prompt, and executes. A self-hosted AI assistant connected to Telegram without proper controls becomes a wide-open entry point for anyone who messages it.
Critically, many of these risks arise from insecure configuration rather than fundamental flaws in the underlying tools. A single configuration decision—pairing an agent with a single trusted account, scoping it to its intended function, restricting its access rights—can materially reduce exposure. These decisions require security involvement early in the design phase, which is not currently happening.
The supply chain dimension compounds the problem: as non-security teams deploy agents without review, the organization's attack surface expands in ways that traditional vulnerability scanning and network monitoring do not surface.
Affected systems and CVEs
No CVE assigned at the time of publication.
Products identified in active use:
- Claude Code
- GitHub Copilot
- Model Context Protocol (MCP)
- Telegram (as a communication integration point)
What to do
- Develop foundational knowledge of how AI applications are architected, including how agents consume inputs, chain tools together, and produce outputs.
- Engage hands-on with agentic AI tools already in use by developers. Build an agent. Experiment with the frameworks and tools your organization is already using.
- Establish configuration-based security controls before agents go live. Scope agents strictly to their intended function; an agent managing calendars should not have terminal access.
- Implement configuration patterns such as pairing agents with single trusted accounts rather than allowing them to respond to arbitrary external inputs.
- Establish security review processes for custom agents before deployment, treating them as a supply chain security problem.
- Restrict agent access to only the specific trusted accounts, channels, or systems required for their function.
- Involve security teams early in the design phase of agent deployments to establish proper scoping, permissions, and access controls.
- Stay current with emerging AI security frameworks and threat taxonomies as they evolve. Vendor AI security products are beginning to reach the market; foundational knowledge of how these applications work is required to evaluate them critically.
Open questions
- The article does not specify when agentic AI began widespread production deployment or at what scale organizations have already adopted it.
- No examples of attacks exploiting agentic AI or MCP in the wild are provided.
- The timeline for when security team competency gaps will become critical is not specified.
- Specific OWASP threat taxonomies for agentic AI are referenced as evolving but not named.
- The maturity levels and effectiveness of vendor AI security products are not detailed.


