Palo Alto Networks’ threat intelligence and research team, Unit 42, has identified a set of security risks within Google Cloud’s Vertex AI platform that could allow malicious or compromised AI agents to access sensitive data and cloud resources beyond their intended permissions.
The findings center on Vertex AI Agent Engine, a service designed to help organizations build and deploy autonomous AI agents that can interact with enterprise applications, systems, and data. As adoption of these agents accelerates, Unit 42 warns that gaps in permission management could expose enterprises to a new class of insider-style threats.
At a high level, researchers demonstrated how an attacker could deploy an AI agent that appears legitimate but is engineered to quietly extract its own credentials. Once obtained, those credentials can be reused to gain wider access across the cloud environment. This effectively turns the agent into a “double agent,” operating as both a trusted automation tool and a potential insider threat.
“As AI agents become more autonomous, organizations must reassess how much trust and access they grant these systems by default,” the researchers caution.
According to Unit 42, the root cause lies in how permissions are assigned by default. Service accounts associated with deployed AI agents were found to be over-permissioned, granting access to resources well beyond what the agent required to function. By chaining together multiple configuration weaknesses, researchers were able to extract credentials and use them to access cloud storage data, retrieve sensitive deployment details, and gain visibility into internal platform components that would normally be restricted.
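The over-permissioning Unit 42 describes is the kind of condition a simple IAM audit can surface. The sketch below is illustrative only: the policy dict mirrors the general shape of a Google Cloud IAM policy (a list of role-to-members bindings), but the `find_over_permissioned` helper and the broad-role set are assumptions made for this example, not anything from the research or a Google API.

```python
# Illustrative audit: flag service accounts bound to overly broad roles.
# The policy dict mirrors the shape of a GCP IAM policy ("bindings" of
# role -> members); BROAD_ROLES and find_over_permissioned() are
# assumptions for this sketch, not part of any Google SDK.

BROAD_ROLES = {"roles/owner", "roles/editor"}  # basic roles, far wider than most agents need

def find_over_permissioned(policy: dict) -> list[tuple[str, str]]:
    """Return (service_account, role) pairs where a service account holds a broad role."""
    findings = []
    for binding in policy.get("bindings", []):
        if binding.get("role") in BROAD_ROLES:
            for member in binding.get("members", []):
                if member.startswith("serviceAccount:"):
                    findings.append(
                        (member.removeprefix("serviceAccount:"), binding["role"])
                    )
    return findings

# Hypothetical policy for demonstration.
policy = {
    "bindings": [
        {"role": "roles/editor",
         "members": ["serviceAccount:agent-sa@example-project.iam.gserviceaccount.com"]},
        {"role": "roles/storage.objectViewer",
         "members": ["serviceAccount:reader-sa@example-project.iam.gserviceaccount.com"]},
    ]
}

for sa, role in find_over_permissioned(policy):
    print(f"over-permissioned: {sa} holds {role}")
```

An agent service account holding a basic role such as `roles/editor` would be flagged here, while one bound only to a narrow role like `roles/storage.objectViewer` passes.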
Notably, Unit 42 emphasized that this was not a single software flaw, but the result of multiple design and configuration choices that, when combined, increased the agent’s effective privileges.
The research highlights a broader shift in the threat landscape. AI agents often operate autonomously, without continuous human oversight, and are trusted with elevated access to critical systems. If compromised, they behave less like external attackers and more like trusted insiders, significantly increasing risk.
Palo Alto Networks responsibly disclosed its findings to Google, which responded by updating its documentation to better explain how Vertex AI handles service accounts and permissions. Unit 42 recommends that organizations enforce strict least-privilege controls, deploy agents with dedicated custom service accounts via the Bring Your Own Service Account (BYOSA) option, limit OAuth scopes, and subject AI agent deployments to the same rigorous security reviews as production code.
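Those recommendations can be enforced before an agent ever ships, for example as a pre-deployment policy gate in CI. The sketch below is a minimal illustration under stated assumptions: the `service_account` and `oauth_scopes` config fields, the scope allowlist, and the `deployment_violations` helper are hypothetical, not the actual Vertex AI deployment schema; the scope URLs and the default Compute Engine service-account suffix are real Google Cloud identifiers.

```python
# Illustrative pre-deployment gate applying Unit 42's recommendations:
# require a dedicated (BYOSA-style) service account and only allowlisted,
# narrow OAuth scopes. Field names and the allowlist are assumptions for
# this sketch, not the real Vertex AI deployment schema.

ALLOWED_SCOPES = {
    "https://www.googleapis.com/auth/devstorage.read_only",  # example narrow scope
}

def deployment_violations(config: dict) -> list[str]:
    """Return a list of policy violations for an agent deployment config."""
    problems = []
    sa = config.get("service_account", "")
    # The default Compute Engine SA ends in "-compute@developer.gserviceaccount.com";
    # require an explicit, dedicated service account instead.
    if not sa or sa.endswith("-compute@developer.gserviceaccount.com"):
        problems.append("no dedicated service account (BYOSA) configured")
    bad = set(config.get("oauth_scopes", [])) - ALLOWED_SCOPES
    if bad:
        problems.append(f"scopes outside allowlist: {sorted(bad)}")
    return problems

# Hypothetical config: a custom SA is set, but the requested scope is far too broad.
config = {
    "service_account": "agent-sa@example-project.iam.gserviceaccount.com",
    "oauth_scopes": ["https://www.googleapis.com/auth/cloud-platform"],
}
for problem in deployment_violations(config):
    print(problem)
```

A gate like this turns "least privilege" from a guideline into a hard failure: a deployment requesting the broad `cloud-platform` scope, or falling back to a default service account, is rejected before it reaches production.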
As AI systems become more deeply embedded in enterprise infrastructure, the research underscores the need to rethink how trust, permissions, and isolation are managed for autonomous systems acting on an organization’s behalf.
