News Security

Tenable Research Warns No-Code Agentic AI Can Enable Financial Fraud and Workflow Hijacking

Jailbreaking

Jailbreaking tests in Microsoft Copilot Studio reveal how easily AI agents can be manipulated to leak PCI data and perform unauthorized financial actions

Tenable has uncovered critical risks in the rapidly expanding world of “no-code” agentic AI, revealing how tools like Microsoft Copilot Studio can be manipulated to execute financial fraud and expose sensitive customer data. In its latest research released today, the exposure management company demonstrated how an AI travel agent built in Copilot Studio could be hijacked through prompt injection, enabling attackers to bypass identity checks, leak credit-card information, and alter financial details.

The experiment involved designing an autonomous travel-booking agent responsible for creating and modifying reservations using demo customer data, including full PCI details. Despite being programmed with strict identity-verification rules, Tenable researchers successfully manipulated the agent into disclosing payment card information of unrelated customers. They also instructed the agent to modify a trip price to $0—essentially granting unauthorized free services.

“AI agents may look harmless, but with the wrong permissions they can become tools for financial fraud.”

Keren Katz, Senior Group Manager, AI Security Product & Research, Tenable

According to Tenable, the democratization of AI agent-building—while intended to enhance efficiency—has introduced a new category of risks. Non-developer employees often create agents without understanding how much implicit permission these systems hold. This becomes dangerous when agents can write, update, or access sensitive systems without layered governance.

Keren Katz, Senior Group Manager of AI Security Product and Research at Tenable, said the findings are a wake-up call for enterprises embracing low-code and no-code AI platforms. “AI agent builders, like Copilot Studio, democratise the ability to build powerful tools, but they also democratise the ability to execute financial fraud,” she said. “That power can easily turn into a real, tangible security risk.”

The report calls for urgent enforcement of AI governance frameworks. Tenable recommends three immediate steps for enterprises: mapping every data system an AI agent can access, applying the principle of least privilege to restrict write permissions, and continuously monitoring agent behaviour to detect unintended actions or data leakage.

As businesses accelerate AI adoption, Tenable’s research underscores a critical reality: without disciplined oversight and secure development practices, no-code agentic AI poses a direct threat to financial integrity, customer privacy, and regulatory compliance.

Related posts

Maharashtra Leads India’s AI-Powered Cybercrime Fight with Launch of MahaCrimeOS AI

enterpriseitworld

Okta Expands Bengaluru Operations to Secure India’s AI-Driven Future

enterpriseitworld

Women in Cloud Welcomes Microsoft’s $17.5 Billion Commitment to Accelerate India’s AI-First Vision

enterpriseitworld