Perforce Report Exposes Data Privacy Confusion in AI Model Training

91% of organisations allow sensitive data into AI pipelines despite rising concerns over breaches

Perforce Software has released its 2025 State of Data Compliance and Security Report, warning that organisations worldwide are caught in a paradox when it comes to AI adoption and data privacy.

The survey reveals that 91% of organisations believe sensitive data should be used in AI model training, even though 78% express high concern about theft or breaches of model data. The report highlights a critical gap in understanding: once sensitive data is fed into an AI model, it cannot be removed or fully secured.

“The rush to adopt AI presents a dual challenge for organisations: teams are feeling both immense pressure to innovate with AI and fear about data privacy,” said Steve Karam, Principal Product Manager at Perforce. “To navigate this complexity, organisations must adopt AI responsibly and securely, without slowing down innovation.”

“You should never train your AI models with personally identifiable information.” – Steve Karam, Principal Product Manager, Perforce

Rising data breaches in non-production

The study also underscores growing risks in software development, testing, AI, and analytics environments, where 60% of organisations reported data breaches or theft in the past year — an 11% increase over 2024. Despite this, 84% still permit data compliance exceptions in non-production environments, leaving sensitive information exposed.

“Too many organisations see the cure of masking data and implementing controls as worse than the disease of allowing exceptions. But this leads to a significant vulnerability,” noted Ross Millenacker, Senior Product Manager at Perforce.

Investments in AI data privacy ahead

Driven by the mounting risks, 86% of organisations plan to invest in AI data privacy solutions in the next 1–2 years, reflecting a growing recognition of the urgent need for stronger safeguards.

To help close the gap, Perforce recently introduced AI-powered synthetic data generation in its Delphix DevOps Data Platform. By combining data masking, synthetic data, and automated delivery, the platform aims to ensure privacy compliance while supporting AI/ML model training at scale.
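To make the masking idea concrete, the sketch below shows what replacing PII fields with irreversible tokens before a dataset reaches model training can look like. The records, field names, and tokenisation scheme are hypothetical illustrations, not the Delphix implementation:

```python
import hashlib

# Hypothetical training records containing PII (illustrative only).
records = [
    {"name": "Alice Smith", "email": "alice@example.com", "purchase_total": 120.50},
    {"name": "Bob Jones", "email": "bob@example.com", "purchase_total": 89.99},
]

PII_FIELDS = {"name", "email"}

def mask_record(record: dict) -> dict:
    """Replace PII fields with irreversible tokens; keep analytic fields intact."""
    masked = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            # Deterministic hash token: rows stay joinable across datasets
            # without the original value ever entering the training pipeline.
            masked[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            masked[key] = value
    return masked

training_data = [mask_record(r) for r in records]
```

The point of masking upstream is the one the report makes: once raw PII is trained into a model it cannot be pulled back out, so the substitution has to happen before the data ever reaches the pipeline.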

With more than 75% of the Fortune 100 relying on its solutions, Perforce said the findings should serve as a wake-up call to organisations pursuing AI initiatives without adequate security controls.
