
- TrendAI report finds 67% of businesses pressured to deploy GenAI despite security concerns
- Key risks include sensitive data exposure, malicious prompts, expanded attack surface, and autonomous code abuse
- Governance gaps: only 38% have AI policies, 57% say AI evolves faster than it can be secured, and many lack visibility or kill switch mechanisms
Businesses are rushing to integrate Generative Artificial Intelligence (GenAI) into their processes and operations despite knowing the risks they are exposing themselves to. To make matters worse, many are unsure how to move forward and minimize those risks, which only exacerbates the problem.
A new report from TrendAI, based on a poll of 3,700 business and IT decision-makers across 23 countries, found that a majority (67%) felt pressured to approve AI integration despite security concerns.
One in seven (roughly 15%) described these concerns as “extreme”, but still approved deployment.
Not for lack of awareness
The report outlined numerous risks associated with AI tools that are keeping decision-makers awake at night. For two in five, the biggest risk is AI agents accessing sensitive data, while more than a third (36%) worry about malicious prompts compromising security.
AI agents are programs that allow an AI model to operate apps, or even entire computers. A malicious prompt, shared via a phishing email, for example, could result in an AI agent sending sensitive data to hacking groups, changing app settings, or even downloading malware.
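One common way to contain that kind of compromise is a default-deny allowlist around the agent's tool calls. The sketch below is a minimal, hypothetical illustration of the idea; the names (`AgentAction`, `ALLOWED_TOOLS`, `execute`) are invented for this example and are not tied to any real framework or to anything in the TrendAI report.

```python
# Hypothetical sketch: a default-deny tool allowlist for an AI agent harness.
# Even if a malicious prompt convinces the model to request a dangerous
# action, the harness refuses to run anything not explicitly approved.

from dataclasses import dataclass

@dataclass
class AgentAction:
    tool: str         # which tool the model wants to invoke
    argument: str     # the payload it wants to pass

# Only explicitly approved tools may run; everything else is denied by default.
ALLOWED_TOOLS = {"search_docs", "summarize_file"}

# Tools that touch sensitive data or the network get a human check first.
REQUIRES_APPROVAL = {"send_email", "change_settings"}

def execute(action: AgentAction) -> str:
    if action.tool in ALLOWED_TOOLS:
        return f"running {action.tool}({action.argument!r})"
    if action.tool in REQUIRES_APPROVAL:
        return f"queued {action.tool} for human review"
    # Default-deny: a prompt-injected request for an unknown tool goes nowhere.
    return f"blocked disallowed tool: {action.tool}"

# A phishing email's hidden instruction might yield a request like this:
print(execute(AgentAction("upload_file", "s3://attacker-bucket/secrets")))
# -> blocked disallowed tool: upload_file
```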
For a third of respondents (33%), AI creates a growing attack surface for criminals to exploit. An equal share also fears the abuse of AI's trusted status, along with risks linked to autonomous code deployment.
“Organizations are not lacking awareness of risk, they’re lacking the conditions to manage it. When deployment is driven by competitive pressure rather than governance maturity, you create a situation where AI is embedded into critical systems without the controls needed to manage it safely,” says Rachel Jin, Chief Platform & Business Officer, Head of TrendAI.
Management and governance are more difficult to pull off than they seem, at least with AI. For more than half (57%), AI is advancing faster than it can be secured: as soon as a system is set up, new potential risks emerge, forcing defenders to re-evaluate their position. What's more, 55% reported only moderate confidence in their understanding of AI legal frameworks, and just over a third (38%) currently have comprehensive AI policies in place.
Regulation and compliance

Finally, two in five (41%) see unclear regulation and compliance standards as a barrier to progress. This creates something of a trap for organizations: employees turn to "shadow AI", unsanctioned tools that defenders have no insight into, meaning no one knows what gets shared or where the data ends up.
To be able to say they have safely integrated AI into their workflows, businesses need two things, the researchers suggest: observability and auditability, and a "kill switch" mechanism. At the moment, almost a third of respondents (31%) said they lacked full visibility into their AI systems.
When it comes to kill switch mechanisms, around 40% support the idea, but half (50%) are unsure about how to implement one.
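In practice, a kill switch can be as simple as a flag every agent checks before each action, paired with an audit log for the observability side. The sketch below is a minimal, hypothetical illustration; names like `KILL_SWITCH` and `run_agent_step` are invented, and a production setup would use a centralized feature flag and tamper-evident logging rather than an in-process event.

```python
# Hypothetical sketch: an agent kill switch plus a simple audit trail.

import logging
import threading

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit_log = logging.getLogger("ai_audit")

# A global switch an operator can flip to halt all agent activity at once.
KILL_SWITCH = threading.Event()

def run_agent_step(agent_id: str, action: str) -> bool:
    # Check the switch before every action, not just at startup,
    # so a running agent can be stopped mid-task.
    if KILL_SWITCH.is_set():
        audit_log.info("agent=%s action=%s DENIED (kill switch engaged)",
                       agent_id, action)
        return False
    audit_log.info("agent=%s action=%s EXECUTED", agent_id, action)
    return True

run_agent_step("invoice-bot", "read_mailbox")   # logged and executed
KILL_SWITCH.set()                               # operator halts all agents
run_agent_step("invoice-bot", "send_payment")   # logged and denied
```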
Despite the regulatory and governance challenges and risks, sentiment around AI remains positive. In fact, almost half (44%) believe agentic AI will "significantly improve" cyber defense in the short term.
“Agentic AI is moving organizations into a new risk category,” Jin added. “Our research shows the concerns are already clear, from sensitive data exposure to loss of oversight. Without visibility and control, organizations are deploying systems they don’t fully understand or govern, and that risk is only going to increase unless action is taken.”