As the Head of AI Readiness at Ideal, I am witnessing firsthand how organisations are grappling with one of today’s most pressing cybersecurity challenges: the explosive growth of generative AI in the workplace. Through my many conversations with CISOs, DPOs, and IT leaders, a clear pattern has emerged – enterprises are struggling to balance the transformative potential of AI with its inherent security risks.
The challenge is stark: employees are increasingly turning to third-party Large Language Models (LLMs) and generative AI applications that exist beyond traditional security perimeters. While these tools offer unprecedented productivity benefits, they also create new vectors for data exfiltration and security breaches that keep security leaders awake at night.
By Pax Zoega, Head of AI Readiness, Ideal

From Wild West to Lockdown
In my discussions with security leaders, I’ve observed two posture extremes.
Some organisations have adopted a ‘Wild West’ stance, placing no restrictions on AI tool usage. While this is great for innovation, it leaves sensitive data vulnerable to exposure through uncontrolled AI interactions. At the opposite end of the spectrum, some have implemented complete lockdowns on AI applications. This approach stifles innovation and, counterintuitively, undermines security, because it often backfires by driving employees toward ‘Shadow AI’ (the use of unauthorised AI tools).
The solution lies in finding the middle ground through what we call the ‘Permit, Restrict, Monitor’ (PRM) framework. This balanced approach allows for controlled innovation while maintaining robust security measures.
Many of the security professionals I speak with admit – off the record at least – that they only have a partial picture of AI risk in their organisations. Conducting a comprehensive AI audit can establish a baseline. For example, an audit aligned with the NIST AI Risk Management Framework (sometimes called a ‘NIST AI audit’) maps out the current risk landscape in detail and provides a gap analysis, allowing risk and security teams to prioritise their mitigation activities and build a comprehensive risk management framework. Even building an inventory of AI usage, using existing proxy analysis and network monitoring tools, is a good start.
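As a minimal illustration of that starting point, the sketch below shows one way to count which users are reaching well-known GenAI endpoints from a web proxy log export – the raw material for an AI usage inventory. The CSV column names and the domain list are my own illustrative assumptions, not a reference to any particular proxy product.

```python
# Minimal sketch: build a rough AI-usage inventory from a web proxy log export.
# Assumes a CSV with 'user' and 'destination_host' columns; the domain list
# below is illustrative, not exhaustive.
import csv
from collections import defaultdict

GENAI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def inventory_ai_usage(log_path: str) -> dict[str, set[str]]:
    """Return a mapping of GenAI domain -> set of users seen accessing it."""
    usage = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            if host in GENAI_DOMAINS:
                usage[host].add(row["user"])
    return usage

if __name__ == "__main__":
    for domain, users in sorted(inventory_ai_usage("proxy_export.csv").items()):
        print(f"{domain}: {len(users)} distinct users")
```

Even a crude count like this is usually enough to show whether Shadow AI is a marginal issue or already widespread, and which tools an enterprise agreement should cover first.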

Building a Balanced PRM Security Framework
The key to successful AI governance starts with establishing clear boundaries. Forward-thinking organisations are implementing enterprise agreements with AI providers that include specific data protection clauses. These agreements ensure that enterprise data is properly ringfenced and, crucially, cannot be used for training future AI models.
But agreements alone aren’t enough. Successful implementations I’ve seen include:
- Deployment of secure API gateways that regulate data flow between approved AI applications and internal systems
- Deployment of runtime security solutions that continuously observe AI application behaviour, including system calls, file access, network connections, and data flows
- Implementation of strong Data Loss Prevention (DLP) solutions that can identify and block sensitive data from being shared with unauthorised AI platforms (a simplified sketch of this kind of check follows this list)
- Network-level controls that restrict access to non-approved AI services while maintaining smooth access to authorised tools
- Where the organisation has remote or hybrid workers, providing access to permitted AI applications via a secure enterprise browser (e.g. Palo Alto’s Prisma Access Browser) that allows data exfiltration risk and Shadow AI use to be monitored and controlled
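To make the DLP item above concrete, here is a deliberately simplified sketch of the kind of pattern-based check a DLP layer might apply to an outbound prompt before it reaches a non-approved AI service. Real DLP products combine classification, fingerprinting and context analysis; the patterns and the allow/block rule here are illustrative assumptions only.

```python
# Toy illustration of a pattern-based DLP check on outbound prompt text.
# Real DLP platforms are far richer; these regexes are illustrative only.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "api_key_like": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def allow_outbound(prompt: str, destination_approved: bool) -> bool:
    """Block prompts containing sensitive matches unless the destination is approved."""
    return destination_approved or not check_prompt(prompt)

# Example: a prompt containing an email address, bound for an unapproved tool, is blocked.
assert allow_outbound("Summarise feedback from jane.doe@example.com", destination_approved=False) is False
```

The design point is the policy split, not the regexes: approved destinations covered by an enterprise agreement can receive richer data, while anything else is filtered or blocked at the gateway.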
Making It Work: The Human Element
Perhaps the most critical element of an effective PRM AI security strategy is employee engagement. The most successful organisations are those that have invested in the right AI tools for their employees and in comprehensive training programmes that go beyond simple dos and don’ts. These programmes help employees understand not just how to use approved AI tools safely, but also why using those tools is both good for them and for the organisation.
Crucially, this balance between the self-interest of employees and the best interests of the enterprise will only hold if the permitted AI apps provided by the organisation are best-in-class and cover all the core functionality employees require. If not, employees will simply work around them with Shadow AI.
In a recent survey of 6,000 knowledge workers in the US, UK and Germany, nearly half believe that Generative AI (GenAI) will improve their job promotion prospects, and 46% would continue to use their preferred AI apps even when banned by their organisations. Of the knowledge workers using Shadow AI, a third do so because their organisation does not provide a permitted alternative. In this sense, when it comes to GenAI, investing in the right tools is a prerequisite for an effective data security posture.
Monitoring for Success
Effective monitoring is crucial for maintaining security without hampering productivity. Leading organisations are implementing:
- Real-time usage analytics to understand how AI tools are being utilised across the enterprise
- Regular security audits that specifically target AI-related vulnerabilities
- Compliance tracking systems that ensure adherence to both internal policies and external regulations
If you are wondering whether your permitted AI policy is working to curb Shadow AI use, a good rule of thumb is this: if your usage analytics show that significantly fewer than 75% of your users are active in the permitted apps, it is likely that the rest are using non-permitted alternatives. (A GenAI usage rate of around 75% among knowledge workers has been a consistent finding across multiple surveys over the last year.)
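If it helps to make that rule of thumb concrete, the sketch below compares permitted-app adoption against the ~75% benchmark. The input figures, and the choice to read ‘significantly less’ as more than 20% below the benchmark, are placeholders for whatever your own analytics and risk appetite suggest.

```python
# Rough check of the 75% rule of thumb: compare permitted-app adoption
# against the benchmark GenAI usage rate seen in recent surveys.
EXPECTED_GENAI_USAGE = 0.75  # approximate knowledge-worker usage rate from surveys

def shadow_ai_likely(permitted_app_users: int, active_knowledge_workers: int,
                     benchmark: float = EXPECTED_GENAI_USAGE) -> bool:
    """Flag likely Shadow AI use when permitted-app adoption sits well below the benchmark."""
    adoption = permitted_app_users / active_knowledge_workers
    return adoption < benchmark * 0.8  # 'significantly less' read here as >20% below the benchmark

# Example with placeholder figures: 400 of 1,000 knowledge workers use the permitted apps.
print(shadow_ai_likely(400, 1000))  # True: 40% adoption is well below the ~75% benchmark
```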
Looking Forward
The reality is that AI is not just another technology trend – it’s a fundamental shift in how work gets done. Organisations that try to completely restrict its use will find themselves at both a competitive and security disadvantage, while those that embrace it without proper controls risk serious security breaches.
The path forward is clear: implement a balanced framework that enables innovation while maintaining security. This means having clear policies about which AI tools are approved for use, permitting access to secure, best-in-class tools, establishing strong technical controls to enforce these policies, and maintaining robust monitoring systems to ensure compliance.
As AI technology continues to evolve, so too must our security frameworks. The organisations that will thrive are those that view AI security not as a barrier to innovation, but as an enabler of sustainable AI adoption.

Pax Zoega, Head of AI Readiness, Ideal