After the rapid advances of the last few years, most organizations are searching for ways to embed AI in their operations. Many have found success, but often only with limited pilot initiatives focused on a handful of users; the hard part is taking the next steps to create a truly AI-native workplace.

Even if the potential value is clear and the business case is solid, scaling AI success throughout the organization is fraught with difficulty.

When people don’t trust AI outputs, or they believe AI will replace them, adoption remains stubbornly low. And when organizations rush to implement the latest technology without a clear definition of the measurable outcomes they want, ROI is difficult to prove, which reduces the appetite for large-scale AI investments.

Security concerns stifle AI transformations

Along with adoption and investment, security is one of the biggest challenges for moving from a tightly controlled pilot project to an organization-wide AI transformation. Many CISOs have concerns about how an AI-native workplace could create a larger attack surface and increase their cyber threat exposure.

CISOs look at new threats like prompt injections, or the risk of compromised AI agents helping attackers move laterally through the network, and ask, “Are we ready?” They don’t want to be the person who says no to something that so many people want. But it’s difficult for a CISO to say yes to widespread AI adoption without certainty around security preparedness.

Governing identities, devices, and data in the AI-native workplace

A secure AI-native workplace depends on strong policies across three pillars:

  • Identity governance
    Often, this is the easiest place to start, as many organizations have already defined identity policies and established processes to manage and audit user access and data usage. It may simply be a case of determining who can access AI tools and how they can use them, just as existing identity policies do for any other application.

    However, in a workplace powered by AI assistants like Microsoft Copilot, there are other “users” to consider. Organizations must now manage AI agent identities as well as employee identities and define what agents can access, what they can do with the data they find, and what actions they can trigger.
  • Device governance
    To enable broad use of AI tools like Copilot, every endpoint device must be visible and secure. Endpoint detection and response (EDR) tools in block mode are a must, enabling the continuous monitoring of every device that accesses Copilot, and providing rule-based automated responses to policy violations and security incidents. EDR also gives security operations center (SOC) teams real-time device visibility and insights to assist incident investigations.
  • Data governance
    Copilot, agents, and other AI apps need secure access to data if they’re to be effective and deliver value. That makes data security posture management (DSPM) for AI an essential part of the SOC toolkit. Solutions like Microsoft Purview offer DSPM capabilities that enable the safe adoption of AI through risk assessments, ready-to-use policies, compliance controls, and AI activity analytics.

Connecting these three governance pillars creates a powerful zero trust methodology for AI implementation. In other words, organizations already applying zero trust principles to strengthen their security posture will find they’re more prepared than they think for large-scale Copilot deployment.

With zero trust policies and continuous monitoring of AI activity across identities, devices, and data, CISOs can have confidence that they have effective controls in place and say yes to the AI-native workplace in a secure and pragmatic way.
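The three pillars can be pictured as a single access decision evaluated on every request: verify the identity (human or agent), verify the device, then check the data's sensitivity. Here is a minimal sketch of that logic; all of the record names and fields below are hypothetical illustrations, not part of Microsoft Purview, Copilot, or any real API.

```python
from dataclasses import dataclass

# Hypothetical policy records for illustration only; a real deployment
# would source these from identity, EDR, and DSPM platforms.

@dataclass
class Identity:
    name: str
    is_agent: bool              # AI agents are governed like users
    allowed_scopes: frozenset   # data classifications this identity may read

@dataclass
class Device:
    compliant: bool             # passes device-governance policy
    edr_block_mode: bool        # EDR is running in block mode

@dataclass
class Resource:
    classification: str         # e.g. "public", "internal", "confidential"

def allow_ai_access(identity: Identity, device: Device, resource: Resource) -> bool:
    """Zero trust: verify identity, device, and data sensitivity on every request."""
    if not device.compliant or not device.edr_block_mode:
        return False
    return resource.classification in identity.allowed_scopes

# An AI agent on a compliant device may read "internal" but not "confidential" data.
agent = Identity("sales-copilot-agent", is_agent=True,
                 allowed_scopes=frozenset({"public", "internal"}))
device = Device(compliant=True, edr_block_mode=True)

print(allow_ai_access(agent, device, Resource("internal")))      # True
print(allow_ai_access(agent, device, Resource("confidential")))  # False
```

The point of the sketch is that agent identities pass through exactly the same checks as employee identities, which is what makes existing zero trust investments reusable for AI.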

Empowering users to be productive and secure

With so many factors to consider, it can be tempting to overcorrect and lean too far into security at the expense of user productivity. If using Copilot and agents is too complex, people won’t bother, or worse, they’ll turn to unsecured external AI tools instead.

  • Make the right thing the easy thing
    One of the key security concerns in AI adoption is the use of risky prompts and prompt injections that compromise systems or expose sensitive data. That’s why it’s important to provide prompt cards, templates that give AI tools the business context and security parameters for a prompt, and prompt libraries that give users pre-approved prompts for numerous use cases. Embedding these guardrails also makes it easier to weave AI into workflows and hyperautomation, reducing costs.
  • Consider future user personas
    When preparing for AI security, it’s vital to look beyond the initial deployment and consider the future needs of different user personas. Security requirements and policies will need to evolve to empower people who are building new agents or even new large language models (LLMs). So, in the case of Copilot, it’s important to define controls for no-code, low-code, and pro-code users across Copilot Studio, Azure AI Foundry, and Power Platform.
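The prompt-card idea above can be approximated with a small registry of pre-approved templates: users fill in business context, and anything outside the library is rejected. A minimal sketch; the library contents, card names, and helper function are all hypothetical, not part of Copilot's tooling.

```python
import string

# Hypothetical pre-approved prompt library; in practice, templates would be
# curated and security-reviewed before being published to users.
PROMPT_LIBRARY = {
    "summarize_meeting": string.Template(
        "You are assisting the $department team. Using only documents the "
        "user is permitted to access, summarize the meeting notes titled "
        "'$title'. Do not include personal data or confidential material."
    ),
}

def build_prompt(card: str, **fields: str) -> str:
    """Fill a pre-approved prompt card; unknown cards are rejected."""
    if card not in PROMPT_LIBRARY:
        raise KeyError(f"No approved prompt card named {card!r}")
    return PROMPT_LIBRARY[card].substitute(**fields)

prompt = build_prompt("summarize_meeting",
                      department="Finance", title="Q3 budget review")
print(prompt)
```

Because the business context and security parameters are baked into each template, the "right thing" (a scoped, guarded prompt) becomes the path of least resistance for users.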

Accelerate and secure your AI transformation with Capgemini

At Capgemini, we know what it takes to enable pragmatic, organization-wide AI adoption at large enterprises; we currently support several of the world’s largest Copilot deployments, with over 30,000 users each. With Microsoft 365 Copilot, we bring together people, processes, data, and technology to maximize business value.

Our AI scaling capability is grounded in deep experience: for thousands of customers, we have applied security considerations within each organization’s specific business context. So, instead of devoting time, effort, and budget to building your own security solutions or managing multiple security vendors, we can deliver everything you need, including continuous strategy and GRC, continuous protection, and vigilance.

To learn more about how you can quickly and safely expand AI adoption beyond pilot projects, talk to us!