
Unleashing engineering potential with generative AI

Capgemini
Sarah Richter, Hugo Brue, Udo Lange
Sep 18, 2025

Over recent months, companies have intensified their adoption of Gen AI. This, along with Gen AI’s rapid evolution, has led to new practices and roles for engineers.

Although generative AI (Gen AI) initially gained recognition in engineering through applications in software development, its scope has broadened to help tackle today’s major engineering challenges. According to Gartner, Gen AI will require 80% of the engineering workforce to upskill through 2027.   

In today’s market context, Gen AI for engineering enables companies to optimize processes, reducing time-to-market by speeding up the production of engineering deliverables and improving product quality and compliance by automating certain quality control tasks. These contributions are especially critical since products and ecosystems are increasingly complex and regulated, with more stakeholders and highly personalized requirements.

In this blog, we explore effective strategies for integrating Gen AI technologies, offer practical recommendations to maximize their impact, and share key insights on how to unlock the value they bring to engineering. 

How to unlock the potential value of Gen AI in engineering  

Companies are struggling to unlock the potential value of Gen AI in engineering. This is not caused by a lack of use case ideas, but rather the lack of an efficient end-to-end assessment supporting the implementation of suitable use cases into productive systems. In addition, companies face difficulties in upscaling their implemented use cases effectively across the entire engineering department. 

Along the engineering value stream, we have been supporting our clients to integrate Gen AI successfully and, more importantly, to maintain profitability sustainably. In this blog, we share our top three lessons to help you reach your own goals with Gen AI. 

Infographic: Unleashing engineering potential with generative AI

Evaluation process 

Choose the right use cases to implement by using measurable assessment gateways

To get the most value from Gen AI over time, it’s important to choose the right use cases and develop a strategic order of pursuit. Many companies try to connect Gen AI’s impact to their KPIs, but they often find it hard because of the overwhelming variety of application options. To act effectively, companies should use goal-oriented evaluation criteria as gateways within the use case decision process. 

To avoid being overwhelmed by too many options, it’s important to use clear and specific criteria that go beyond simple effort-benefit considerations.

As well as applying specific evaluation criteria, companies must break task and process silos in order to optimize complex and interdependent engineering processes. Therefore, we recommend mapping each potential use case to the value stream throughout the entire V-cycle. This helps you evaluate ideas more clearly and see where different parts of the core engineering processes can support each other and create extra value.

To compare company readiness against individual engineering use cases, we have developed an exhaustive assessment method that considers eight dimensions: strategy, governance and compliance, processes, data, IT infrastructure and security, employees, cost and investment readiness, and ethical and ecological impact.

However, the minimum criteria to be considered within the Gen AI use case selection process cover four focal points:

  • Functional criteria: The use case delivers measurable business impact within engineering workflows. 
  • Technical criteria: The necessary data and foundational technical requirements are available to implement the use case. 
  • Regulatory criteria: The use case complies with legal and regulatory standards, such as the EU AI Act and internal company policies. 
  • Strategic criteria: The use case aligns with and enhances the engineering value stream. 

Additionally, it is highly effective to set a main KPI to ensure the comparability of use cases across the overall engineering Gen AI portfolio. This is an important part of establishing strategic fit.
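
To make these gateways concrete, the sketch below shows one way the four criteria and a main KPI could be encoded to filter and rank a use case funnel. The threshold, scoring scale, and example use cases are illustrative assumptions, not our assessment method itself.

```python
# A minimal sketch (not Capgemini's actual assessment method) of how the four
# gateway criteria and a single main KPI could be encoded to rank candidate
# Gen AI use cases. Threshold, scores, and use case names are illustrative.
from dataclasses import dataclass, field


@dataclass
class UseCase:
    name: str
    # Gateway scores on a 0-5 scale: functional, technical, regulatory, strategic.
    scores: dict = field(default_factory=dict)
    main_kpi_gain: float = 0.0  # e.g., estimated reduction in deliverable cycle time


GATEWAY_THRESHOLD = 3  # a use case must clear every gateway to stay in the funnel


def passes_gateways(uc: UseCase) -> bool:
    """Hard gate: every criterion must meet the minimum threshold."""
    return all(uc.scores.get(c, 0) >= GATEWAY_THRESHOLD
               for c in ("functional", "technical", "regulatory", "strategic"))


def rank(use_cases: list[UseCase]) -> list[UseCase]:
    """Keep only use cases that clear all gateways, ranked by the main KPI."""
    qualified = [uc for uc in use_cases if passes_gateways(uc)]
    return sorted(qualified, key=lambda uc: uc.main_kpi_gain, reverse=True)


if __name__ == "__main__":
    funnel = [
        UseCase("Requirements chatbot (RAG)",
                {"functional": 4, "technical": 5, "regulatory": 4, "strategic": 4}, 0.30),
        UseCase("Auto-generated test reports",
                {"functional": 3, "technical": 2, "regulatory": 4, "strategic": 3}, 0.20),
    ]
    for uc in rank(funnel):
        print(f"{uc.name}: estimated main-KPI gain {uc.main_kpi_gain:.0%}")
```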

Implementation specifications 

Use the full range of relevant data by shifting the focus from engineering text to engineering data 

Across the market, we see successfully implemented Gen AI engineering use cases in two key areas: the beginning and end phases of the V-cycle. Notable examples can be found in requirements engineering and compliance demonstration, both of which are still highly document-centric and text-based. The most common applications here are conversational agents based on retrieval-augmented generation (RAG) technology. RAG solutions represent one of the most repeatable and transverse applications across the entire value chain, which is why these applications have been at the core of Gen AI strategies for the last few years.

Both application areas are ideal for starting your Gen AI implementation journey because the solutions are mature, and the results are significant. Our client engagements suggest that by using Gen AI capabilities on technical documentation (e.g., retrieval and summarization), it is possible to generate high efficiency gains and reduce the time engineers spend accessing knowledge and information in the right technical context by up to 50%.
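
As an illustration of the RAG pattern behind these conversational agents, here is a minimal, self-contained sketch: retrieve the most relevant passages from technical documentation, then pass them to a language model as grounding context. The term-overlap retriever and the call_llm placeholder are deliberate simplifications; in practice you would use vector embeddings and whichever LLM API you actually deploy.

```python
# A toy retrieval-augmented generation (RAG) loop. The retriever is a simple
# term-overlap scorer and `call_llm` is a placeholder, both assumptions made
# so the sketch runs without any external service or library.


def score(query: str, passage: str) -> float:
    """Toy relevance score: fraction of query terms that also appear in the passage."""
    q_terms = set(query.lower().split())
    p_terms = set(passage.lower().split())
    return len(q_terms & p_terms) / max(len(q_terms), 1)


def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages with the highest relevance score."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]


def call_llm(prompt: str) -> str:
    """Placeholder: swap in a call to whichever LLM API you actually use."""
    return "[LLM answer grounded in the retrieved context]"


def answer(query: str, corpus: list[str]) -> str:
    """Classic RAG flow: retrieve relevant passages, then generate with them as context."""
    context = "\n---\n".join(retrieve(query, corpus))
    prompt = (
        "Answer the engineering question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )
    return call_llm(prompt)


if __name__ == "__main__":
    docs = [
        "REQ-112: The braking subsystem shall respond within 150 ms.",
        "Compliance note: ISO 26262 requires traceability of safety requirements.",
        "Meeting minutes: the supplier workshop is postponed to Q3.",
    ]
    print(answer("What response time does the braking subsystem require?", docs))
```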

Even though the use of text-based large language models (LLMs) works very well, the full potential of Gen AI in engineering has not yet been unleashed. Most engineering data is not available in pure text format. Therefore, achieving a higher level of value generation requires overcoming the limitation of a text-based knowledge base. Within engineering, this means including the vast range of information formats from various data sources across the product lifecycle (e.g., visualizations, diagrams, sensor data, GPS, or even sounds). For application in engineering, we want to highlight the extension of LLMs with large multimodal model (LMM) capabilities. Especially for complex problem definitions, LMMs show high potential to significantly improve Gen AI usage and operational efficiency across the product development process. We are also rapidly discovering the potential of generative AI for data engineering to transform everyday tasks.
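
To show what extending an LLM-based assistant with LMM capabilities might look like in code, here is a hedged sketch that packages an engineering diagram together with a question for a multimodal model. MultimodalClient, its generate method, and the file name are hypothetical stand-ins for whichever LMM service you actually integrate; only the base64 handling is standard Python.

```python
# A minimal sketch of a multimodal request: the same question-answering flow as a
# text assistant, but the prompt now carries an engineering diagram as well as text.
import base64
from pathlib import Path


class MultimodalClient:
    """Hypothetical wrapper around an LMM endpoint (an assumption, not a real SDK)."""

    def generate(self, parts: list[dict]) -> str:
        # In a real integration this would send `parts` to the model endpoint.
        kinds = ", ".join(p["type"] for p in parts)
        return f"[model response based on inputs: {kinds}]"


def ask_about_diagram(client: MultimodalClient, image_path: str, question: str) -> str:
    """Package a text question and an engineering diagram into one multimodal request."""
    image_b64 = base64.b64encode(Path(image_path).read_bytes()).decode("ascii")
    parts = [
        {"type": "text", "content": question},
        {"type": "image", "content": image_b64, "mime": "image/png"},
    ]
    return client.generate(parts)


if __name__ == "__main__":
    # Create a placeholder file so the sketch runs end-to-end; in practice this
    # would be a real P&ID, wiring diagram, or CAD screenshot.
    Path("cooling_circuit.png").write_bytes(b"\x89PNG placeholder")
    client = MultimodalClient()
    print(ask_about_diagram(client, "cooling_circuit.png",
                            "Which valves sit upstream of the heat exchanger?"))
```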

Applying Gen AI 

How to scale up to unlock the full value potential  

Implementation activities of generative AI for engineering are constantly gaining focus. Today, we see companies building up full use case funnels that are realized as many small Gen AI implementations, each addressing a specific engineering task and delivering small, often local, value gains.

Following the rule of “start small, think big,” we share the belief that you should first build conviction in the added value, and acceptance of it, by implementing such quick wins. Start with simple and cost-sensitive use cases, such as RAG, and progressively extend to more complex ones. However, we recommend always keeping the bigger picture of scaling targets in mind.

An overall AI strategy helps to guide the starting process, connect existing Gen AI applications, and identify synergy potentials from the beginning.

The topic of scaling becomes crucial when using levers to strengthen and expand value generation. A successful upscaling of Gen AI implementations can be executed vertically by expanding the application area or horizontally by linking different Gen AI use cases. As connecting prior local solutions throughout the development process is highly difficult, we want to share the scalability factors we integrate into Gen AI implementation planning and execution. 

Future developments and fields of action

As Gen AI rapidly transforms the engineering sector, hybrid AI emerges as a key solution to meet its specific demands. Simultaneously, advances in multi-agent systems and the multimodal capabilities of language models open up new perspectives for process automation and optimization. 

The hybridization of AI capabilities (hybrid AI) to address the specificities of the engineering field 

LLMs are intrinsically statistical, which means the risk that investments in Gen AI solutions fail or prove ineffective remains. One approach to mitigating these risks is to combine the capabilities of Gen AI with the more traditional methods of deterministic AI. This combination leverages the strengths of both approaches while addressing their respective limitations, enabling the development of more robust and tailored AI systems. In the field of engineering, where some activities inherently require reliability, predictability, and repeatability, this synergy proves particularly relevant for addressing critical challenges, such as system and process safety.

Recent advances in LLMs and LMMs have marked a significant milestone in the improvement of AI agents. These agents are now capable of planning, learning, and making decisions based on a deep understanding of their environment and user needs. As new architectures and use cases continue to emerge, the transition toward multi-agent systems that collaborate in increasingly complex contexts is progressing further.

We will witness the increasing integration of specialized agents to handle specific tasks, such as requirement extraction, requirement quality control, or requirement traceability reconstruction. Each agent will be able to perform a particular task, and these agents can be orchestrated by a “super-agent” through complex workflows. This agent-based approach will enable greater automation of processes, making them more streamlined and efficient while reducing the need for human oversight.

However, this reduction in supervision could increase the risk of accidents. Therefore, special attention must be given to assessing the implications of AI agents and multi-agent systems in terms of safety, reliability, and societal impact. Moreover, there should be a focus on technical solutions and appropriate governance frameworks to ensure positive and lasting transformations in engineering.
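
A schematic sketch of the orchestration pattern described above is shown below: specialized agents for requirement extraction, quality control, and traceability, coordinated by a “super-agent” workflow. Each agent is reduced to a plain function for clarity; a production system would wrap LLM calls, tool access, and human checkpoints around the same structure, so treat this as an assumption rather than a reference implementation.

```python
# A toy multi-agent workflow: three specialized agents plus a coordinating
# "super-agent". The extraction and quality rules are deliberately simplistic.
from typing import Callable


def extraction_agent(document: str) -> dict:
    """Pull candidate requirements out of a raw specification text."""
    reqs = [line.strip() for line in document.splitlines()
            if line.strip().startswith("The system shall")]
    return {"requirements": reqs}


def quality_agent(state: dict) -> dict:
    """Flag requirements that are vague (toy rule: contains 'fast' or 'easy')."""
    flagged = [r for r in state["requirements"] if "fast" in r or "easy" in r]
    return {**state, "quality_issues": flagged}


def traceability_agent(state: dict) -> dict:
    """Assign identifiers so each requirement can be traced downstream."""
    trace = {f"REQ-{i + 1:03d}": r for i, r in enumerate(state["requirements"])}
    return {**state, "trace_matrix": trace}


class SuperAgent:
    """Orchestrates the specialized agents as an ordered workflow."""

    def __init__(self, steps: list[Callable[[dict], dict]]):
        self.steps = steps

    def run(self, document: str) -> dict:
        state = extraction_agent(document)
        for step in self.steps:
            state = step(state)
        return state


if __name__ == "__main__":
    spec = "The system shall log every braking event.\nThe system shall be easy to use."
    result = SuperAgent([quality_agent, traceability_agent]).run(spec)
    print(result["trace_matrix"])
    print("Needs review:", result["quality_issues"])
```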

LLMs are no longer limited to analyzing text. It is now possible to process other types of content, such as images, sounds, and diagrams. Much of the critical information in engineering reports is presented in visual form, and multimodal capabilities will allow this data to be retrieved and exploited more effectively. This will enhance the performance of conversational agents and improve the relevance of their analyses.

Software vendors are actively working to integrate Gen AI modules directly into their tools, especially for generative design. The goal is for these features to become an integral part of the engineer’s daily work, rather than external add-ons. For example, we can expect Gen AI modules integrated into product lifecycle management (PLM) solutions, further facilitating digital continuity.

With generative AI for software engineering, new capabilities are helping to revolutionize the design process by improving efficiency: some actors have achieved up to a 90% reduction in product design times. This increased efficiency and the reduction in material usage, observed across various projects, lead to significant cost savings.

Innovation through Gen AI 

Generative AI in engineering is bringing human skills and intelligent automation together to solve complex challenges and shorten the development cycles drastically. The organizations that want to lead in the field of engineering must act decisively, scaling Gen AI strategically to unlock lasting innovation, resilience, and competitive edge.

Meet our experts

Udo Lange

Global Head of Digital Engineering and R&D Transformation at Capgemini Invent
As Head of Digital Engineering & R&D Transformation at Capgemini Invent, Udo Lange brings over 25 years in consulting, innovation, and PLM. He helps global industrial firms embrace digitalization to deliver high-performance, sustainable products while optimizing lifecycle costs. With a passionate team, he blends engineering, IT, and transformation expertise to solve complex challenges across sectors like automotive and machinery, shaping the future of engineering and product development.

Jérôme Richard

Vice President, Head of Gen AI for Engineering Offer, Capgemini Invent
Jérôme Richard combines expertise in operational excellence with knowledge of digital levers to accelerate change and drive organizational transformation for clients. By blending strategy, technology, data science and engineering with an inventive mindset, he helps clients innovate and navigate the future. As Vice President of Intelligent Industry, he guides teams helping clients envision, design, build, and operate smart products and plants.

Hugo Cascarigny

Vice President & Global Head of Data & AI for Intelligent Industry, Capgemini Invent
Hugo Cascarigny has been passionate about AI, data, and analytics since he joined Invent 12 years ago. As a long-time member of the industries and operations teams, he is dedicated to transforming AI into practical efficiency levers within Engineering, Supply Chain, and Manufacturing. In his role as Global Data & AI Leader, he spearheads the development of AI and generative AI offerings across Invent.

    FAQs

    The benefits of using generative AI for software engineering include accelerating development, automation of repetitive coding tasks, enhanced quality of code, optimization suggestions, quicker prototyping phases, human error mitigation, and productivity gains. For human engineers, there is more time to spend on value-adding tasks, such as complex problem solving. 

    Generative AI plays many roles in data engineering. It automates the creation of data pipelines, collates realistic test data, detects inconsistencies, enhances the quality of data, documents workflows, and streamlines design. The net result is faster, more scalable, and consistent data engineering processes. 

    Companies can scale generative AI in engineering by adopting robust governance frameworks. This is the foundation on which to integrate AI into existing workflows. Next, they can establish model security, train teams, leverage cloud infrastructure, and continuously monitor performance to maintain reliability and alignment with business goals and operations. 

    Some real-world applications of generative AI in software engineering include code generation, test automation, documentation writing, anomaly detection, system design suggestions, and more rapid knowledge retrieval. This makes it possible for teams to innovate faster while reducing time-to-market and operational costs. 

    Challenges companies face when implementing generative AI in engineering include model bias, security risks, intellectual property concerns, explainability issues, and skill gaps. Moreover, ensuring AI-generated code meets compliance is particularly noteworthy and critical. These challenges can be overcome with sound implementation strategies built on robust frameworks.

    Enhancing IT ops with a multi-AI agent approach

    Dnyanesh Joshi
    September 15, 2025

    Across the enterprise, departments are placing increased demands on their organization’s data to enable multi-AI agents. It’s the IT operations (IT ops) department’s challenge to deliver the optimal environment for agentic AI to eventually bring business value.

    Enterprises are grappling with a volatile, uncertain business climate – and to address this, they are increasingly turning to their data to draw actionable insights that enable competitive advantages through agents.

    As networks grow more complex and the demands on them increase, IT ops departments need to develop better tools, including multi-AI agent systems to enhance the decision-making process by making recommendations aligned to set business goals.

    Properly designed and implemented agentic AI solutions are game-changers – but IT ops must be prepared to take advantage of these powerful tools, which requires a well-crafted plan and a partner that can deliver more than just the technology.

    The IT ops imperative

    In conversations with IT professionals, my Capgemini colleagues and I have identified a number of common challenges for IT operations at enterprises across all sectors. Simply stated, IT professionals are under pressure to boost service performance while reining in costs – including operating expenses and costs for infrastructure and cloud services. They’re also under pressure to better identify, provision, and deploy the solutions required to allow other departments to take advantage of emerging technologies such as agentic AI.

    The organization’s own data is an important source of the information required to help IT professionals achieve these goals. Unfortunately, legacy business intelligence systems often fail to satisfy their needs. There are several reasons for this:

    • Analytics systems rarely support strategic foresight and transformative innovation – instead providing business users with yet another dashboard.
    • The results are often, at best, a topic for discussion at the next team meeting – not sufficient for a decision-maker to act upon immediately and with confidence.
    • Systems typically fail to personalize their output to provide insights contextualized for the person viewing them – instead offering a generic, unsatisfying result.
    • Systems often aggregate data within silos, which means their output still requires additional interpretation to be valuable.

    In short, many legacy systems miss the big picture, miss actionable meaning, miss the persona – and miss the point.

    Based on my experience, I recommend an organization address this through multi-AI agent systems.

    With the introduction of the Gen AI Strategic Intelligence System by Capgemini, this could be the very system that bridges the gap between the old way of working and a value-driven future. This system converts the vast amounts of data generated by each client, across their enterprise, into actionable insights. It is agentic: it operates continuously and is capable of independent decision-making, planning, and execution without human supervision. This agentic AI solution examines its own work to identify ways to improve it rather than simply responding to prompts. It is also able to collaborate with multiple AI agents in specialized roles to engage in more complex problem-solving and deliver better results.

    How would organizations potentially go about doing this?   

    Define the technology and business KPIs

    First, organizations must establish well-defined KPIs and associated roadmaps to take full advantage of agentic AI recommendations – KPIs that align technology with business objectives.

    This starts by identifying the end goals – the core business objectives and associated KPIs relevant to IT operations. These represent the IT operation’s key activities that support other departments as they contribute to the organization’s value, and strengthening them is always a smart exercise. The good news is that even small improvements to any of these KPIs can deliver enormous benefits.
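
    One lightweight way to make such KPIs actionable is to record each one with its baseline, target, and the business objective it supports, so any agent recommendation can be judged by the KPI movement it drives. The sketch below does exactly that; the KPI names and numbers are invented for illustration and are not drawn from any Capgemini roadmap.

```python
# A minimal sketch of IT ops KPIs defined with a baseline, a target, and the
# business objective they support. All names and values are illustrative.
from dataclasses import dataclass


@dataclass
class KPI:
    name: str
    business_objective: str
    baseline: float
    target: float

    def improvement(self, current: float) -> float:
        """Progress from baseline toward target, as a fraction (can exceed 1.0)."""
        span = self.target - self.baseline
        if span == 0:
            return 1.0
        return (current - self.baseline) / span


# Illustrative KPIs only; real ones come from the organization's own roadmap.
kpis = [
    KPI("Mean time to resolve (hours)", "Boost service performance", baseline=8.0, target=4.0),
    KPI("Cloud spend vs. budget (%)", "Rein in operating costs", baseline=112.0, target=100.0),
    KPI("Change success rate (%)", "Boost service performance", baseline=91.0, target=98.0),
]

if __name__ == "__main__":
    observed = {
        "Mean time to resolve (hours)": 6.5,
        "Cloud spend vs. budget (%)": 104.0,
        "Change success rate (%)": 93.0,
    }
    for k in kpis:
        progress = k.improvement(observed[k.name])
        print(f"{k.name}: {progress:.0%} of the way to target ({k.business_objective})")
```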

    The roadmap should leverage pre-existing AI models to generate predictive insights. It should also ensure scalability, reliability, and manageability of all AI agents – not just within the realm of IT operations, but throughout the organization. And it should be designed to leverage domain-centric data products from disparate enterprise resource planning and IT systems.

    Finally, the roadmap must identify initiatives to ensure the quality and reliability of the organization’s data by pursuing best-in-class data strategies. These include:

    • Deploying the right platform to build secure, reliable, and scalable solutions
    • Implementing an enterprise-wide governance framework
    • Establishing the guardrails that protect data privacy, define how generative AI can be used, and shield brand reputation

    Choose a partner that delivers more than tech

    Second, the organization must engage the right strategic partner. While innovative agentic AI systems are essential, that partner must also be able to support the IT team with business transformation expertise and industry-specific knowledge.

    Capgemini leverages its technology expertise, its partnerships with all major platform providers, and its experience across multiple industrial sectors to design, deliver, and support agentic AI strategies and solutions that are secure, reliable, and tailored to the unique needs of its clients.

    Capgemini’s solution draws upon the client’s data ecosystem to perform root cause analysis of KPI changes, and then generates prescriptive recommendations and next-best actions – tailored to each persona within the IT department. The result is goal-oriented insights aligned with business objectives, ready to help IT empower the organization through actionable roadmaps for sustainable growth and competitive advantage.

    Meaningful, measurable benefits*

    Capgemini estimates that with the right implementation and support, the potential benefits include augmenting the IT workforce through autonomous processing, touchless data crunching, improved data and systems integrations, continuous monitoring of controls and compliance, and real-time access to reports and insights.

    The potential for IT operations to translate these internal gains into meaningful advantages for other departments across the enterprise means that leveraging agentic AI for its own strategic insights cannot be ignored.

    *Results based on industry benchmarks and observed outcomes from similar initiatives with clients. Individual results will vary.

    The Gen AI Strategic Intelligence System by Capgemini works across all industrial sectors and integrates seamlessly with various corporate domains. Download our PoV here to learn more, or contact our expert below if you would like to discuss this further.

    Meet the author

    Dnyanesh Joshi

    Large Deals Advisory, AI/Analytics/Gen-AI based IT/Business Delivery oriented Deals Shaping Leader
    Dnyanesh is a seasoned deal-shaping leader for large, AI/analytics/Gen AI-based IT and business delivery deals, with 24+ years of experience in winning large deals through value creation: pricing strategy, accelerator frameworks and products, Gen AI-based strategic operating models and productivity gains, enterprise data strategy and governance, business metrics enhancements based on Gen AI and supervised, unsupervised, and machine learning techniques, and technology consulting. His other areas of expertise are pre-sales and solution selling, product development, global program delivery, and transformational technology implementation within the BFSI, telecom, and energy and utilities domains.

      Enabling business continuity through RISE with SAP 

      Gary James
      June 18, 2024

      Businesses must migrate away from legacy enterprise resource planning (ERP) and towards cloud-enabled ERP.

      The business world is changing. Technology moves swiftly, and users want their software to reflect this. Enterprises need to maintain a continuous view of software, as small updates are released at a steady drip instead of one large, yearly update. But business continuity is a hard ask when legacy systems can’t keep up with the barrage of new features and functionality offered by regular updates.

      This means businesses must migrate away from legacy enterprise resource planning (ERP) and towards cloud-enabled ERP. This will guarantee agility and the ability to scale and implement changes without disruptions, ensuring continuity for the business and the end user. 

      However, this is usually easier said than done. Working with the cloud is highly technical, and businesses often don’t have the people who possess the skills and know-how to enable cloud migration. And it’s a large buy-in with a slow return on investment, making it an unattractive option for many. 

      There are also operational hurdles, like how to enable ongoing end-user adoption and solution changes, and how to adopt a flexible operating and resourcing model. Dual maintenance also causes issues in keeping the current, upgraded, and development systems in sync with each other. And the overall adoption process is a hassle because of licensing, migration, and infrastructure operations that need to be maintained.

      To support the migration process, businesses often turn to RISE with SAP, an AI-powered cloud ERP. It guarantees agility and the ability to scale and implement changes without disruptions. But to exploit RISE with SAP to its fullest, there are certain challenges that customers have when it comes to adoption. This includes monitoring, which requires expertise from a business and technology perspective, as well as several other key services. A reliable application development and maintenance (ADM) partner can assist businesses with overcoming those hurdles while maintaining business continuity.

      Business observability and continuous monitoring

      Businesses can’t improve upon what they don’t measure, so an integrated approach that enables unconstrained application availability and gathers useful data at the same time is necessary. 

      ADM partners can create these monitoring dashboards with embedded capability for observability and continuous, proactive, end-to-end monitoring. With 24/7 availability, it can correlate alerts and generate tickets across the scope of SAP systems and edge software, such as regional warehouse control systems or operational technology (OT) platforms. This continuous monitoring through various channels means that issues can be flagged and fixed before they become bigger incidents.  
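
      As a simplified illustration of that correlation logic, the sketch below groups alerts from SAP and edge systems by site and time proximity and raises one ticket per group instead of one per alert. The field names, alert sources, and ten-minute window are assumptions for illustration, not a description of any specific monitoring product.

```python
# A toy alert-correlation pass: alerts that share a site and arrive within a
# short window are grouped into a single proactive ticket.
from collections import defaultdict
from datetime import datetime, timedelta

CORRELATION_WINDOW = timedelta(minutes=10)


def correlate(alerts: list[dict]) -> list[dict]:
    """Group alerts by site, then split each group wherever the time gap exceeds the window."""
    by_site = defaultdict(list)
    for a in sorted(alerts, key=lambda a: a["time"]):
        by_site[a["site"]].append(a)

    tickets = []
    for site, site_alerts in by_site.items():
        group = [site_alerts[0]]
        for alert in site_alerts[1:]:
            if alert["time"] - group[-1]["time"] <= CORRELATION_WINDOW:
                group.append(alert)
            else:
                tickets.append({"site": site, "alerts": group})
                group = [alert]
        tickets.append({"site": site, "alerts": group})
    return tickets


if __name__ == "__main__":
    now = datetime(2024, 6, 1, 3, 0)
    alerts = [
        {"site": "Warehouse-07", "source": "SAP EWM", "message": "IDoc queue stalled", "time": now},
        {"site": "Warehouse-07", "source": "WCS", "message": "Conveyor PLC timeout",
         "time": now + timedelta(minutes=4)},
        {"site": "Plant-02", "source": "SAP PM", "message": "Work order sync failed",
         "time": now + timedelta(minutes=30)},
    ]
    for t in correlate(alerts):
        print(f"Ticket for {t['site']}: {len(t['alerts'])} correlated alert(s)")
```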

      Continuous value delivery 

      To ensure software delivers continuous value, there needs to be a loop of people- and process-driven transformation, enabled by a constant cycle of maintain, optimize, and run. An ADM partner can support this cycle at several checkpoints with a focus on business continuity. The cycle focuses on target operating models that are designed for the future and centered on people and technology.

      Continuous organizational change management (OCM) will ensure both people and processes are prepared for new ways of working, both in the current environment and in preparation for the future, including onboarding and offboarding. To support continuous OCM and drive awareness and engagement, processes can be updated with new technology and automation, supported by structured end-user adoption through an Enablement-as-a-Service (EaaS) capability powered by SAP Enable Now but managed by your ADM partner.

      Continuous end-user enablement 

      Once a new feature is implemented, it’s important to ensure that end users are using it correctly. Your ADM partner can administer digital adoption by fully managing the end-to-end learning journey for all users. The process is structured so that the business can focus on getting its workers up to speed while the partner manages everything from go-live to decommissioning. This keeps the focus on core value activities instead of worrying about the ERP.

      Continuous quality assurance 

      Quality assurance must be at the heart of a stable, resilient RISE with SAP environment. Upgrading legacy ERP architecture and databases to a managed SAP S/4HANA platform brings users a lot of benefits, but there are risks inherent in this increased level of change. With an ADM partner at your side, these inherent risks will be managed, along with continuous quality assurance to ensure continuous service delivery.

      Continuity is key 

      Adopting a continuous delivery approach to RISE with SAP with a trusted ADM partner gives clear segregation of responsibilities and ownership, and enables not only business continuity but also a clear path towards a renewable enterprise.

      As one of the largest and most experienced SAP systems integrators, and with market-leading ADM capabilities, Capgemini can further simplify the business transformation possibilities enabled by RISE with SAP. Find out more about our ADMnext for SAP solutions here.

      Gary James

      Expert in Application Management Services, Automation
      I lead the development and market engagement for Capgemini’s European ADM Center of Excellence. Building on over 25 years of experience in business applications, I work with clients to ensure that their application strategies and services are aligned to their requirements in a dynamic, ever-changing market. The majority of those 25 years have been spent on, or around, SAP technologies and services where I have worked globally on a number of implementations – as well as the creation of Operating Models to support and enhance existing multi-component SAP landscapes.

        Supply Chain Resilience – the AI way

        Sudarshan Sahu
        August 20, 2025

        Climate change isn’t a distant threat—it’s a reality to deal with now.

        Businesses need to rethink how they operate, especially when it comes to supply chains, which are crucial for global trade. Just like in the movie Interstellar, where survival depended on data, AI, and adaptability, today’s supply chains need to be flexible and smart to handle disruptions and climate challenges. AI-powered insights and actions are like the movie’s robot TARS: helping predict risks, optimize logistics, and reduce waste. Data ensures that every decision is as precise as a gravity equation. AI enhances precision in supply chains by analyzing vast data in real time, predicting risks, and optimizing logistics. It’s the key to transforming supply chains into smarter, greener, and more resilient systems that balance profitability with ecological responsibility.

        Supply chains aren’t just stretched — they’re under siege. Disruption is no longer the exception; it’s the norm. That’s why resilience — the ability to anticipate, adapt, and recover fast — has shifted from nice-to-have to non-negotiable. A recent report from The Business Continuity Institute delivers the reality check: 80% of organizations faced supply chain disruptions last year, most more than once. That’s an uptick despite better planning — proof that we’re still reacting more than we’re preparing. Meanwhile, sustainability pressures are mounting. With supply chains responsible for over 60% of global carbon emissions, according to the World Economic Forum, they’re no longer just operational engines — they’re climate liabilities too.

        Let’s face it—what we’re doing right now isn’t cutting it. The cracks in our supply chains are showing, and incremental fixes won’t be enough. It’s time for bold moves. If we want supply chains that can truly withstand shocks and stay ahead of the curve, we need to lean into smarter, faster, more adaptive solutions. That’s where AI steps in—not just as a tool, but as a game-changer. With its ability to forecast disruptions, optimize operations, and accelerate response times, AI is shaping the supply chains of the future. To stay ahead, companies must embrace green supply chain management (GSCM), where sustainability is built into every step. AI supercharges this shift, turning GSCM into a smart, data-driven engine. From cutting carbon to driving circular economies, AI enables supply chains that are not just efficient, but truly green.

        Resilience, Not Yet Autonomous: Supply Chains Still Heavily Rely on People

        Supply chains are navigating a perfect storm: geopolitical instability, extreme weather, shifting consumer expectations — and growing uncertainty in global trade. Disruptions are no longer outliers; they’re part of the operating environment. While many organizations are embedding risk management into supply chain strategy, execution is still stuck in manual mode. Too much effort goes into collecting, cleaning, and stitching together data — leaving little room for insight, foresight, or speed. AI and machine learning are still underused, and critical response actions often rely on human intervention alone. The result? Slow reactions, mounting workloads, and talent focused on firefighting instead of forward-thinking.

        What’s missing? Technology that doesn’t just capture and store data, but actively turns it into prescriptive insights and clear, actionable recommendations. Unfortunately, most tools in the market today still fall short of that promise. Instead, businesses are left stitching together manual processes and siloed teams to make sense of a rapidly changing environment. To build truly resilient supply chains, we need to shift from reactive, human-heavy models to intelligent, tech-augmented systems. The future isn’t about replacing people—it’s about empowering them with tools that amplify their decision-making, speed up response times, and free them to focus on what matters most.

        Greening the Chain: How AI and Data are Changing the Game

        Data and AI are at the core of this transformation, delivering unmatched insights, predictive accuracy, and optimization potential. By leveraging real-time data and predictive analytics, AI can identify potential risks—such as supplier delays, extreme weather, or geopolitical issues—before they impact operations. This early warning capability allows businesses to proactively mitigate threats through alternative sourcing, dynamic rerouting, or inventory adjustments. AI also enables scenario modeling, helping organizations test various disruption scenarios and build contingency plans with data-backed confidence. As a result, companies can maintain continuity, reduce downtime, and ensure customer satisfaction, even in the face of unexpected challenges. In today’s volatile global environment, AI is no longer a luxury but a critical enabler of resilient and future-ready supply chains.

        AI-enhanced supply chain resilience framework

        The AI-enhanced supply chain resilience framework strengthens supply chain agility and robustness by harnessing advanced AI technologies. It integrates real-time data from IoT devices into a centralized system for comprehensive analysis. Through predictive analytics and machine learning, the framework forecasts demand and detects potential risks—like supplier disruptions or market shifts—enabling proactive risk mitigation and smarter decisions in areas like inventory and logistics.

        AI-driven communication tools improve collaboration with suppliers and stakeholders, ensuring seamless, transparent information flow. Continuous monitoring and adaptive feedback loops allow the supply chain to respond swiftly to changing conditions, driving ongoing improvement and innovation. By adopting this framework, businesses gain end-to-end visibility, reduce vulnerabilities, and ensure operational continuity—ultimately building a more resilient and high-performing supply chain.

        Leveraging AI enables businesses to streamline operations, improve efficiency, cut costs, and elevate customer experiences. One powerful application is demand forecasting, where AI analyzes historical data to accurately predict customer needs. This leads to smarter inventory management—minimizing overstock and stockouts while optimizing capital use. Another key use case is route optimization. AI-driven tools evaluate factors like weather, traffic, and transport costs to determine the most efficient delivery paths. This reduces time and expenses while ensuring faster, more reliable service that meets growing customer expectations.
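
        To ground the demand-forecasting example, here is a deliberately small sketch: exponential smoothing over historical demand followed by a reorder check against current stock. Real deployments would use richer models and live data feeds; the figures and parameter values are invented for illustration.

```python
# A toy demand forecast and reorder check. Alpha, safety stock, and the demand
# history are illustrative assumptions, not benchmarks.


def exponential_smoothing(history: list[float], alpha: float = 0.3) -> float:
    """Return the one-step-ahead forecast from a simple exponential smoothing pass."""
    forecast = history[0]
    for actual in history[1:]:
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast


def reorder_quantity(history: list[float], on_hand: float, safety_stock: float) -> float:
    """Order enough to cover the forecast plus safety stock, never a negative amount."""
    forecast = exponential_smoothing(history)
    return max(0.0, forecast + safety_stock - on_hand)


if __name__ == "__main__":
    weekly_demand = [120, 135, 128, 150, 160, 155, 170]  # illustrative units/week
    qty = reorder_quantity(weekly_demand, on_hand=90, safety_stock=40)
    print(f"Forecast next week: {exponential_smoothing(weekly_demand):.0f} units")
    print(f"Suggested reorder: {qty:.0f} units")
```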

        How organizations can harness it effectively:

        According to the International Data Corporation (IDC), 55% of Forbes Global 2000 OEMs are projected to have revamped their service supply chains with AI, and by 2026, 60% of Asia-based Global 2000 companies will use generative artificial intelligence (GenAI) tools to support core supply chain processes as well as dynamic supply chain design, leveraging AI to reduce operating costs by 5%. This signifies widespread adoption of AI to improve efficiency and gain a competitive advantage in supply chain management. Further, generative AI can be harnessed to monitor global events and proactively identify emerging risks. It can automatically generate risk assessments, simulate scenarios, and suggest strategic mitigation plans—empowering supply chain teams to manage risks more effectively. Its conversational interface enhances user experience and accelerates response times. Over time, this evolves into a system-guided, data-driven approach, drawing from a rich library of scenarios and mitigation strategies to deliver contextual, timely responses to risk events.

        Considering all of the facts

        The fusion of data and AI isn’t just a tech upgrade — it’s a strategic shift for building supply chains that can bend without breaking. Organizations that embed intelligence into their operations now won’t just survive the next disruption — they’ll lead the transition to greener, faster, more adaptive ecosystems. By 2025, global supply chains will be reengineered out of necessity and powered by innovation. AI won’t just help companies — it will help nations stay resilient, competitive, and climate-conscious. It will redefine how we make, move, and manage everything. And like TARS in Interstellar, the most effective systems won’t just follow instructions — they’ll anticipate, adapt, and act as true copilots. What supply chains need now isn’t just visibility. It’s vision.

        Start innovating now:

        Give Your Supply Chain an AI-enabled Sixth Sense

        • Plug your supply chain into real-time feeds—from IoT sensors to storm trackers—and let AI act like your all-seeing oracle. Spot trouble (like delayed shipments or political curveballs) before it hits the fan

        Make Generative AI Your Strategic Co-Pilot

        • Leverage Generative AI to generate real-time risk assessments, simulate disruption scenarios, and recommend mitigation strategies, all in a conversational interface

        Build a Digital Twin—Your Virtual Supply Chain Lab

        • Think of it as a flight simulator for your supply chain. A digital twin lets you mirror operations in a virtual space to test “what-if” scenarios—from port delays to carbon constraints—without breaking a sweat in real life.
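
        As a toy version of such a “what-if” experiment, the sketch below runs a Monte Carlo simulation of end-to-end lead time under a congested-port scenario and compares on-time delivery against a baseline. All distributions and numbers are invented for illustration; a real digital twin would be calibrated on the organization’s own lane and lead-time data.

```python
# A toy digital-twin style scenario test: simulate shipment lead times under
# two port-delay assumptions and compare the on-time delivery rate.
import random

random.seed(7)


def lead_time_days(port_delay_prob: float) -> float:
    """One simulated shipment: production + ocean leg + possible port delay + inland leg."""
    production = random.uniform(5, 8)
    ocean = random.uniform(18, 24)
    port_delay = random.uniform(3, 10) if random.random() < port_delay_prob else 0.0
    inland = random.uniform(2, 4)
    return production + ocean + port_delay + inland


def on_time_rate(port_delay_prob: float, promise_days: float = 35, runs: int = 10_000) -> float:
    """Share of simulated shipments that arrive within the promised lead time."""
    hits = sum(lead_time_days(port_delay_prob) <= promise_days for _ in range(runs))
    return hits / runs


if __name__ == "__main__":
    for scenario, prob in [("baseline", 0.05), ("congested port", 0.40)]:
        print(f"{scenario}: {on_time_rate(prob):.1%} of shipments arrive within 35 days")
```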

        Interesting read? Capgemini’s Innovation publication, Data-powered Innovation Review – Wave 10 features more such captivating innovation articles with contributions from leading experts from Capgemini. Explore the transformative potential of generative AI, data platforms, and sustainability-driven tech. Find all previous Waves here.

        Meet the author

        Sudarshan Sahu

        Process Lead, Emerging Technology Team, Data Futures Domain, Capgemini
        Sudarshan possesses deep knowledge in emerging big data technologies, data architectures, and implementing cutting-edge solutions for data-driven decision-making. He is enthusiastic about exploring and adopting the latest trends in big data, blending innovation with practical strategies for sustainable growth. At the forefront of the industry, currently he is working on projects that harness AI-driven analytics and machine learning to shape the next generation of big data solutions. He likes to stay ahead of the curve in big data trends to propel businesses into the future.

          Leading with purpose: Capgemini named a Leader in Avasant’s Cybersecurity Services 2025 RadarView™ 

          Marco Pereira
          May 5, 2025

          We are proud to share that Capgemini has been recognized as a Leader in Avasant’s Cybersecurity Services 2025 RadarView™ – an achievement that reflects our relentless commitment to helping clients build secure, resilient, and future-ready enterprises.

          This recognition is more than a milestone – it’s a powerful validation of our ability to deliver continuous cyber resilience through our robust cybersecurity portfolio that is aligned with our clients’ evolving business and regulatory needs.

          Avasant’s comprehensive evaluation of global service providers, based on innovation, capabilities, and industry impact, placed Capgemini at the forefront. Our leadership position is a direct result of our strategic investments, innovation-led approach, and ability to scale cyber defense solutions globally.

          Empowering clients with continuous resilience 

          At Capgemini, cybersecurity is foundational to continuous business resilience. Our end-to-end security services are designed not only to protect, but to enable our clients to anticipate, withstand, and rapidly recover from disruption – ensuring continuity and confidence in an unpredictable world. 

          Avasant’s assessment highlights our strengths in zero trust architecture, secure cloud transformation, AI-driven threat intelligence, and our global cyber defense center networks. These capabilities power an integrated and proactive security approach that ensures organizations stay secure and resilient – always. 

          Sector-specific cyber innovation 

          Our differentiated approach includes industry-specific solutions tailored to the complex needs of highly regulated and high-impact sectors: 

          • OT/IoT security in manufacturing, energy and utilities: Securing manufacturing environments from design to deployment, including implementing industrial-grade frameworks across 300+ sites with IEC 62443 alignment. 
          • Financial services: Leveraging a best-of-platform approach to drive security consolidation and compliance automation. 
          • Connected healthcare and automotive: Ensuring secure innovation across medical devices, vehicles, and 5G ecosystems. 
          • Aerospace, oil and gas: Establishing 24×7 SOCs, improving cyber maturity by 95 percent, and delivering integrated IT/OT threat intelligence. 

          We’re also shaping future-ready security through pioneering engagements – like our quantum cryptography roadmap for a European bank, developed with our Quantum Lab and Cambridge Consultants. 

          The road ahead 

          Our promise to clients is simple: cybersecurity that enables sustainable transformation and continuous resilience. Every investment we make, every partnership we build, and every capability we evolve is designed to deliver on that promise. 

          This leadership ranking from Avasant reinforces our purpose. As threats grow in complexity and the pace of change accelerates, we will continue to be the trusted partner that helps clients move forward with security, agility, and confidence. 

          Click here to read the excerpt. 

          Contact Capgemini to understand how we are uniquely positioned to help you build cybersecurity strength from the ground up.

          Meet the author

          Marco Pereira

          Executive Vice President, Global Head of Cybersecurity
          Marco is an industry-recognized cybersecurity thought leader and strategist with over 25 years of leadership and hands-on experience. He has a proven track record of successfully implementing highly complex, large-scale IT transformation projects. Known for his visionary approach, Marco has been instrumental in shaping and executing numerous strategic cybersecurity initiatives. Marco holds a master’s degree in information systems and computer engineering, as well as a Master of Business Administration (MBA). His unique blend of technical expertise and business acumen enables him to bridge the gap between technology and strategy, driving innovation and achieving organizational goals.

            Cold storage, hot insights – Managing data efficiently with Sentinel’s new storage tiers

            Mona Ghadiri
            Aug 14, 2025

            As security data volumes continue to grow, organizations face the dual challenge of retaining data for long periods while managing storage costs. Microsoft Sentinel Data Lake addresses this with the introduction of a new cold storage tier – an innovation that brings flexibility, scalability, and cost-efficiency to security data management.

            Understanding the cold storage tier

            The cold storage tier is designed for long-term retention of infrequently accessed data. It complements the existing hot and warm tiers, enabling organizations to implement a tiered storage strategy that aligns with their operational and compliance needs. With seamless transitions between tiers, security teams can access historical data when needed without incurring high costs.

            This is particularly valuable for industries with stringent regulatory requirements or those conducting forensic investigations. Cold storage ensures that data remains accessible and secure, even years after it was collected.

            Benefits for security operations

            The new storage tier offers several advantages:

            • Significant cost savings for long-term data retention
            • Simplified compliance with data governance policies
            • On-demand access to archived data for threat hunting and analysis.

            By optimizing storage costs, organizations can allocate more resources to proactive security measures and advanced analytics.
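
            One way to reason about such a tiered strategy is to express it as an explicit retention policy that maps the age of a record to a tier. The sketch below does this with hot, warm, and cold thresholds; the day counts, table names, and the policy function itself are assumptions for illustration, not the Sentinel data lake API.

```python
# A minimal tiered-retention policy sketch mirroring the hot/warm/cold model.
# Thresholds and table names are illustrative only.
from dataclasses import dataclass


@dataclass
class RetentionPolicy:
    hot_days: int = 30      # interactive analytics and detections
    warm_days: int = 180    # recent investigations
    total_days: int = 2555  # roughly 7 years for regulatory retention


def tier_for(age_days: int, policy: RetentionPolicy) -> str:
    """Decide which tier a record of a given age should live in."""
    if age_days <= policy.hot_days:
        return "hot"
    if age_days <= policy.warm_days:
        return "warm"
    if age_days <= policy.total_days:
        return "cold"
    return "purge"


if __name__ == "__main__":
    policy = RetentionPolicy()
    for table, age in [("SecurityEvent", 12), ("SignInLogs", 90), ("Firewall_CL", 1200)]:
        print(f"{table} @ {age} days -> {tier_for(age, policy)}")
```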

            Capgemini’s MXDR services: Maximizing storage efficiency


            Capgemini’s MXDR services are uniquely positioned to take advantage of Sentinel’s new storage capabilities. Through its Cyber Defense Centers, Capgemini helps clients implement intelligent data retention strategies that balance performance and cost.

            With the cold storage tier, Capgemini can:

            • Store historical telemetry for extended periods without budget strain
            • Enable retrospective threat analysis and compliance audits over longer periods of time
            • Integrate storage policies with real-time monitoring and response workflows.

            This holistic approach ensures that clients not only meet regulatory requirements but also enhance their overall security posture.

            Strategic value for the future


            The addition of cold storage to Microsoft Sentinel Data Lake is more than a technical upgrade – it’s a strategic enabler. It empowers organizations to retain valuable data, derive insights from it, and respond to threats with greater agility. When combined with Capgemini’s MXDR expertise, the result is a powerful, cost-effective solution for modern security operations.

            About the author

            Mona Ghadiri

            Vice President, Global Offer Lead for Cybersecurity Defense
            Mona is a three-time Microsoft Security MVP, recognized for expertise in SIEM, XDR, and Security Copilot. She has led development of Microsoft-based cyber services and now focuses on SOC transformation, pragmatic AI in security, and talent development. A global speaker and advocate for women in AI and cybersecurity, she serves on multiple Microsoft community boards. Mona holds a BA and MBA and brings a unique blend of product leadership, engineering, and industry recognition.

              Facing the quantum cyber threat: moving from denial to action

              Clément Brauner
              Apr 21, 2025

              One of the most pressing concerns is the quantum cyber threat, which demands immediate attention and action.

              While less visible than artificial intelligence, quantum computing is advancing just as rapidly. In early 2025, Microsoft and Amazon both unveiled quantum processors with self-correcting capabilities, marking a decisive step towards stable, industrial-grade machines. The horizon for quantum computing is becoming clearer and closer, bringing with it the reality of threats that we must now seriously prepare for.

              Cryptography in Danger

              One of the strengths of quantum computing is its ability to perform massive calculations in parallel, significantly reducing the time required. This could enable, for example, the creation of highly targeted deepfakes from minimal data, which could be a formidable weapon in the wrong hands. However, the most significant threat concerns cryptography. While the security of commonly used asymmetric encryption algorithms today relies on the fact that it would take classical computers thousands of years to break them, a quantum machine could do it in just a few hours. This means all our data, communications, and authentication systems would become immediately vulnerable. In the post-quantum world, no identity, communication, or transaction can be guaranteed if it remains encrypted as it is today.

              This risk is not a fantasy. The algorithms that underpin it have been ready for a long time, and their performance has been mathematically demonstrated. What is missing today are sufficiently stable and industrialized quantum machines. Recent announcements show that they will arrive not in ten, fifteen, or twenty years, but much sooner. And once available, they will be accessible to everyone via the cloud, as current quantum processors, despite their limitations, already are. Hackers are ready, the entry cost will be low, and as soon as the platforms are available, the risk will materialize massively and immediately.

              An Already Present Risk

              The threat is therefore major and imminent. It is even already present, as malicious actors can store encrypted data they collect today to decrypt it when they have the capability. In five years, most long-term strategic information, financial assets, health data, industrial, diplomatic, or military secrets will still be of great value. The issue is similar in all sectors whose products have a long lifespan and incorporate digital technology: defense, aerospace, transportation, energy, health… This strategy, known as “harvest now, decrypt later,” is a proven reality, with many states acknowledging that they practice it in long-term judicial investigations.

              At some level, all organizations will be affected, from SMEs to multinationals, from local authorities to government ministries. In France more than elsewhere, very few have yet realized this, and even fewer have started to implement appropriate measures.

              Overhauling Trust Systems

              We can, and must, prepare now for this major cryptographic challenge, which will require nothing less than a complete overhaul of all trust systems: directories, APIs, certificates, storage, networks, application development… It will be a considerable project with major operational impacts that cannot be improvised in an emergency. It is necessary to start by scrutinizing the IT system to identify and assess risks, then prioritize interventions, allocate budgets, implement, test, and deploy solutions, manage change, coordinate with partners and suppliers… And it will be impossible to compress all this into a few weeks or months when the threat materializes.

              Fortunately, technological countermeasures are being put in place. After a long competition, NIST (the US National Institute of Standards and Technology) has approved encryption algorithms (five to date) capable of resisting quantum computers. Or rather, they are resistant based on current knowledge, which means that a certain cryptographic agility will be necessary in case they too are eventually broken. In France, several startups and large companies offer solutions in this area. It is also possible to implement hybrid encryption solutions to protect data against both today’s classical threats and tomorrow’s quantum threats. Finally, another area of study, still experimental, concerns the physical security of communications with QKD (Quantum Key Distribution), which provides absolute certainty that exchanges have not been intercepted.
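
              To make the hybrid idea tangible, the sketch below derives a session key from both a classical X25519 exchange and a post-quantum KEM secret, so that breaking either one alone is not enough. The X25519 and HKDF calls use the widely available Python cryptography package; the post-quantum secret is a placeholder standing in for an ML-KEM (Kyber) implementation, which is an assumption you would replace with your actual PQC library.

```python
# Hybrid key derivation sketch: combine a classical ECDH secret with a
# (placeholder) post-quantum KEM secret via HKDF.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def classical_shared_secret() -> bytes:
    """X25519 Diffie-Hellman between two freshly generated key pairs (both sides shown)."""
    alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
    secret = alice.exchange(bob.public_key())
    assert secret == bob.exchange(alice.public_key())
    return secret


def post_quantum_shared_secret() -> bytes:
    """Placeholder for an ML-KEM encapsulation; here it is just random bytes."""
    return os.urandom(32)


def hybrid_session_key() -> bytes:
    """Concatenate both secrets and derive one 256-bit session key via HKDF."""
    ikm = classical_shared_secret() + post_quantum_shared_secret()
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"hybrid-pq-classical-v1").derive(ikm)


if __name__ == "__main__":
    print("Session key:", hybrid_session_key().hex())
```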

              For once, action must be taken without waiting for explicit regulatory pressure. While ANSSI has been warning about the quantum risk since 2022, these warnings are not yet accompanied by any obligations, not even for OIVs (opérateurs d’importance vitale, France’s operators of vital importance). However, texts like NIS 2, DORA, or GDPR hold leaders accountable without specifying the technical nature of the threats. In other words, an organization subject to such regulations will have no excuse if its data is stolen and decrypted by a quantum computer. In the face of the quantum threat, denial, skepticism, or inaction are no longer acceptable, especially in the current security and geopolitical context.

              Click here to know more about our Quantum Lab.

              Meet the authors

              Clément Brauner

              Quantum Computing Lead, Capgemini Invent
              Clément is a manager at Capgemini Invent. Passionate about technology, he currently works as the SPOC for quantum activities in France and is a member of the “Capgemini Quantum Lab,” which aims to help clients build skills in quantum technologies, explore relevant use cases, and support them in their experiments and partnerships.

              Jérôme Desbonnet

              Vice President, Cybersecurity CTIO
              As VP, Cybersecurity CTIO, Insights & Data, Jérôme creates security architecture models. Jérôme plans and executes significant security programs to ensure that Capgemini’s clients are well protected.

              Pierre-Olivier Vanheeckhoet

              Head of Paris Innovation Center, Capgemini

                Machines need zero trust too: Why devices deserve context-aware security

                Lee Newcombe
                Jun 25, 2025

                In the first post in this series, I wrote about the business and security outcomes that can be achieved for users (and the organizations to which they belong!) by adopting approaches labeled as “zero trust.” But why should we limit ourselves to interactions with human users? Don’t machines deserve a little attention too?

                The answer, of course, is “yes” – not least because this would otherwise be a remarkably short post. So, I’m going to talk about the application of those high-level characteristics of zero trust mentioned in my last post – dynamic, context-based security – to operational technology (OT).

                As every OT professional will quite rightly spell out – at length – OT is not IT. They have grown from separate disciplines, talk different network protocols, have different threat models, and often have different priorities when it comes to the application of the confidentiality, integrity, and availability triad we have used for so long in the security world. When your company faces losses of millions of dollars a day from a production line outage, or your critical national infrastructure (CNI) service can no longer function, availability rapidly becomes the key business issue, particularly where intellectual property may not be a core concern. Before diving into the application of dynamic, context-based security principles to OT, we should probably set a little more context:

                • OT facilities may not be as well-segmented as modern corporate IT networks. They were either isolated or “behind the firewall,” so why do more? (Of course, best practice has long pointed toward segmentation; however, if best practice were always implemented, I’d likely be out of a job.)
                • OT covers a vast range of technologies and different types of devices, from sensors out in the field through to massive manufacturing plants. Threat models differ! Context matters.
                • Devices often have embedded operating systems (typically cut-down versions of standard operating systems); these systems require patching and maintenance if they are not to become susceptible to known vulnerabilities.
                • Equipment requires maintenance. You’ll often find remote access facilities in the OT environment for the vendors to be able to conduct such maintenance remotely. (You might see where this is going from a security perspective.)
                • The move toward intelligent industry is pushing OT toward increasing use of machine learning and artificial intelligence, all of which is heavily reliant upon data – which means you need a way to export that data to the services performing the analysis. Your “air gap” isn’t really an air gap anymore. (And if we’re talking about critical national infrastructure, then there may well also be some sovereignty issues to consider.)
                • Legacy is a real problem. What happens if a business buys a specialist piece of kit and then the vendor goes bust? It could well form a critical part of the manufacturing process, and so stripping it out is not always possible, let alone straightforward.
                • OT doesn’t always talk IP. This is a problem for traditional security tools that only understand IP. We need to use specialized versions of traditional security tooling like monitoring solutions – solutions that can understand the communications protocols in use. Meanwhile, network transceivers/data converters may contain software components that can sometimes get overlooked from a security perspective.
                • Good models for thinking about OT security are out there, e.g., the Purdue model and the IEC 62443 series (which provide structures for the different levels of technology and functionality in OT environments, from the physical switches and actuators up to the enterprise information and management systems). It’s not as much of a wild west out there as my words so far may indicate – but we can do better.

                For the purposes of this article, the above highlights some interesting requirements from an OT security perspective:

                1. We need to understand the overall OT environment, and be able to secure access into and within it.
                2. We need to make the OT environment more resilient – reduce the blast radius of compromise. We really do not want one compromised machine taking out a whole facility.
                3. We want to be able to control machine-to-machine communications, and communications across the different layers of the Purdue model, e.g., from the shop floor to the management systems, or even across to the enterprise environment for import into the data lake for analysis purposes.

                Lots of interesting problems, some of which seem very similar to those discussed in the context of securing human user access to applications and systems.

                How do we start the process of finding some solutions? Well, first things first. We need a way to distinguish the devices we are securing, i.e., some form of machine identity. We have a variety of options here, from the installation of trusted digital certificates through to the use of network-based identifiers (including IP addresses and hardware addresses where available). Once we have identities, we can start to think of how to use them to deliver context-based security.
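
                To make that a little more concrete, the sketch below (Python, with hypothetical field names) shows one way an asset inventory might derive a device identity, preferring a certificate fingerprint and falling back to weaker network identifiers. It illustrates the idea rather than recommending any particular tooling.

import hashlib
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeviceIdentity:
    device_id: str   # stable identifier used by downstream policy checks
    source: str      # how the identity was derived: "certificate" or "network"

def derive_device_identity(cert_pem: Optional[bytes],
                           mac: Optional[str],
                           ip: Optional[str]) -> DeviceIdentity:
    """Prefer a certificate-based identity; fall back to weaker network identifiers."""
    if cert_pem:
        # Fingerprint of the device certificate: the strongest of the three options here.
        return DeviceIdentity("cert:" + hashlib.sha256(cert_pem).hexdigest(), "certificate")
    if mac:
        # Normalize the hardware address; note that MAC addresses can be spoofed.
        return DeviceIdentity("mac:" + re.sub(r"[^0-9a-f]", "", mac.lower()), "network")
    if ip:
        # Last resort: IP addresses are the least stable identifier of the three.
        return DeviceIdentity("ip:" + ip, "network")
    raise ValueError("No identity material available for this device")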

                Let’s start by establishing some baselines of normal behavior:

                • How do the devices in scope communicate?
                • What other devices do they communicate with, and what protocols do they use?
                • Are there some obvious segmentation approaches that we can take based on those communication patterns? If not, are there some more context-based approaches we can take, e.g., do specific communications tend to take place at specific times of day?

                Such profiling may need to take place over an extended period of time in order to get a true understanding of the necessary communications. We should certainly be looking at how we control support access from vendors into the OT environment; let’s just start by making sure Vendor A can only access their own technology and not that of Vendor B. Let’s not forget support access from internal users either, particularly if they have a habit of using personal or other unapproved devices. Going back to that segmentation point for a second, do we have any legacy equipment that is no longer in active support? If so, are we able to segment such kit away and protect access into and out of that environment to limit the risk associated with such legacy kit?
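
                As a rough illustration of the baselining idea, the sketch below (again Python, with invented protocol names and field layout) learns which device-to-device, protocol, and time-of-day combinations appear during a profiling window and flags anything outside that baseline. In practice this job belongs to OT-aware monitoring tooling rather than hand-rolled scripts.

from collections import defaultdict
from typing import Dict, Iterable, NamedTuple, Set, Tuple

class Flow(NamedTuple):
    src: str        # source device identity (e.g. the cert:/mac: IDs derived earlier)
    dst: str        # destination device identity
    protocol: str   # e.g. "modbus-tcp", "opc-ua", "https" (names are illustrative)
    hour: int       # hour of day the flow was observed (0-23)

Baseline = Dict[Tuple[str, str, str], Set[int]]

def build_baseline(observed: Iterable[Flow]) -> Baseline:
    """Learn which (src, dst, protocol) combinations occur, and at which hours of day."""
    baseline: Baseline = defaultdict(set)
    for flow in observed:
        baseline[(flow.src, flow.dst, flow.protocol)].add(flow.hour)
    return baseline

def is_anomalous(flow: Flow, baseline: Baseline) -> bool:
    """Flag flows never seen during profiling, or seen only at other times of day."""
    hours = baseline.get((flow.src, flow.dst, flow.protocol))
    return hours is None or flow.hour not in hours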

                Whether we are trying to apply dynamic, context-based security to machines or users, many of the same considerations apply:

                1. Is there a way to uniquely identify and authenticate the entities requesting access?
                2. Where are the signals going to come from to enable us to define the context used to either grant or deny access?
                3. How can we segment the resources to which access is being requested?
                4. Where are we going to apply the enforcement mechanisms that act as the barriers to access? Do these mechanisms have consistent network connectivity or must they operate independently?
                5. How do we balance defense in depth with simplicity and cost of operation?

                If an organization already has some technologies that can help to deliver the required outcomes, e.g., some form of security service edge (SSE), there will often be some merit in extending that coverage to the OT environment, particularly with respect to remote access into such environments.
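
                Pulling those five considerations together, the following deliberately simplified sketch shows the shape of a context-based decision for machine-to-machine access: identity first, then an explicit allow-list of segment-to-segment paths, protocols, and time windows, with deny as the default. The segments, protocols, and hours are illustrative assumptions, and real enforcement would sit in the relevant firewall, gateway, or service-edge layer rather than in application code.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    device_id: str       # identity established earlier (e.g. a certificate fingerprint)
    source_segment: str  # network segment / Purdue level the request originates from
    target_segment: str  # segment being accessed
    protocol: str        # protocol requested
    hour: int            # hour of day of the request (0-23)

# Illustrative policy only: which segment-to-segment paths are permitted, over which
# protocols, and during which hours (e.g. vendor access limited to maintenance windows).
ALLOWED_PATHS = {
    ("vendor-remote", "cell-A"): {"protocols": {"https"}, "hours": range(8, 18)},
    ("cell-A", "site-historian"): {"protocols": {"opc-ua"}, "hours": range(0, 24)},
}

def decide(request: AccessRequest, known_devices: set) -> str:
    """Default deny: unknown devices, unlisted paths, and out-of-policy requests are refused."""
    if request.device_id not in known_devices:
        return "deny"
    rule = ALLOWED_PATHS.get((request.source_segment, request.target_segment))
    if rule is None:
        return "deny"
    if request.protocol not in rule["protocols"] or request.hour not in rule["hours"]:
        return "deny"
    return "allow"

                The asymmetry is the point: default deny with explicit, context-qualified allows, rather than the traditional “inside the firewall, therefore trusted” posture.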

                I’ve shown that we can apply the same zero trust principles to machines that we can apply to users. However, knowing the principles and believing they have value is one thing; finding an appropriate strategy to deliver them in an enterprise context is something completely different. The final post in this series will talk about how we can approach doing this kind of enterprise security transformation in the real world.

                About the author

                Lee Newcombe

                Expert in Cloud security, Security Architecture

                  Making it real: Bringing zero trust to life in your business

                  Lee Newcombe
                  Jul 31, 2025

                  In the previous two blog posts, I’ve written about what “zero trust” means from a more prosaic perspective on the actual outcomes organizations are looking to achieve – a reduction in the impact of any compromise, and dynamic, context-based security fit for the modern world. It’s all a bit abstract though, and perhaps still too funneled through the lens of technology. There’s no point in ivory towers; all of this stuff needs to be deliverable! So, how can organizations go about scoping, developing, and operating this more modern security philosophy?

                  I’m an architect, so clearly the first thing that I’m going to say is that you need to understand the context within which you are working. Why is that so important? Well, the context will identify the strategies and behaviors of the business, the elements in scope, and the stakeholders, technologies, and business processes that your delivery must support. The other obvious reason for starting with the context is that it’s much easier to get to a destination if you know where you are starting from! So, what kind of areas need exploring?

                  • Business context – why do your stakeholders want this change to happen? Which parts of the business are in scope? What are the overall business strategies? This last question isn’t just an abstract alignment exercise; zero trust networking can be particularly helpful in accelerating mergers and acquisitions.
                  • Scope – are you covering the enterprise as a whole? Does that include your operational technology? Data loss protection? How about your fancy agentic AI? Does that include every geography? Any exceptions?
                  • Stakeholder context – do you know who the key stakeholders will be? Who will pay for the change? Who will be the executive sponsor who will enforce alignment and ensure the right behaviors throughout the organization? Who will be impacted by the changes being delivered? Who will see the change as a positive and who will see the change as a negative (either personally or organizationally)? What will your communications strategy look like?
                  • Technology context – can you actually do what you want to do, within the timeframe you want to do it, with what you know of the current and target technology landscapes? What is the overall technology strategy of the organization? Are you cloud-first? Are you best-of-breed or happy to go with single vendor solutions?
                  • Agree, and enforce, the vision. I can’t stress this one enough. Everyone needs to know the target state and why it is the target state. The vision needs to be owned and supported by someone with enough organizational heft to stop deviations from that agreed vision. You may get folks who are not 100% aligned with the rationale, approach, or vision itself – you need a suitable authority to be available to corral dissenters (alongside offering an opportunity for constructive input). How do you get to an agreed vision? Start with an established framework. I like the CISA framework for zero trust, which shows the scope of zero trust and allows you to place any ongoing initiatives within the relevant parts of the framework. Frameworks provide structure. Structure offers the opportunity to create alignment and reduce duplication.
                  • Agree on success criteria and the definition of done. How will you demonstrate that the initiative has successfully completed? What metrics will you use to demonstrate progress along the way? What happens next? How do you maintain ongoing alignment with evolving technology?

                  Okay. So let’s say that we now know who our key stakeholders are, what we want to do, and why we want to do it. We then need to do it. Some thoughts…

                  • Skills. Know when you need specialist support.
                  • Track progress. Yes, project management matters. I’m not going to pretend that this is the bit of the job that I most enjoy, but I do recognize that we need to be able to demonstrate progress to stakeholders. (Seeing milestones hit, or backlog items delivered, is also good for team morale. People like to see their efforts having an impact!) Know your critical path and dependencies, and work backwards to make sure that what you want to achieve is achievable given wider constraints.
                  • Communicate. This ties into the above… you spent a lot of time identifying your stakeholder community, and you really need to keep in touch with them. Let them know how the initiative is proceeding. Ask them for help if you need it – senior stakeholders are often keen to help as it justifies the time they are spending meeting with you.
                  • Delivery methodologies. Pick the right tools for the job – whether project management, architecture frameworks, industry standards, or technology. But don’t be dogmatic and do NOT assume that everyone has the same understanding of what you may think of as standard industry terms. Establish a common taxonomy as part of agreeing on the overall methodologies.
                  • Respect reality. Requirements may change during the course of an initiative. You may encounter unexpected obstacles, perhaps even insurmountable obstacles, during delivery. I’m not going to go into the basics of change management here, but I do want to stress the importance of recognizing when things move from difficult to (practically) impossible. Don’t risk burning folks out trying to do the impossible.
                  • Prepare the organization. Look at your target operating model and any necessary changes to roles, responsibilities, and accountabilities. Train your users. You can have the best technology in the world, but if your users don’t know how to use it or, worse, don’t want to use it, then your program as a whole will be a failure. In short, make sure your organization is ready to accept and use the technology capabilities you are delivering.

                  Much of the above is fairly standard thinking in the transformation and delivery space. However, having spent a few days enjoying some interesting conversations at Zenith Live 2025, it seems that there are still lots of organizations out there that are struggling to get the most out of the technology capabilities that they have available to them. Some of the folks I was chatting with were still struggling to get alignment and consequently experiencing duplication across their organizations, often due to a lack of executive sponsorship. Others were still struggling to sell the benefits of moving towards modern security approaches due to the lack of an overall vision. Some had more technology-focused concerns around integration, which I suspect a comprehensive architectural approach could help to address. I’d like to think that our conversations helped, and the fact that some took photos of the slides that I was chatting them through indicated that at least some of them saw value in the approaches discussed above.

                  And this brings this short series of blogs to an end. My aim was to discuss “zero trust” in more practical, business-focused terms, and to show folks how they can do this stuff in the real world. Please do let me know whether or not I succeeded.

                  You can access blog one here, and blog two here.

                  Lee Newcombe

                  Expert in Cloud security, Security Architecture

                    Capgemini addresses IAM security challenges with new IAM FastTrack

                    Peter Gunning
                    Aug 18, 2025

                    As identity- and password-based cyberattacks increase, the urgency grows for clients to migrate quickly from legacy identity solutions as they transition to zero trust and start adopting AI, all while protecting themselves from increasingly sophisticated attacks. At Capgemini, we know identity governance, security, and productivity can align through innovation, but the process of selecting, procuring, and implementing effective solutions can take years, while the challenges are here and now.

                    Identity tool sprawl has created overlapping policies, features, and functionalities. Managing legacy on-prem identity technology alongside cloud identity remains difficult and requires fresh thinking to navigate today’s identity security challenges.

                    Organizations need to consolidate and standardize identity governance and automation for productivity at speed, so that their security teams have the best chance of protecting the identity attack surface, fast.

                    Risk through complexity

                    The identity lifecycle’s complexity, and the risk it poses to security, often only becomes apparent during a breach, when a rogue identity can gain permissions, explore the network, and find crucial unsecured systems and datastores to expose, export, or encrypt. Security risks come from all angles as we try to accommodate internal and external collaboration needs, machine and human identities, joiner/mover/leaver (JML) journeys, and privileged access.

                    Good control over the identity and access management lifecycles is possible, but needs strong, automated policies and procedures and a transparent control plane.

                    Multiple requirements across different personas

                    Mature organizations will have well-structured identity data stores, constructed so that personas and other logical groups of entities are well-defined and can be administered and controlled according to local criteria defining what they can or cannot do. 

                    Privileged identity management and conditional access policies need to be in place to ensure that access to privileged resources is controlled and that a zero-trust ethos pervades across the organization.

                    Recertification campaigns back this up, ensuring access is restricted to personas who still need it and automatically removed from those who don’t. Segregation of duties policies prevent dangerous combinations of permissions and help demonstrate good governance to auditors.
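
                    To show what a segregation-of-duties check boils down to, here is a minimal sketch over a hypothetical export of entitlement assignments; the entitlement names and data shapes are invented for illustration, and in practice this logic lives inside your identity governance tooling.

from typing import Dict, List, Set

# Hypothetical "toxic" entitlement combinations; real policies come from your IGA tooling.
TOXIC_COMBINATIONS: List[Set[str]] = [
    {"create-supplier", "approve-payment"},               # classic procure-to-pay conflict
    {"modify-firewall-rule", "approve-firewall-change"},  # making vs. approving a change
]

def sod_violations(assignments: Dict[str, Set[str]]) -> Dict[str, List[Set[str]]]:
    """Return, per identity, any toxic combinations it holds in full."""
    violations: Dict[str, List[Set[str]]] = {}
    for identity, entitlements in assignments.items():
        hits = [combo for combo in TOXIC_COMBINATIONS if combo <= entitlements]
        if hits:
            violations[identity] = hits
    return violations

# Example with invented data: "alice" holds both halves of a toxic combination.
print(sod_violations({
    "alice": {"create-supplier", "approve-payment", "read-reports"},
    "bob": {"read-reports"},
}))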

                    Managing third-party identities can be especially challenging, but there are cross-cloud tools available that can bring it back under control, ensuring that “3P” identities can be monitored, recertified, and automatically removed from your AD when they leave the provider, along with any permissions they held.
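
                    The same applies to third-party identities: conceptually, the automation is a periodic sweep that disables accounts whose engagement has ended or whose recertification has lapsed. The sketch below assumes a hypothetical account export and a 90-day recertification cadence purely for illustration; real implementations would be driven by your identity governance platform and the cross-cloud tools mentioned above.

from datetime import date, timedelta
from typing import List, Optional

RECERTIFICATION_PERIOD = timedelta(days=90)   # assumed campaign cadence, for illustration

def third_party_accounts_to_disable(accounts: List[dict], today: date) -> List[str]:
    """Flag third-party accounts whose engagement ended or whose recertification lapsed."""
    to_disable = []
    for account in accounts:
        if account["type"] != "third-party":
            continue
        end_date: Optional[date] = account.get("provider_end_date")
        left_provider = end_date is not None and end_date <= today
        overdue = today - account["last_recertified"] > RECERTIFICATION_PERIOD
        if left_provider or overdue:
            to_disable.append(account["id"])
    return to_disable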

                    But getting all of this in place in a reasonable timeframe can seem overwhelming.

                    Capgemini’s new Microsoft Entra FastTrack

                    Increasingly, Capgemini is contacted by organizations, often already on the Microsoft platform, seeking advice on rapidly implementable solutions to these IAM security challenges, especially as the cybersecurity world changes quickly around us. In response to this demand, Capgemini has developed a Microsoft Entra-specific version of our tried-and-tested FastTrack assessment methodology.

                    Today, many organizations use Microsoft Entra tools, even if only as the main corporate Active Directory. While many CISOs rely on this as the organization’s main identity store and authentication and authorization mechanism, they do not necessarily appreciate the extent to which Entra can be used to enforce good governance of identity and access management processes, or that there may be an opportunity to exploit or extend existing capability to fulfill their requirements.

                    With the frequency of cyberattacks on the rise, it is becoming increasingly important that organizations are able to respond to the threat landscape as quickly as possible, preferably using existing tools and minimizing disruption to collaboration and mission-critical business functions. In many cases, Entra, with its widespread adoption and multi-use-case capability, may be the best option to fulfill these criteria.

                    Launched recently, our Entra FastTrack analyzes an organization’s current state of IAM maturity, assesses its requirements, and creates a detailed report and roadmap based on exploiting or extending existing Microsoft Entra assets, pointing to the fastest path to security and compliance without the need to spend months or years procuring new solutions.

                    This FastTrack leverages Capgemini’s long history of delivering Microsoft implementations across multiple sectors, allowing us to bring our own real-world experience and recommendations to the delivery and to accelerate the migration away from existing legacy infrastructure.

                    Conclusion

                    Capgemini’s new Microsoft Entra FastTrack could be your organization’s best and fastest way to improve its IAM and identity governance and administration (IGA) processes and secure its assets against cyberattacks, without the need for expensive research and deployment of new solutions.

                    About the author

                    Peter Gunning

                    IAM Consultant, Capgemini UK
                    Peter is a seasoned Identity and Access Management (IAM) professional with over 20 years of experience. He has held senior IAM roles across the Finance and Telecommunications sectors, bringing deep expertise in designing and implementing secure, scalable identity solutions. Currently, Peter serves as an IAM Consultant within Capgemini UK’s IAM practice, where he helps clients strengthen their security posture through strategic IAM initiatives.