
Building continuous cyber resilience in the age of AI and quantum risks

Marco Pereira
Sep 30, 2025

In today’s digital economy, cyber resilience is the backbone of trust, innovation, and growth. As digital ecosystems expand and technologies evolve, organizations are shifting from reactive security to proactive resilience. Increasingly, they recognize that being prepared for sophisticated threats – many powered by AI – is not just prudent, but essential for long-term success.

Meanwhile, the rise of quantum computing promises transformative breakthroughs, but also introduces new risks that could challenge today’s cryptographic standards. The future demands a security posture that is continuous, adaptive, and forward-looking – one that drives resilience and trust.

The new cyber reality: AI as both ally and adversary

Artificial intelligence is transforming cybersecurity in two powerful ways:

  • As a threat vector: Adversaries are using AI to create sophisticated phishing attacks, automate vulnerability discovery, and develop malware that adapts in real time. These attacks are faster, stealthier, and harder to detect with traditional defenses.
  • As a defense multiplier: At the same time, AI enables organizations to spot anomalies faster, automate incident response, and reduce the workload on scarce cyber talent. AI-enhanced security operations centers (SOCs) can process massive amounts of telemetry, remove noise, and prioritize threats with precision.
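The noise-filtering and prioritization step can be sketched in a few lines. This is an illustrative toy, not any vendor’s SOC logic: the alert fields, baseline figures, and threshold are all hypothetical.

```python
# Toy SOC triage: score telemetry anomalies, drop noise, rank what's left.
# All names, numbers, and thresholds are hypothetical examples.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Alert:
    source: str
    severity: int      # 1 (info) .. 5 (critical)
    anomaly: float     # z-score of the triggering metric

def z_score(value: float, baseline: list[float]) -> float:
    """How many standard deviations `value` sits above the baseline."""
    if len(baseline) < 2:
        return 0.0
    sigma = stdev(baseline)
    return 0.0 if sigma == 0 else (value - mean(baseline)) / sigma

def triage(alerts: list[Alert], min_score: float = 2.0) -> list[Alert]:
    """Drop noise below `min_score`, then rank by severity and anomaly."""
    kept = [a for a in alerts if a.anomaly >= min_score]
    return sorted(kept, key=lambda a: (a.severity, a.anomaly), reverse=True)

baseline = [120.0, 130.0, 125.0, 118.0, 127.0]   # logins/hour, a normal week
alerts = [
    Alert("vpn-gateway", severity=4, anomaly=z_score(480.0, baseline)),
    Alert("print-server", severity=1, anomaly=z_score(131.0, baseline)),
]
print([a.source for a in triage(alerts)])
```

A real pipeline would learn baselines per entity and feed the ranked queue to analysts; the point here is only the shape of the filter-then-rank loop.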

The implication is clear: cybersecurity strategies must embed AI deeply, not as an add-on but as a core enabler of continuous protection and vigilance.

Quantum: The next disruptive risk

Quantum computing may still be years away from breaking current encryption – but the time to prepare is now. Algorithms that safeguard today’s digital economy – from banking transactions to healthcare records – could be broken once large-scale quantum computers arrive.

Waiting until that moment is not an option. Forward-looking organizations are already beginning to:

  • Assess quantum risk exposure across critical assets
  • Adopt quantum-resistant cryptography in pilots and high-risk areas
  • Build migration roadmaps to post-quantum security
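The first step – assessing quantum risk exposure – can be sketched as a simple inventory classification. Asset names below are invented; the vulnerable-algorithm list reflects the widely stated point that RSA and elliptic-curve schemes fall to Shor’s algorithm, while symmetric ciphers such as AES are only weakened.

```python
# Illustrative sketch: classify a cryptographic asset inventory by quantum
# risk. Asset names are hypothetical examples, not a real environment.
QUANTUM_VULNERABLE = {"RSA-2048", "ECDSA-P256", "ECDH-P256", "DH-2048"}

def assess(inventory: dict[str, str]) -> dict[str, list[str]]:
    """Split assets into those needing post-quantum migration and the rest."""
    report: dict[str, list[str]] = {"migrate": [], "monitor": []}
    for asset, algorithm in inventory.items():
        bucket = "migrate" if algorithm in QUANTUM_VULNERABLE else "monitor"
        report[bucket].append(asset)
    return report

inventory = {
    "payments-tls": "RSA-2048",
    "code-signing": "ECDSA-P256",
    "backups-at-rest": "AES-256",   # symmetric: weakened, not broken
}
print(assess(inventory))
```

In practice this inventory would be discovered automatically (certificates, key stores, protocol scans) and the "migrate" bucket would feed the roadmap in the third bullet.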

The organizations that act early will not only reduce risk but also demonstrate leadership and trust to customers, partners, and regulators.

Continuous resilience: A new operating model

Traditional security models – periodic audits, static controls, and perimeter defenses – are no longer sufficient. Resilience today must be continuous, built on three foundational pillars:

  1. Continuous strategy and GRC: Embedding security and compliance into the fabric of business decisions. From zero trust to proactive risk management, organizations need a governance and compliance model that adapts as regulations and risks evolve.
  2. Continuous protection: Safeguarding IT, OT, and cloud environments with layered defenses. Here, AI plays a pivotal role, augmenting human expertise to detect and neutralize threats before they cause damage.
  3. Continuous vigilance: Always-on monitoring, threat intelligence, and incident response. Cyber Defense Centers operating 24/7 across the globe ensure that no threat goes unnoticed and no incident goes unmanaged.

Together, these pillars form an end-to-end approach to resilience, ensuring organizations can operate with confidence no matter how the threat landscape shifts.

Why cyber resilience must be continuous

The drivers are clear:

  • Speed of threats: AI-driven attacks can unfold in seconds, demanding real-time defenses.
  • Complexity of ecosystems: With hybrid cloud, IoT, and OT converging, the attack surface is broader than ever.
  • Regulatory pressure: New laws from the EU’s NIS2 to sector-specific mandates require continuous compliance.
  • Talent gaps: Skilled cyber professionals remain scarce, so automation and AI must augment human teams.

Continuous resilience is not just about technology – it’s about people, processes, and culture. Organizations that embed cyber awareness across their workforce are far better equipped to resist, respond, and recover.

Capgemini’s perspective

At Capgemini, we help clients navigate this shift by:

  • Designing trust-by-design strategies that align with regulations and business goals
  • Deploying AI-powered protection across IT, OT, and supply chains
  • Operating global Cyber Defense Centers that provide 24/7 vigilance and rapid response
  • Preparing clients for the quantum era with advisory services and post-quantum cryptography pilots

Our approach is holistic, industry-specific, and global – ensuring that resilience is not a one-time milestone but an ongoing capability.

Looking ahead: Building future-ready cyber resilience

Cybersecurity Awareness Month is a reminder that in the digital world we live in, security is no longer optional – it is existential. Organizations that embrace continuous resilience won’t just withstand disruption – they’ll gain a competitive edge by building trust across their ecosystems.

The age of AI and quantum risks is not one to fear, but one to prepare for. Because in cybersecurity, resilience is not a project. It is a journey: continuous, adaptive, and future-ready. Explore how Capgemini helps enterprises build continuous cyber resilience: https://www.capgemini.com/au-en/services/cybersecurity/

About the Author

Marco Pereira


Executive Vice President, Global Head of Cybersecurity
Marco is an industry-recognized cybersecurity thought leader and strategist with over 25 years of leadership and hands-on experience. He has a proven track record of successfully implementing highly complex, large-scale IT transformation projects. Known for his visionary approach, Marco has been instrumental in shaping and executing numerous strategic cybersecurity initiatives. Marco holds a master’s degree in information systems and computer engineering, as well as a Master of Business Administration (MBA). His unique blend of technical expertise and business acumen enables him to bridge the gap between technology and strategy, driving innovation and achieving organizational goals.

    Increasing velocity with the Accelerated Delivery Center®

    David McIntire
    September 29, 2025

    Strengthening the agility of IT departments has evolved to be as critical as cost optimization, as IT increasingly represents the core of business operations.

    The ability of an IT department to rapidly react and adapt as the macro business environment changes is now an absolute imperative.

    This is reflected in the State of the CIO survey conducted by CIO.com earlier this year. Modernizing applications, aligning IT initiatives with business goals, and driving business innovation were all cited as focus areas by more than a quarter of CIOs surveyed.

    Capgemini’s Accelerated Delivery Center (ADC) is specifically designed to enable clients to rapidly scale up the delivery of IT services that include application development and maintenance (ADM) activities. Accelerated Delivery Centers are spaces within key Capgemini delivery locations that enable agile, product-aligned teams to rapidly build and deploy solutions supported by a suite of tools and accelerators. ADCs facilitate collaboration, minimize hand-offs, shape cost efficiency, and enable faster time to market while laying the foundations for large-scale transformations.

    They are built on cross-functional teams for product-oriented deliveries (PODs), leveraging agile processes to improve software delivery while aligning the backlog to the work that drives the highest business value for the client. Business alignment is reinforced when personnel from the client’s business side receive coaching on developing an agile culture and take part in agile ceremonies. This approach supports effective processes and incorporates feedback loops for continuous improvement.

    These delivery PODs are supported by a common, horizontal team that handles the project management and platform engineering elements of continuous integration / continuous delivery (CI/CD) tooling and secured pipelines, maximizing the productivity of the PODs as well as building “Centers of Practice” to enable standardization and industrialization across the enterprise. The DevOps automation platform eliminates manual errors across the flow and delivers containerized immutable infrastructure for increased reliability in test results.

    Delivering for a multinational bank

    Faced with challenges in meeting aggressive delivery timelines and disconnects between IT teams and the business, a multinational bank experienced multiple handovers and unstable deliveries. To resolve the issue, Capgemini deployed its ADC model.

    Capgemini’s ADC scaled up three PODs of seven resources in just two weeks. Utilizing ADC’s reference architecture, the team auto-generated the baseline application platform, automated builds, and created function and performance test cases using Capgemini’s App Swift Solution accelerator.

    This auto-generated, well-structured boilerplate code was production-ready, enabling the developers to follow the same structure and improving team velocity by 30–40 percent. Deployment frequency was also accelerated from quarterly to monthly. All of these combined to improve time to market by 25–30 percent.

    In parallel with greater speed, quality increased: defect counts dropped from 3.0 to 0.2 defects per story point.

    Collaboration with the business improved, as the communication gap was reduced through the use of Rapid Design and Visualization (RDV) user stories. These user stories also enabled and optimized test-driven development (TDD) and behavior-driven development (BDD) practices.

    Moving ADC into the future

    ADC’s structured delivery model and tooling build a solid foundation for adopting emerging agentic and Gen AI capabilities. Leveraging agentic capabilities to automatically translate requirements into designs and test cases, autonomously build code snippets, or execute test cases is increasingly becoming a core component of modern application development.

    As agentic capabilities mature, the ADC delivery model will see even greater acceleration to drive the business value that CIOs are focused on delivering.

    Meet the author

    David McIntire


    ADMnext North America Offer Lead
    As part of the North American ADM Center of Excellence, I focus on developing innovative go-to-market offerings, thought leadership, and client solutions. I possess more than 20 years of experience in both shaping ADM solutions that help clients achieve their business objectives and defining performance management programs that demonstrate the value realized. I also develop thought leadership that enables clients to better understand the current state – and future direction – of the ADM market.

      The next step for scientific discovery: Merging AI and quantum

      Phalgun Lolur
      Sep 24, 2025

      Artificial intelligence (AI) has exhibited an immense impact on scientific discovery in recent years. Materials science and structural biology are two fields that have particularly experienced the benefits of the technology, with tech giants and startups alike leveraging AI to streamline the production of novel materials, accelerate the identification of protein structures, and shorten discovery cycles.

      Yet, beneath the excitement, a critical truth is often overlooked: the future of discovery will not be driven by AI alone. As it stands, AI possesses a number of scientific limitations, including an inability to distinguish between novel discovery and rediscovery, data leakage, and limited explainability.

      The question that many researchers have found themselves asking is: how do we overcome these hindrances while continuing to leverage the power of AI? The answer lies in the integration of AI into existing scientific processes with realistic expectations and rigorous validation procedures. First principles modeling, a scientific approach grounded in quantum mechanics, has emerged as a primary candidate for achieving this outcome while also paving the way for scientists to access the potential of AI and quantum computing.

      Quantum and AI are enhancing our knowledge-based understanding of the world by providing powerful tools to analyze complex data, uncover hidden patterns, and drive scientific discoveries. – Julian van Velzen

      Why quantum matters

      Scientific innovation revolves around the laws of physics. While AI is an optimal tool for recognizing patterns amidst vast datasets, it lacks the ability to understand scientific behaviors in real-world environments.

      First principles modeling, also referred to as ab initio modeling, uses quantum mechanics to more accurately predict material properties and molecular behavior. Not only does this framework ensure a more reliable foundation for experimentation by relying on physics, it also introduces the following four key benefits:

      1. Validation of AI predictions: Quantum mechanical methods like Density Functional Theory (DFT) and Coupled-Cluster theory can provide physics-based validation of AI-generated candidates before organizations invest in costly experiments.
      2. Exploration of novel domains: Quantum mechanics can be used to model systems where limited or no training data already exists, opening pathways to true innovation.
      3. High-quality training data: Simulations informed by quantum mechanics help establish robust datasets that improve AI accuracy over time.
      4. Mechanistic understanding: Unlike black-box AI predictions, quantum methods explain why a material or molecule behaves as it does, enabling smarter, more informed experimentation.
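The validation loop in point 1 can be sketched as a simple filter over AI-proposed candidates. The `mock_dft` stub below merely stands in for a genuine first-principles calculation (real DFT requires specialist packages); the compounds, energies, and stability threshold are hypothetical.

```python
# Illustrative sketch: physics-based screening of AI-generated material
# candidates. The energy lookup is a stub, not a real DFT calculation.
from typing import Callable

def validate_candidates(
    candidates: list[str],
    formation_energy: Callable[[str], float],
    threshold: float = 0.0,
) -> list[str]:
    """Keep only candidates the physics model deems stable
    (formation energy at or below `threshold`, in eV/atom)."""
    return [c for c in candidates if formation_energy(c) <= threshold]

# Stub playing the role of a first-principles code; values are invented.
mock_dft = {"Li2FeSiO4": -0.42, "NaCl3": +0.85, "MgB2": -0.10}.get
stable = validate_candidates(["Li2FeSiO4", "NaCl3", "MgB2"], mock_dft)
print(stable)
```

The design point is the interface: the AI model proposes, the quantum-mechanical model disposes, and only survivors proceed to expensive lab work.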

      In using quantum mechanics as the foundation for discovery, first principles modeling provides an increased level of reliability that AI can’t deliver on its own. Quantum computing promises to accelerate these benefits even further by making experimental processes faster and more efficient, significantly expanding upon the boundaries of what’s scientifically possible.

      The combination of AI and quantum in a seamless continuum is fundamentally redefining how we think about scientific discovery and innovation, allowing us to create more targeted solutions to drive R&D outcomes. – Mark Roberts

      How leaders can get ahead of the curve

      Once leaders understand the full spectrum of value that comes as a result of the concurrence of AI and quantum, the next step is to look beyond incremental improvements and invest in integrated pipelines that leverage both technologies. We’ve outlined five strategic implications that executives must consider as they enhance their discovery pipelines:

      1. Adopt realistic expectations: AI and quantum alone won’t eliminate the need for experimental validation or domain expertise. The technologies should be viewed as an R&D accelerator as opposed to an autonomous source of breakthrough discoveries.
      2. Build hybrid teams: The most successful integrations of AI and quantum come from organizations that develop hybrid teams of data scientists and domain experts. Neither group can achieve the desired results on its own – a reality with significant implications for hiring, organizational structure, and knowledge management.
      3. Use data as a strategic asset: High-quality, scientific data is a competitive advantage. Organizations with proprietary, well-structured datasets will outperform their competitors and should consider data strategy as a top priority.
      4. Invest in computational infrastructure: First principles calculations require vast computing resources. Organizations must integrate high-performance computing capabilities into their pipelines to support AI and quantum mechanical modeling.
      5. Adopt improved validation frameworks: Implement rigorous validation protocols for AI-generated discoveries. Multiple computational and experimental checks should be a standard practice to avoid pursuing false leads that drain resources.

      The path to tomorrow

      The next era of scientific discovery will be defined by integration. AI brings speed and scale, while quantum delivers depth and accuracy. Together, they create a discovery pipeline that is not just fast, but more dependable. As quantum computing capabilities grow, this synergy will only deepen, expanding what’s possible. In acting now by investing in people, data, and infrastructure, leaders can shape breakthroughs that will define the next generation of scientific and technological progress.

      Meet the experts

      Phalgun Lolur


      Scientific Quantum Development Lead
      Phalgun leads the Capgemini team on projects in the intersection of chemistry, physics, materials science, data science, and quantum computing. He is endorsed by the Royal Society for his background in theoretical and computational chemistry, quantum mechanics and quantum computing. He is particularly interested in integrating quantum computing solutions with existing methodologies and developing workflows to solve some of the biggest challenges faced by the life sciences sector. He has led and delivered several projects with partners across government, academia, and industries in the domains of quantum simulations, optimization, and machine learning over the past 15 years.
      Julian van Velzen


      Principal, Head of Quantum Lab
      I’m passionate about the possibilities of quantum technologies and proud to be putting Capgemini’s investment in quantum on the map. With our Quantum Lab, a global network of quantum experts, partners, and facilities, we’re exploring with our clients how we can apply research, build demos, and help solve business and societal problems that till now have seemed intractable. It’s exciting to be at the forefront of this disruptive technology, where I can use my background in physics and experience in digital transformation to help clients kick-start their quantum journey. Making the impossible possible!
      Dr Mark Roberts


      CTO Applied Sciences, Capgemini Engineering and Deputy Director, Capgemini AI Futures Lab
      Mark Roberts is a visionary thought leader in emerging technologies and has worked with some of the world’s most forward-thinking R&D companies to help them embrace the opportunities of new technologies. With a PhD in AI followed by nearly two decades on the frontline of technical innovation, Mark has a unique perspective unlocking business value from AI in real-world usage. He also has strong expertise in the transformative power of AI in engineering, science and R&D.

        How Outcome IQ delivers the excitement of Ryder Cup to billions

        Deepak Juneja
        Sep 25, 2025

        Combining deep insight and personal intimacy to magnify the power of golf

        Arnold Palmer once said, “Golf is deceptively simple and endlessly complicated.” After working on Outcome IQ for several years, I am inclined to agree. The tool, which Capgemini created for the Ryder Cup tournament in 2023 and further developed for the 2025 tournament, embraces the complexity of the sport. That includes the 170 million ways a match play scorecard can be filled out!

        Outcome IQ distills all that complex data into a simple package that works for any fan, be they a hardcore golf devotee, a casual viewer, or anything in between.

        The technology evaluates every stroke, hole, and match in just a few seconds, offering fans a real-time understanding of momentum shifts, strategic decisions, and performance under pressure. And that’s only half the challenge. Once we have the insights, we deliver them across all the channels where fans are watching. Just as important: the insights have to feel tailor-made for each of those channels.
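For intuition on what a real-time probability engine does, here is a deliberately simple Monte Carlo sketch of match-play win probability. It is not Outcome IQ’s actual model – the per-hole probabilities are hypothetical inputs, not tournament data.

```python
# Toy Monte Carlo: estimate the chance the leading side in match play
# finishes ahead, given hypothetical per-hole win/halve probabilities.
import random

def win_probability(holes_left: int, lead: int, p_win_hole: float,
                    p_halve: float, trials: int = 20_000,
                    seed: int = 1) -> float:
    """Share of simulated finishes in which the leading side stays ahead."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        margin = lead
        for _ in range(holes_left):
            r = rng.random()
            if r < p_win_hole:
                margin += 1                      # leader wins the hole
            elif r >= p_win_hole + p_halve:
                margin -= 1                      # leader loses the hole
        wins += margin > 0
    return wins / trials

# A side 2-up with 5 holes to play, evenly matched hole by hole:
print(round(win_probability(5, 2, 0.3, 0.4), 2))
```

A production system would condition these probabilities on player form, course data, and shot-by-shot state, and recompute after every stroke; the simulation skeleton stays the same.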

        The on-course experience

        Golf may not pack out stadiums, but it is very much a spectator sport. In 2023, the event was held in Rome; 2025’s tournament is at the historic Bethpage Black course in Farmingdale, New York; 2027’s in County Limerick, Ireland. Capgemini research shows that worldwide, over a third of sports fans regularly watch games in-venue. Events like the Ryder Cup draw thousands of spectators, some in VIP locations, others in designated media centers.

        These fans get to experience Outcome IQ through a custom-designed user interface that delivers the all-important probability score, along with Gen AI-fueled insights that offer commentary on every shot.

        Armchair viewers – hardcore and casual

        Yet as with any sporting event, the overwhelming majority of fans don’t view it in person. Instead, millions tune into channels like NBC or Sky Sports from the comfort of their home – and Outcome IQ has become a crucial part of this fan experience.

        By putting data-driven competitive probabilities on the screen, the at-home viewer gets a taste of the drama and competitive atmosphere of the course. This is particularly valuable for the casual viewer, who might not know the stakes or even the details of how golf is played. Outcome IQ brings down the barrier, making the sport more accessible to new fans.

        Mobile platforms

        Content is now consumed anywhere – including sports content. According to Capgemini research, 70% of sports fans prefer to consume sport on their smartphones. You can see fans following the latest game on a subway train, in a restaurant, or just walking down the street. These fans don’t just want to replicate the TV viewing experience on a mobile device. Mobile users want features tailored for them that will give them a reason to open one specific app out of thousands. And why shouldn’t they? It’s the second-largest platform for consumption, after TV, and 82% of fans in 2023 said that technology had improved the experience of watching sports.

        The benefit of appearing on mobile devices is that you can vary the format of what you deliver – for example, a dense graphic illustrating current probabilities.

        Outcome IQ also lets you go deeper into analysis, such as identifying key moments. For a tournament like Ryder Cup, where you have several days of action and multiple players, these can be an accessible way to see how the momentum shifted.

        And, of course, a simple notification can grab a fan’s attention with a single statistic.

        In the time since the unveiling of ChatGPT to the public in 2022, AI has made massive inroads into all areas of people’s lives. The technology behind sport and corresponding media is no exception, and we can expect the process to continue for many years to come. In the process, it has shown the potential for this kind of AI-led analysis to be distributed across multiple platforms for different audiences, within and beyond the sporting world.

        For billions of people, golf is now a game intermediated by AI. Outcome IQ will be a familiar sight for a whole generation of fans.

        Authors

        Deepak Juneja


        Informatica Global Capability Leader; Head of Financial Services Data Management and Data Fabric Portfolio, Insights & Data, Capgemini
        Deepak is a Senior Data Management Executive and Chief Data Strategist with over 25 years of experience in the financial services industry, leading teams in defining and operationalizing data and analytics strategies that turn organizations into data- and insight-driven enterprises.

        Gaurav Verma

        Senior Director, Portfolio Leader Data Visualization, Financial Services Insights and Data
        Gaurav Verma is a Senior Leader in Data Analytics with over 18 years of experience in reporting and visualization, specializing in the financial services industry. He excels at converting complex data into clear, actionable insights to drive business growth.


          The true cost of cloud: Managing rising spend without sacrificing innovation

          James Dunn
          Sep 25, 2025

          The on-demand tech paradox

          Across industries – from manufacturing and banking to telecom, life sciences, and public services – organizations are embracing on-demand technologies like cloud, SaaS, and Gen AI to drive agility, innovation, and scale. But with this transformation comes a paradox: the faster you scale, the harder it is to control spending.

          Capgemini’s latest research report, The on-demand tech paradox, reveals that 82% of executives have seen significant cost increases in cloud, SaaS, and Gen AI. And 61% say these costs are impacting profitability. The promise of innovation is real – but so is financial strain.

          The hidden cost spiral

          Let’s step into the shoes of a CIO at a global enterprise. Their teams are deploying Gen AI for R&D, SaaS tools for collaboration, and cloud infrastructure for scalability. But soon, finance flags budget overruns, IT struggles with forecasting, and business units are buying tools independently.

          This isn’t an isolated case. The report shows:

          • 76% of organizations exceeded public cloud budgets (by 10% on average)
          • 68% overspent on Gen AI; 52% on SaaS
          • 59% say cloud waste is a major challenge
          • 58% describe on-demand tech costs as “a big black hole.”

          What’s driving the cost explosion?

          1. Decentralized tech spend
          A significant portion of technology spending is now being driven by business units rather than IT departments. Specifically, 59% of Gen AI and 48% of SaaS expenditures are initiated outside of IT’s control, making it difficult to maintain oversight and governance. Alarmingly, 12% of SaaS spending is completely unmanaged, increasing the risk of redundant purchases and wasted resources.

          2. Reactive cost planning
          Many organizations are adopting cloud-first strategies without adequate cost planning. In fact, 54% of organizations move to the cloud reactively, only considering costs after deployment. This issue is especially pronounced in sectors like public services (65%) and defense (63%), where cost considerations are often an afterthought rather than a foundational part of the strategy.

          3. Underutilized tools and governance gaps
          Despite the availability of cloud cost management tools, only 37% of organizations actually act on the insights these tools provide. Most FinOps teams are focused on day-to-day operations rather than strategic cost management, leading to missed opportunities for optimization and value creation.

          A wake-up call from the FinOps frontline

          J.R. Storment, Executive Director of the FinOps Foundation, offers a sharp reminder:
          “Only considering cost after deployment can lead to unwelcome outcomes – surprisingly high cloud bills, lower product margins, and fewer options for optimization.”

          This insight reflects a widespread issue: cost planning is often an afterthought, even in sectors with high regulatory and operational complexity. The result? Missed opportunities, budget overruns, and underwhelming ROI.

          From cost control to value enablement

          Managing cloud economics isn’t just about cutting spend – it’s about unlocking value. Yet only 2% of FinOps teams cover cloud, SaaS, and Gen AI holistically. And just 42% influence business decisions.

          To shift from reactive cost control to strategic enablement, organizations must:

          • Expand FinOps scope across all on-demand tech
          • Embed cost awareness into architecture and development
          • Align finance, tech, and business on a shared “language of value”

          Five actions to regain control

          1. Shift left on cost planning: Bake cost intelligence into early design and architecture decisions.

          2. Automate cost controls: Use tools for auto-scaling, idle resource detection, and license decommissioning.

          3. Use AI for forecasting and optimization: Spotify uses AI to predict demand and optimize workloads in real time.

          4. Build a culture of accountability: Tag costs to teams, implement chargeback models, and foster shared ownership.

          5. Measure ROI holistically: Move beyond cost metrics to include innovation, productivity, and sustainability.
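Action 2 above – automated idle-resource detection – might be sketched as follows. Resource names, utilization figures, and the threshold are hypothetical; a real implementation would pull metrics from the cloud provider’s monitoring API and trigger rightsizing or shutdown workflows.

```python
# Illustrative sketch: flag resources whose recent CPU utilization suggests
# they are idle. All names and numbers are hypothetical examples.
def find_idle(resources: dict[str, list[float]],
              cpu_threshold: float = 5.0, window: int = 3) -> list[str]:
    """Flag resources whose CPU stayed under `cpu_threshold` percent for
    the last `window` samples - candidates for rightsizing or shutdown."""
    idle = []
    for name, cpu_samples in resources.items():
        recent = cpu_samples[-window:]
        if len(recent) == window and max(recent) < cpu_threshold:
            idle.append(name)
    return idle

usage = {
    "batch-worker-7": [2.1, 1.8, 0.9],     # idle for three samples
    "web-frontend-1": [41.0, 39.5, 44.2],  # busy
    "dev-sandbox-3": [0.4, 0.2, 0.3],      # idle
}
print(find_idle(usage))
```

Paired with tagging (action 4), the flagged list can be routed straight to the owning team, turning a monthly cost review into a continuous control.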

          Remember: Cost is a proxy for sustainability

          53% of executives agree that inefficient on-demand tech usage leads to excessive energy consumption and carbon emissions. Sustainable FinOps – where cost and carbon efficiency go hand-in-hand – is the next frontier.

          Download the full report

          The Capgemini Research Institute’s on-demand tech economics report offers a roadmap to navigate this paradox – backed by data from 1,000 global executives and insights from industry leaders. Download now and discover how to shift from reactive cost control to strategic value enablement.

          About the author

          James Dunn


          Global cloud portfolio lead, Cloud Infrastructure Services
        James is a technically astute and visionary leader with over 15 years of experience driving enterprise cloud transformation. He brings deep expertise in strategic sales, offering co-creation, partner ecosystem management, and technical leadership. James has successfully led global business units across cloud migration, DevOps, platform engineering, and managed services – delivering high-impact outcomes and accelerating cloud adoption at scale.

            Why scaling AI is the key to sustainable business transformation

            Aurelie Lustenberger
            Sep 24, 2025

            AI is set to transform organizations, including in terms of sustainability. Scaling sustainable AI across the enterprise – instead of simply adding isolated use cases – can unlock long-term business value and impact.

            AI is fast emerging as a game-changer for productivity. It’s also opening the door to smarter decision-making and greater agility – particularly when dealing with complex, cross-functional challenges. In sustainability, AI – and especially agentic AI (with sustainability-focused agents) – is already being leveraged for a variety of goals. Organizations are using agentic AI to advance climate strategy, energy efficiency, waste management, compliance reporting, and more. They’re addressing social topics too, such as diversity and inclusion, employee training, and employee mental health.

             Technology made for flexibility

             We know that adopting AI at scale is more operationally efficient, sustainable, and cost-effective than using it only for isolated cases. Scaled AI can drive greater impact and increased value across the company. But large-scale implementation represents a major investment. How can businesses ensure they reap the benefits and avoid the pitfalls? 

            Organizations adopting AI most effectively start by identifying the problems at a strategic level, then work with an expert partner like Capgemini to design, deliver, and deploy a scaled solution. These cloud-based technologies are scalable by nature, tailored in response to business needs.

             Scaling brings increased efficiency

            While there are benefits to running AI agents in isolated use cases, scaling improves efficiency and, with it, sustainability. Starting with a clear plan to launch at scale can help companies avoid issues resulting from uneven implementation. Defining an organization-wide strategy helps establish clear applications, avoid duplicated effort, and prevent the kinds of inefficiencies typically caused by running multiple systems in parallel.

Scaling successfully also means process-wide implementation. AI is most effective when built into a process from end to end, instead of only at one stage. Take a multinational company’s process for the month-end financial close as an example: from subsidiaries submitting their reports, to the global accounting team consolidating them, calculating liabilities, and submitting for CFO approval, there are multiple sequential steps that can drag out the timeline. However, if an AI agent is placed at every step – validating subsidiary data, consolidating figures in real time, flagging issues continuously, and so on – the company can avoid a frantic scramble at month-end. A fully ‘agentified’ process will have smoother step-to-step transitions and fewer bottlenecks.
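
The 'agent at every step' idea can be sketched in a few lines of code. This is a minimal illustration, not a real implementation: the agent functions (validate_submission, consolidate, close_books) and the sample data are invented for the example, and in practice each step would wrap an LLM call or rule engine rather than plain Python logic.

```python
def validate_submission(report: dict) -> dict:
    """Validation agent: flag missing fields as each report arrives,
    rather than discovering gaps at month-end."""
    required = ("entity", "revenue", "liabilities")
    issues = [k for k in required if k not in report]
    return {**report, "issues": issues}

def consolidate(reports: list) -> dict:
    """Consolidation agent: roll up clean figures and surface flagged entities."""
    clean = [r for r in reports if not r["issues"]]
    return {
        "revenue": sum(r["revenue"] for r in clean),
        "liabilities": sum(r["liabilities"] for r in clean),
        "flagged": [r.get("entity", "?") for r in reports if r["issues"]],
    }

def close_books(submissions: list) -> dict:
    """Run the agents continuously instead of in one end-of-month scramble."""
    return consolidate([validate_submission(s) for s in submissions])

result = close_books([
    {"entity": "DE", "revenue": 120, "liabilities": 40},
    {"entity": "FR", "revenue": 95},  # missing liabilities -> flagged early
])
print(result)  # {'revenue': 120, 'liabilities': 40, 'flagged': ['FR']}
```

The point of the sketch is the shape of the pipeline: because validation runs per submission, the FR gap is caught the moment it arrives, not during final consolidation.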

            No matter the use case, AI needs data. With data from across an organization, AI can conduct a richer, deeper, and more critical analysis. Unlike siloed AI solutions, which may miss vital pieces of information, a fully connected approach can manage processes and predict outcomes for a whole company. Equipped with robust data, AI agents can also cross-check data and analysis with other agents for a deeper, more accurate output. 

            Scaled AI also allows for standardization. Data chains and AI that are connected by design ensure operations run smoothly. They ensure accurate data collection at every stage of the value chain and provide outputs that are aligned across the board. 

            External disclosures: scaled AI in action

To improve external disclosure processes – such as CDP questionnaires, investor and shareholder questionnaires, or regulatory reporting like CSRD in Europe – employees must obtain information from both internal teams and suppliers, synthesize the inputs, and then analyze the results.

However, the work required to collect and organize this information is time-consuming and resource-intensive for human employees. AI can do it much more efficiently – freeing up the humans to focus their time and efforts on the next level of optimization and strategy. Specialized agents can be set up by data type and focus area to serve multiple reporting needs, replacing parallel data collection and computation.

            Doing it right the first time

             With strategic, organization-wide AI adoption, businesses can achieve transformational results. Deploying agentic AI at scale can make processes more efficient, unlocking real business value and creating lasting impact across entire organizations. Agentic AI enables companies to not only more easily meet their compliance requirements (like CDP reporting) – it also liberates humans to focus on higher-value work.

By scaling agentic AI with standardized systems and integrated data flows, its benefits can be fully realized. At Capgemini and Microsoft, we know that when AI is implemented with a clear purpose and on a large scale, it acts as a catalyst for sustainable transformation across the organization and beyond. Develop your company’s catalyst today.

            Author

            Aurelie Lustenberger


            Senior Director, Sustainability Performance, Capgemini Invent
Aurélie supports organizations on their sustainable transformation journey. From defining the data strategy for ESG performance to implementing reporting to steer ESG trajectory, she leverages data and analytics to drive sustainable business value for her clients.

              Techceleration: How to lead at the speed of change
              Forward-thinking organizations are aligning strategy and culture to turn tech’s rapid evolution into business value

              Günther Reimann
              Sep 23, 2025

Technology is developing faster than most organizations’ strategies. While you can’t slow the pace of change, you can choose to lead it through focused technology adoption.

              How do organizations stay ahead while technology development is accelerating?

              New technologies emerge at an ever-accelerating speed; traditional operating models and planning cycles are no longer sufficient to keep pace. Nowhere is this more evident than in the field of AI.

Recent data underscores this paradigm shift: Generative AI adoption has increased from 6% in 2023 to 36% in 2025, and AI agent deployment has grown from 4% to 14% over the same period, reflecting technology adoption trends visible across sectors. These significant upticks exemplify the broader trend of accelerated technology diffusion within industry.

As this momentum continues, organizations must not simply respond to change; they must anticipate and orchestrate it. The imperative for proactive leadership in navigating techceleration is clear: those able to architect and execute adaptive strategies will define the future of their industries.

[Infographic: Techceleration – how to lead at the speed of change]

Techceleration refers to the rapidly increasing speed of technological progress. This includes breakthroughs like 5G, generative AI, and quantum computing. These innovations are driving major changes across industries and organizations – changes that must be actively managed.

              Adapting to rapid technological advancement necessitates staying abreast of emerging tools and prevailing trends. But it also requires embedding a strategic, business-aligned approach to technology and evolving how you operate, compete, and create value. 

              AI stands in the middle of techceleration and demonstrates how innovation can quickly translate into impact. Instead of advancing incrementally, AI drives comprehensive transformation at an accelerated pace.  

              It can streamline operational IT processes to reduce costs and enhance digitization, while also powering real-time, personalized support at customer and employee touchpoints to redefine service and experience. 

              From technology adoption to technology anticipation 

              To create business value through techceleration, organizations must adopt an innovation-led approach where technology drives strategic goals. This approach should include:

              • Implementing a strategic and hands-on technology watch to actively monitor emerging technologies in order to drive innovation.
              • Scaling the use of AI to accelerate transformation.
              • Leveraging on-demand tech solutions to boost efficiency.

              Techceleration calls for a strategy that clearly identifies where value can be created and gives the organization the flexibility to shift direction quickly.

              As part of this strategy, the IT function should establish itself as a business partner by proactively identifying technological trends, introducing innovative ideas, and supporting the organization in adopting and scaling these advancements efficiently.

              Achieving this transformation necessitates the implementation of a governance and delivery model attuned to business objectives.

              Implementing a strategic and hands-on technology watch

Leading companies achieve strategic advantage by recognizing and implementing emerging technologies at optimal times – when these innovations are sufficiently advanced to support scaling yet remain novel enough to confer a competitive edge. Achieving this requires a structured approach across four phases:
[Infographic: Techceleration – how to lead at the speed of change, part 2]

An innovation radar, customized for an organization, helps systematically identify, evaluate, and prioritize emerging technologies. This strategic tool directs efforts toward technology adoption trends, business needs, and internal strengths to support exploration and implementation.

Exploration

This phase includes scanning for transformative technologies, anticipating their impact, and aligning disruptive trends with strategic goals. The Innovation Radar identifies focus areas like trending tech, organizational needs, and network expertise to gather relevant insights continuously.

Experimentation

Here, organizations test emerging technologies through pilot projects and prototypes. The goal is to understand practical applications, benefits, and operational requirements. For example, a financial firm might explore quantum computing to solve complex problems. The Innovation Radar combines insights from different areas to create practical use cases.

Exploitation

This phase converts technology into business value by scaling at the optimal time – when reliable and ahead of competitors. Organizations manage risks during integration, aided by the Innovation Radar for structured decisions, stakeholder agreement, and budgeting.

              After completing the exploration, experimentation, and exploitation stages of a technology watch, organizations can move forward with implementing an appropriate technology. This ensures alignment with business objectives and scalability readiness.

When implementing a technology, an organization must balance the pursuit of cutting-edge innovation with the realities of operational integration and risk management.

              To strike this balance successfully, an organization might want to consider the following initiatives:

              • Executive briefings to enable synchronization with overall strategy.
              • Strengthening tech innovation teams with strategists and specialists.
              • Empowering diverse, passionate teams with possibilities to explore new tech.
              • Creating internal platforms to collect and invest in promising ideas.
• Collaborating with startups, e.g., via venture funds.
              • Building ecosystems with partners, suppliers, and industry players to drive collective innovation.

Scaling Gen AI with technology adoption

              Techceleration has rapidly expanded the digital capabilities of organizations, but it has also raised the stakes for how emerging technologies are adopted. 

              Gen AI is a prime example. The question is no longer whether to adopt Gen AI, but how to do so effectively and at scale. This shift demands more than technical experimentation. It requires a strategic framework that integrates Gen AI into organizational design, talent development, partnerships, and data infrastructure. 

              The potential is substantial. Internally, Gen AI can streamline operational processes such as software engineering and application lifecycle management, reducing IT costs while advancing digital maturity. Externally, it enables more personalized client interactions, moving organizations closer to genuine client-centricity. 

              A global manufacturing conglomerate partnered with Capgemini to address supply chain inefficiencies caused by unpredictable market demand. To solve this, Capgemini developed a Gen AI-powered chatbot and forecasting engine that provides real-time insights and highly accurate demand predictions. As a result, the client was able to optimize warehouse logistics, reduce inventory losses, and significantly boost profit margins. Following the success of the initial prototype, the solution is now being scaled globally, demonstrating the transformative potential of Gen AI in supply chain management. 

              Yet, scaling Gen AI is complex. Transitioning from proof-of-concept to production-ready solutions creates challenges in data governance, availability, and quality. Organizations must define a clear implementation roadmap, supported by credible business cases and ongoing validation of Gen AI outputs. Addressing bias and ensuring transparency in AI-generated content is essential to building trust and sustaining value. 

              On-demand tech value 

              Techceleration has made the value of on-demand technology more urgent than ever. As technology adoption outpaces traditional planning cycles, the pressure to extract measurable value from digital investments has intensified. This shift has elevated the role of technology in business strategy and has reframed how its value is assessed. In this context, the rise of on-demand technology presents both a challenge and an opportunity. 

              To begin with, the flexibility and scalability of on-demand IT, enabled by cloud services and consumption-based models, have become indispensable. Yet, this very flexibility introduces a new layer of complexity. As organizations embrace these models, they must contend with fluctuating costs, fragmented systems, and the need for continuous oversight. 

[Infographic: Techceleration – how to lead at the speed of change, part 3]

What was once a straightforward budgeting exercise has evolved into a balancing act between performance, cost, and business need. Organizations that have developed the capability to link technology spend directly to business value are putting themselves at a clear advantage. The advantage lies not in cost-cutting per se, but in precision. By making data-driven decisions about where and how to invest, they optimize their technology stack as well as their strategic outcomes. This approach transforms IT from a support function into a driver of competitive advantage.

              However, the path to this level of maturity is not without obstacles. As consumption-based pricing becomes the norm, the ability to forecast and monitor total technology spend becomes essential. Without robust governance, organizations risk creating a fragmented landscape. Strong technology adoption strategies keep ownership clear and value visible. Financial discipline alone is insufficient. It is important to foster a culture of accountability that encourages teams to consider both cost and carbon impact. This cultural shift ensures that optimization efforts are sustainable and aligned with broader organizational goals. 

              Accelerate with technology adoption strategies

              Techceleration demands speed as well as strategy. By embracing innovation, anticipating change, and embedding scalable solutions like Gen AI and on-demand tech, organizations can unlock sustainable value.

              Capgemini recognizes the complex challenges organizations face today. Drawing on decades of experience partnering with clients to drive digital transformation, we’re uniquely positioned to provide practical support and strategic guidance as businesses adapt to the rapid pace of technological change. 

              In the next blog of this series, we’ll explore how softwarization is reshaping every layer of business.

              Meet our expert

              Günther Reimann


              Vice President, Global Head of Inventive & Sustainable IT
Günther Reimann is a practical strategist for business technology and digitization with over 20 years of experience. Günther drives client growth through purpose-led IT transformation, competitive capabilities, and innovation. Leading Germany’s Business Technology portfolio and Inventive & Sustainable IT globally, he champions resilient tech and the strategic acceleration of the digital (r)evolution.

                Stay informed

                Subscribe to get notified about the latest articles and reports from our experts at Capgemini Invent

                How outcome-based sales are transforming agribusiness partnerships

                Dr. Arne Bollmann
                Sep 22, 2025

                Outcome-based sales and servitization are changing collaboration in agriculture, placing greater value on shared goals and measurable outcomes

Innovation continues to shape the agricultural sector, and the way agricultural input providers operate is evolving. Instead of focusing solely on selling products, many are shifting toward offering services and agribusiness solutions that emphasize results and collaboration. Today, they have evolved into comprehensive service providers, offering support services alongside a wide range of products, including:

                • Seeds and traits
                • Seed treatments
                • Herbicides, fungicides, and insecticides
                • Biologicals
                • Digital tools and data-driven solutions

                This approach, known as outcome-based sales (OBS) or profit lever servitization, helps companies and farmers collaborate on shared objectives, focusing on proven results rather than transactions.

                By integrating advanced technology, agribusiness consulting, and data-driven analytics, organizations can deliver tailored recommendations that optimize yield and profitability. This strengthens long-term partnerships and positions agribusiness companies as strategic allies in driving sustainable agricultural growth.

                The agricultural landscape has become increasingly complex, with farmers facing challenges from climate change, resource scarcity, and evolving consumer demands.  

Meanwhile, falling prices and generic alternatives that undermine the perceived value and uniqueness of branded products are forcing major agribusiness firms to reinvent themselves to remain competitive.

                In response to the multifaceted challenges, agribusiness corporations have expanded their offerings to provide comprehensive agribusiness solutions. This change isn’t just about adding more products or services. It’s an intentional move to adapt to shifting market demands. One of the driving forces behind this change is servitization.  

                Servitization: An emerging trend in agribusiness strategy 

Servitization is a strategy that shifts the emphasis from simply selling products to delivering holistic, systemic solutions that combine goods with value-added services.

                In agribusiness strategy, servitization means moving beyond supplying seeds, fertilizers, or machinery to providing integrated services that enhance farmers’ productivity, efficiency, and sustainability. These agribusiness solutions include: 

                • Advanced data analysis 
                • Cutting-edge sensor technology 
                • Expert agronomic consulting 

                The concept of servitization represents a fundamental shift in how agribusiness corporations interact with their customers. Instead of one-time transactions, these companies are now fostering long-term relationships built on continuous value delivery. For instance, advanced data analysis services might involve real-time monitoring of crop health using satellite imagery and IoT sensors, allowing for early detection of pest infestations or nutrient deficiencies.  

Another example of servitization in agriculture is expert agronomic consulting. This might take the form of AI-powered decision support systems that provide personalized recommendations based on a farm’s unique conditions and historical performance data.

Alongside servitization, outcome-based sales (OBS) is also playing a major role in the transformation of the agricultural industry.

[Infographic: Harvesting the value of outcome-based sales for agriculture]

                Outcome-based sales: A win-win partnership

                OBS is an approach to building sustainable partnerships between agribusiness corporations and farmers. Here’s how it works: 

                • Participation model: Farmers share data from sowing to harvest with the agribusiness corporation. 
                • Outcome promise: The corporation guarantees specific results, such as a weed-free field. 
                • Data integration: Farmer-specific data is combined with historical and current data, including satellite and weather information. 
                • Tailored recommendations: Based on this comprehensive data analysis, the corporation provides customized advice to achieve the promised outcome. 
                • Risk sharing: If the agreed-upon metrics aren’t met, the farmer receives compensation. 

                The OBS model represents a radical departure from traditional agribusiness practices. It aligns the interests of the corporation and the farmer in unprecedented ways. For example, instead of simply selling a herbicide, a company might guarantee a certain level of weed control. This shifts the focus from product features to actual on-farm results.  

                The data sharing aspect is crucial, as it allows the corporation to continually refine its recommendations based on real-world performance. This could mean adjusting application rates based on soil type, weather conditions, and crop growth stage. The risk-sharing component is particularly innovative, as it demonstrates the corporation’s confidence in its products and services while providing farmers with a safety net. This model has the potential to transform agriculture from a largely transactional industry to one built on deep, data-driven partnerships. 
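
The settlement logic behind such a risk-sharing guarantee can be sketched in a few lines. This is an illustrative example only: the threshold, premium, and compensation terms are invented for the sketch, and real OBS contracts define their own metrics, verification methods, and payout formulas.

```python
def settle_obs_contract(guaranteed: float, measured: float,
                        premium: float, comp_rate: float) -> float:
    """Return the payout to the farmer (0.0 if the outcome promise was met).

    guaranteed: promised outcome level, e.g. 95 (% weed control)
    measured:   verified outcome achieved on the field
    premium:    what the farmer paid for the outcome guarantee
    comp_rate:  compensation per percentage point of shortfall
    """
    shortfall = max(0.0, guaranteed - measured)
    # Cap compensation at the premium paid -- one plausible risk-sharing design.
    return min(premium, shortfall * comp_rate)

# Promise met: no payout.
print(settle_obs_contract(95.0, 97.2, premium=1000.0, comp_rate=150.0))  # 0.0
# Promise missed by 5 points: farmer is compensated.
print(settle_obs_contract(95.0, 90.0, premium=1000.0, comp_rate=150.0))  # 750.0
```

The cap on compensation is a design choice, not a given: it bounds the supplier’s downside while still giving the farmer the safety net described above.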

                Personalization as a key factor 

OBS relies heavily on the exchange of data and the use of personalized insights. This approach builds trust and loyalty through individualized support. It also allows for tailored product recommendations and helps farmers mitigate risks from unpredictable external factors.

                Personalization in agriculture improves customer satisfaction as well as increasing the effectiveness of farming practices. By leveraging big data and advanced analytics, agribusiness corporations can provide insights that are tailored to the specific conditions of each field, or even each part of a field. 

                This level of granularity allows for precision agriculture on an unprecedented scale. For instance, a farmer might receive recommendations for variable-rate seeding based on soil fertility maps, or get alerts about potential disease outbreaks based on local weather patterns and crop susceptibility. This personalized approach not only optimizes resource use and maximizes yields but also helps farmers navigate the increasingly unpredictable conditions brought about by climate change. 

                Why OBS and servitization matter now 

The importance of adopting OBS and servitization models is clear. As agriculture faces increasing pressure from population growth, climate change, and limited resources, the need for more efficient and sustainable farming practices is critical.

                OBS and servitization models address these challenges head-on by fostering a collaborative ecosystem where continuous improvement is the norm and offering: 

                • Market differentiation: In a commoditized market, service-based models provide a competitive edge. 
                • Sustainable revenue: Continuous service models ensure steady income for agribusiness corporations. 
                • Risk mitigation: Shared responsibility leads to better outcomes for both parties. 
                • Innovation driver: The model encourages ongoing development of better products and services. 
                • Data-driven agriculture: Facilitates the shift towards more precise, efficient farming practices. 

                For agribusiness corporations, the shift to service-based models provides a buffer against the commoditization of physical products and creates opportunities for developing new revenue streams.  

                For farmers, these models offer access to cutting-edge technologies and expertise that might otherwise be out of reach, potentially leveling the playing field between small and large operations. Moreover, the focus on data-driven decision-making aligns perfectly with the broader trend of digitalization in agriculture, positioning early adopters at the forefront of the industry’s transformation. 

                From vision to reality 

The transition from traditional product-centric models to OBS is already underway. Leading agribusiness corporations are piloting OBS programs, with promising results.

                For instance, some companies are offering leaf health guarantee programs for certain crops, where farmers pay a premium for crop protection products and agricultural support and receive compensation if the leaf health falls below a certain threshold. These early initiatives are showing significant promise, with participating farmers reporting improved yields, reduced input costs, and better risk management. As these programs mature and expand, we can expect to see a ripple effect throughout the industry, driving wider adoption of servitization and OBS models. 

                As data analytics capabilities improve and farmers become more tech-savvy, we’re likely to see rapid adoption of these models across the industry. 

                Adapting to change 

                The shift towards servitization and OBS in agriculture is a fundamental reimagining of how the industry operates. For agribusiness corporations, embracing these models means investing in new capabilities, from data analytics to customer relationship management. It requires a cultural shift towards greater transparency and collaboration.

                For farmers, it means being open to new ways of working, sharing data, and making decisions. The potential rewards are substantial: increased yields, reduced environmental impact, and improved economic stability.

[Infographic: Harvesting the value of outcome-based sales for agriculture, part 2]

                The agricultural sector is approaching a phase where data, services, and outcomes are taking center stage. As these models gain traction, we can expect to see a cascade of innovations in areas like predictive analytics, autonomous farming systems, and blockchain-based supply chain management.

                Driving sustainable growth together

                Outcome-based sales and servitization are reshaping agribusiness, driving collaboration, innovation, and measurable results. As agribusiness corporations and farmers embrace these models, integrating them into a forward-looking agriculture business plan will be key to long-term success and sustainable growth.

                Meet our experts

                Dr. Arne Bollmann

                Arne Bollmann

                Senior Manager, Agribusiness & CropScience, Capgemini Invent
                Arne Bollmann brings extensive experience in agriculture, crop science, strategy and corporate development. He has led strategic initiatives at Capgemini Invent and KWS, combining deep industry knowledge with consulting expertise. His background spans marketing, sales, R&D and agroservice, alongside hands-on management of his family farm, driving innovation and sustainable transformation across global markets.

                  FAQs

Yes. Outcome-based selling (OBS) ties price to the results that matter, such as higher yields, cleaner fields, or lower input costs. When incentives align around measurable outcomes, providers invest in the agronomy, analytics, and service discipline that lift productivity and reduce waste. That, in turn, improves margins for the farm while giving suppliers more predictable, recurring revenue. In practice, OBS operationalizes well-known efficiency levers from precision agriculture, then adds risk-sharing and performance guarantees on top. The result is a clearer business case, better retention, and a stronger agribusiness strategy.

                  References 

                  Capgemini — Servitization (solution page) 

                  Capgemini — Servitization: (Re)Dawn of the XaaS 

                  Capgemini — Digital Core for Enterprise CXOs 

                  U.S. GAO — Precision Agriculture: Benefits and Challenges (2024) 

World Bank — Data-Driven Digital Agriculture (brief)

OBS is already visible in the market. Bayer piloted performance-based pricing that ties payment to outcomes such as weed-free or disease-free fields. Syngenta’s AgriClime shares weather risk by refunding spend if rainfall drops below agreed thresholds. Indigo Ag pays farmers per verified ton of carbon removal or emissions reduction, turning sustainability outcomes into cash flow. These models bundle products, digital tools, and agribusiness consulting into practical agribusiness solutions that farmers can plan for in an agriculture business plan.

                  References 

Successful Farming — Bayer outcome-based pricing

                  Syngenta — AgriClime FAQ 

                  Indigo Ag — Carbon by Indigo 

                  Capgemini Invent — Sowing innovation in farming (PDF) 

                  Capgemini Invent — Digital farming booklet (PDF) 

OBS rewards measured outcomes rather than input volumes, so it is naturally compatible with sustainability objectives. Contracts can be tied to soil health, nutrient use, water efficiency, or verified emissions reductions, with shared dashboards that track progress. This turns sustainability from a cost into value creation, supported by transparent data, repeatable methods, and credible assurance. The same data backbone that underpins OBS also strengthens supply-chain reporting and Scope 3 management, reinforcing resilience and enabling more scalable agribusiness solutions.

                  References 

                  Capgemini — Building sustainable value chains (summary) 

                  Capgemini — Building sustainable value chains (playbook PDF) 

                  Capgemini Research Institute — A world in balance 2024 

OECD — Making agri-environmental payments more cost-effective

                  FAO — The State of Food and Agriculture 2022 

Farmers get lower risk, clearer economics, and access to tools and advice that raise yields and cut inputs. Suppliers gain deeper loyalty, steadier revenue, and a sharper feedback loop to improve offers. In short, OBS aligns incentives around customer results. For marketing and commercial teams, this becomes a durable agribusiness strategy: package the right analytics, advisory, and guarantees into service tiers that are easy to adopt, then measure and iterate. For farmers, it is practical: a single partner accountable for outcomes, with costs that fit the agriculture business plan.

                  References 

                  Capgemini — Servitization (solution page) 

                  Capgemini — Servitization: (Re)Dawn of the XaaS 

                  Capgemini — Digital Core for Enterprise CXOs 

                  Capgemini Invent — Sowing innovation in farming (PDF) 

Industrial Marketing Management — Outcome-based contracting (accepted manuscript)

                  U.S. GAO — Precision Agriculture: Benefits and Challenges 

Three areas require deliberate design. First, data governance and trust: farmers want clarity on ownership, access, and use of their data. Second, capability and cost: OBS needs sensorization, integration, analytics, field support, and billing upgrades, plus robust agribusiness consulting to onboard customers. Third, contract design and measurement: outcomes must be specific, attributable, and verifiable across multiple parties. Teams that address these head-on can scale outcome-based agreements with fewer surprises.



                  Unleashing engineering potential with generative AI

                  Capgemini
                  Sarah Richter, Hugo Brue, Udo Lange
                  Sep 18, 2025

Over recent months, companies have intensified their adoption of Gen AI. This, along with Gen AI’s rapid evolution, has led to new practices and roles for engineers.

                  Although generative AI (Gen AI) initially gained recognition in engineering through applications in software development, its scope has broadened to help tackle today’s major engineering challenges. According to Gartner, Gen AI will require 80% of the engineering workforce to upskill through 2027.   

                  In today’s market context, Gen AI for engineering enables companies to optimize processes, reducing time-to-market by speeding up the production of engineering deliverables and improving product quality and compliance by automating certain quality control tasks. These contributions are especially critical since products and ecosystems are increasingly complex and regulated, with more stakeholders and highly personalized requirements.

In this blog, we explore effective strategies for integrating Gen AI technologies, offer practical recommendations to maximize their impact, and share key insights on how to unlock the value they bring to engineering.

                  How to unlock the potential value of Gen AI in engineering  

                  Companies are struggling to unlock the potential value of Gen AI in engineering. This is not caused by a lack of use case ideas, but rather the lack of an efficient end-to-end assessment supporting the implementation of suitable use cases into productive systems. In addition, companies face difficulties in upscaling their implemented use cases effectively across the entire engineering department. 

                  Along the engineering value stream, we have been supporting our clients to integrate Gen AI successfully and, more importantly, to maintain profitability sustainably. In this blog, we share our top three lessons to help you reach your own goals with Gen AI. 


                  Evaluation process 

                  Choose the right use cases to be implemented by using measurable assessment gateways 

                  To get the most value from Gen AI over time, it’s important to choose the right use cases and develop a strategic order of pursuit. Many companies try to connect Gen AI’s impact to their KPIs, but they often find it hard because of the overwhelming variety of application options. To act effectively, companies should use goal-oriented evaluation criteria as gateways within the use case decision process. 

To avoid being overwhelmed by too many options, it’s important to use clear, specific criteria that go beyond simple effort-versus-benefit considerations.

Beyond incorporating specific evaluation criteria, task and process silos must be broken down in order to optimize complex, interdependent engineering processes. We therefore recommend mapping each potential use case to the value stream throughout the entire V-cycle. This makes ideas easier to evaluate and shows where different parts of the core engineering process can support each other to create extra value.

To compare company readiness against individual engineering use cases, we have developed a comprehensive assessment method that considers eight dimensions: strategy, governance and compliance, processes, data, IT infrastructure and security, employees, cost and investment readiness, and ethical and ecological impact.

At a minimum, however, the criteria considered within the Gen AI use case selection process should cover four focal points:

• Functional criteria: The use case delivers measurable business impact within engineering workflows. 
                  • Technical criteria: The necessary data and foundational technical requirements are available to implement the use case. 
                  • Regulatory criteria: The use case complies with legal and regulatory standards, such as the EU AI Act and internal company policies. 
                  • Strategic criteria: The use case aligns with and enhances the engineering value stream. 

Additionally, it is highly effective to set one main KPI that makes use cases comparable across the overall engineering Gen AI portfolio; this is an important part of establishing strategic fit.
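As a minimal sketch of how such gateway-based screening might be operationalized (the use case names, boolean gateways, and KPI values below are invented for illustration, not a Capgemini tool):

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    gateways: dict       # criterion -> bool: functional, technical, regulatory, strategic
    kpi_estimate: float  # shared main KPI, e.g., engineer-hours saved per month

REQUIRED_GATEWAYS = ("functional", "technical", "regulatory", "strategic")

def screen(candidates):
    """Keep only use cases that pass every gateway; rank survivors by the shared KPI."""
    passed = [u for u in candidates
              if all(u.gateways.get(g, False) for g in REQUIRED_GATEWAYS)]
    return sorted(passed, key=lambda u: u.kpi_estimate, reverse=True)

portfolio = [
    UseCase("RAG over requirements docs",
            {"functional": True, "technical": True,
             "regulatory": True, "strategic": True}, 120.0),
    UseCase("Auto-generated test reports",
            {"functional": True, "technical": False,   # required data not yet available
             "regulatory": True, "strategic": True}, 200.0),
]
ranking = screen(portfolio)  # only the first use case passes all four gateways
```

In practice, each gateway would be the outcome of a structured assessment across the eight readiness dimensions rather than a simple boolean flag.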

                  Implementation specifications 

                  Use the full range of relevant data by shifting the focus from engineering text to engineering data 

In the market, we see successfully implemented Gen AI engineering use cases in two key areas: the beginning and end phases of the V-cycle. Exemplary use cases can be found in requirements engineering and compliance demonstration, both of which are still highly document-centric and text-based. The most common applications here are conversational agents based on retrieval-augmented generation (RAG) technology. RAG solutions represent one of the most repeatable and transverse applications across the entire value chain, which is why they have been at the core of Gen AI strategies in recent years.

                  Both application areas are ideal for starting your Gen AI implementation journey because the solutions are mature, and the results are significant. Our client engagements suggest that by using Gen AI capabilities on technical documentation (e.g., retrieval and summarization), it is possible to generate high efficiency gains and reduce the time engineers spend accessing knowledge and information in the right technical context by up to 50%.
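To illustrate the retrieval step behind such a RAG assistant, here is a deliberately simplified sketch using bag-of-words similarity; a production system would use embedding models and a vector store, and the documentation snippets below are invented:

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase word tokens; punctuation-bearing tokens are dropped for simplicity.
    return [t for t in text.lower().split() if t.isalnum()]

def score(query, chunk):
    """Cosine similarity over simple bag-of-words vectors."""
    q, c = Counter(tokenize(query)), Counter(tokenize(chunk))
    dot = sum(q[t] * c[t] for t in set(q) & set(c))
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in c.values())))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, k=2):
    """Return the k chunks most similar to the query."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

docs = [
    "Torque limits for the M8 fastener are defined in section 4.2.",
    "The cooling loop uses a redundant pump per safety requirement SR-17.",
    "Release notes: UI color palette updated.",
]
context = retrieve("what is the torque limit for the fastener", docs)
# 'context' would be prepended to the engineer's question in the LLM prompt.
```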

Even though text-based LLMs work very well, the full potential of Gen AI in engineering has not yet been unleashed. Most engineering data is not available in pure text format, so achieving a higher level of value generation requires overcoming the limitations of a text-based knowledge base. Within engineering, this means including the vast range of information formats from various data sources across the product lifecycle (e.g., visualizations, diagrams, sensor data, GPS, or even sounds). For engineering applications, we want to highlight the extension of large language models (LLMs) with large multimodal model (LMM) capabilities. Especially for complex problem definitions, LMMs show high potential for significant improvements in Gen AI usage and operational efficiency across the product development process. We are rapidly discovering how generative AI for data engineering can transform everyday tasks.

                  Applying Gen AI 

                  How to scale up to unlock the full value potential  

Implementation of generative AI for engineering is steadily gaining focus. Today, we see companies building a full use-case funnel realized as many small Gen AI implementations, each addressing a specific engineering task and yielding small, often local, value gains.

Following the rule of “start small, think big,” we believe in first building conviction in the added value, and acceptance of it, by implementing such quick wins. Start with simple, cost-sensitive use cases, such as RAG, and progressively extend to more complex ones. However, we recommend always keeping the bigger picture of scaling targets in mind.

An overall AI strategy helps guide the starting process, connect existing Gen AI applications, and identify synergy potential from the beginning.

Scaling becomes crucial when using levers to strengthen and expand value generation. Successful upscaling of Gen AI implementations can be executed vertically, by expanding the application area, or horizontally, by linking different Gen AI use cases. Because connecting previously local solutions throughout the development process is highly difficult, we want to share the scalability factors we integrate into Gen AI implementation planning and execution.

                  Future developments and fields of action

As Gen AI rapidly transforms the engineering sector, hybrid AI emerges as a key solution to meet its specific demands. Simultaneously, advances in multi-agent systems and the multimodal capabilities of language models open up new perspectives for process automation and optimization.

The hybridization of AI capabilities (hybrid AI) to address the specificities of the engineering field

LLMs are intrinsically statistical, so investments in Gen AI solutions still carry a risk of failure or ineffectiveness. One approach to mitigating these risks is to combine the capabilities of Gen AI with more traditional, deterministic AI methods. This combination leverages the strengths of both approaches while addressing their respective limitations, enabling the development of more robust and tailored AI systems. In engineering, where some activities inherently require reliability, predictability, and repeatability, this synergy proves particularly relevant for critical challenges such as system and process safety.
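A minimal sketch of this hybrid pattern, with invented safety limits and a stubbed generative step: the statistical model proposes, and a deterministic rule layer gates the proposal before it enters the engineering flow.

```python
# Illustrative hybrid-AI pattern (our own sketch, not a specific product).
SAFETY_LIMITS = {"max_operating_temp_c": 85.0, "min_wall_thickness_mm": 1.2}

def deterministic_check(proposal):
    """Hard, repeatable constraints that a statistical model must never override."""
    violations = []
    if proposal["operating_temp_c"] > SAFETY_LIMITS["max_operating_temp_c"]:
        violations.append("operating temperature above limit")
    if proposal["wall_thickness_mm"] < SAFETY_LIMITS["min_wall_thickness_mm"]:
        violations.append("wall below minimum thickness")
    return violations

def llm_propose(stub_response):
    # Stand-in for a Gen AI call; a real system would query an LLM here.
    return stub_response

proposal = llm_propose({"operating_temp_c": 92.0, "wall_thickness_mm": 1.5})
issues = deterministic_check(proposal)
accepted = not issues  # rejected here: temperature exceeds the deterministic limit
```

The design choice is that the generative component only ever proposes; acceptance is decided by rules that are reliable, predictable, and repeatable.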

Recent advances in LLMs and LMMs have marked a significant milestone in the improvement of AI agents. These agents are now capable of planning, learning, and making decisions based on a deep understanding of their environment and user needs. As new architectures and use cases continue to emerge, the transition toward multi-agent systems that collaborate in increasingly complex contexts is progressing further.

We will witness the increasing integration of specialized agents to handle specific tasks, such as requirement extraction, requirement quality control, or requirement traceability reconstruction. Each agent will be able to perform a particular task, and these agents can be orchestrated by a “super-agent” through complex workflows. This agent-based approach will enable greater automation of processes, making them more streamlined and efficient while reducing the need for human oversight.
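The orchestration idea can be sketched as follows; the specialists here are stand-in functions with invented logic, whereas real specialist agents would wrap LLM calls:

```python
def extract_requirements(document):
    # Stand-in for a requirement-extraction agent.
    return [line.strip() for line in document.splitlines()
            if line.strip().startswith("REQ")]

def check_quality(requirements):
    # Stand-in for a quality-control agent: flag vague wording.
    return [r for r in requirements if "as needed" in r.lower()]

def orchestrate(document):
    """'Super-agent': route work through the specialists and assemble the result."""
    reqs = orchestrated = extract_requirements(document)
    flagged = check_quality(orchestrated)
    return {"requirements": reqs, "quality_flags": flagged}

doc = """Intro text
REQ-1: The pump shall restart within 2 s.
REQ-2: The valve shall open as needed."""
result = orchestrate(doc)  # REQ-2 is flagged for vague wording
```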

                  However, this reduction in supervision could increase the risk of accidents. Therefore, special attention must be given to assessing the implications of AI agents and multi-agent systems in terms of safety, reliability, and societal impact. Moreover, there should be a focus on technical solutions and appropriate governance frameworks to ensure positive and lasting transformations in engineering.

                  LLMs are no longer limited to analyzing text. It is now possible to process other types of content, such as images, sounds, and diagrams. Much of the critical information in engineering reports is presented in visual form, and multimodal capabilities will allow this data to be retrieved and exploited more effectively. This will enhance the performance of conversational agents and improve the relevance of their analyses.

Software vendors are actively working to integrate Gen AI modules directly into their tools, especially for generative design. The goal is for these features to become an integral part of the engineer’s daily work, rather than external add-ons. For example, we can expect Gen AI modules integrated into product lifecycle management (PLM) solutions, further facilitating digital continuity.

                  With generative AI for software engineering, new capabilities are helping to revolutionize the design process by improving efficiency: some actors have achieved up to a 90% reduction in product design times. This increased efficiency and the reduction in material usage, observed across various projects, lead to significant cost savings.

                  Innovation through Gen AI 

Generative AI in engineering brings human skills and intelligent automation together to solve complex challenges and drastically shorten development cycles. Organizations that want to lead in engineering must act decisively, scaling Gen AI strategically to unlock lasting innovation, resilience, and competitive edge.

                  Meet our experts

Udo Lange

                  Expert in Digital Engineering and Asset Management, High Tech Solutions, Industrie 4.0, Product Lifecycle Management
Jerome Richard

Hugo Cascarigny

                  Vice President & Global Head of Data & AI for Intelligent Industry, Capgemini Invent
                  Hugo Cascarigny has been passionate about AI, data, and analytics since he joined Invent 12 years ago. As a long-time member of the industries and operations teams, he is dedicated to transforming AI into practical efficiency levers within Engineering, Supply Chain, and Manufacturing. In his role as Global Data & AI Leader, he spearheads the development of AI and generative AI offerings across Invent.

                    FAQs

What are the benefits of using generative AI for software engineering?

The benefits include accelerated development, automation of repetitive coding tasks, enhanced code quality, optimization suggestions, quicker prototyping, mitigation of human error, and productivity gains. For human engineers, there is more time to spend on value-adding tasks, such as complex problem solving.

What role does generative AI play in data engineering?

Generative AI plays many roles in data engineering. It automates the creation of data pipelines, generates realistic test data, detects inconsistencies, enhances data quality, documents workflows, and streamlines design. The net result is faster, more scalable, and more consistent data engineering processes.

How can companies scale generative AI in engineering?

Companies can scale generative AI in engineering by adopting robust governance frameworks. This is the foundation on which to integrate AI into existing workflows. Next, they can establish model security, train teams, leverage cloud infrastructure, and continuously monitor performance to maintain reliability and alignment with business goals and operations.

What are some real-world applications of generative AI in software engineering?

Real-world applications include code generation, test automation, documentation writing, anomaly detection, system design suggestions, and more rapid knowledge retrieval. This makes it possible for teams to innovate faster while reducing time-to-market and operational costs.

What challenges do companies face when implementing generative AI in engineering?

Challenges include model bias, security risks, intellectual property concerns, explainability issues, and skill gaps. Moreover, ensuring that AI-generated code meets compliance requirements is particularly critical. These challenges can be overcome with sound implementation strategies built on robust frameworks.

                    Generative AI is just the start of data-powered collaboration. Human-Machine Understanding will deliver technology-enabled decision-making that truly adapts to human needs.

                    The rollout of Generative AI technologies like OpenAI’s ChatGPT and subsequent large language models from late 2022 onwards has given new impetus to the potential of technology-enabled decision-making. This transition of AI from the laboratory to a consumer app marks a shift in human and machine collaboration. Suddenly, we can engage with technology conversationally, explaining our challenges and receiving actionable advice. At last, following so many technological advances during the past few decades, we finally have a trusted, data-enabled assistant to help us make decisions – or do we?

                    Towards a new era in decision-making processes

                    Overshadowed by the hyperbole accompanying the rollout of AI models is an underlying reality: technology-enabled decision making is a one-way system that relies heavily on humans. Current AI models provide limited analysis and interaction, such as requesting clarifications or flagging unhelpful responses. But they lack understanding of your decision-making process itself – how you weigh options, what stages of reasoning you follow, and how your cognitive approach shapes your conclusions. In short, we’re still on our own for big decision-making moments.

                    However, the current, visible manifestation of AI services is just the first stage of a move towards deeper human and machine collaboration. The next stage, via human-machine understanding (HMU), will finally deliver technology-enabled decision-making partners.

                    HMU-equipped systems will provide the data-rich helping hand humans crave. These systems will understand your decision-making challenges and deliver the right information at the right time, tailored to your requirements. Consider how you explain complex analysis to a colleague – if they’re new, you might explain differently than to someone you’ve worked with for years. HMU brings this adaptive capability to AI decision support.
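As a toy illustration of this adaptive capability (the profiles, thresholds, and wording below are all invented), the same finding can be phrased differently depending on a simple user profile:

```python
def explain(explanations, user_profile):
    """Pick the explanation style that matches the user's familiarity."""
    experienced = user_profile.get("domain_experience_years", 0) >= 3
    return explanations["expert" if experienced else "novice"]

finding = {
    "novice": "Sales dipped because fewer repeat customers ordered this month.",
    "expert": "MoM revenue down 8%, driven by a 12% drop in repeat-customer order volume.",
}
msg = explain(finding, {"domain_experience_years": 1})  # returns the novice wording
```

Real HMU systems would infer the profile from interaction history rather than a hand-written dictionary, but the principle of tailoring the explanation to the recipient is the same.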

                    In high-stakes, time-sensitive scenarios, such as healthcare and strategic decision making, HMU-equipped systems could even account for internal human states, such as stress or fatigue, that might affect decision-making processes.

                    Unlocking HMU’s decision-making value

A key challenge in this evolution is building trust and transparency in AI-driven decisions. To address this, AI decision support systems must be able to explain their analysis and reasoning processes effectively, going beyond the black-box behavior of many current AI models. Just as human co-workers can explain complex processes to each other, HMU will provide explanations aligned with each user’s unique requirements.

                    One promising research area lies in understanding human mental models and decision-making processes. Let’s look at healthcare and strategic decision-making use cases to see how progress towards HMU will lead to better outcomes.

                    Enabling healthcare

                    Modern healthcare is a data-rich process. Pioneering collaborations between clinicians and machines exploit this data for better healthcare outcomes, with Generative AI already being used to enhance decision-making processes.

Take Color Health’s AI copilot system, which helps clinicians create cancer treatment plans by analyzing patient data and healthcare guidelines to identify missing diagnostics. Early results show clinicians can identify four times more missing labs and tests while reducing analysis time from weeks to minutes and maintaining oversight at every step.

                    Similarly, Google’s Med-PaLM helps doctors with complex cases by analyzing medical knowledge and patient data to suggest potential diagnoses and treatment options, while Microsoft’s Nuance DAX focuses on ambient clinical intelligence, automatically documenting patient encounters to help physicians focus more on patient interaction.

                    Confidence is crucial for AI-enabled healthcare decisions. Developments in storytelling-based Explainable AI (XAI) that provide comprehensible explanations to users, from smart home environments to eHealth interfaces, can build trust and address the diverse needs of healthcare professionals and patients.

Sensing and monitoring technologies are another area of data-led progress. AI-powered systems now include transformers that recognize surgical gestures with 94% accuracy[1]. Digital twin systems, meanwhile, enable real-time monitoring by integrating data from sensors, devices, and systems to optimize clinical and non-clinical operations[2].

                    Enhancing strategy

                    Successful decision-making relies on informed choices from disparate data. Decision-makers must often act without a full understanding of the environment, relying on fragmented data to assess risks, predict outcomes, and determine the best course of action. Traditional systems often fail to synthesize fragmented information effectively, limiting their usefulness in complex, high-stakes contexts. Capgemini’s deep tech powerhouse, Cambridge Consultants, conducted a project for the UK Government to explore how HMU can help.

                    The team used the strategy game StarCraft II as a controlled test environment, replicating scenarios where decision-makers operate with incomplete information. The research developed AI assistants using neural networks and unsupervised and supervised learning to enhance human decision-making processes by tackling two key problems:

                    1. Reducing ambiguity: Using advanced neural networks to analyze partial and historical data to predict unseen elements of the environment, providing both predictions and confidence levels to help decision-makers assess risks.
                    2. Strategy detection: Classifying and tracking opponent strategic patterns over time to allow users to anticipate and adapt to evolving challenges.
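The ambiguity-reduction idea can be caricatured in a few lines (our own simplification, not the project's code): estimate a hidden quantity from partial observations and report a confidence based on how much of the environment the estimate actually covers.

```python
def estimate_hidden_units(observed_counts, coverage):
    """
    observed_counts: units seen in each scouted region.
    coverage: fraction of the environment actually observed (0..1).
    Returns (point_estimate, confidence).
    """
    seen = sum(observed_counts)
    if coverage <= 0:
        return 0.0, 0.0
    estimate = seen / coverage       # scale what was seen up to the whole map
    confidence = min(1.0, coverage)  # crude proxy: more coverage, more confidence
    return estimate, confidence

est, conf = estimate_hidden_units([3, 5, 2], coverage=0.4)
# 10 units seen over 40% of the map -> estimate 25 units, confidence 0.4
```

The real system used neural networks over partial and historical game state, but the interface it offered decision-makers, a prediction paired with a confidence level, matches this shape.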

A user-centric explainability framework was crucial to this effort. The framework combined an understanding of user requirements with XAI delivery techniques and interface designs. By tailoring explanations to user needs, it helped decision-makers trust the AI outputs and integrate the insights into their workflows, bridging the gap between technical outputs and user understanding and allowing users to make decisions with confidence. These techniques could also be applied in settings beyond strategy, including resource management, logistics, and crisis response.

                    The Future: A vision for adaptive decision support

                    These early applications show how HMU systems can help to redefine how humans and machines collaborate to solve complex problems. By understanding user needs, interpreting contextual nuances, and providing tailored support, HMU systems transform machines from static tools into adaptive partners in decision-making processes.

                    From boards to steering committees, humans draw on the wisdom of groups, sometimes even wisdom of the crowd. Machines have traditionally struggled in large group settings, but recent advances show that they can enable groups to outperform individual decision-makers.

                    Future developments will focus on improving real-time adaptability, enhancing explainability, and integrating these capabilities into diverse decision-making environments. The result? Effective and trusted human-machine collaboration that creates benefits for everyone.


                    [1] Chen, Ketai, D. S. V. Bandara, and Jumpei Arata. “A real-time approach for surgical activity recognition and prediction based on transformer models in robot-assisted surgery.” International Journal of Computer Assisted Radiology and Surgery (2025): 1-10.

                    [2] Han, Yilong, et al. “Digital twinning for smart hospital operations: Framework and proof of concept.” Technology in Society 74 (2023): 102317.

Ali Shafti

                    Head of Human-Machine Understanding, Cambridge Consultants, part of Capgemini Invent
Ali leads a team of specialists in AI, psychology, and cognitive and behavioral sciences to create next-generation technologies that can truly understand and support users in dynamic, strenuous environments. Ali holds a PhD in Robotics with a focus on human-robot interaction and has more than 12 years' experience in research and development for human-machine interaction.
Matt Rose

                    Senior Analyst, Cambridge Consultants, part of Capgemini Invent
                    Matt is a strategic foresight analyst with over a decade of experience in research, horizon scanning, and data visualization. He has led future-focused projects across AI, robotics, and emerging technologies for high-profile clients. Drawing on a rich background in UX design, including work in the gaming industry, Matt brings a user-centered lens to innovation, helping organizations navigate technological change and design digital products that resonate with real-world needs.
Matthew J Clayton

                    Principal Algorithm Engineer, Cambridge Consultants, part of Capgemini Invent
                    Matthew is an experienced AI developer and data scientist who specializes in autonomous systems, numerical modelling, and applied statistics. Matthew has developed AI-enabled systems in many technology areas including robot navigation and mapping, reinforcement learning, computer vision, bio-sensing, cyberdefense, and cognitive radar. Matthew holds a DPhil in Astrophysics from the University of Oxford.
Alexandre Embry

                    CTIO, Head of AI Robotics and Experiences Lab
Alexandre Embry is CTIO and a member of the Capgemini Technology, Innovation and Ventures Council. He leads the Immersive Technologies domain, analyzing trends and developing the deployment strategy at Group level. He specializes in exploring and advising organizations on emerging tech trends and their transformative powers. Passionate about enhancing the user experience, he identifies how Metaverse, Web3, NFT, and blockchain technologies, as well as AR/VR/MR, can advance brands and companies with enhanced customer or employee experiences. He is the founder and head of Capgemini’s Metaverse-Lab and of the Capgemini Andy3D immersive remote collaboration solution.