
Unlocking the power of data with SAP Business Data Cloud and Databricks

Frank Gundlich and David Allison
14 Apr 2025

Capgemini, a data- and analytics-first organization and global launch partner for SAP Business Data Cloud (BDC), understands the value of integrating data and AI foundations from the earliest stages of a business transformation.

With the recent acquisition of Syniti, we further empower organizations to use data and AI as the cornerstone of their business transformation.

Enterprises across the globe are leveraging data and AI to drive insights, enable intelligent processes, and foster innovation to improve customer intimacy, drive new routes to market, expand business models, and reduce total cost of ownership (TCO).

As businesses strive to stay competitive, the integration of advanced data management and AI capabilities becomes paramount. SAP, a leader in enterprise software, has taken a significant step forward with the launch of SAP Business Data Cloud and the SAP Databricks solution, propelling them ahead of their competitors with a 360-degree view of enterprise data and AI.

The importance of data in business transformation

SAP has long been at the forefront of helping businesses manage their data. With the introduction of SAP Business Data Cloud, SAP is redefining how enterprises harness their data. The SAP Business Data Cloud solution unifies and governs all SAP data while seamlessly connecting with third-party data. By integrating SAP Datasphere, SAP Analytics Cloud (SAC), and SAP Business Warehouse (BW) alongside Databricks, SAP Business Data Cloud delivers a unified experience that empowers businesses to make informed decisions.

SAP Business Data Cloud: A new era in data management

SAP Business Data Cloud represents a paradigm shift in enterprise data management. Together with Capgemini’s data-first methodology, it provides a trusted and harmonized data foundation, ensuring high-quality data that businesses can rely on. This foundation is crucial for driving impactful decisions and fostering innovation.

One of the standout features of SAP Business Data Cloud is its ability to deliver fully managed SAP data products across all business processes. These curated data products align with a highly optimized and unified “one domain” model, maintaining their original business context and semantics. This means businesses get immediate access to high-quality data without the hidden costs of rebuilding and maintaining data(base) extracts.

Additionally, SAP Business Data Cloud offers a suite of pre-built analytical applications, known as Insight Apps. As a global launch partner for SAP Business Data Cloud, Capgemini has been working closely with SAP and partners like Syniti, Databricks, and Collibra to integrate our knowledge into these apps. Insight Apps incorporate pre-defined metrics, AI models, and planning capabilities, simplifying how businesses connect and integrate every part of their operations. This accelerates use cases aligned with critical business functions, including ERP, spend, supply chain, HR, customer experience, and finance.

SAP Databricks: Enhancing AI and data engineering

As a Databricks Partner of the Year award winner, we are excited by the integration of Databricks into SAP Business Data Cloud, as it marks a significant milestone in enterprise data management. This partnership brings the power of Databricks directly into the SAP ecosystem, enabling businesses to leverage advanced data engineering and AI capabilities to support the integration of SAP data into the enterprise ecosystem, both internally and externally.

Databricks empowers data professionals to accelerate AI models and generative AI applications on their business data. Native capabilities like Delta Sharing harmonize SAP data products with existing lakehouses bidirectionally. This zero-copy approach allows businesses to apply advanced AI and machine learning models to various use cases, such as predicting payment dates on open receivables, without the need for complicated ETL pipelines.
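To make the sharing mechanism concrete: the open Delta Sharing protocol addresses each shared table through a profile file plus a three-part share.schema.table name. The sketch below builds such a table URL; the profile path and share/schema/table names are hypothetical examples, not real SAP Business Data Cloud endpoints, and the actual read (which needs credentials) is shown only as a comment.

```python
# Minimal sketch of addressing a shared SAP data product via Delta Sharing.
# The profile file and share/schema/table names below are hypothetical.

def delta_sharing_url(profile: str, share: str, schema: str, table: str) -> str:
    """Delta Sharing addresses tables as '<profile-path>#<share>.<schema>.<table>'."""
    return f"{profile}#{share}.{schema}.{table}"

url = delta_sharing_url("config.share", "sap_bdc_share", "finance", "open_receivables")
print(url)  # config.share#sap_bdc_share.finance.open_receivables

# With credentials in place, the open-source client reads the table directly,
# without copying it into a separate pipeline (requires `pip install delta-sharing`):
# import delta_sharing
# df = delta_sharing.load_as_pandas(url)
```

Because the consumer reads the shared table in place, no ETL job has to materialize and maintain a second copy of the data.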

Moreover, SAP Business Data Cloud with its included Databricks capabilities facilitates the modernization of SAP Business Warehouse, providing additional migration options for existing BW customers under one license. On-premises SAP BW customers can easily transition to SAP BW Private Cloud Edition, accessing their data as data products in the object store via Delta Sharing. This simplifies the modernization journey and maximizes the value of existing SAP BW investments.

Driving innovation with AI and machine learning

AI and machine learning are at the heart of SAP’s new offerings. The integration of Joule AI Copilot into SAP Business Data Cloud exemplifies this commitment. Joule AI leverages a knowledge graph to connect data, metadata, and business processes, enabling AI agents and large language models (LLMs) to understand data within its business context.

This mapping creates clear data links, making insights more reliable for users and applications. Training AI agents and Joule on business knowledge and context drives increased productivity. For instance, users can use AI to complete cross-functional tasks, uncover insights, and summarize critical information across the business – without heavy reliance on IT support. This empowers users to apply AI to automate complex analytics and planning tasks, such as risk assessment, forecasting, and other advanced scenario simulations.
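To illustrate what "understanding data within its business context" means in practice, a knowledge graph can be pictured as a set of triples linking raw fields to their business meaning. The sketch below is a toy stand-in, not Joule's actual implementation; the SAP-style table and field names (ACDOCA, WRBTR) are used purely for illustration.

```python
# Toy knowledge-graph sketch: triples linking a raw SAP-style field to its
# business context, so an agent can resolve what a column means before answering.
triples = [
    ("table:ACDOCA", "contains", "field:WRBTR"),
    ("field:WRBTR", "means", "amount in document currency"),
    ("field:WRBTR", "used_in", "process:accounts_receivable"),
]

def context_for(entity: str, triples: list) -> set:
    """Return all (predicate, object) pairs describing the given entity."""
    return {(p, o) for s, p, o in triples if s == entity}

print(context_for("field:WRBTR", triples))
```

An agent that first looks up this context can answer "what is WRBTR?" in business terms rather than guessing from the cryptic column name alone.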

Partner ecosystem and open data integration

SAP Business Data Cloud is built to prioritize openness and customer choice. It supports an open data ecosystem, integrating natively with leading data and AI partners like Collibra, Confluent, and DataRobot. This openness simplifies the data landscape and unleashes transformative insights from all data sources.

SAP has also announced partnerships with the likes of Capgemini. These partners bring deep business process and industry domain expertise, building insight apps on SAP Business Data Cloud. From data enrichment to data activation, partner insight apps build on top of the data products and core services provided by SAP Business Data Cloud.

Conclusion

Data is the lifeblood of modern enterprises. It fuels decision-making, drives operational efficiency, and enables businesses to respond swiftly to market changes. For many organizations, this means integrating data from various sources, ensuring that it is high quality, and applying advanced analytics and AI to uncover hidden patterns and trends.

The launch of SAP Business Data Cloud and SAP Databricks marks a new era in enterprise data management. By unifying and governing data, integrating advanced AI capabilities, and fostering an open data ecosystem, SAP is empowering businesses to unlock the full potential of their data. As enterprises continue to navigate the complexities of digital transformation, these new offerings provide a robust foundation for driving innovation, enhancing decision-making, and enabling intelligent, AI-driven processes.

Author

Frank Gundlich

Global Head SAP Data & AI
Fuelled by a deep passion for SAP Data & AI, Frank leads with a unique blend of strengths that turn vision into reality. As an activator, maximizer, and futurist, he thrives on driving innovation, elevating performance, and shaping bold strategies that push the boundaries of what’s possible in data transformation.
David Allison

European SAP Data & Analytics Lead
As Capgemini’s SAP Data & Analytics lead for Europe, David works closely with his clients to embed a data-first approach to SAP that sets the foundation for enabling intelligent processes with data from across the ecosystem, both internally and externally.

    The grade-AI generation:
    Revolutionizing education with generative AI

    Dr. Daniel Kühlwein
    March 19, 2025

    Our Global Data Science Challenge is shaping the future of learning. In an era when AI is reshaping industries, Capgemini’s 7th Global Data Science Challenge (GDSC) tackled education.

    By harnessing cutting-edge AI and advanced data analysis techniques, participants, from seasoned professionals to aspiring data scientists, are building tools to empower educators and policy makers worldwide to improve teaching and learning.

    The rapidly evolving landscape of artificial intelligence presents a crucial question: how can we leverage its power to solve real life challenges? Capgemini’s Global Data Science Challenge (GDSC) has been answering this question for years and, in 2024, it took on its most significant mission yet – revolutionizing education through smarter decision making.

    The need for innovation in education is undeniable. Understanding which learners are making progress, which are not, and why is critically important for education leaders and policy makers to prioritize interventions and education policies effectively. According to UNESCO, a staggering 251 million children worldwide remain out of school. Among those who do attend, the average annual improvement in reading proficiency at the end of primary education is alarmingly slow – just 0.4 percentage points per year. This represents a stark challenge for global foundational learning, hampering efforts to achieve the learning goals set out in the Sustainable Development Agenda.

    The grade-AI generation: A collaborative effort

    The GDSC 2024, aptly named “The Grade-AI Generation,” brought together a powerful consortium. Capgemini offered its data science expertise, UNESCO contributed its deep understanding of global educational challenges, and Amazon Web Services (AWS) provided access to cutting-edge AI technologies. This collaboration unlocks the hidden potential within vast learning assessment datasets, transforming raw data into actionable insights for decision making that could change the future of millions of children worldwide.

    At the heart of this year’s challenge lies the PIRLS 2021 dataset – a comprehensive global survey encompassing over 30 million data points on 4th grade children’s reading achievement. This dataset is particularly valuable because it provides rich, standardized data that allows participants to identify patterns and trends across different regions and education systems. By analyzing factors such as student performance, demographics, instructional approaches, curriculum, and home environment, an AI-powered education policy expert can offer insights that would require far more time and resources to obtain through traditional methods. Participants were tasked with creating an AI-powered education policy expert capable of analyzing this rich data and providing data-driven advice to policymakers, education leaders, and teachers – but also to parents and students themselves.

    Building the future: Agentic AI systems

    The challenge leveraged state-of-the-art AI technologies, particularly focusing on agentic systems built with advanced Large Language Models (LLMs) such as Claude, Llama, and Mistral. These systems represent a significant leap forward in AI capabilities, enabling more nuanced understanding and analysis of complex educational data.

    “Generative AI is the most revolutionary technology of our time,” says Mike Miller, Senior Principal Product Lead at AWS, “enabling us to leverage these massive amounts of complicated data to capture for analysis, and present knowledge in more advanced ways. It’s a game-changer and it will help make education more effective around the world and enable our global community to commit to more sustainable development.“

    The transformative potential of AI in education

    The potential impact of this challenge extends far beyond the competition itself. As Gwang-Chol Chang, Chief, Section of Education Policy at UNESCO, explains, “Such innovative technology is exactly what this hackathon has accomplished. Not just only do we see the hope for lifting the reading level of young children around the world, we also see a great potential for a breakthrough in education policy and practice.”

    The GDSC has a proven track record of producing innovations with real-world impact. In the 2023 edition, “The Biodiversity Buzz,” participants developed a new state-of-the-art model for insect classification. Even more impressively, the winning model from the 2020 challenge, “Saving Sperm Whale Lives,” is now being used in the world’s largest public whale-watching site, happywhale.com, demonstrating the tangible outcomes these challenges can produce. 

    Aligning with a global goal

    This year’s challenge aligns perfectly with Capgemini’s belief that data and AI can be a force for good. It embodies the company’s mission to help clients “get the future you want” by applying cutting-edge technology to solve pressing global issues.

    Beyond the competition: A catalyst for change

    The GDSC 2024 is more than just a competition; it’s a global collaboration that brings together diverse talents to tackle one of the world’s most critical challenges. By bridging the gap between complex, costly-to-collect learning assessment data and actionable insights, participants have the opportunity to make a lasting impact on global education.

    A glimpse into the future

    The winning team, ‘insAIghtED’, consists of Michal Milkowski, Serhii Zelenyi, Jakub Malenczuk, and Jan Siemieniec, based in Warsaw, Poland. They developed an innovative solution aimed at enhancing actionable insights using advanced AI agents. Their model leverages the PIRLS 2021 dataset, which provides structured, sample-based data on reading abilities among 4th graders globally. However, recognizing the limitations of relying solely on this dataset, the team expanded their model to incorporate additional data sources such as GDP, life expectancy, population statistics, and even YouTube content. This multi-agent AI system is designed to provide nuanced insights for educators and policymakers, offering concise answers, data visualizations, detailed explanations, and even a fun section to engage users.

    The architecture of their solution involves a lead data analyst, data engineer, chart preparer, and data scientist, each contributing to different aspects of the model’s functionality. The system is capable of querying databases, aggregating data, performing internet searches, and preparing detailed answers. By integrating various data sources and employing state-of-the-art AI technologies like LangChain and crewAI, the insAIghtED model delivers impactful, real-world, actionable insights that go beyond the numbers, helping to address complex educational challenges and trends.
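The division of labor among those roles can be pictured as a pipeline in which each agent enriches a shared context before handing it on. The plain-Python sketch below illustrates that pattern only – it is not the team's actual crewAI implementation, and the placeholder data and role functions are our own assumptions.

```python
# Illustrative multi-agent pipeline: each "agent" is a function that enriches
# a shared context dict, mirroring the roles described in the article.

def lead_data_analyst(ctx):
    ctx["plan"] = f"break down question: {ctx['question']}"
    return ctx

def data_engineer(ctx):
    # In the real system this step would query the PIRLS 2021 database.
    ctx["data"] = {"Poland": 5000, "Germany": 4000}  # placeholder rows
    return ctx

def chart_preparer(ctx):
    # Sort countries by participant count, descending, ready for plotting.
    ctx["chart"] = sorted(ctx["data"].items(), key=lambda kv: -kv[1])
    return ctx

def data_scientist(ctx):
    top, n = ctx["chart"][0]
    ctx["answer"] = f"{top} had the most participants ({n}) in this sample."
    return ctx

def run_pipeline(question):
    ctx = {"question": question}
    for agent in (lead_data_analyst, data_engineer, chart_preparer, data_scientist):
        ctx = agent(ctx)
    return ctx

result = run_pipeline("Participants per country in PIRLS 2021?")
print(result["answer"])
```

Frameworks like crewAI formalize this hand-off with LLM-backed agents, tools, and task descriptions, but the underlying flow is the same sequential enrichment shown here.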

    Example:

    Figure 1: An example of the winning model answering the prompt “Visualize the number of students who participated in the PIRLS 2021 study per country.”

    As we stand on the brink of an AI-powered educational revolution, the Grade-AI Generation challenge serves as a beacon of innovation and hope. It showcases how the combination of data science, AI, and human creativity and passion can pave the way for a future where quality education is accessible to all, regardless of geographical or socioeconomic barriers.

    Start innovating now –

    Dive into AI for good
    Explore how AI can be applied to solve societal challenges in your local community or industry.

    Embrace agentic AI systems
    Start experimenting with multi-agent AI systems to tackle complex, multi-faceted problems in your field.

    Collaborate globally
    Seek out international partnerships and datasets to bring diverse perspectives to your AI projects.

    Interesting read? Capgemini’s Innovation publication, Data-powered Innovation Review – Wave 9, features 15 captivating innovation articles with contributions from leading experts from Capgemini, with a special mention of our external contributors from The Open Group, AWS, and UNESCO. Explore the transformative potential of generative AI, data platforms, and sustainability-driven tech. Find all previous Waves here.

    Meet our authors

    Dr. Daniel Kühlwein

    Managing Data Scientist, AI Center of Excellence, Capgemini

    Mike Miller

    Senior Principal Product Lead, Generative AI, AWS

    Gwang-Chol Chang

    Chief, Section of Education Policy, Education Sector, UNESCO

    Question-Answer Generation (QAG) for automated summarization evaluation: A reference-free approach

    Sangeeta Ron
    21 Mar 2025

    The challenge of text summarization in financial services

    The financial services industry generates an immense volume of documentation daily. From customer interactions and regulatory filings to legal proceedings and risk assessments, organizations must process, interpret, and act upon large amounts of unstructured data. Traditionally, this has been a time-consuming and labor-intensive process, often susceptible to human error and inconsistencies. As regulatory frameworks evolve and customer expectations rise, the demand for accurate, efficient, and standardized document summarization has never been more critical.

    In banking, institutions must navigate a constantly shifting regulatory landscape. Compliance teams are responsible for reviewing extensive regulatory filings, risk reports, and audit documents—any misinterpretation can result in significant financial and legal consequences. Beyond compliance, customer service operations require rapid access to key insights from call center interactions to enhance service efficiency. Additionally, loan and credit risk assessment teams manually analyze financial statements, credit histories, and other documents to determine creditworthiness, a process that is both time-intensive and costly.

    The insurance sector faces similar challenges, particularly in underwriting, policy management, and claims processing. Insurance providers must constantly interpret complex regulatory changes while ensuring accurate policy underwriting and risk assessment. Claims processing teams review medical reports, legal documents, and third-party assessments to determine coverage and fraud risk. Manual document reviews in these areas not only slow down operations but also introduce inconsistencies that can impact decision-making.

    The increasing complexity of financial services documentation makes manual summarization an unsustainable approach. Generative AI (GenAI) offers a powerful solution by enabling automated summarization of key insights from various documents. However, assessing the quality of AI-generated summaries remains a challenge. Traditional evaluation methods, such as ROUGE and BERTScore, rely on human-generated references, which are not always available or practical for large-scale financial services applications.

    Introducing QAG-based automated summarization evaluation

    Question-Answer Generation (QAG) for automated summarization evaluation provides a breakthrough, offering a reference-free approach to ensuring both completeness and accuracy in AI-generated summaries. Instead of comparing summaries to predefined references, QAG-based evaluation gauges summarization quality by generating factual questions from the original document and checking whether the AI-generated summary provides correct answers.
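The scoring idea can be sketched in a few lines: questions are generated from the source document, and the summary is scored by how many of them it can answer. In the sketch below, the hand-written Q/A pairs stand in for LLM-generated ones, and the keyword-containment "judge" is a toy placeholder for an LLM answer-checker; all names and data are illustrative assumptions.

```python
# Toy illustration of reference-free QAG evaluation: the summary is scored by
# whether it contains the facts needed to answer questions drawn from the source.

def summary_answers(summary: str, expected_answer: str) -> bool:
    # Stand-in for an LLM judge: a naive containment check.
    return expected_answer.lower() in summary.lower()

def qag_score(summary: str, qa_pairs: list) -> float:
    """Fraction of source-derived questions the summary can answer."""
    answered = sum(summary_answers(summary, a) for _, a in qa_pairs)
    return answered / len(qa_pairs)

# Hypothetical Q/A pairs generated from a source call-center transcript.
qa_pairs = [
    ("What product did the customer call about?", "mortgage"),
    ("What was the customer's main complaint?", "late fee"),
    ("What resolution was promised?", "refund"),
]

summary = "Customer called about a mortgage late fee; agent promised a refund."
print(qag_score(summary, qa_pairs))
```

A production setup replaces the containment check with an LLM judge and typically reports two such fractions separately: alignment (are the summary's claims supported by the source?) and coverage (are the source's key facts present in the summary?).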

    Experimental results

    We implemented optimization techniques for QAG, including limiting truth extraction and using custom question templates, to improve evaluation performance.

    This enhanced QAG-based evaluation approach was then tested on four real-world transcripts. In each test, both the default QAG model and our optimized approach were implemented. The following table summarizes the aggregate results:

    Metric       Default QAG    Optimized QAG
    Alignment    56%            Over 70%
    Coverage     70%            90%

    Overall, the experimental results reveal a significant leap in alignment scores, rising from a baseline of 56% to over 70%, while coverage scores experienced an even greater boost, increasing from 70% to 90%. These enhancements demonstrate the effectiveness of the refined approach in producing more accurate and comprehensive AI-generated summaries.

    Wide-ranging use cases in banking and insurance

    By implementing QAG-based evaluation, financial institutions can improve the reliability and accuracy of GenAI-powered summarization across multiple business functions. In banking, it ensures that compliance reports, customer interactions, and financial risk assessments maintain factual integrity. In insurance, it enhances underwriting decisions, policy management, and claim evaluations. The following is a sample of several key use cases in financial services.

    Banking use cases

    • Call center interaction summarization: Customer service teams manage a high volume of customer interactions, often recorded in call center transcripts, chat logs, and emails. GenAI can summarize these conversations, extracting key themes, customer concerns, and sentiment trends, enabling more efficient issue resolution. With QAG-based evaluation, AI-generated summaries ensure that no critical customer concerns are overlooked, allowing for more personalized and proactive customer support.
    • Audit report summarization: Internal audits are a critical part of risk management in banking, yet the process is often time-consuming and labor-intensive. AI-powered summarization helps highlight key discrepancies, compliance violations, and recommended actions from audit reports, improving the efficiency of risk and compliance teams. With QAG-based evaluation, banks can ensure that summarized audit findings remain aligned with the original reports, reducing the chances of oversight in risk assessments.
    • Credit risk assessment: Evaluating a borrower’s financial health requires the review of credit reports, financial statements, and loan histories, often spread across multiple documents. GenAI can consolidate key financial indicators into a structured summary, allowing risk analysts to make faster and more informed lending decisions. By applying QAG-based evaluation, banks can verify that these summaries accurately reflect the borrower’s financial status, reducing errors in credit risk assessments.

    Insurance use cases

    • Underwriting and risk assessment: Insurance underwriting requires the evaluation of extensive data, including health records, financial documents, and previous policy claims. GenAI-generated summaries allow underwriters to quickly assess risk factors, policy eligibility, and pricing considerations. With QAG-based evaluation, insurers can confirm that these summaries capture the full scope of risk assessment criteria, reducing underwriting errors and improving decision-making efficiency.
    • Policy management: Managing policies involves handling a large amount of unstructured documentation throughout the policy lifecycle. Any modifications initiated by insurers or customers require careful reassessment. GenAI streamlines this process by efficiently condensing information from various sources. By applying QAG-based evaluation, insurers can confirm that AI-generated summaries align with policy terms and regulatory requirements, enabling them to allocate more time to strategic tasks such as customer service and relationship management.
    • Claims processing: Whether for auto, healthcare, or commercial policies, claims processing is a complex, documentation-heavy task that demands significant time and effort when done manually. GenAI automates the extraction of critical details from diverse records. QAG-based evaluation ensures that all necessary claim details are preserved, reducing operational costs, expediting claim settlements, and improving overall customer satisfaction.

    These use cases highlight just a few of the many ways QAG-based evaluation can be applied in financial services. Potential applications extend far beyond these examples. Depending on an organization’s specific needs, QAG-based evaluation can be adapted to review AI-generated summaries across a wide range of business functions, including regulatory reporting, contract analysis, investment research, internal policy compliance, and more.

    Driving accuracy, efficiency, and trust in AI-generated summarization

    As financial institutions increasingly rely on GenAI to streamline document processing, ensuring the accuracy and reliability of AI-generated summaries is paramount. QAG-based automated summarization evaluation provides a reference-free, scalable, and precise method to assess summarization quality, addressing one of the key challenges in AI adoption. By evaluating summaries based on factual correctness and content coverage, QAG-based evaluation offers a structured approach to verifying AI outputs without the need for human-generated reference summaries.

    The benefits of integrating this approach in banking and insurance are far-reaching. Banks can enhance decision-making by quickly extracting key insights from financial reports, compliance documents, and customer interactions. This leads to faster responses to regulatory changes, improved operational efficiency, and a more seamless customer experience. In the insurance sector, QAG-based evaluation improves underwriting accuracy and claims processing efficiency, ensuring that AI-generated summaries are both comprehensive and aligned with business objectives.  

    Now is the time for financial institutions to embrace AI-powered summarization with QAG-based evaluation. To explore how this approach can elevate your organization’s AI-driven summarization efforts, contact Capgemini’s Financial Services Insights & Data team today.  

    Author

    Sangeeta Ron

    Senior Director, Financial Services Insights & Data

      Can nuclear provide the power that drives the AI revolution?

      Capgemini
      Mar 18, 2025

      The race to develop and exploit the extraordinary capabilities of AI and other breakthrough technologies is accelerating at a dizzying pace. But while governments, businesses and citizens are scrambling to take advantage of the seemingly limitless ability of AI to transform almost every aspect of our lives, there’s another challenge looming on the horizon.

      As economies in general, and tech companies in particular, are striving to transition to renewable energy sources and to reduce carbon footprint, the boom in AI-related data processing is producing a huge surge in demand for power. But, as the need for clean and secure electricity supplies soars, could nuclear be set to play a vital role in bridging the potential energy gap?

      Powering AI will require 9% of US grid capacity by 2030

      Powering the world’s rapidly expanding network of data centres has already had significant impacts on society and public policy. In Europe, major data centre clusters, around Dublin and Amsterdam for example, require so much electricity that further data centre expansion in those cities is on hold until new, additional sources of energy come on stream.

      As recently as 2020, UK data centres used just over 1% of the nation’s electricity. By 2030 this figure is forecast to reach 7%. Demand is set to be even greater in the US, the global centre of AI innovation, with predictions that, by 2030, 9% of all grid capacity will be used to power AI technologies alone. It’s a monumental challenge that traditional energy utility organisations cannot meet alone.

      SMRs will change the game for businesses transitioning to low carbon energy

      New research published by Capgemini to coincide with the 2025 World Economic Forum in Davos reveals that 72% of business leaders say they will increase investment in climate technologies, including hydrogen, renewables, nuclear, batteries, and carbon capture, with nuclear energy in their top three climate technology investment priorities for 2025.

      This direction of travel chimes with statements made during Davos by the International Energy Agency (IEA). The IEA heralds “a new era for nuclear energy, with new projects, policies and investments increasing, including in advances such as small modular reactors (SMRs)”.

      According to IAEA Director General Rafael Mariano Grossi: “one after another, technology companies looking for reliable low-carbon electricity to power AI and data centres are turning to nuclear energy, both in the form of traditional large reactors and SMRs.”

      Around 60 new reactors are currently under construction in 15 countries around the world, with 20 more countries, including Ghana, Poland, and the Philippines, developing policies to enable construction of their first nuclear power plants. The US Energy Information Administration (EIA) estimates that, by 2050, global nuclear capacity could increase by up to 250% compared to the end of 2023.

      Clean, reliable, available – and safe

      It’s easy to understand why nuclear is set to play an increasingly significant dual role in both powering the AI revolution and decarbonising industry. Its 99.999% guarantee of stable energy availability compares with just 30-40% from weather-dependent wind or solar generation.

      Decades of continuous improvements in reactor design and operation make nuclear the second safest source of energy in the world after solar, according to the International Atomic Energy Agency (IAEA), although the Agency also points out that large scale solar power systems need 46 times as much land as nuclear to produce one unit of energy.

      But it’s the potential to rapidly deploy SMRs that could have the most significant impact in preventing the looming energy gap, as AI-driven data processing requirements grow exponentially. It’s important to remember that most light-water SMRs are simply smaller versions of the large-scale GEN III+ technology with proven safety and operational records, with small generally defined as having a maximum output of 300 MWe. The underlying scientific and operational principles are not technologically new in themselves.
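To put those availability figures in context, a rough back-of-envelope comparison is possible using the article's own numbers: the 300 MWe ceiling that defines "small" and the 30-40% output quoted for weather-dependent generation. The midpoint choice and the simple capacity-times-hours formula below are our assumptions, not figures from the article.

```python
# Rough annual-energy comparison for 300 MWe of capacity at different
# availability / capacity factors (headline figures taken from the article).
HOURS_PER_YEAR = 8760

def annual_gwh(capacity_mwe: float, factor: float) -> float:
    """Annual energy in GWh = capacity (MW) x factor x hours / 1000."""
    return capacity_mwe * factor * HOURS_PER_YEAR / 1000.0

smr  = annual_gwh(300, 0.99999)  # near-continuous nuclear availability
wind = annual_gwh(300, 0.35)     # midpoint of the 30-40% weather-dependent range

print(f"SMR:  {smr:.0f} GWh/yr")   # ~2628 GWh
print(f"Wind: {wind:.0f} GWh/yr")  # ~920 GWh
```

On these assumptions, a single 300 MWe SMR delivers nearly three times the annual energy of the same nameplate capacity of wind or solar – which is why steady availability matters so much for always-on loads like data centres.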

      As the name suggests, SMRs’ modular design enables major components to be constructed at speed in a factory environment, for bespoke assembly on site, located flexibly close to consumers. With a footprint around the size of a sports stadium, they can easily be placed near demand, such as data centres or industrial estates.

      Reduced construction times, lower investment and running costs, and the ability to add or remove capacity as demand changes are just some of SMRs’ obvious advantages. They are ready-made replacements for fossil-fuel-based generation, and because nuclear is less vulnerable to price fluctuations, owners and consumers of SMR-generated power enjoy more budget certainty and can plan more accurately for the long term.

      SMRs, specifically the advanced reactor designs, can also be adapted to supply heat for industrial applications, district heating systems and the production of hydrogen, and are increasingly regarded as catalysts for economic development and job creation.

      Tech giants at the front of the queue

      Many of the global tech giants are actively working on plans to develop their own SMR-based generating capabilities, to provide their own independent sources of safe, stable, low-carbon power, protected from the increasingly volatile open market.

      It’s a race that’s vital they win, not only to fuel the AI revolution, but because in doing so they will accelerate the transition to a low-carbon world economy.

      Smart business transformations
      …from a practitioner’s point of view

      Stewart Hicks, Global Offer Lead for Generative Business Services (GBS), Capgemini’s Business Services, and Wojciech Mróz
      Mar 17, 2025

      According to Capgemini’s recent survey “AI-led Generative Business Services: The future of Global Business Services (GBS),” conducted in partnership with HFS Research, over 80% of respondents agree it is time to rethink Global Business Services as Generative Business Services – better defined as AI-led, data-driven services focused on driving growth and the enterprise innovation agenda.

      It is worth remembering, however, that business transformations with tangible business outcomes are enabled through a comprehensive approach, i.e., applying suitable technology platforms together with operating model and process transformation, not AI alone.

      The key to their effectiveness is a thorough diagnosis of the company’s needs, a strategic approach, and individually tailored, industry-specific solutions that collectively transform business operating models, processes, technology, and people.

      In business transformations, we are seeing significant reliance on the latest technology solutions, but the basic principles remain the same. The customer, i.e., the end recipient of products or services, must always be at the centre of the design. The transformation is unlikely to succeed if we fail to identify and address their needs.

      An increasingly popular and effective method of building a transformation strategy is the Outcomes Based Model, which focuses on the business impact of the transformation program rather than typical process performance measures and fixed or variable fee pricing models. Transformation initiatives or services provided are aligned with business outcomes, e.g., working capital improvements through reduction of aged debt, and increased revenue through revenue leakage detection, prevention, and recovery. Where such models are applied, we are seeing significant cashflow improvements, with outcomes realized in millions of euros for our clients.

      This approach significantly improves the effectiveness of business transformation and goes beyond traditional priorities focused on productivity or labour arbitrage. It is then much easier to get the attention of C-level Executives, who are typically the decision makers and buyers of business transformation services.

      We choose to work in this model because we are confident that well-planned transformations will deliver the expected results. We need to have a deep understanding of the clients we work with to enable the development of optimal strategies for them. We can then create tailored and industry specific solutions and prepare and support them through the change journey. We also rely on detailed data analysis and insights to drive informed decision making. This is coupled with an outcome-based commercial model which incentivizes the clients and Capgemini. This is what makes this model an interesting and beneficial formula for both parties.

      In the context of strategic business transformation, very often the key role is played by organisations known as Global Business Services (GBS) or Business Process Outsourcing (BPO) Providers. This sector is strongly represented in Poland, and other countries in the region such as Romania, due to the availability of highly qualified specialists, expertise, and still relatively low wage costs in comparison to other countries. While the traditional roles and benefits of GBS/BPO remain vital and relevant, there is an urgent need to redefine the GBS/BPO narrative to appeal more to Business Leaders who are demanding more than just cost reduction.

      Capgemini is no longer just a transactional services vendor; it is an ecosystem orchestrator that brings new skills, technologies, and capabilities to its clients, and thus not only provides support but drives the strategic objectives of modern enterprises. Capgemini’s services and business transformation programs are increasingly expanding their scope of responsibility into more business lines and functional areas of their clients, allowing for greater bottom- and top-line financial impact.

      The power of simplicity

      Today, technology is evolving at a dizzying pace, leaving companies constantly bombarded with innovative solutions that have the potential to improve their operations. This rapid pace of change often prevents full adoption, resulting in technologies not being used to their best advantage, which in turn limits the business impact. Many organizations implement only partial solutions and do not always exhaust the possibilities of the standard technology deployed, limiting their effectiveness and return on investment.

      Technology should be used to its fullest extent, allowing a greater part of the organisation to leverage the solution and its benefits. Such extensive use optimizes costs: simply put, it means our clients can make the most of what they pay for. Companies should also look for technology platforms that can fulfil many diverse needs, moving away from a multi-tool approach and focusing on full and proper adoption.

      The company’s growth is based on the development of the teams’ competencies

      The key to successful business transformations, apart from good strategic planning, is change management & communication. Change is only effective if the people working in companies understand it, are convinced of it, and ideally when they have the chance to co-create it.

      Business transformation also means developing competencies for the people the company employs. It is important to let people know from the very beginning of the process what role they will play during and after the change. In parallel with changes to business processes, it is important to plan and deliver robust training and equip staff with the right tools and resources, so that there is little or no disruption to business as usual and people can excel in their roles.

      At Capgemini, we are focused on developing our teams through Gen AI and industry-specific certifications. People working for us have the opportunity, and sometimes even the obligation, to obtain key certifications in this area. This ensures we stay up to date with current trends and apply tailored solutions that are optimal for our clients. This is the only way we can be a dependable partner for our clients.

      One of the most effective ways of implementing change is through a “pilot” approach. This allows a solution to be tested in a selected, sometimes isolated, area before a wider roll-out. This method works best in large companies with a regional and global reach. The pilot can be chosen geographically, e.g., starting in a particular country or city, by business line or function, or wherever people are most suited and willing to participate in the change process. Success on a smaller scale allows you to de-risk and, with proven and positive results, to convince people who are less supportive of change before proceeding on an organisation-wide scale.

      Twilight of the old technologies

      Even the best implemented changes take time. In the case of large platforms such as S/4HANA, for example, the process can take years. Business Managers and their teams need to be prepared for a period of operating in different realities simultaneously. This is necessary to ensure business continuity, and it is worth taking the time to act in a comprehensive way because well-planned transformations, based on clearly defined business goals, produce long-term, measurable results and outcomes.

      Meet our experts


      Stewart Hicks

      Global Offer Lead for Generative Business Services (GBS), Capgemini’s Business Services
      As the Global Offer Lead for Generative Business Services (GBS) at Capgemini’s Business Services, Stewart helps clients assess, design, transform, and implement world-class GBS operating models. He is passionate about helping clients leverage the opportunities GBS can offer. Stewart has held leadership roles in Consulting, GBS and Outsourcing operations, Sales management, Project & change Management, and Process excellence. He has extensive experience in end-to-end client captive shared services, BPO engagements, and GBS transformation programs across enterprise domains and technologies.

      Wojciech Mróz

      Strategy & Transformation Director, Capgemini’s Business Services
      Wojciech is a senior leader with extensive experience in BPO/SSC delivery and transformation. He is actively engaged in the Generative Business Services (GBS) offer evolution at Capgemini’s Business Services and has held various positions across GBS transformation, F&A transformation and Business development. As a subject matter expert in automation, Wojciech has helped clients across the globe develop automation strategies and has delivered efficient automation programs. With a proven track-record of leading successful transition and transformation projects, Wojciech has a continuous improvement mindset and drive for optimizing business processes.

        Women’s Day special: Cyber angel

        Capgemini
        Mar 6, 2025

        Leading the charge: Puneeta on cybersecurity, inclusion, and building a future at Capgemini

        In today’s ever-evolving digital landscape, cybersecurity is crucial for safeguarding information and building trust. At Capgemini, leaders like Puneeta Chellaramani are at the forefront of this mission, bringing a unique blend of expertise, passion, and vision. In this Q&A, she shares her journey, the value of inclusion in cyber leadership, and her advice for those looking to join the Capgemini Cybersecurity team.

        1. What makes you proud to work at Capgemini?

        The variety of projects, clients, and cultures at Capgemini keeps my work exciting and fulfilling, and knowing we are helping organizations grow and solve complex cyber challenges is incredibly rewarding. Capgemini fosters an environment where everyone feels relevant and respected. The appreciation of everyone’s personal situation and making a flexible working environment thrive with balance is distinctive to Capgemini’s DNA, making it a unique and supportive place to work.

        2. How are you working towards the future you want?

        I’m diligently working towards the future I want at Capgemini by sticking to my value system and finding the right chord to strike with Capgemini’s values. Whether it’s picking up uncharted territories to grow business, building meaningful connections, or staying laser-focused in accelerating cyber business across APAC, I’m taking small, consistent steps to stay on track. I’m also embracing opportunities that align with my virtues and passions, helping me move closer to where I want to be.

        3. What value does inclusion bring to cyber leadership?

        Inclusion in cyber leadership isn’t just about representation – it’s about building a team capable of thinking outside the box and adapting to unforeseen challenges. In a field where threats are constantly evolving, having leaders from different walks of life brings a variety of strategies, insights, and approaches. This inclusion fosters a culture of resilience and innovation, where challenges are seen as opportunities for growth. It also ensures that cybersecurity solutions are well-rounded, addressing the needs of diverse users and creating a stronger and more proactive defense system.

        4. What advice would you give to someone joining Capgemini Cybersecurity?

        Life at Capgemini Cybersecurity is like an exhilarating adventure: you feel a rush of excitement as you reach new heights, followed by a burst of adrenaline that keeps you energized. There’s that light, joyful feeling in your stomach as you navigate through stimulating challenges, and the thrill of new experiences keeps you engaged. It’s a dynamic mix of enthusiasm and learning, making every moment enjoyable and rewarding!

        Empowerment and learning are at the heart of Capgemini Cybersecurity. You’ll find yourself in an environment that encourages you to embrace challenges and grow both personally and professionally. One of the standout initiatives is the Cyber Angels program, which mentors women seeking careers in cybersecurity, fostering a supportive and inclusive community.

        You’ll also have the opportunity to work with CyberPeace, a Geneva-based NGO, supporting non-profits in enhancing their digital security posture and resilience, and making a positive impact on society. This collaboration not only enhances your technical skills but also allows you to contribute to meaningful causes.

        My advice for someone joining Capgemini Cybersecurity is to embrace the challenges and build a community of trusted colleagues and clients. Be proactive – take ownership of your development and contribute your unique perspective to the team. Remember, the journey may be thrilling but it’s also incredibly rewarding and full of opportunities for growth and empowerment.

        If you are looking for a role in cybersecurity at Capgemini, please visit our career page.

        Puneeta Chellaramani

        Senior Director, Head of Cybersecurity Strategy and Growth, APAC
        With over 16 years of cyber experience across Zurich, Singapore, Dubai, and London, Puneeta now proudly calls Australia home. She has a strong management consultant background and extensive experience in accelerating cyber business growth. Puneeta advises clients across diverse industries, advocating a two-speed approach to navigating cyber, risk, legal, and AI-regulated environments. Passionate about cybersecurity mentorship, Puneeta leads many CSR initiatives. Outside of work, she enjoys music festivals and is a dedicated Pilates practitioner and coach.

          Mulder and Scully for fraud prevention:
          Teaming up AI capabilities

          Joakim Nilsson
          March 5, 2025

          Mulder trusts his gut; Scully trusts the facts – in fraud detection, we need both. Hybrid AI blends the intuition of an LLM with the structured knowledge of a knowledge graph, letting agents uncover hidden patterns in real time. The truth is out there – now we have the tools to find it.

          Fraud detection can be revolutionized with hybrid AI. Combining the “intuitive hunches” from LLMs with a fraud-focused knowledge graph, a multi-agent system can identify weak signals and evolving fraud patterns, moving from detection to prevention in real-time. The challenge? Rule sets need to be cast in iron, whereas the system itself must be like water: resilient and adaptive. Historically, this conflict has been unsolvable. But that is about to change.

          A multi-agent setup

          Large language models (LLMs) are often criticized for hallucinating: producing results that seem plausible but are plainly wrong. In this case, though, we embrace the LLM’s gut-feeling-based approach and exploit its capabilities to identify potential signs of fraud. These “hunches” are mapped onto a general ontology and thus made available to symbolic AI components built on logic and rules. So, rather than constricting the LLM, we rely on its language capabilities to spot subtle clues in text. Were we to act directly on these hunches, we would run into a whole world of problems derived from the inherent unreliability of LLMs. Instead, generating hunches is the task of a highly specialized team of agents, with other agents standing by, ready to make sense of the data and establish reliable patterns.

          When we talk about agents, we refer to any entity that acts on behalf of another to accomplish high-level objectives using specialized capabilities. They may differ in degree of autonomy and authority to take actions that can impact their environment. Agents do not necessarily use AI: many non-AI systems are agents, too. (A traditional thermostat is a simple non-AI agent.) Similarly, not all AI systems are agents. In this context, the agents we focus on primarily handle data, following predefined instructions and using specific tools to achieve their tasks.

          We define a multi-agent system as one made up of multiple independent agents. Every agent runs on its own, processing its own data and making decisions, yet staying in sync with the others through constant communication. In a homogeneous system, all agents are identical and their collective behavior solves the problem (as in a swarm). Heterogeneous systems, by contrast, deploy different agents with different capabilities. Systems that use agents (either single or multiple) are sometimes called “agentic” architectures or frameworks.

          For example, specialized agents can dive into a knowledge graph, dig up specific information, spot patterns, and update nodes or relationships based on new findings. The result? A more dynamic, contextually rich knowledge graph that evolves as the agents learn and adapt.
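As an illustration only (not code from any production system), the sketch below shows a symbolic agent enriching a tiny in-memory knowledge graph. The node names, the `lives_at` relation, and the shared-address flagging rule are all invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeGraph:
    nodes: dict = field(default_factory=dict)  # node id -> attribute dict
    edges: list = field(default_factory=list)  # (source, relation, target)

    def add_node(self, node_id, **attrs):
        self.nodes.setdefault(node_id, {}).update(attrs)

    def relate(self, src, relation, dst):
        self.edges.append((src, relation, dst))

    def neighbours(self, node_id, relation=None):
        return [d for s, r, d in self.edges
                if s == node_id and (relation is None or r == relation)]

def pattern_agent(graph):
    """Symbolic agent: flag anyone sharing an address with a flagged case."""
    flagged = {n for n, a in graph.nodes.items() if a.get("flagged")}
    for node in flagged:
        for addr in graph.neighbours(node, "lives_at"):
            for other, rel, dst in graph.edges:
                if rel == "lives_at" and dst == addr and other not in flagged:
                    graph.add_node(other, flagged=True,
                                   reason="shared address with flagged case")

g = KnowledgeGraph()
g.add_node("applicant_a", flagged=True)
g.add_node("applicant_b")
g.relate("applicant_a", "lives_at", "address_1")
g.relate("applicant_b", "lives_at", "address_1")
pattern_agent(g)
print(g.nodes["applicant_b"]["flagged"])  # True
```

A production system would use a graph database such as Neo4j rather than an in-memory structure, but the pattern is the same: agents read the graph, spot a relationship, and write an enriched node back.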

          The power is in the teaming. Think of agents Mulder and Scully from The X-Files television show: Mulder represents intuitive, open-minded thinking, while Scully embodies rational analysis. In software, there have always been many Scullys but, with LLMs, we now have Mulders too. The challenge, as in The X-Files, is making them work together effectively.
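The pairing can be caricatured in a few lines. This is a toy sketch: the “Mulder” agent stands in for an LLM (mocked here with keyword spotting), the “Scully” agent applies a hard co-occurrence rule, and all keywords and hunch labels are invented:

```python
def mulder_agent(text):
    """Intuitive agent: surfaces hunches from free text.
    A real system would call an LLM; this mock just spots keywords."""
    hunches = []
    if "cash" in text:
        hunches.append("possible undeclared income")
    if "same address" in text:
        hunches.append("possible collusion")
    return hunches

def scully_agent(hunches):
    """Analytical agent: escalates only when independent hunches co-occur."""
    return "escalate" if len(set(hunches)) >= 2 else "log"

report = "Applicant paid in cash and lists the same address as an open case."
print(scully_agent(mulder_agent(report)))  # escalate
```

The design point is the hand-off: Mulder is allowed to be unreliable because nothing he produces is acted on until Scully’s rule layer has weighed it.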

          The role of a universal ontology

          We employ a universal ontology to act as a shared language or, perhaps a better analogy, a translation exchange, ensuring that both intuitive and analytical agents communicate in terms that can be universally understood. This ontology primarily consists of “flags” – generic indicators associated with potential fraud risks. These flags are intentionally defined broadly, capturing a wide range of behaviors or activities that could hint at fraudulent actions without constraining the agents to specific cases.

          The key to this system lies not in isolating a single flag but in identifying meaningful combinations. A single instance of a flag may not signify fraud; however, when several flags emerge together, they provide a more compelling picture of potential risk.
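A minimal sketch of this idea, with flag names, weights, and threshold all invented for illustration: a single generic flag stays below the review threshold, while a combination of flags crosses it.

```python
FLAG_WEIGHTS = {
    "income_mismatch": 0.3,
    "shared_bank_account": 0.4,
    "rapid_resubmission": 0.3,
}
THRESHOLD = 0.6  # assumed value; in practice tuned per domain

def risk_score(flags):
    """Sum the weights of the distinct flags raised for a case."""
    return sum(FLAG_WEIGHTS.get(f, 0.0) for f in set(flags))

def needs_review(flags):
    """A single generic flag is weak evidence; combinations cross the bar."""
    return risk_score(flags) >= THRESHOLD

print(needs_review(["income_mismatch"]))                         # False
print(needs_review(["income_mismatch", "shared_bank_account"]))  # True
```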

          “This innovation shifts the approach from simple fraud detection to proactive prevention, allowing authorities to stay ahead of fraudsters with scalable systems that learn and evolve.”

          Hybrid AI adaptability

          The adaptability of the system lies in the bridging between neural and symbolic AI, as the LLM distills nuances in texts into hunches. These need to be structured and amplified for our analytical AI to be able to access them. As Igor Stravinsky wrote in Poetics of Music in the Form of Six Lessons, “Thus what concerns us here is not imagination itself, but rather creative imagination: the faculty that helps us pass from the level of conception to the level of realization.” For us, that faculty is the combination of a general ontology and vector-based similarity search. Together they allow us to connect hunches to flags based on semantic matching and thus address the data using general rules. Because we work in a graph context, we can also explore direct, indirect, and even implicit relations between the data.
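The semantic matching step can be sketched as a nearest-neighbour search over embeddings. The vectors below are hand-made toys; a real system would embed both the hunch text and the flag definitions with an embedding model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy embeddings for two ontology flags (invented for the example).
FLAG_VECTORS = {
    "income_mismatch": [1.0, 0.1, 0.0],
    "shared_bank_account": [0.0, 1.0, 0.2],
}

def nearest_flag(hunch_vector):
    """Map an embedded LLM hunch to the semantically closest flag."""
    return max(FLAG_VECTORS, key=lambda f: cosine(hunch_vector, FLAG_VECTORS[f]))

# A hunch like "declared salary looks inconsistent with lifestyle",
# embedded near the income dimension:
print(nearest_flag([0.9, 0.2, 0.1]))  # income_mismatch
```

This is what lets broadly defined flags absorb hunches the ontology authors never anticipated: the match is by meaning, not by exact label.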

          Now let’s explore how our team of agents picks up and amplifies weak signals, and how these signals, once interwoven in the graph, can lead the system to identify patterns spanning time and space, patterns it was not designed to identify.

          A scenario: Welfare agencies have observed a rise in fraudulent behavior, often uncovered only after individuals are exposed for other reasons, such as media reports. Identifying these fraud attempts earlier, ideally at the application stage, would be extremely valuable.

          Outcome: By combining intuitive and analytical insights, authorities uncover a well-coordinated fraud ring that would be hard to detect through traditional methods. The agents map amplified weak signals as well as explicit and implicit connections. Note also that the system was not trained on detecting this pattern; it emerged thanks to the weak signal amplification.

          One of the powers of hybrid AI lies in its ability to amplify weak signals and adapt in real time, uncovering hidden fraud patterns that traditional methods often miss. By blending the intuitive insights of LLMs with the analytical strength of knowledge graphs and multi-agent systems, we’re entering a new era of fraud detection and prevention – one that’s smarter, faster, and more effective. As Mulder might say, the truth is out there, and with the right team, we’re finally close to finding it.

          Start innovating now –

          Implement a universal ontology

          Create a shared ontology to bridge neural (intuitive) and symbolic (analytical) AI agents, transforming weak signals for deeper analysis by expert systems and graph-based connections.

          Form specialized multi-agent teams

          Build teams of neural (real-time detection) and symbolic (rule-based analysis) AI agents, each specialized with tools for their role.

          Leverage graph technology for cross-referencing

          Use graph databases to link signals over time and across data sources, uncovering patterns like fraud faster, earlier, and at a lower cost than current methods.

          Interesting read?

          Capgemini’s Innovation publication, Data-powered Innovation Review – Wave 9, features 15 captivating innovation articles with contributions from leading Capgemini experts, with a special mention of our external contributors from The Open Group, AWS, and UNESCO. Explore the transformative potential of generative AI, data platforms, and sustainability-driven tech. Find all previous Waves here.

          Meet the authors

          Joakim Nilsson

          Knowledge Graph Lead, Insights & Data, Client Partner Lead – Neo4j Europe, Capgemini 
          Joakim is part of both the Swedish and European CTO offices, where he drives the expansion of Knowledge Graphs. He is also client partner lead for Neo4j in Europe and has experience running Knowledge Graph projects as a consultant for both Capgemini and Neo4j, in both the private and public sector, in Sweden and abroad.

          Johan Müllern-Aspegren

          Emerging Tech Lead, Applied Innovation Exchange Nordics, and Core Member of AI Futures Lab, Capgemini
          Johan Müllern-Aspegren is Emerging Tech Lead at the Applied Innovation Exchange (AIE) Nordics, where he explores, drives and applies innovation, helping organizations navigate emerging technologies and transform them into strategic opportunities. He is also part of Capgemini’s AI Futures Lab, a global centre for AI research and innovation, where he collaborates with industry and academic partners to push the boundaries of AI development and understanding.

            Are data spaces the future?

            Capgemini
            Peter Kraemer, Phil Fuerst, Debarati Ganguly
            Mar 5, 2025

            Europe is building a data-driven economy in a changing geopolitical context. As it strives for both innovation and sovereignty, decentralized ecosystems offer a way to create value with data, while safeguarding freedom of choice.

            Data has the potential to transform processes, businesses, economies, and society by unlocking new kinds of value creation. It’s also how we are going to make AI work as a crucial component of the future European data economy—but only if that data is built on strong foundations that ensure its quality and relevance.

            Of course, value creation depends on the data that’s available to you, and you might not have all the data you need. That’s why data needs to be shared and combined. In this article, we consider how data spaces meet this need, offering what the Data Spaces Support Centre (DSSC) describes as the “ability to provide the essential foundations for secure and efficient data sharing”. While our focus in this article is on European data spaces, we recognize that this is becoming a relevant topic around the world.

            Why a decentralized data economy makes sense for Europe

            Data spaces are, in effect, decentralized ecosystems that have a powerful resonance in the world today. Indeed, recognizing their huge potential, the European Commission established a series of domain-specific/sectoral common European data spaces designed to help “unleash the enormous potential of data-driven innovation”.

            We see three main drivers for these common data spaces in Europe: geopolitics, commercials, and choice. In the first instance, in light of the unstable geopolitical landscape, data spaces give you assurance that all your (data) eggs aren’t in one basket. You select which datasets reside in which data space. Interoperability and portability can help avoid the dreaded lock-in effect, where changing from one service provider to another might be prohibitively complicated. Commercially, data spaces address exposure to potential monopolistic lock-in effects from individual companies cornering the market in data platforms. Then there’s the matter of choice: you choose who you interact with in a common data space, which puts you in control of who to share data with.

            Why we need data spaces

            Sharing data is key to data-driven growth. Indeed, it’s a vital aspect of the European strategy for data. But over-reliance on data platforms predominantly controlled by a limited number of international technology firms introduces potential vulnerabilities regarding data security, access, and strategic autonomy. We may also lose the ability to share data on our own terms, in accordance with our own values—freedom, privacy, control.

            An alternative future for Europe is to share data on a sovereign basis, across industries. That’s why we’re so excited to be working on the DSSC and on Simpl, the open-source, smart, and secure middleware platform that supports data access and interoperability among European data spaces.

            Beyond technology to value creation

            Let’s not forget that a data space is only an instrument. It’s what you do with it that matters. In a data space you will be able to aggregate, combine and correlate data that you can’t today because it is stored in different places. And that’s where we begin to create significant value from data, specifically in a number of areas, as follows.

            1. Global challenges: Data spaces will prove inordinately useful in tackling grand challenges that cut across sectors and geographies. Here we’re talking about achieving mission-oriented policy goals, such as reducing healthcare inequality and achieving net-zero/carbon-neutrality targets. For example, the European Health Data Space (EHDS) will be an enabler of patient empowerment, with better access to and control over health data. Further, increased reuse of EHDS data for research and policy making will improve public health interventions. A 2025 report from the World Economic Forum in collaboration with Capgemini suggests the EHDS could generate €5.5 billion in savings over ten years. We’ve already seen the huge value of data sharing in a global crisis: during the COVID-19 pandemic, our governments needed data from many areas at once to form policy – healthcare systems, pharma, mobility, employment, and economic data. There will be future pandemics.
            2. Innovation: Data spaces will undoubtedly contribute to data-driven innovation across the EU as it continues its mission to build the Single Market for Data. The European Commission states, “Common European Data Spaces will enhance the development of new data-driven products and services in the EU, forming the core tissue of an interconnected and competitive European data economy”. In this respect, the combination of data from different sources across sectors can produce fascinating new applications. Think, for example, of traffic flow in a city: observing vehicle movement and adjusting traffic lights accordingly can help avoid congestion, while monitoring parking lots eases the burden of finding a parking spot, possibly combined with a recommendation of a charging port for the car’s battery. The energy grid could then anticipate demand peaks and control energy distribution accordingly. The seamless integration of real-time public transport data can then be used to recommend the best option for getting from A to B.
            3. Efficiency: Data spaces will help in the more efficient use of resources and improve public services. A great example here is road surface observation. By correlating data from cars’ electrical sensors, it becomes possible to monitor, in real time, the deterioration of the road and carry out preventative maintenance to optimize spend and return. And returning to the healthcare sector, access to comprehensive patient histories in a shared data ecosystem has the potential to lead to better and faster diagnosis and treatment.
            4. Science and research: Shared data can create new evidence bases for scientific and medical research. Consider the following scenario: I drive to work in a convertible most days; the farmer of the field sprays an experimental fertilizer; later I develop neurological issues, but doctors are unsure how to treat them. In the future, we might be able to correlate this illness with exposure to the fertilizer by aggregating mobility data, air quality data, the times the farmer used the fertilizer, and the contents of that fertilizer.

            Questions at the edges of our data economy

            The potential for value is clear, but there are numerous challenges still to overcome—and they are not principally digital ones. One unknown factor is what it will cost to set up and run a common data space. At this point we don’t have an adequate way to price data, so this question remains unanswered. Other questions include: How can we quantify the value of new data-driven business models vs traditional business models? And how can we pinpoint the strengths and weaknesses of data ecosystems and technologies?

            The answer to all of these questions at present is that we are all on a journey with common data spaces. We improve every day and the answers will come. But it is hard to imagine that the massive contribution of sharing data to the common good will not outweigh the costs and barriers that need to be overcome.

            Above all, the decentralized model depends on participants’ willingness to share data. That means they must trust the other participants and the infrastructure. There is no other way to build trust except enabling people to say no. Letting people choose in itself invites trust.

            Europe can do data differently

            Data spaces are a way for Europe to reap the benefits of data for economic growth and positive societal outcomes, while affirming European values in the digital domain. They remain an integral part of the European strategy aiming to make the EU a leader in a data-driven society.

            Find out more

            Peter Kraemer will speak about the future of data sharing in Europe at the Data Spaces Symposium in Warsaw on 11-12 March. Register at https://www.data-spaces-symposium.eu/

            Authors


            Peter Kraemer

            Director Data Sovereignty Solutions, Capgemini
            “A European data economy based on openness, fairness and transparency is possible, and we are determined to help make it a reality. In a flourishing data economy, all sectors will have new ways to generate value. Sovereignty means making independent and well-informed decisions about our digital interactions: where data is stored, how it is processed, and who can access it. Data spaces make these principles concrete, and we are committed to helping them grow.”

            Dr. Philipp Fuerst

            VP Data-Driven Government & Offer Leader, Global Public Sector
            “Government CIOs and IT experts barely need convincing of the benefits of interoperability. What has been missing is explicit guidance on the necessary non-technical requirements. The Interoperable Europe Act helps with exactly that. What’s more, with a critical mass of collaborators, individual public sector agencies will find that their investments into interoperable and sharable solutions will result in much bigger returns.”

            Debarati Ganguly

            Director, Data & AI – Global Public Sector
            Debarati is a seasoned expert in Data-Driven Government, specializing in data ecosystems, governance, and AI-driven analytics for the public sector worldwide. She collaborates with leaders and AI specialists to drive strategic initiatives, ensuring ethical, sovereign, and anonymized data solutions. Her expertise helps governments and citizens unlock the true value of data, enhancing decision-making, service delivery, and overall public benefit through AI and Generative AI innovations.

              Unlocking the potential of 5G private networks
              Insights from Capgemini and AWS

              Nilanjan Samajdar
              Feb 28, 2025

              Improved enterprise connectivity with enhanced control, reliability, and customization tailored to unique business needs. Capgemini and AWS showcase 5G Network Go at MWC 2025.

              The advent of 5G private networks is transforming enterprise connectivity, offering tailored services that meet unique business needs. Unlike public 5G networks, private networks provide enterprises with enhanced control, reliability, and customization. For Communications Service Providers (CSPs), this represents a significant opportunity. McKinsey estimates the global market for 5G private networks could reach $20–30 billion by 2030.

              However, CSPs face challenges in selling 5G private networks to enterprises. Industries like manufacturing expect seamless integration with their existing operational systems. Furthermore, the rise of AI and software-based approaches in manufacturing and other sectors is paving the way for Industry 4.0. Enterprises are looking for pre-integrated solutions that offer clear ROI, not just connectivity. This creates an opportunity for CSPs to collaborate with system integrators and technology providers, like Capgemini.

              The shift in CSP service models

              Integrating 5G private networks into enterprise operations marks a significant shift from traditional service models for CSPs. By partnering with technology experts and system integrators, CSPs can offer comprehensive solutions that go beyond mere connectivity. This collaborative approach allows CSPs to leverage their network expertise while benefiting from their partners’ specialized knowledge in digital transformation and cloud services.

              However, despite the clear benefits, enterprises have been cautious about adopting 5G and edge technologies. Key challenges include complex integration requirements, unclear ROI, security concerns, and scalability limitations.

              Introducing ‘5G Network GO’

              To address these challenges, Capgemini and AWS have developed 5G Network GO, a solution that simplifies the adoption of 5G private networks and edge solutions for enterprises.

              5G Network GO focuses on cross-industry use cases and practical value creation, helping enterprises understand and benefit from 5G – without being overwhelmed by its technical aspects.

              Key features of 5G Network GO

              Figure 1: 5G Network GO – Combining the best of CSP Tech, AWS, Capgemini

              • Capgemini’s expertise in digital transformation: Leveraging years of experience in IT/OT convergence, Capgemini helps enterprises identify high-value use cases where 5G and edge technologies can drive tangible business outcomes. Capgemini’s Intelligent Edge Application Platform (IEAP) is combined with the CSP’s choice of 5G core and RAN to provide a flexible and scalable connectivity and orchestration platform. The solution also comes with a pre-integrated Capgemini 5G core and partner (HTC) RAN, in an ‘All-in-a-Box’ offering.
              • AWS’s cloud leadership: AWS enhances the solution with its scalable cloud offerings, particularly AWS Outposts, which bring AWS capabilities into organizations’ own data centers. This enables secure, local operation of AI and generative AI tools, while maintaining full analytical capabilities. The AWS Outposts compute platform gives the solution a hybrid compute model that can host enterprise use cases and applications on-premises while retaining the elasticity of the AWS cloud.

              Case study: enhancing manufacturing with 5G

              Consider an electronics manufacturer aiming to use 5G technology to enhance its production line by detecting printed circuit board (PCB) defects before pre-wave soldering. Defects fixed after soldering are five times costlier, and those caught in final testing ten times costlier. Traditional automated optical inspection (AOI) machines are expensive, and their cost scales linearly with production volume.

              5G Network GO uses 5G-connected camera rigs and a computer vision application running on an AWS Outposts server, achieving 99% defect detection accuracy. Scaling up requires only additional camera rigs, keeping costs low, and new PCB models can be introduced simply by updating the application, giving the manufacturer lower costs and faster model changeovers.
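              The inspection step described above can be sketched in a few lines. This is a minimal illustration, not the actual 5G Network GO implementation: the classifier is a stub standing in for a trained computer-vision model, and the function and threshold names are assumptions for the example.

```python
# Illustrative sketch of the pre-soldering inspection step.
# classify_defect is a placeholder for a trained computer-vision model
# that would run on the AWS Outposts server in the described setup.

def classify_defect(region_scores):
    """Stub classifier: flags a board when any image region's
    defect score exceeds a confidence threshold."""
    THRESHOLD = 0.9  # illustrative confidence cutoff
    return any(score >= THRESHOLD for score in region_scores)

def route_board(board_id, region_scores):
    """Divert defective boards before pre-wave soldering, where a
    fix is cheapest; pass clean boards through to soldering."""
    if classify_defect(region_scores):
        return (board_id, "rework")  # fix now: 5x cheaper than post-soldering
    return (board_id, "solder")

# Board 101 has a high-confidence defect region; board 102 is clean.
print(route_board(101, [0.1, 0.95, 0.3]))  # (101, 'rework')
print(route_board(102, [0.1, 0.2, 0.05]))  # (102, 'solder')
```

              Scaling then means adding camera rigs that feed this same routine, and supporting a new PCB model means swapping the classifier, which is the cost advantage the case study describes.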

              Driving value for CSPs

              For CSPs, deploying 5G private networks offers more than immediate revenue opportunities. By positioning themselves as key enablers of digital transformation, CSPs can build long-term relationships with enterprises, offering ongoing support and additional services, like network management, security solutions, and advanced analytics. This strategic positioning allows CSPs to differentiate themselves in a competitive market and drive sustained growth.

              Conclusion: it’s so much more than coverage

              While 5G private networks hold significant potential, realizing their full benefits demands partnership, creative solutions, and a dedicated focus on business requirements. Capgemini and AWS are streamlining the technology while transforming how enterprises implement and leverage 5G and edge capabilities. As 5G adoption continues to grow, solutions like 5G Network GO will be instrumental in driving industrial transformation and reshaping enterprise connectivity.

              The next generation of connectivity isn’t just about coverage; it’s about generating tangible business value.

              Find out what you can do with 5G Network GO

              Meet us at Mobile World Congress 2025 between March 3-6 at booth 2K21 in Hall 2 to experience the demo, or reach out to:

              Capgemini Engineering: Nilanjan Samajdar 

              AWS: Arun Selvaraj or April Scoville

              TelcoInsights is a series of posts about the latest trends and opportunities in the telecommunications industry – powered by a community of global industry experts and thought leaders.


              Meet the author

              Nilanjan Samajdar

              Senior Director – Technology, CTO Connectivity office, Capgemini Engineering
              Nilanjan is a seasoned architect with over 20 years of experience in wireless telecom software development and R&D. As part of the CTO Connectivity Team, Technology and Architecture group, he architects solutions for “applied” use cases around 5G private networks and edge computing.

                The rise of the Dark NOC
                A new era in network operations

                Nikhil Gulati
                Feb 27, 2025

                The growing complexity of enterprise and consumer demands necessitates an urgent transformative approach to network operations.

                Discover how the use of Agentic AI in network operations will usher in a new era to:

                1. Manage, optimize, and secure complex network infrastructures
                2. Improve network resilience and serviceability
                3. Simplify operations and drive down operating costs

                In this context, “agentic” refers to the ability of AI systems to act independently, make decisions, and carry out tasks with minimal human intervention.

                Market Context:

                A growing focus on B2B enterprise business for Industry 4.0 (intelligent industry and digital transformation), together with evolving consumer demands, is reshaping how Communications Service Providers (CSPs) approach their network and connectivity services business and service delivery.

                Consider a financial institution that requires an on-demand, secure multi-cloud interconnect capable of scaling with trading volumes to ensure uninterrupted transactions. On the consumer side, the explosive growth of metaverse applications, cloud gaming, and immersive streaming has set new expectations for seamless, adaptive connectivity with near-zero latency.

                Today’s customers expect on-the-fly configurability: the ability to choose, modify, and scale services instantly, akin to an à la carte experience. This shift puts immense pressure on CSPs to deliver Network as a Service (NaaS), and on network operations teams to deliver tailored offerings while ensuring that Network Operations Centers (NOCs) maintain consistent, high performance and become more agile, flexible, and resilient than ever before.

                Enterprises seek bespoke network solutions with dynamic bandwidth allocation, ultra-low latency, and seamless multi-cloud connectivity—all underpinned by stringent SLAs.

                Navigating the Complexity of Modern Networks:

                The accelerating pace of network technology advancement introduces both unprecedented flexibility and operational complexity, which means CSPs must respond with greater speed and agility:

                • Cloud-native service models requiring real-time orchestration of microservices across hybrid infrastructures.
                  • IDC forecasts that by 2025, 90% of enterprises will rely on hybrid cloud environments.
                • 5G and 6G network slicing, enabling hyper-customized connectivity with rigorous SLA management demands.
                  • The GSMA projects that 5G connections will surpass 2 billion globally by 2025, driving demand for advanced network slicing capabilities
                • MEO/LEO satellite constellations, extending connectivity to remote regions while introducing new orchestration challenges.
                • IoT and edge computing proliferation, creating distributed network intelligence that requires robust management and security.

                CSPs are tasked with ensuring seamless service delivery and real-time assurance on an unprecedented scale across heterogeneous technology stacks. This presents challenges that traditional NOCs, designed for static, monolithic networks, simply cannot meet. The need for a transformative approach to network operations is urgent.

                Hence the need for a ‘Dark NOC’. For the uninitiated, a Dark NOC is an autonomous network management system that automates critical functions, eliminating the need for constant human oversight. It enhances reliability in complex, multi-vendor networks and offers network and communications providers a cost-effective way to monitor networks and deliver superior performance and customer experience with reduced operational strain.

                Capgemini’s Dark NOC: Redefining Network Operations

                Capgemini’s Dark NOC solution represents a paradigm shift in how networks are managed.

                By harnessing Agentic AI-driven automation and an intelligent operational framework, Dark NOC ensures proactive network management with:

                • End-to-end visibility for comprehensive oversight across complex, multi-domain environments.
                • Zero-touch resolution capabilities, enabling autonomous incident detection and remediation.
                • Resilient, self-sustaining network assurance, reducing operational inefficiencies and mitigating the risk of service disruptions.

                These AI agents will not just interact with humans or devices directly, but will also be able to discover, learn, and collaborate with each other to form complex workflows to analyse and automate network / business functions.
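                The agent-to-agent collaboration described above can be pictured as a simple publish/subscribe handoff. The sketch below is purely illustrative, not Capgemini’s Dark NOC implementation: the event bus, agent names, and the 80% utilization threshold are all assumptions made for the example.

```python
# Illustrative sketch: two hypothetical agents chained into a workflow.
# A monitoring agent publishes events; a remediation agent subscribes
# and acts on them without human involvement.

class EventBus:
    """Minimal in-process pub/sub bus connecting agents."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, event):
        # Deliver the event to every subscribed agent; collect their actions.
        return [h(event) for h in self.subscribers.get(topic, [])]

def monitoring_agent(bus, link, utilization):
    """Flags congested links and hands off to whichever agents
    handle the 'congestion' topic."""
    if utilization > 0.8:  # illustrative congestion threshold
        return bus.publish("congestion", {"link": link, "util": utilization})
    return []

def remediation_agent(event):
    """Zero-touch action: reroute traffic away from the congested link."""
    return f"rerouted traffic off {event['link']}"

bus = EventBus()
bus.subscribe("congestion", remediation_agent)
actions = monitoring_agent(bus, "core-link-7", 0.93)
print(actions)  # the remediation agent's action, triggered autonomously
```

                The point of the pattern is that neither agent knows the other directly; new agents (security, capacity planning) can subscribe to the same topics, which is how complex workflows form from simple parts.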

                In network operations, agentic AI can provide significant value across various tasks, such as:

                1. Network Monitoring and Management:
                  • AI agents can autonomously monitor network traffic, performance, and health, identifying anomalies or patterns that could indicate issues like congestion, failures, or security breaches.
                  • With predictive capabilities, these agents can foresee potential network disruptions or capacity issues before they arise, allowing for preemptive adjustments.
                2. Dynamic Routing and Traffic Optimization:
                  • AI agents can dynamically adjust routing paths based on real-time data, optimizing network traffic flow for efficiency and cost-effectiveness.
                  • This includes automatically selecting the best routes and managing traffic to minimize latency or packet loss.
                3. Security and Threat Detection:
                  • Network security can benefit from agentic AI through continuous monitoring for potential cybersecurity threats (like DDoS attacks, data breaches, or malware).
                  • AI can autonomously apply mitigation techniques such as firewall rule updates, intrusion detection/prevention, and threat intelligence sharing.
                4. Fault Diagnosis and Recovery:
                  • When a network component fails, AI agents can quickly identify the root cause and initiate remediation actions, such as rerouting traffic, applying patches, or coordinating with other systems for repair.
                  • The goal is to minimize downtime and maintain service continuity with as little human involvement as possible.
                5. Automation of Routine Tasks:
                  • Routine tasks such as configuring devices, scaling resources up or down, and applying patches or updates can be automated by agentic AI systems, freeing up human operators for higher-level strategic work.
                6. Machine Learning for Optimization:
                  • With continuous learning, AI agents can optimize network performance by adapting to changing conditions. For example, over time, they can learn the optimal network configurations based on historical data and usage patterns.
                7. Self-Healing Networks:
                  • One of the ultimate goals of agentic AI in network operations is to create self-healing networks. These networks can automatically detect and resolve issues, reconfigure their architecture, and optimize performance without manual intervention.
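                As a concrete illustration of the monitoring and self-healing tasks above, a minimal anomaly check might baseline recent traffic readings and flag strong deviations. This is a sketch under stated assumptions, not a production algorithm: the z-score threshold, the metric names, and the reroute action are all illustrative.

```python
import statistics

def is_anomalous(samples, latest, z_threshold=3.0):
    """Flag a traffic reading that deviates strongly from the
    recent baseline (simple z-score test; threshold is illustrative)."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > z_threshold

def self_heal(metric, samples, latest):
    """One step of an agent loop: detect, then remediate
    without human intervention."""
    if is_anomalous(samples, latest):
        return f"anomaly on {metric}: initiating reroute"
    return "nominal"

baseline = [100, 102, 98, 101, 99, 100]  # recent link throughput (Mbps)
print(self_heal("link-throughput", baseline, 100))  # nominal
print(self_heal("link-throughput", baseline, 250))  # anomaly -> remediation
```

                A real Dark NOC would replace the z-score test with learned models and the reroute string with an orchestration call, but the detect-decide-act loop is the same shape.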

                Dark NOC Agents that will operationalize the augmented Network operations of the future:

                Dark NOC Agents

                We’ll delve into the architecture and deployment strategies behind Dark NOC at MWC’25, booth 2K21 in Hall 2. Join us to experience live demonstrations of Dark NOC in action and engage with our experts on how Capgemini can help future-proof your network operations.

                MWC Barcelona 2025

                March 3 – 6 | Booth #2K21, Hall 2 | Fira Gran Via, Barcelona


                Meet the author

                Nikhil Gulati

                Head of Intelligent support and services
                Nikhil is a results-oriented professional with extensive experience in IT/Telecom, Project Management, Software Development/support, Client Relationship Management, Business development and operations, and Pre-Sales.