
Welcome to the agentic era

Herschel Parikh
21 Mar 2025

Forget chatbots. The age of the agent is here. Imagine a digital workforce that understands, empathizes, and anticipates customer needs as a trusted advisor – a network of AI agents collaborating to deliver truly human-centric experiences.

This isn’t science fiction; it’s the dawn of the Agentic AI era, and it’s poised to revolutionize customer interactions. Market.us is projecting the global Agentic AI market will be valued at $196.6 billion by 2034, a dramatic leap from $5.2 billion last year. This exponential growth is not just exciting; it signals a fundamental shift. While the possibilities are vast, companies must move beyond simply creating “cool agents” to building robust, collaborative systems.

Agentic AI is rapidly evolving, and the conversation needs to shift towards building networks of interconnected AI agents. This next stage, focusing on multiagent systems, is where real value will be unlocked. 

Next-level hyper-personalization: The game changer 

The true power of multiagent systems lies in their ability to deliver hyper-personalized experiences. Imagine AI agents seamlessly orchestrating across different business areas, instantly accessing client information to tailor interactions in real-time. This level of hyper-personalization, incorporating individual preferences, creates a genuine sense of personal connection. 

Multiagent systems represent the next evolution in personalized interactions. We’ve moved beyond deterministic chatbots and automated processes to a realm where embedded generative AI enables faster, more personalized interactions that build loyalty and connection. The impact is already evident: according to the Capgemini Research Institute, 31 percent of organizations using generative AI see faster response times, and 58 percent anticipate further improvements. 

Efficiency and beyond: Connecting agents across departments 

Beyond enhancing customer experience, connecting agents across departments drives efficiency and productivity through automated, complex workflows. The ability for agents to communicate and operate seamlessly at faster speeds across departments unlocks significant potential. 

This also expands service capabilities. For example, overcoming language barriers in global call centers becomes possible with multilingual digital agents. Research indicates that 60 percent of consumers would pay more for premium customer service, highlighting the value of these enhanced capabilities. Google’s Customer Engagement Suite (CES) provides the AI technology and natural language processing (NLP) that can deliver these enhanced customer experiences.

Connecting agents and data: Unlocking deeper insights 

Multiagent systems generate valuable data on information and conversations, which, when shared, provides a deeper understanding of customer behavior and trends. 

This data spans various departments – sales, order management, supply chain, ERP, and marketing – highlighting that inquiries rarely fit neatly into departmental silos. The ability for agents to access data across these silos is crucial for providing cohesive responses to complex customer questions.

This is why cross-department collaboration is crucial. Agents need seamless handoffs and access to different departments so that when a person engages with them, the conversation continues without waiting for the next agent to be updated. 

However, simply opening up data is not enough. Robust security protocols are necessary to ensure that not all information is accessible to every agent. Agents must pull information in a way that maintains visibility, requiring a deep understanding of systems for effective deployment. Data security and privacy are paramount. Accessing various data sources requires clear guidelines and governance to ensure compliance with existing data rules. 
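As a rough illustration of the governed-access idea above, the policy layer can be sketched in a few lines of plain Python. Everything here is hypothetical – the agent names, data sources, and `fetch` helper are illustrative, not part of any product API:

```python
# Illustrative sketch only: a single choke point through which every
# agent's data access passes. Agent names, sources, and the fetch()
# helper are hypothetical assumptions, not a real API.

AGENT_PERMISSIONS = {
    "sales_agent": {"crm", "product_catalog"},
    "billing_agent": {"crm", "invoices"},
    "support_agent": {"crm", "tickets"},
}

audit_log = []  # preserves visibility into cross-silo access


def fetch(agent: str, source: str, query: str) -> str:
    """Check the agent's permissions, log the access, then read."""
    if source not in AGENT_PERMISSIONS.get(agent, set()):
        raise PermissionError(f"{agent} may not access {source}")
    audit_log.append((agent, source, query))
    return f"results from {source}"


print(fetch("billing_agent", "invoices", "open invoices for account 42"))
```

Routing every read through one governed function keeps access reviewable and makes compliance a property of the system rather than of each individual agent.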

Agentic change management: Blending the human workforce with the “digital workforce” 

Ideally, digital and human workforces will seamlessly blend, working in unison on daily tasks and customer interactions. Generative AI will continuously learn from feedback and algorithms, while large language models adapt. However, potential biases must be addressed to ensure fairness. 

Companies must also address the impact of multiagent systems on the human workforce. Clear communication early in the process can prevent resentment toward AI agents. Reassuring employees is a crucial part of change management. If employees fear job losses, they will be less inclined to engage with companies using AI agents. Multiagent systems offer exciting possibilities, but everyone must be part of the solution to maximize the benefits. 

Building a resilient agentic infrastructure 

Agentic AI does not mean creating a single, all-encompassing agent. Companies must prioritize resilience. Humans have bad days, and so can AI agents. If a single agent fails, the entire operation can grind to a halt. A multiagent system allows agents to focus on specific areas, ensuring that if one fails, others remain unaffected. 

The challenge for companies lies in the complexity of the infrastructure required for seamless agent communication. While technology is increasingly sophisticated, the talent to make it work is scarce. Companies need the right skills to build and effectively operate these agentic systems. 

Google’s Agentspace is an orchestration platform that allows companies to deploy agents easily. The Google ecosystem integrates seamlessly with any system, ensuring smooth information flow, regardless of whether a company is using Google applications and infrastructure. 

Working with Google Cloud, Capgemini can support customer service transformation that creates seamless, quality interactions that deliver an exceptional level of service, support, and delight to all stakeholders. Advanced AI capabilities and scalable infrastructure mean Google Cloud can build and deploy intelligent virtual agents, enhance agent productivity, and personalize customer experiences easily. We can leverage the power of Google’s Customer Engagement Suite to innovate for growth and reinvent business models to unleash what is possible.

Author

Herschel Parikh

Global Google Cloud Partner Executive
Herschel is Capgemini’s Global Google Cloud Partner Executive. He has over 12 years’ experience in partner management, sales strategy & operations, and business transformation consulting.


    Navigating the roadmap to AI agents

    James Housteau
    Mar 21, 2025

    Call centers are seeing gains but reliability and consistency need to be a focus. Adopting a copilot approach is the best way to ensure real efficiencies and positive customer experience. 

    It has been said that AI agents could be a multi-trillion dollar opportunity. Intelligent software agents capable of learning to manage actions and tasks have the potential to transform almost everything. Work and life will be impacted by the drive for productivity and efficiency. But AI agents will also democratize access and help overcome barriers to empower more people and drive innovation.

    The road to agentic AI is still being built and there will be many routes to explore but, today, one of the biggest pushes is coming from the telecommunications industry. Call centers have been early adopters for this kind of generative AI because it is a natural evolution from bots. Existing chat features were interesting but they do not always work well, or customers were so annoyed by the menu systems on phones that they were unhappy before they even spoke to someone.

    Moving beyond bots

Generative AI brings a much better experience to the call center structure, and it enhances existing technology. For example, Google’s Customer Engagement Suite (CES) was built on its Contact Center AI and enhanced with generative AI technology. It offers better engagement with customers in both the chat channel and live. With the emerging capabilities of large language models (LLMs) and the growth of companies like OpenAI, AI agents can take on expanded tasks.

    Creating a multi-modal experience allows an AI agent like Google Gemini to intake text, visuals, and audio, and add a communications layer through actual voice and text-to-voice features that are extremely realistic.

    Combining Gen AI, language features, the ability to understand a vast amount of context instantly, and better and more human communication with text-to-voice capabilities creates systems with huge potential.

    Enhancing the AI agent

    We have recently launched the concept of thinking models that are capable of handling much more complex tasks. This is achieved through reinforcement learning based on human feedback, which means these models can actually think, process, and approach problems from multiple angles and explore different paths to find the best solution. It is very reminiscent of how a human would work to solve a problem.

    Agentic AI has the capability to not only understand what a customer needs but to communicate in our own language with the right nuances and even slang. Communication is there. The thought process is there. The ability of AI agents to think through problems at length is there. And they bring the ability to use tools during interactions.

    For example, a customer calls in with multiple inquiries. The AI agent can quickly understand the intent of the call so there is no longer a need to sift through menus or listen to a bunch of options. Because the intent is read in the early stages of the call, the problem resolution process operates better, as the AI agent has the information to solve the problem and the tools to execute it.

    The right AI team

    After detecting the true intent of a customer call, a master AI agent can act as the interactive layer with a customer, while simultaneously accessing a team of subagents to delegate tasks. The subagents can specialize in different areas, like billing issues or new installations. There is no more waiting on hold to be transferred to a different department or a manager. The master agent can access a whole host of tools and know what it needs to take action.

    For example, a customer may want to process a payment. The master agent can identify the request and decide how to proceed. It can give a credit, research a billing discrepancy, or initiate other searches to complete the request. It can call different APIs to get information, update the account, and process the bill. With access to tools, there is really no end to what an AI agent can do.
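The master/subagent routing pattern described above can be sketched in plain Python. Everything here is illustrative: the keyword-based `detect_intent` stands in for an LLM intent model, and the subagent classes are hypothetical stand-ins for real tool-backed agents:

```python
# Hedged sketch of the master-agent pattern: detect intent once,
# then delegate to a specialist subagent. All names are hypothetical.

INTENT_KEYWORDS = {
    "billing": ["payment", "bill", "credit", "refund", "charged"],
    "installation": ["install", "setup", "new service"],
}


def detect_intent(utterance: str) -> str:
    """Naive keyword matching standing in for an LLM intent model."""
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in text for k in keywords):
            return intent
    return "general"


class BillingAgent:
    def handle(self, utterance: str) -> str:
        # A real subagent would call billing APIs here.
        return "Routing to billing tools: issue credit or research discrepancy"


class InstallationAgent:
    def handle(self, utterance: str) -> str:
        return "Scheduling a technician visit"


class GeneralAgent:
    def handle(self, utterance: str) -> str:
        return "Escalating to a human agent"


class MasterAgent:
    """Interactive layer that delegates tasks to specialist subagents."""

    def __init__(self):
        self.subagents = {
            "billing": BillingAgent(),
            "installation": InstallationAgent(),
            "general": GeneralAgent(),
        }

    def handle_call(self, utterance: str) -> str:
        intent = detect_intent(utterance)
        return self.subagents[intent].handle(utterance)


master = MasterAgent()
print(master.handle_call("I was charged twice on my last bill"))
```

Because intent is read once at the start of the call, the customer never re-explains the problem as it moves between specialists – only the delegation changes hands.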

These reasoning capabilities and tools mean agents can do very similar things to humans. However, it is still early days, and there are concerns to be addressed – reliability and consistency chief among them. Monitoring and evaluation are improving to help ensure that the responses and decisions made by AI agents are correct.

    Improving the call center experience

We worked with one telco client to deliver better knowledge search, leveraging LLMs to apply new methods of data acquisition and summarization. The goal was to make technical documentation more accessible so that when someone calls to troubleshoot a modem, for example, the answer is readily available.

    Call centers are also a common sales channel. Agents can provide additional information or offer specific deals. That requires the agent to understand the needs of the customer, align them with a product, make an offer, address objections, and close the deal. Now an AI sales agent can interview the customer to understand the needs and wants and match them with potential solutions. They can even address objections and concerns to help get to the sale.

    Copilots: Finding the agentic balance

According to a recent Capgemini Research Institute report, being an agent is not an overly satisfying career choice, with only 16 percent of the human agents surveyed reporting overall satisfaction with their roles. They face a number of pressures, from rising customer expectations to inefficient systems and a high attrition rate. There are efficiency gains to be made by employing AI agents that can help humans do a better job. In addition, AI agents can help resolve issues more quickly so that both the customer and the employee have positive experiences.

    This is why the copilot effect is a popular AI agent option. Google has Agent Assist to support live agents to resolve queries and issues more quickly. It is like having an expert in the room at all times with a call center agent. For example, the human agent can use it to help digest what is being said, with information automatically appearing on dashboards to assist with the call resolution. The copilot can also provide real-time assessments of the sentiment of the caller. Now the human agent has prompts with potential resolutions, rather than having to bounce between different systems for information or consult with a manager.

    The human in the middle

    So the concept of the human in the middle is very important. AI agents are a powerful tool meant to enhance experience, but sometimes a model can hallucinate or produce an error – and a company is responsible for an AI agent’s output. That means companies have to own the net result. So employing copilots with the human in the middle is happening even in new call centers. Once a system is proven, the role of AI agents can expand but, since call centers have a major impact on customer experience, there needs to be a high level of comfort with the system.

Call centers that use Google’s Customer Engagement Suite (CES) engage customers with generative AI for many tasks, like determining what a client needs and other lower-level processes, to make calls more efficient and reach resolution more quickly. AI agents can, for example, engage with back-office operations so humans can focus on higher-value tasks.

It takes time for companies to be comfortable with exploring generative AI solutions. Companies need to focus on the business case and ensure innovation results in efficiency and savings.

Working with Google Cloud, Capgemini can help companies move into the agentic future. We can help companies build a competitive edge with agents to drive real customer service transformation. Google Cloud’s advanced AI capabilities enable businesses to build and deploy intelligent virtual agents easily. It is time to create a frictionless environment for scaling agents, where everything supports the needs of the organization and its customers.

    Join us at Google Cloud Next to discover how we’re helping companies embrace the agentic era and benefit from the intersection of innovation and intelligence.

    Author

    James Housteau

    Head of AI | Google Cloud Center of Excellence
    Over two decades in the tech world, and every day feels like a new beginning. I’ve been privileged to dive deep into the universe of data, transforming raw information into actionable insights for B2C giants in retail, e-commerce, and consumer packaged goods sectors. Currently pioneering the application of Generative AI at Capgemini, I believe in the unlimited potential this frontier holds for businesses.


      Unlocking the power of data with SAP Business Data Cloud and Databricks

      Frank Gundlich and David Allison
      14 Apr 2025

      Capgemini, a data- and analytics-first organization and global launch partner for SAP Business Data Cloud (BDC), understands the value of integrating data and AI foundations from the earliest stages of a business transformation.

      With the recent acquisition of Syniti, we further empower organizations in utilizing data and AI as the cornerstone of their business transformation.

      Enterprises across the globe are leveraging data and AI to drive insights, enable intelligent processes, and foster innovation to improve customer intimacy, drive new routes to market, expand business models, and reduce total cost of ownership (TCO).

      As businesses strive to stay competitive, the integration of advanced data management and AI capabilities becomes paramount. SAP, a leader in enterprise software, has taken a significant step forward with the launch of SAP Business Data Cloud and the SAP Databricks solution, propelling them ahead of their competitors with a 360-degree view of enterprise data and AI.

      The importance of data in business transformation

      SAP has long been at the forefront of helping businesses manage their data. With the introduction of SAP Business Data Cloud, SAP is redefining how enterprises harness their data. The SAP Business Data Cloud solution unifies and governs all SAP data while seamlessly connecting with third-party data. By integrating SAP Datasphere, SAP Analytics Cloud (SAC), and SAP Business Warehouse (BW) alongside Databricks, SAP Business Data Cloud delivers a unified experience that empowers businesses to make informed decisions.

      SAP Business Data Cloud: A new era in data management

      SAP Business Data Cloud represents a paradigm shift in enterprise data management. Together with Capgemini’s data-first methodology, it provides a trusted and harmonized data foundation, ensuring high-quality data that businesses can rely on. This foundation is crucial for driving impactful decisions and fostering innovation.

      One of the standout features of SAP Business Data Cloud is its ability to deliver fully managed SAP data products across all business processes. These curated data products align with a highly optimized and unified “one domain” model, maintaining their original business context and semantics. This means businesses get immediate access to high-quality data without the hidden costs of rebuilding and maintaining data(base) extracts.

Additionally, SAP Business Data Cloud offers a suite of pre-built analytical applications, known as Insight Apps. As a global launch partner for SAP Business Data Cloud, Capgemini has been working closely with SAP and partners like Syniti, Databricks, and Collibra to integrate our knowledge into these apps. Insight Apps incorporate pre-defined metrics, AI models, and planning capabilities, simplifying how businesses connect and integrate every part of their operations. This accelerates use cases aligned with critical business functions, including ERP, spend, supply chain, HR, customer experience, and finance.

      SAP Databricks: Enhancing AI and data engineering

      As a Databricks Partner of the Year award winner, we are excited by the integration of Databricks into SAP Business Data Cloud, as it marks a significant milestone in enterprise data management. This partnership brings the power of Databricks directly into the SAP ecosystem, enabling businesses to leverage advanced data engineering and AI capabilities to support the integration of SAP data into the enterprise ecosystem, both internally and externally.

      Databricks empowers data professionals to accelerate AI models and generative AI applications on their business data. Native capabilities like Delta Sharing harmonize SAP data products with existing lakehouses bidirectionally. This zero-copy approach allows businesses to apply advanced AI and machine learning models to various use cases, such as predicting payment dates on open receivables, without the need for complicated ETL pipelines.
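For context on how this zero-copy consumption typically works, a Delta Sharing recipient authenticates with a small profile file; the endpoint and token below are placeholders, not real credentials:

```json
{
  "shareCredentialsVersion": 1,
  "endpoint": "https://sharing.example.com/delta-sharing/",
  "bearerToken": "<token-issued-by-the-data-provider>"
}
```

A recipient tool then reads shared tables directly against this endpoint, so SAP data products can land in an existing lakehouse without copy-based ETL pipelines.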

Moreover, SAP Business Data Cloud with the included Databricks capabilities facilitates the modernization of SAP Business Warehouse, providing additional migration options for existing BW customers under one license. On-premises SAP BW customers can easily transition to an SAP BW Private Cloud Edition, accessing their data as a data product in the object store via Delta Sharing. This simplifies the modernization journey and maximizes the value of existing SAP BW investments.

      Driving innovation with AI and machine learning

      AI and machine learning are at the heart of SAP’s new offerings. The integration of Joule AI Copilot into SAP Business Data Cloud exemplifies this commitment. Joule AI leverages a knowledge graph to connect data, metadata, and business processes, enabling AI agents and large language models (LLMs) to understand data within its business context.

      This mapping creates clear data links, making insights more reliable for users and applications. Training AI agents and Joule on business knowledge and context drives increased productivity. For instance, users can use AI to complete cross-functional tasks, uncover insights, and summarize critical information across the business – without heavy reliance on IT support. This empowers the usage of AI to automate complex analytics and planning tasks, such as risk assessment, forecasting, and other advanced scenario simulations.

      Partner ecosystem and open data integration

      SAP Business Data Cloud is built to prioritize openness and customer choice. It supports an open data ecosystem, integrating natively with leading data and AI partners like Collibra, Confluent, and DataRobot. This openness simplifies the data landscape and unleashes transformative insights from all data sources.

      SAP has also announced partnerships with the likes of Capgemini. These partners bring deep business process and industry domain expertise, building insight apps on SAP Business Data Cloud. From data enrichment to data activation, partner insight apps build on top of the data products and core services provided by SAP Business Data Cloud.

      Conclusion

      Data is the lifeblood of modern enterprises. It fuels decision-making, drives operational efficiency, and enables businesses to respond swiftly to market changes. For many organizations, this means integrating data from various sources, ensuring that it is high quality, and applying advanced analytics and AI to uncover hidden patterns and trends.

      The launch of SAP Business Data Cloud and SAP Databricks marks a new era in enterprise data management. By unifying and governing data, integrating advanced AI capabilities, and fostering an open data ecosystem, SAP is empowering businesses to unlock the full potential of their data. As enterprises continue to navigate the complexities of digital transformation, these new offerings provide a robust foundation for driving innovation, enhancing decision-making, and enabling intelligent, AI-driven processes.

      Author

      Frank Gundlich

      Global Head SAP Data & AI
      Fuelled by a deep passion for SAP Data & AI, Frank leads with a unique blend of strengths that turn vision into reality. As an activator, maximizer, and futurist, he thrives on driving innovation, elevating performance, and shaping bold strategies that push the boundaries of what’s possible in data transformation.

      David Allison

      European SAP Data & Analytics Lead
      As Capgemini’s SAP Data & Analytics lead for Europe David works closely with his clients to integrate a data first approach to SAP that sets the foundation for enabling intelligent processes with data from across the ecosystem, both internally and externally.

        The grade-AI generation:
        Revolutionizing education with generative AI

        Dr. Daniel Kühlwein
        March 19, 2025

        Our Global Data Science Challenge is shaping the future of learning. In an era when AI is reshaping industries, Capgemini’s 7th Global Data Science Challenge (GDSC) tackled education.

        By harnessing cutting-edge AI and advanced data analysis techniques, participants, from seasoned professionals to aspiring data scientists, are building tools to empower educators and policy makers worldwide to improve teaching and learning.

The rapidly evolving landscape of artificial intelligence presents a crucial question: how can we leverage its power to solve real-life challenges? Capgemini’s Global Data Science Challenge (GDSC) has been answering this question for years and, in 2024, it took on its most significant mission yet – revolutionizing education through smarter decision making.

The need for innovation in education is undeniable. Understanding which learners are making progress, which are not, and why is critically important if education leaders and policy makers are to prioritize interventions and education policies effectively. According to UNESCO, a staggering 251 million children worldwide remain out of school. Among those who do attend, the average annual improvement in reading proficiency at the end of primary education is alarmingly slow – just 0.4 percentage points per year. This presents a stark challenge for global foundational learning, hampering efforts to achieve the learning goal set out in the Sustainable Development Agenda.

        The grade-AI generation: A collaborative effort

        The GDSC 2024, aptly named “The Grade-AI Generation,” brought together a powerful consortium. Capgemini offered its data science expertise, UNESCO contributed its deep understanding of global educational challenges, and Amazon Web Services (AWS) provided access to cutting-edge AI technologies. This collaboration unlocks the hidden potential within vast learning assessment datasets, transforming raw data into actionable insights for decision making that could change the future of millions of children worldwide.

At the heart of this year’s challenge lies the PIRLS 2021 dataset – a comprehensive global survey encompassing over 30 million data points on 4th grade children’s reading achievement. This dataset is particularly valuable because it provides rich, standardized data that allows participants to identify patterns and trends across different regions and education systems. By analyzing factors like student performance, demographics, instructional approaches, curriculum, and home environment, an AI-powered education policy expert can offer insights that would take far more time and resources to gain through traditional methods. Participants were tasked with creating an AI-powered education policy expert capable of analyzing this rich data and providing data-driven advice to policymakers, education leaders, and teachers, but also to parents and students themselves.

        Building the future: Agentic AI systems

        The challenge leveraged state-of-the-art AI technologies, particularly focusing on agentic systems built with advanced Large Language Models (LLMs) such as Claude, Llama, and Mistral. These systems represent a significant leap forward in AI capabilities, enabling more nuanced understanding and analysis of complex educational data.

“Generative AI is the most revolutionary technology of our time,” says Mike Miller, Senior Principal Product Lead at AWS, “enabling us to leverage these massive amounts of complicated data to capture for analysis, and present knowledge in more advanced ways. It’s a game-changer and it will help make education more effective around the world and enable our global community to commit to more sustainable development.”

        The transformative potential of AI in education

        The potential impact of this challenge extends far beyond the competition itself. As Gwang-Chol Chang, Chief, Section of Education Policy at UNESCO, explains, “Such innovative technology is exactly what this hackathon has accomplished. Not just only do we see the hope for lifting the reading level of young children around the world, we also see a great potential for a breakthrough in education policy and practice.”

        The GDSC has a proven track record of producing innovations with real-world impact. In the 2023 edition, “The Biodiversity Buzz,” participants developed a new state-of-the-art model for insect classification. Even more impressively, the winning model from the 2020 challenge, “Saving Sperm Whale Lives,” is now being used in the world’s largest public whale-watching site, happywhale.com, demonstrating the tangible outcomes these challenges can produce. 

        Aligning with a global goal

        This year’s challenge aligns perfectly with Capgemini’s belief that data and AI can be a force for good. It embodies the company’s mission to help clients “get the future you want” by applying cutting-edge technology to solve pressing global issues.

        Beyond the competition: A catalyst for change

The GDSC 2024 is more than just a competition; it’s a global collaboration that brings together diverse talents to tackle one of the world’s most critical challenges. By bridging the gap between complex, costly-to-collect learning assessment data and actionable insights, participants have the opportunity to make a lasting impact on global education.

        A glimpse into the future

The winning team, ‘insAIghtED’, consists of Michal Milkowski, Serhii Zelenyi, Jakub Malenczuk, and Jan Siemieniec, based in Warsaw, Poland. They developed an innovative solution aimed at enhancing actionable insights using advanced AI agents. Their model leverages the PIRLS 2021 dataset, which provides structured, sample-based data on reading abilities among 4th graders globally. However, recognizing the limitations of relying solely on this dataset, the team expanded their model to incorporate additional data sources such as GDP, life expectancy, population statistics, and even YouTube content. This multi-agent AI system is designed to provide nuanced insights for educators and policymakers, offering short answers, data visualizations, elaborated explanations, and even a fun section to engage users.

The architecture of their solution involves a lead data analyst, data engineer, chart preparer, and data scientist, each contributing to different aspects of the model’s functionality. The system is capable of querying databases, aggregating data, performing internet searches, and preparing elaborated answers. By integrating various data sources and employing state-of-the-art AI technologies like LangChain and crewAI, the insAIghtED model delivers impactful, real-world, actionable insights that go beyond the numbers, helping to address complex educational challenges and trends.
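As a rough, stdlib-only sketch of this role-based flow – plain Python standing in for the team’s LangChain/crewAI implementation, with every function body an illustrative stub rather than the actual model:

```python
# Hedged sketch: four hypothetical "roles" chained into a pipeline,
# mirroring the lead analyst / engineer / chart preparer / scientist
# structure described in the article. Data values are made up.

def lead_data_analyst(question: str) -> dict:
    """Break the question into a plan for the other roles."""
    return {"question": question, "plan": ["query PIRLS data", "chart results"]}


def data_engineer(task: dict) -> dict:
    """Would query the PIRLS 2021 database; stubbed with sample rows."""
    task["rows"] = [("CountryA", 5000), ("CountryB", 4200)]
    return task


def chart_preparer(task: dict) -> dict:
    """Would render a visualization; stubbed as a description."""
    task["chart"] = f"bar chart of {len(task['rows'])} countries"
    return task


def data_scientist(task: dict) -> dict:
    """Summarize the aggregated data into a short answer."""
    total = sum(count for _, count in task["rows"])
    task["answer"] = f"{total} students across {len(task['rows'])} countries"
    return task


def run_pipeline(question: str) -> dict:
    task = lead_data_analyst(question)
    for agent in (data_engineer, chart_preparer, data_scientist):
        task = agent(task)
    return task


result = run_pipeline("Students per country in PIRLS 2021?")
print(result["answer"])
```

In frameworks like crewAI, each stub would become an LLM-backed agent with its own tools, but the handoff structure – one role’s output feeding the next – is the same.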

        Example:

Figure 1: An example from the winning model, answering the prompt: “Visualize the number of students who participated in the PIRLS 2021 study per country.”

        As we stand on the brink of an AI-powered educational revolution, the Grade-AI Generation challenge serves as a beacon of innovation and hope. It showcases how the combination of data science, AI, and human creativity and passion can pave the way for a future where quality education is accessible to all, regardless of geographical or socioeconomic barriers.

        Start innovating now –

        Dive into AI for good
        Explore how AI can be applied to solve societal challenges in your local community or industry.

        Embrace agentic AI systems
        Start experimenting with multi-agent AI systems to tackle complex, multi-faceted problems in your field.

        Collaborate globally
        Seek out international partnerships and datasets to bring diverse perspectives to your AI projects.

Interesting read? Capgemini’s Innovation publication, Data-powered Innovation Review – Wave 9, features 15 captivating innovation articles with contributions from leading experts from Capgemini, with a special mention of our external contributors from The Open Group, AWS, and UNESCO. Explore the transformative potential of generative AI, data platforms, and sustainability-driven tech. Find all previous Waves here.

        Meet our authors

        Dr. Daniel Kühlwein

        Managing Data Scientist, AI Center of Excellence, Capgemini

        Mike Miller

        Senior Principal Product Lead, Generative AI, AWS

        Gwang-Chol Chang

        Chief, Section of Education Policy, Education Sector, UNESCO

        James Aylen

        Head of Wealth and Asset Management Consulting, Asia


        Question-Answer Generation (QAG) for automated summarization evaluation: A reference-free approach

        Sangeeta Ron
        21 Mar 2025

        The challenge of text summarization in financial services

        The financial services industry generates an immense volume of documentation daily. From customer interactions and regulatory filings to legal proceedings and risk assessments, organizations must process, interpret, and act upon large amounts of unstructured data. Traditionally, this has been a time-consuming and labor-intensive process, often susceptible to human error and inconsistencies. As regulatory frameworks evolve and customer expectations rise, the demand for accurate, efficient, and standardized document summarization has never been more critical.

        In banking, institutions must navigate a constantly shifting regulatory landscape. Compliance teams are responsible for reviewing extensive regulatory filings, risk reports, and audit documents—any misinterpretation can result in significant financial and legal consequences. Beyond compliance, customer service operations require rapid access to key insights from call center interactions to enhance service efficiency. Additionally, loan and credit risk assessment teams manually analyze financial statements, credit histories, and other documents to determine creditworthiness, a process that is both time-intensive and costly.

        The insurance sector faces similar challenges, particularly in underwriting, policy management, and claims processing. Insurance providers must constantly interpret complex regulatory changes while ensuring accurate policy underwriting and risk assessment. Claims processing teams review medical reports, legal documents, and third-party assessments to determine coverage and fraud risk. Manual document reviews in these areas not only slow down operations but also introduce inconsistencies that can impact decision-making.

        The increasing complexity of financial services documentation makes manual summarization an unsustainable approach. Generative AI (GenAI) offers a powerful solution by enabling automated summarization of key insights from various documents. However, assessing the quality of AI-generated summaries remains a challenge. Traditional evaluation methods, such as ROUGE and BERTScore, rely on human-generated references, which are not always available or practical for large-scale financial services applications.

        Introducing QAG-based automated summarization evaluation

        Question-Answer Generation (QAG) for automated summarization evaluation provides a breakthrough, offering a reference-free approach to ensuring both completeness and accuracy in AI-generated summaries. Instead of comparing summaries to predefined references, QAG-based evaluation gauges summarization quality by generating factual questions from the original document and checking whether the AI-generated summary provides correct answers.
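        To make the idea concrete, here is a minimal, self-contained sketch of QAG-style scoring. It is an illustration under stated assumptions, not a production implementation: in a real system an LLM generates the questions from the source document and answers them from the summary, whereas here the question/answer pairs are precomputed and the answerer is simple keyword matching.

```python
# Minimal sketch of reference-free QAG evaluation.
# Assumption: in practice an LLM generates the questions from the source
# document and answers them from the summary; here both steps are stubbed
# with precomputed question/answer pairs and keyword matching.

def answer_found(summary: str, expected_answer: str) -> bool:
    """Stand-in for an LLM answerer: does the summary contain the fact?"""
    return expected_answer.lower() in summary.lower()

def qag_coverage(summary: str, qa_pairs: list[tuple[str, str]]) -> float:
    """Fraction of source-derived questions the summary can answer."""
    if not qa_pairs:
        return 0.0
    answered = sum(answer_found(summary, ans) for _, ans in qa_pairs)
    return answered / len(qa_pairs)

# Questions generated from the ORIGINAL document (assumed precomputed).
qa_pairs = [
    ("What rate was agreed?", "4.5%"),
    ("Which account was affected?", "savings account"),
    ("When does the change take effect?", "1 March"),
]

summary = "The customer's savings account rate changes to 4.5% on 1 March."
print(f"coverage: {qag_coverage(summary, qa_pairs):.0%}")  # all three facts present
```

        The same question set scored against a weaker summary yields a lower coverage score, which is exactly the signal used to compare candidate summaries without a human-written reference.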

        Experimental results

        We implemented optimization techniques for QAG, including limiting truth extraction and using custom question templates, to improve evaluation performance.

        This enhanced QAG-based evaluation approach was then tested on four real-world transcripts. In each test, both the default QAG model and our optimized approach were implemented. The following table summarizes the results:

        Overall, the experimental results reveal a significant leap in alignment scores, rising from a baseline of 56% to over 70%, while coverage scores experienced an even greater boost, increasing from 70% to 90%. These enhancements demonstrate the effectiveness of the refined approach in producing more accurate and comprehensive AI-generated summaries.

        Wide-ranging use cases in banking and insurance

        By implementing QAG-based evaluation, financial institutions can improve the reliability and accuracy of GenAI-powered summarization across multiple business functions. In banking, it ensures that compliance reports, customer interactions, and financial risk assessments maintain factual integrity. In insurance, it enhances underwriting decisions, policy management, and claim evaluations. The following is a sample of several key use cases in financial services.

        Banking use cases

        • Call center interaction summarization: Customer service teams manage a high volume of customer interactions, often recorded in call center transcripts, chat logs, and emails. GenAI can summarize these conversations, extracting key themes, customer concerns, and sentiment trends, enabling more efficient issue resolution. With QAG-based evaluation, AI-generated summaries ensure that no critical customer concerns are overlooked, allowing for more personalized and proactive customer support.
        • Audit report summarization: Internal audits are a critical part of risk management in banking, yet the process is often time-consuming and labor-intensive. AI-powered summarization helps highlight key discrepancies, compliance violations, and recommended actions from audit reports, improving the efficiency of risk and compliance teams. With QAG-based evaluation, banks can ensure that summarized audit findings remain aligned with the original reports, reducing the chances of oversight in risk assessments.
        • Credit risk assessment: Evaluating a borrower’s financial health requires the review of credit reports, financial statements, and loan histories, often spread across multiple documents. GenAI can consolidate key financial indicators into a structured summary, allowing risk analysts to make faster and more informed lending decisions. By applying QAG-based evaluation, banks can verify that these summaries accurately reflect the borrower’s financial status, reducing errors in credit risk assessments.

        Insurance use cases

        • Underwriting and risk assessment: Insurance underwriting requires the evaluation of extensive data, including health records, financial documents, and previous policy claims. GenAI-generated summaries allow underwriters to quickly assess risk factors, policy eligibility, and pricing considerations. With QAG-based evaluation, insurers can confirm that these summaries capture the full scope of risk assessment criteria, reducing underwriting errors and improving decision-making efficiency.
        • Policy management: Managing policies involves handling a large amount of unstructured documentation throughout the policy lifecycle. Any modifications initiated by insurers or customers require careful reassessment. GenAI streamlines this process by efficiently condensing information from various sources. By applying QAG-based evaluation, insurers can confirm that AI-generated summaries align with policy terms and regulatory requirements, enabling them to allocate more time to strategic tasks such as customer service and relationship management.
        • Claims processing: Whether for auto, healthcare, or commercial policies, claims processing is a complex, documentation-heavy task that demands significant time and effort when done manually. GenAI automates the extraction of critical details from diverse records. QAG-based evaluation ensures that all necessary claim details are preserved, reducing operational costs, expediting claim settlements, and improving overall customer satisfaction.

        These use cases highlight just a few of the many ways QAG-based evaluation can be applied in financial services. Potential applications extend far beyond these examples. Depending on an organization’s specific needs, QAG-based evaluation can be adapted to review AI-generated summaries across a wide range of business functions, including regulatory reporting, contract analysis, investment research, internal policy compliance, and more.

        Driving accuracy, efficiency, and trust in AI-generated summarization

        As financial institutions increasingly rely on GenAI to streamline document processing, ensuring the accuracy and reliability of AI-generated summaries is paramount. QAG-based automated summarization evaluation provides a reference-free, scalable, and precise method to assess summarization quality, addressing one of the key challenges in AI adoption. By evaluating summaries based on factual correctness and content coverage, QAG-based evaluation offers a structured approach to verifying AI outputs without the need for human-generated reference summaries.

        The benefits of integrating this approach in banking and insurance are far-reaching. Banks can enhance decision-making by quickly extracting key insights from financial reports, compliance documents, and customer interactions. This leads to faster responses to regulatory changes, improved operational efficiency, and a more seamless customer experience. In the insurance sector, QAG-based evaluation improves underwriting accuracy and claims processing efficiency, ensuring that AI-generated summaries are both comprehensive and aligned with business objectives.  

        Now is the time for financial institutions to embrace AI-powered summarization with QAG-based evaluation. To explore how this approach can elevate your organization’s AI-driven summarization efforts, contact Capgemini’s Financial Services Insights & Data team today.  

        Author

        Sangeeta Ron

        Senior Director, Financial Services Insights & Data

          Can nuclear provide the power that drives the AI revolution?

          Capgemini
          Mar 18, 2025

          The race to develop and exploit the extraordinary capabilities of AI and other breakthrough technologies is accelerating at a dizzying pace. But while governments, businesses and citizens are scrambling to take advantage of the seemingly limitless ability of AI to transform almost every aspect of our lives, there’s another challenge looming on the horizon.

          As economies in general, and tech companies in particular, are striving to transition to renewable energy sources and to reduce carbon footprint, the boom in AI-related data processing is producing a huge surge in demand for power. But, as the need for clean and secure electricity supplies soars, could nuclear be set to play a vital role in bridging the potential energy gap?

          Powering AI will require 9% of US grid capacity by 2030

          Powering the world’s rapidly expanding network of data centres has already had significant impacts on society and public policy. In Europe, major data centre clusters, around Dublin and Amsterdam for example, require so much electricity that further data centre expansion in those cities is on hold until new, additional sources of energy come on stream.

          As recently as 2020, UK data centres used just over 1% of the nation’s electricity. By 2030 this figure is forecast to reach 7%. Demand is set to be even greater in the US, the global centre of AI innovation, with predictions that, by 2030, 9% of all grid capacity will be used to power AI technologies alone. It’s a monumental challenge that traditional energy utility organisations cannot meet alone.

          SMRs will change the game for businesses transitioning to low carbon energy

          New research published by Capgemini to coincide with the 2025 World Economic Forum in Davos reveals that 72% of business leaders say they will increase investment in climate technologies, including hydrogen, renewables, nuclear, batteries, and carbon capture, with nuclear energy in their top three climate technology investment priorities for 2025.

          This direction of travel chimes with statements made during Davos by the International Energy Agency (IEA). The IEA heralds “a new era for nuclear energy, with new projects, policies and investments increasing, including in advances such as small modular reactors (SMRs)”.

          According to IAEA Director General Rafael Mariano Grossi: “One after another, technology companies looking for reliable low-carbon electricity to power AI and data centres are turning to nuclear energy, both in the form of traditional large reactors and SMRs.”

          Around 60 new reactors are currently under construction in 15 countries around the world, with 20 more countries, including Ghana, Poland, and the Philippines, developing policies to enable construction of their first nuclear power plants. The US Energy Information Administration (EIA) estimates that, by 2050, global nuclear capacity could increase by up to 250% compared to the end of 2023.

          Clean, reliable, available – and safe

          It’s easy to understand why nuclear is set to play an increasingly significant dual role in both powering the AI revolution and decarbonising industry. Its 99.999% guarantee of stable energy availability compares with just 30-40% from weather-dependent wind or solar generation.

          Decades of continuous improvements in reactor design and operation make nuclear the second safest source of energy in the world after solar, according to the International Atomic Energy Agency (IAEA), although the Agency also points out that large-scale solar power systems need 46 times as much land as nuclear to produce one unit of energy.

          But it’s the potential to rapidly deploy SMRs that could have the most significant impact in preventing the looming energy gap as AI-driven data processing requirements grow exponentially. It’s important to remember that most light-water SMRs are simply smaller versions of the large-scale GEN III+ technology with proven safety and operational records, with “small” generally defined as a maximum output of 300 MWe. The underlying scientific and operational principles are not technologically new in themselves.

          As the name suggests, SMRs’ modular design enables major components to be constructed at speed in a factory environment and assembled on sites located flexibly close to consumers. With a footprint no larger than a sports stadium, they can easily be placed near demand, such as data centres or industrial estates.

          Reduced construction times, lower investment and running costs and the ability to add or reduce capacity as demand increases or decreases, are just some of SMRs’ obvious advantages. They’re ready-made replacements for fossil-fuel based generation, and as nuclear is less vulnerable to price fluctuations, owners and consumers of SMR generated power have more budget certainty and can plan more accurately for the long-term accordingly.

          SMRs, specifically the advanced reactor designs, can also be adapted to supply heat for industrial applications, district heating systems and the production of hydrogen, and are increasingly regarded as catalysts for economic development and job creation.

          Tech giants at the front of the queue

          Many of the global tech giants are actively working on plans to develop their own SMR-based generating capabilities, to provide their own independent sources of safe, stable, low-carbon power, protected from the increasingly volatile open market.

          It’s a race they must win, not only to fuel the AI revolution, but because doing so will accelerate the transition to a low-carbon world economy.

          Smart business transformations
          …from a practitioner’s point of view

          Stewart Hicks, Wojciech Mróz
          Mar 17, 2025

          According to Capgemini’s recent survey “AI-led Generative Business Services: The future of Global Business Services (GBS)” conducted in partnership with HFS Research, over 80% of respondents agree it is time to rethink Global Business Services as Generative Business Services – better defined as AI-led data-driven services focused on driving growth and the enterprise innovation agenda.

          It is worth remembering, however, that business transformations with tangible business outcomes are enabled through a comprehensive approach, i.e., applying suitable technology platforms, together with operating model and process transformation, not necessarily just AI alone.

          The key to their effectiveness is a thorough diagnosis of the company’s needs, a strategic approach and individually tailored and industry specific solutions that collectively transform business operating models, processes, technology, and people.

          In business transformations, we are seeing significant reliance and focus on the latest technology solutions, but the basic principles remain the same. The customer, i.e., the end recipient of products or services, must always be at the centre of the design. It is unlikely the transformation will be successful if we forget to identify their needs and solve them.

          An increasingly popular and effective method of building a transformation strategy is the Outcomes Based Model, which focuses more on the business impact of the transformation program than on typical process performance measures and fixed or variable fee pricing models. Transformation initiatives or services provided are aligned with business outcomes, e.g., working capital improvements through reduction of aged debt, and increased revenue through revenue leakage detection, prevention, and recovery. We are seeing cases where applying such models results in significant cashflow improvements, with outcomes realized in millions of euros for our clients.

          This approach significantly improves the effectiveness of business transformation and goes beyond traditional priorities focused on productivity or labour arbitrage. It is then much easier to get the attention of C-level Executives, who are typically the decision makers and buyers of business transformation services.

          We choose to work in this model because we are confident that well-planned transformations will deliver the expected results. We need to have a deep understanding of the clients we work with to enable the development of optimal strategies for them. We can then create tailored and industry specific solutions and prepare and support them through the change journey. We also rely on detailed data analysis and insights to drive informed decision making. This is coupled with an outcome-based commercial model which incentivizes the clients and Capgemini. This is what makes this model an interesting and beneficial formula for both parties.

          In the context of strategic business transformation, very often the key role is played by organisations known as Global Business Services (GBS) or Business Process Outsourcing (BPO) Providers. This sector is strongly represented in Poland, and other countries in the region such as Romania, due to the availability of highly qualified specialists, expertise, and still relatively low wage costs in comparison to other countries. While the traditional roles and benefits of GBS/BPO remain vital and relevant, there is an urgent need to redefine the GBS/BPO narrative to appeal more to Business Leaders who are demanding more than just cost reduction.

          Capgemini is no longer just a transactional services vendor; it is an ecosystem orchestrator that brings new skills, technologies, and capabilities to its clients, and thus does not just provide support but drives the strategic objectives of modern enterprises. Capgemini’s services and business transformation programs are increasingly expanding their scope of responsibility into more business lines and functional areas of their clients, allowing for greater bottom- and top-line financial impact.

          The power of simplicity

          Today, technology is evolving at a dizzying pace, leaving companies constantly bombarded with innovative solutions that have the potential to improve their operations. This rapid pace of change often prevents full adaptation, resulting in technologies not used to their best advantage, which in turn inhibits the maximization of business impact. Many organizations implement only partial solutions, and do not always exhaust the possibilities of the standard technology deployed, limiting their effectiveness and return on investment.

          Technology should be used to its fullest extent, allowing a greater part of the organisation to leverage the solution and its benefits. Such extensive use optimizes costs; simply put, it means our clients can make the most of what they pay for. Companies should also look for technology platforms that fulfil many diverse needs, moving away from a multi-tool approach and focusing on full and proper adoption.

          The company’s growth is based on the development of the teams’ competencies

          The key to successful business transformations, apart from good strategic planning, is change management & communication. Change is only effective if the people working in companies understand it, are convinced of it, and ideally when they have the chance to co-create it.

          Business transformation also means developing competencies for the people the company employs. It is important to let people know from the very beginning of the process what role they will play during and after the change. In parallel with changes to business processes, it is important to plan and deliver robust training and equip staff with the right tools and resources to ensure there is no or limited disruption to business as usual and people can excel in their roles.

          At Capgemini, we are focused on developing our teams in Gen AI and Industry specific certifications. People working for us have the opportunity, and sometimes even the obligation, to obtain key certifications in this area. This is to ensure we stay up to date with the current trends and changes and apply tailored solutions that are optimal for our clients. This is the only way we can be a dependable partner for our clients.

          One of the most effective ways of implementing change is through a “pilot” approach, which allows a solution to be tested in a selected, sometimes isolated, area before a wider roll-out. This method works most effectively in large companies with a regional and global reach. The pilot can be chosen geographically (e.g., starting in a particular country or city), by business line or function, and wherever people are most suited and willing to participate in the change process. Success on a smaller scale allows you to de-risk and, with proven and positive results, to convince people who are less supportive of change before proceeding on an organisation-wide scale.

          Twilight of the old technologies

          Even the best implemented changes take time. In the case of large platforms such as S/4HANA, for example, the process can take years. Business Managers and their teams need to be prepared for a period of operating in different realities simultaneously. This is necessary to ensure business continuity, and it is worth taking the time to act in a comprehensive way because well-planned transformations, based on clearly defined business goals, produce long-term, measurable results and outcomes.

          Meet our experts

          Stewart Hicks

          Global Offer Lead for Generative Business Services (GBS), Capgemini’s Business Services
          As the Global Offer Lead for Generative Business Services (GBS) at Capgemini’s Business Services, Stewart helps clients assess, design, transform, and implement world-class GBS operating models. He is passionate about helping clients leverage the opportunities GBS can offer. Stewart has held leadership roles in Consulting, GBS and Outsourcing operations, Sales management, Project & change Management, and Process excellence. He has extensive experience in end-to-end client captive shared services, BPO engagements, and GBS transformation programs across enterprise domains and technologies.

          Wojciech Mróz

          Strategy & Transformation Director, Capgemini’s Business Services
          Wojciech is a senior leader with extensive experience in BPO/SSC delivery and transformation. He is actively engaged in the Generative Business Services (GBS) offer evolution at Capgemini’s Business Services and has held various positions across GBS transformation, F&A transformation and Business development. As a subject matter expert in automation, Wojciech has helped clients across the globe develop automation strategies and has delivered efficient automation programs. With a proven track-record of leading successful transition and transformation projects, Wojciech has a continuous improvement mindset and drive for optimizing business processes.

            Women’s Day special: Cyber angel

            Capgemini
            Mar 6, 2025

            Leading the charge: Puneeta on cybersecurity, inclusion, and building a future at Capgemini

            In today’s ever-evolving digital landscape, cybersecurity is crucial for safeguarding information and building trust. At Capgemini, leaders like Puneeta Chellaramani are at the forefront of this mission, bringing a unique blend of expertise, passion, and vision. In this Q&A, she shares her journey, the value of inclusion in cyber leadership, and her advice for those looking to join the Capgemini Cybersecurity team.

            1. What makes you proud to work at Capgemini?

            The variety of projects, clients, and cultures at Capgemini keeps my work exciting and fulfilling, and knowing we are helping organizations grow and solve complex cyber challenges is incredibly rewarding. Capgemini fosters an environment where everyone feels relevant and respected. The appreciation of everyone’s personal situation and making a flexible working environment thrive with balance is distinctive to Capgemini’s DNA, making it a unique and supportive place to work.

            2. How are you working towards the future you want?

            I’m diligently working towards the future I want at Capgemini by sticking to my value system and finding the right chord to strike with Capgemini’s values. Whether it’s picking up uncharted territories to grow business, building meaningful connections, or staying laser-focused in accelerating cyber business across APAC, I’m taking small, consistent steps to stay on track. I’m also embracing opportunities that align with my virtues and passions, helping me move closer to where I want to be.

            3. What value does inclusion bring to cyber leadership?

            Inclusion in cyber leadership isn’t just about representation – it’s about building a team capable of thinking outside the box and adapting to unforeseen challenges. In a field where threats are constantly evolving, having leaders from different walks of life brings a variety of strategies, insights, and approaches. This inclusion fosters a culture of resilience and innovation, where challenges are seen as opportunities for growth. It also ensures that cybersecurity solutions are well-rounded, addressing the needs of diverse users and creating a stronger and more proactive defense system.

            4. What advice would you give to someone joining Capgemini Cybersecurity?

            Life at Capgemini Cybersecurity is like an exhilarating adventure: you feel a rush of excitement as you reach new heights, followed by a burst of adrenaline that keeps you energized. There’s that light, joyful feeling in your stomach as you navigate through stimulating challenges, and the thrill of new experiences keeps you engaged. It’s a dynamic mix of enthusiasm and learning, making every moment enjoyable and rewarding!

            Empowerment and learning are at the heart of Capgemini Cybersecurity. You’ll find yourself in an environment that encourages you to embrace challenges and grow both personally and professionally. One of the standout initiatives is the Cyber Angels program, which mentors women seeking careers in cybersecurity, fostering a supportive and inclusive community.

            You’ll also have the opportunity to work with CyberPeace, a Geneva-based NGO, supporting non-profits in enhancing their digital security posture and resilience, and making a positive impact on society. This collaboration not only enhances your technical skills but also allows you to contribute to meaningful causes.

            My advice for someone joining Capgemini Cybersecurity is to embrace the challenges and build a community of trusted colleagues and clients. Be proactive – take ownership of your development and contribute your unique perspective to the team. Remember, the journey may be thrilling but it’s also incredibly rewarding and full of opportunities for growth and empowerment.

            If you are looking for a role in cybersecurity at Capgemini, please visit our career page.

            Puneeta Chellaramani

            Senior Director, Head of Cybersecurity Strategy and Growth, APAC
            With over 16 years of cyber experience across Zurich, Singapore, Dubai, and London, Puneeta now proudly calls Australia home. She has a strong management consultant background and extensive experience in accelerating cyber business growth. Puneeta advises clients across diverse industries, advocating a two-speed approach to navigating cyber, risk, legal, and AI-regulated environments. Passionate about cybersecurity mentorship, Puneeta leads many CSR initiatives. Outside of work, she enjoys music festivals and is a dedicated Pilates practitioner and coach.

              Mulder and Scully for fraud prevention:
              Teaming up AI capabilities

              Joakim Nilsson
              March 5, 2025

              While Mulder trusts his gut, Scully trusts the facts – in fraud detection, we need both. Hybrid AI blends the intuition of an LLM with the structured knowledge of a knowledge graph, letting agents uncover hidden patterns in real time. The truth is out there – now we have the tools to find it.

              Fraud detection can be revolutionized with hybrid AI. Combining the “intuitive hunches” from LLMs with a fraud-focused knowledge graph, a multi-agent system can identify weak signals and evolving fraud patterns, moving from detection to prevention in real-time. The challenge? Rule sets need to be cast in iron, whereas the system itself must be like water: resilient and adaptive. Historically, this conflict has been unsolvable. But that is about to change.

              A multi-agent setup

              Large language models (LLMs) are often criticized for hallucinating: coming up with results that seem feasible but are plain wrong. In this case, though, we embrace the LLM’s gut-feeling-based approach and exploit its capabilities to identify potential signs of fraud. These “hunches” are mapped onto a general ontology and thus made available to symbolic AI components that build on logic and rules. So, rather than constricting the LLM, we are relying on its language capabilities to spot subtle clues in text. Should we act directly on these hunches, we would run into a whole world of problems derived from the inherent unreliability of LLMs. Generating hunches is therefore the task of one highly specialized team of agents, while other agents stand by, ready to make sense of the data and establish reliable patterns.

              When we talk about agents, we refer to any entity that acts on behalf of another to accomplish high-level objectives using specialized capabilities. They may differ in degree of autonomy and authority to take actions that can impact their environment. Agents do not necessarily use AI: many non-AI systems are agents, too. (A traditional thermostat is a simple non-AI agent.) Similarly, not all AI systems are agents. In this context, the agents we focus on primarily handle data, following predefined instructions and using specific tools to achieve their tasks.

              We define a multi-agent system as being made up of multiple independent agents. Every agent runs on its own, processing its own data and making decisions, yet staying in sync with the others through constant communication. In a homogeneous system, all agents are the same and their complex behavior solves the problem (as in a swarm). Heterogeneous systems, though, deploy different agents with different capabilities. Systems that use agents (either single or multiple) are sometimes called “agentic” architectures or frameworks.
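              As a toy sketch of such a heterogeneous system (all class names, signals, and rules below are invented for illustration), the Python below pairs an “intuitive” agent standing in for an LLM, which emits hunches, with an “analytical” agent that promotes them into structured flags, the two staying in sync over a shared message bus:

```python
from dataclasses import dataclass, field

# Toy sketch of a heterogeneous multi-agent loop. All class names,
# signals, and rules here are invented: an "intuitive" agent
# (standing in for an LLM) emits hunches, an "analytical" agent
# promotes them into structured flags, and a shared message bus
# keeps the two in sync.

@dataclass
class MessageBus:
    messages: list = field(default_factory=list)

    def publish(self, topic, payload):
        self.messages.append((topic, payload))

    def consume(self, topic):
        # Return and remove every message published under `topic`.
        matched = [p for t, p in self.messages if t == topic]
        self.messages = [(t, p) for t, p in self.messages if t != topic]
        return matched

class IntuitiveAgent:
    """Stands in for an LLM-backed agent that emits 'hunches'."""
    def run(self, text, bus):
        if "urgent transfer" in text.lower():
            bus.publish("hunch", {"signal": "pressure-language", "source": text})

class AnalyticalAgent:
    """Rule-based agent that turns hunches into structured flags."""
    RULES = {"pressure-language": "FLAG_SOCIAL_ENGINEERING"}

    def run(self, bus):
        return [self.RULES[h["signal"]]
                for h in bus.consume("hunch") if h["signal"] in self.RULES]

bus = MessageBus()
IntuitiveAgent().run("Please make this URGENT TRANSFER today", bus)
flags = AnalyticalAgent().run(bus)
```

              The division of labor mirrors the article’s point: the intuitive agent is free to be loose and suggestive, because nothing it says is acted on until a rule-bound agent has translated it into a well-defined flag.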

              For example, specialized agents can dive into a knowledge graph, dig up specific information, spot patterns, and update nodes or relationships based on new findings. The result? A more dynamic, contextually rich knowledge graph that evolves as the agents learn and adapt.

              The power is in the teaming. Think of the agents Mulder and Scully from The X-Files television show: Mulder represents intuitive, open-minded thinking, while Scully embodies rational analysis. In software, there have always been many Scullys but, with LLMs, we now have Mulders too. The challenge, as in The X-Files, is in making them work together effectively.

              The role of a universal ontology

              We employ a universal ontology to act as a shared language or, perhaps a better analogy, a translation exchange, ensuring that both intuitive and analytical agents communicate in terms that can be universally understood. This ontology primarily consists of “flags” – generic indicators associated with potential fraud risks. These flags are intentionally defined broadly, capturing a wide range of behaviors or activities that could hint at fraudulent actions without constraining the agents to specific cases.

              The key to this system lies not in isolating a single flag but in identifying meaningful combinations. A single instance of a flag may not signify fraud; however, when several flags emerge together, they provide a more compelling picture of potential risk.
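              To make the combination idea concrete, here is an illustrative scoring sketch. The flag names, weights, and threshold are all invented; the point is structural: no single flag reaches the alert threshold on its own, while certain combinations of flags do.

```python
# Illustrative only: the flag names, weights, and threshold below
# are invented. No single flag reaches the alert threshold on its
# own; meaningful combinations of flags do.

FLAG_WEIGHTS = {
    "FLAG_ADDRESS_REUSE": 0.25,
    "FLAG_RAPID_APPLICATIONS": 0.5,
    "FLAG_SHARED_BANK_ACCOUNT": 0.5,
}
ALERT_THRESHOLD = 0.75  # higher than any individual weight

def risk_score(flags):
    # Deduplicate so a repeated flag is only counted once.
    return sum(FLAG_WEIGHTS.get(f, 0.0) for f in set(flags))

def should_alert(flags):
    return risk_score(flags) >= ALERT_THRESHOLD

# One flag alone is not enough...
assert not should_alert(["FLAG_SHARED_BANK_ACCOUNT"])
# ...but two together paint a more compelling picture of risk.
assert should_alert(["FLAG_ADDRESS_REUSE", "FLAG_SHARED_BANK_ACCOUNT"])
```

              A real system would tune such weights from historical case data rather than fix them by hand; the additive score is just the simplest way to show why co-occurring flags matter more than any one in isolation.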

              “This innovation shifts the approach from simple fraud detection to proactive prevention, allowing authorities to stay ahead of fraudsters with scalable systems that learn and evolve.”

              Hybrid AI adaptability

              The adaptability of the system lies in the bridging between neural and symbolic AI, as the LLM distills nuances in texts into hunches. These hunches need to be structured and amplified before our analytical AI can access them. As Igor Stravinsky wrote in his 1970 book Poetics of Music in the Form of Six Lessons, “Thus what concerns us here is not imagination itself, but rather creative imagination: the faculty that helps us pass from the level of conception to the level of realization.” For us, that faculty is the combination of a general ontology and vector-based similarity search. They allow us to connect hunches to flags based on semantic matching and thus address the data using general rules. Because we work in a graph context, we can also explore direct, indirect, and even implicit relations between the data.
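              One way to picture the semantic matching step is the toy sketch below. A production system would embed hunches with a real embedding model; the vectors, flag names, and threshold here are hand-made purely to keep the example self-contained.

```python
import math

# Toy sketch of connecting free-text "hunches" to ontology flags
# via vector similarity. A real system would embed hunches with an
# embedding model; the vectors, flag names, and threshold here are
# hand-made to keep the example self-contained.

FLAG_VECTORS = {
    "FLAG_IDENTITY_MISMATCH": [1.0, 0.0, 0.2],
    "FLAG_INCOME_ANOMALY":    [0.1, 1.0, 0.0],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest_flag(hunch_vector, threshold=0.7):
    """Map an embedded hunch to its closest flag, or None if no
    flag is semantically close enough."""
    best_flag, best_sim = None, 0.0
    for flag, vec in FLAG_VECTORS.items():
        sim = cosine(hunch_vector, vec)
        if sim > best_sim:
            best_flag, best_sim = flag, sim
    return best_flag if best_sim >= threshold else None

# A hunch whose (toy) embedding sits close to the income-anomaly flag:
flag = nearest_flag([0.0, 0.9, 0.1])
```

              The threshold is what keeps the bridge safe: a hunch that lands near no flag is simply dropped, so the symbolic side only ever reasons over signals the ontology can actually name.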

              Now let’s explore how our team of agents picks up and amplifies weak signals, and how these signals, once interwoven in the graph, can lead the system to identify patterns spanning time and space, patterns it was not designed to identify.

              A scenario: Welfare agencies have observed a rise in fraudulent behavior, often uncovered only after individuals are exposed for other reasons, such as media reports. Identifying these fraud attempts earlier, ideally at the application stage, would be extremely valuable.

              Outcome: By combining intuitive and analytical insights, authorities uncover a well-coordinated fraud ring that would be hard to detect through traditional methods. The agents map amplified weak signals as well as explicit and implicit connections. Note also that the system was not trained on detecting this pattern; it emerged thanks to the weak signal amplification.

              One of the powers of hybrid AI lies in its ability to amplify weak signals and adapt in real time, uncovering hidden fraud patterns that traditional methods often miss. By blending the intuitive insights of LLMs with the analytical strength of knowledge graphs and multi-agent systems, we’re entering a new era of fraud detection and prevention – one that’s smarter, faster, and more effective. As Mulder might say, the truth is out there, and with the right team, we’re finally close to finding it.

              Start innovating now –

              Implement a universal ontology

              Create a shared ontology to bridge neural (intuitive) and symbolic (analytical) AI agents, transforming weak signals for deeper analysis by expert systems and graph-based connections.

              Form specialized multi-agent teams

              Build teams of neural (real-time detection) and symbolic (rule-based analysis) AI agents, each specialized with tools for their role.

              Leverage graph technology for cross-referencing

              Use graph databases to link signals over time and across data sources, uncovering patterns like fraud faster, earlier, and at a lower cost than current methods.
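              As a minimal illustration of this cross-referencing idea (all names and attributes below are fictional, and a production system would use a graph database such as Neo4j rather than an in-memory dictionary), applicants that share an attribute become implicitly connected, and walking the connected component surfaces a candidate fraud ring:

```python
from collections import defaultdict

# Minimal sketch of graph-based cross-referencing. Applicants that
# share an attribute (an address, a bank account) become implicitly
# connected; walking the connected component surfaces a candidate
# fraud ring. All names and attributes are fictional, and a real
# system would use a graph database, not an in-memory dictionary.

edges = [
    ("applicant:A", "address:1 Main St"),
    ("applicant:B", "address:1 Main St"),
    ("applicant:B", "account:SE-123"),
    ("applicant:C", "account:SE-123"),
    ("applicant:D", "address:9 Oak Ave"),
]

# Build an undirected adjacency structure.
graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

def ring_around(start):
    """Collect every applicant reachable from `start` through
    shared attributes."""
    seen, stack = {start}, [start]
    while stack:
        for neighbor in graph[stack.pop()]:
            if neighbor not in seen:
                seen.add(neighbor)
                stack.append(neighbor)
    return {n for n in seen if n.startswith("applicant:")}

ring = ring_around("applicant:A")  # A, B, and C are linked; D is not
```

              Note that no rule ever says “A, B, and C are a ring”: the pattern emerges from individually weak links (a shared address here, a shared account there), which is exactly the weak-signal amplification described above.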

              Interesting read?

              Capgemini’s Innovation publication, Data-powered Innovation Review – Wave 9, features 15 captivating innovation articles with contributions from leading experts from Capgemini, with a special mention of our external contributors from The Open Group, AWS, and UNESCO. Explore the transformative potential of generative AI, data platforms, and sustainability-driven tech. Find all previous Waves here.

              Meet the authors

              Joakim Nilsson

              Knowledge Graph Lead, Insights & Data, Client Partner Lead – Neo4j Europe, Capgemini 
              Joakim is part of both the Swedish and European CTO offices, where he drives the expansion of Knowledge Graphs forward. He is also client partner lead for Neo4j in Europe and has experience running Knowledge Graph projects as a consultant for both Capgemini and Neo4j, in the private and public sectors, in Sweden and abroad.

              Johan Müllern-Aspegren

              Emerging Tech Lead, Applied Innovation Exchange Nordics, and Core Member of AI Futures Lab, Capgemini
              Johan Müllern-Aspegren is Emerging Tech Lead at the Applied Innovation Exchange (AIE) Nordics, where he explores, drives and applies innovation, helping organizations navigate emerging technologies and transform them into strategic opportunities. He is also part of Capgemini’s AI Futures Lab, a global centre for AI research and innovation, where he collaborates with industry and academic partners to push the boundaries of AI development and understanding.

                Are data spaces the future?

                Capgemini
                Peter Kraemer, Phil Fuerst, Debarati Ganguly
                Mar 5, 2025

                Europe is building a data-driven economy in a changing geopolitical context. As it strives for both innovation and sovereignty, decentralized ecosystems offer a way to create value with data, while safeguarding freedom of choice.

                Data has the potential to transform processes, businesses, economies, and society by unlocking new kinds of value creation. It’s also how we are going to make AI work as a crucial component of the future European data economy—but only if that data is built on strong foundations that ensure its quality and relevance.

                Of course, value creation depends on the data that’s available to you, and you might not have all the data you need. That’s why data needs to be shared and combined. In this article, we consider how data spaces meet this need, offering what the Data Spaces Support Centre (DSSC) describes as the “ability to provide the essential foundations for secure and efficient data sharing”. While our focus in this article is on European data spaces, we recognize that this is becoming a relevant topic around the world.

                Why a decentralized data economy makes sense for Europe

                Data spaces are, in effect, decentralized ecosystems that have a powerful resonance in the world today. Indeed, recognizing their huge potential, the European Commission established a series of domain-specific/sectoral common European data spaces designed to help “unleash the enormous potential of data-driven innovation”.

                We see three main drivers for these common data spaces in Europe: geopolitics, commercials, and choice. In the first instance, given the unstable geopolitical landscape, data spaces give you assurance that all your (data) eggs aren’t in one basket. You select which datasets reside in which data space. Interoperability and portability can help avoid the dreaded lock-in effect, where changing from one service provider to another might be prohibitively complicated. Commercially, data spaces address the exposure to potential monopolistic lock-in effects created by individual companies cornering the market in data platforms. Then there’s the matter of choice. You choose who you interact with in a common data space, which puts you in control of who to share data with.

                Why we need data spaces

                Sharing data is key to data-driven growth. Indeed, it’s a vital aspect of the European strategy for data. But over-reliance on data platforms predominantly controlled by a limited number of international technology firms introduces potential vulnerabilities regarding data security, access, and strategic autonomy. We may also lose the ability to share data on our own terms, in accordance with our own values—freedom, privacy, control.

                An alternative future for Europe is to share data on a sovereign basis – and that implies sharing across industries. That’s why we’re so excited to be working on the DSSC and on Simpl, the open-source, smart, and secure middleware platform that supports data access and interoperability among European data spaces.

                Beyond technology to value creation

                Let’s not forget that a data space is only an instrument. It’s what you do with it that matters. In a data space you will be able to aggregate, combine and correlate data that you can’t today because it is stored in different places. And that’s where we begin to create significant value from data, specifically in a number of areas, as follows.

                1. Global challenges: Data spaces will prove inordinately useful in tackling grand challenges that cut across sectors and geographies. Here we’re talking about achieving mission-oriented policy goals, such as reducing healthcare inequality and achieving net zero/carbon neutrality targets. For example, the European Health Data Space (EHDS) will be an enabler of patient empowerment, with better access to and control over health data. Further, increased reuse of EHDS data for research and policy making will improve public health interventions. A 2025 report from the World Economic Forum in collaboration with Capgemini suggests the EHDS could generate €5.5 billion in savings over ten years. We’ve already seen the huge value of data sharing in a global crisis when, during the Covid-19 pandemic, our governments needed data from many areas at once to form policy – healthcare systems, pharma, mobility, employment, and economic data. There will be future pandemics.
                2. Innovation: Data spaces will undoubtedly contribute to data-driven innovation across the EU as it continues on its mission to build the Single Market for Data. The European Commission states, “Common European Data Spaces will enhance the development of new data-driven products and services in the EU, forming the core tissue of an interconnected and competitive European data economy”.  In this respect, the combination of data from different sources across sectors can produce fascinating new applications. Think, for example, of the traffic flow in a city, where the observation of vehicle movement and a subsequent adjustment of traffic lights can help avoid congestion, and the monitoring of parking lots eases the burden of finding a parking spot, possibly connected to a recommendation of a charging port for the car’s battery. The energy grid could then be supplied with better anticipation of demand peaks and control energy distribution accordingly.  The seamless integration of real-time public transport data can then be used to recommend the best option for getting from A to B.
                3. Efficiency: Data spaces will help in the more efficient use of resources and improve public services. A great example here is that of road surface observation. By correlating data from cars’ electrical sensors, it becomes possible to monitor, in real time, the deterioration of the road, and carry out preventative maintenance to optimize spend / return. And returning to the healthcare sector, access to comprehensive patient histories in a shared data ecosystem has the potential to lead to better and faster diagnosis and treatment.
                4. Science and research: Shared data can create new evidence bases for scientific and medical research. Let’s consider the following scenario: I drive to work in a convertible most days; the farmer of the field sprays an experimental fertilizer; later I develop neurological issues but doctors are unsure how to treat them. In the future, we might be able to correlate this illness with exposure to the fertilizer by aggregating mobility data, air quality data, the times the farmer used the fertilizer, and the contents of that fertilizer.

                Questions at the edges of our data economy

                The potential for value is clear, but there are numerous challenges still to overcome—and they are not principally digital ones. One unknown factor is what it will cost to set up and run a common data space. At this point we don’t have an adequate way to price data, so this question remains unanswered. Other questions include: How can we quantify the value of new data-driven business models vs traditional business models? And how can we pinpoint the strengths and weaknesses of data ecosystems and technologies?

                The answer to all of these questions at present is that we are all on a journey with common data spaces. We improve every day and the answers will come. But it is hard to imagine that the massive contribution of sharing data to the common good will not outweigh the costs and barriers that need to be overcome.

                Above all, the decentralized model depends on participants’ willingness to share data. That means they must trust the other participants and the infrastructure. There is no way to build trust other than enabling people to say no: letting people choose in itself invites trust.

                Europe can do data differently

                Data spaces are a way for Europe to reap the benefits of data for economic growth and positive societal outcomes, while affirming European values in the digital domain. They remain an integral part of the European strategy aiming to make the EU a leader in a data-driven society.

                Find out more

                Peter Kraemer will speak about the future of data sharing in Europe at the Data Spaces Symposium in Warsaw on 11-12 March. Register at https://www.data-spaces-symposium.eu/

                Authors


                Peter Kraemer

                Director Data Sovereignty Solutions, Capgemini
                “A European data economy based on openness, fairness and transparency is possible, and we are determined to help make it a reality. In a flourishing data economy, all sectors will have new ways to generate value. Sovereignty means making independent and well-informed decisions about our digital interactions: where data is stored, how it is processed, and who can access it. Data spaces make these principles concrete, and we are committed to helping them grow.”

                Dr. Philipp Fuerst

                VP Data-Driven Government & Offer Leader, Global Public Sector
                “Government CIOs and IT experts barely need convincing of the benefits of interoperability. What has been missing is explicit guidance on the necessary non-technical requirements. The Interoperable Europe Act helps with exactly that. What’s more, with a critical mass of collaborators, individual public sector agencies will find that their investments into interoperable and sharable solutions will result in much bigger returns.”

                Debarati Ganguly

                Director, Data & AI – Global Public Sector
                Debarati is a seasoned expert in Data-Driven Government, specializing in data ecosystems, governance, and AI-driven analytics for the public sector worldwide. She collaborates with leaders and AI specialists to drive strategic initiatives, ensuring ethical, sovereign, and anonymized data solutions. Her expertise helps governments and citizens unlock the true value of data, enhancing decision-making, service delivery, and overall public benefit through AI and Generative AI innovations.