
Navigating the roadmap to AI agents

James Housteau
Mar 21, 2025

Call centers are seeing gains, but reliability and consistency must remain a focus. Adopting a copilot approach is the best way to ensure real efficiencies and a positive customer experience.

It has been said that AI agents could be a multi-trillion dollar opportunity. Intelligent software agents capable of learning to manage actions and tasks have the potential to transform almost everything. Work and life will be impacted by the drive for productivity and efficiency. But AI agents will also democratize access and help overcome barriers to empower more people and drive innovation.

The road to agentic AI is still being built, and there will be many routes to explore. Today, one of the biggest pushes is coming from the telecommunications industry. Call centers have been early adopters of this kind of generative AI because it is a natural evolution from bots. Existing chat features were interesting but did not always work well, and customers were often so annoyed by phone menu systems that they were unhappy before they even spoke to someone.

Moving beyond bots

Generative AI brings a much better experience to the call center structure, and it enhances existing technology. For example, Google's Customer Engagement Suite (CES) was built on its Contact Center AI and enhanced with generative AI technology. It improves engagement with customers in both chat and live voice channels. With the emerging capabilities of large language models (LLMs) and the growth of companies like OpenAI, AI agents can take on expanded tasks.

Creating a multi-modal experience allows an AI agent like Google Gemini to take in text, visuals, and audio, and to add a communications layer through highly realistic voice and text-to-speech features.

Combining Gen AI, language capabilities, the ability to understand a vast amount of context instantly, and more natural, human-sounding communication through text-to-speech creates systems with huge potential.

Enhancing the AI agent

Thinking models capable of handling much more complex tasks have recently been launched. This is achieved through reinforcement learning from human feedback, which means these models can reason through a problem, approach it from multiple angles, and explore different paths to find the best solution. It is very reminiscent of how a human would work to solve a problem.

Agentic AI has the capability to not only understand what a customer needs but to communicate in our own language with the right nuances and even slang. Communication is there. The thought process is there. The ability of AI agents to think through problems at length is there. And they bring the ability to use tools during interactions.

For example, a customer calls in with multiple inquiries. The AI agent can quickly understand the intent of the call so there is no longer a need to sift through menus or listen to a bunch of options. Because the intent is read in the early stages of the call, the problem resolution process operates better, as the AI agent has the information to solve the problem and the tools to execute it.
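
To make this concrete, here is a minimal sketch of early intent detection on a caller's first utterance. It assumes the OpenAI Python client purely for illustration; the model name, label set, and prompt are placeholders rather than a description of any specific contact-center product.

# Minimal sketch of early intent detection on a caller's first utterance.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment;
# the model name and intent labels are illustrative placeholders.
from openai import OpenAI

INTENTS = ["billing_question", "payment", "technical_support", "new_installation", "other"]
client = OpenAI()

def classify_intent(utterance: str) -> str:
    """Map a transcribed caller utterance to one of a small set of intents."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Classify the caller's intent. Reply with exactly one label from: " + ", ".join(INTENTS)},
            {"role": "user", "content": utterance},
        ],
    )
    label = response.choices[0].message.content.strip()
    return label if label in INTENTS else "other"

print(classify_intent("Hi, my last invoice looks too high and I'd like a credit."))

Because the label is produced from the first utterance, downstream routing can begin immediately instead of after a menu tree.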

The right AI team

After detecting the true intent of a customer call, a master AI agent can act as the interactive layer with the customer while simultaneously accessing a team of subagents to delegate tasks. The subagents can specialize in different areas, like billing issues or new installations. There is no more waiting on hold to be transferred to a different department or a manager. The master agent can access a whole host of tools and knows what it needs to take action.

For example, a customer may want to process a payment. The master agent can identify the request and decide how to proceed. It can give a credit, research a billing discrepancy, or initiate other searches to complete the request. It can call different APIs to get information, update the account, and process the bill. With access to tools, there is really no end to what an AI agent can do.
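
A rough sketch of that master-agent pattern is shown below. The subagent functions and the back-end calls they stand in for are hypothetical placeholders, not real telco APIs; the point is simply the routing of a detected intent to a specialist that owns the tools for that task.

# Minimal sketch of a master agent delegating to specialized subagents.
# The subagents and the back-end systems they would call are hypothetical.
from typing import Callable, Dict

def billing_subagent(request: dict) -> str:
    # A real implementation would call billing APIs: fetch the invoice,
    # research discrepancies, apply credits, and so on.
    return f"Billing: reviewed invoice {request.get('invoice_id', 'N/A')} and applied any credit due."

def payment_subagent(request: dict) -> str:
    # A real implementation would call a payment-processing API here.
    return f"Payment: processed {request.get('amount', 'the outstanding balance')} on the account."

def installation_subagent(request: dict) -> str:
    # A real implementation would call a scheduling API here.
    return f"Installation: booked a technician for {request.get('date', 'the next available slot')}."

class MasterAgent:
    """Interactive layer that owns the conversation and delegates work."""
    def __init__(self) -> None:
        self.subagents: Dict[str, Callable[[dict], str]] = {
            "billing_question": billing_subagent,
            "payment": payment_subagent,
            "new_installation": installation_subagent,
        }

    def handle(self, intent: str, request: dict) -> str:
        subagent = self.subagents.get(intent)
        if subagent is None:
            return "Let me connect you with a human agent for that."
        return subagent(request)

agent = MasterAgent()
print(agent.handle("payment", {"amount": "$42.10"}))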

These reasoning capabilities and tools mean agents can do things very similar to what humans do. However, it is still early days, and there are concerns to be addressed; reliability and consistency are two of them. Monitoring and evaluation are improving to help ensure that AI agents' responses and decisions are correct.

Improving the call center experience

We worked with one telco client to deliver better knowledge search, leveraging LLMs and new methods of data acquisition and summarization. The goal was to make technical documentation more accessible, so that when someone calls to troubleshoot a modem, for example, the answer is readily available.

Call centers are also a common sales channel. Agents can provide additional information or offer specific deals. That requires the agent to understand the needs of the customer, align them with a product, make an offer, address objections, and close the deal. Now an AI sales agent can interview the customer to understand their needs and wants and match them with potential solutions. It can even address objections and concerns to help get to the sale.

Copilots: Finding the agentic balance

According to a recent Capgemini Research Institute report, being an agent is not an overly satisfying career choice, with only 16 percent of human agents surveyed reporting overall satisfaction with their roles. They face a number of pressures, from rising customer expectations to inefficient systems and a high attrition rate. There are efficiency gains to be made by employing AI agents that can help humans do a better job. In addition, AI agents can help resolve issues more quickly, so that both the customer and the employee have positive experiences.

This is why the copilot approach is a popular AI agent option. Google has Agent Assist to support live agents in resolving queries and issues more quickly. It is like having an expert in the room with the call center agent at all times. For example, the human agent can use it to help digest what is being said, with information automatically appearing on dashboards to assist with call resolution. The copilot can also provide real-time assessments of the caller's sentiment. Now the human agent has prompts with potential resolutions, rather than having to bounce between different systems for information or consult with a manager.
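
As a generic illustration of that real-time sentiment cue (and not Agent Assist's actual API), a copilot might score each caller utterance as it is transcribed and surface a hint on the agent's dashboard. The word lists and thresholds below are illustrative only.

# Generic sketch of per-utterance sentiment scoring for a live-call dashboard.
# Not the Agent Assist API; word lists and thresholds are illustrative.
NEGATIVE = {"angry", "frustrated", "cancel", "terrible", "worst", "annoyed"}
POSITIVE = {"thanks", "great", "perfect", "appreciate", "happy"}

def sentiment_score(utterance: str) -> float:
    """Return a rough score in [-1, 1] from simple word counts."""
    words = [w.strip(".,!?") for w in utterance.lower().split()]
    neg = sum(w in NEGATIVE for w in words)
    pos = sum(w in POSITIVE for w in words)
    total = neg + pos
    return 0.0 if total == 0 else (pos - neg) / total

def dashboard_hint(utterance: str) -> str:
    score = sentiment_score(utterance)
    if score < -0.3:
        return "Caller sounds frustrated: acknowledge the issue and offer a concrete next step."
    if score > 0.3:
        return "Caller sounds positive: good moment to confirm resolution."
    return "Neutral tone: continue gathering details."

print(dashboard_hint("I'm really frustrated, this is the worst service I've had."))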

The human in the middle

So the concept of the human in the middle is very important. AI agents are a powerful tool meant to enhance the experience, but sometimes a model can hallucinate or produce an error – and a company is responsible for an AI agent's output. That means companies have to own the net result. So copilots with a human in the middle are being employed even in new call centers. Once a system is proven, the role of AI agents can expand, but since call centers have a major impact on customer experience, there needs to be a high level of comfort with the system.

Call centers that use Google's Customer Engagement Suite (CES) engage customers with generative AI for many tasks, such as determining what a client needs and other lower-level processes, to make calls more efficient and reach resolution more quickly. AI agents can, for example, handle back-office operations so humans can focus on higher-value tasks.

It takes time for companies to become comfortable with exploring generative AI solutions. Companies need to focus on the business case and ensure that innovation results in efficiency and savings.

Working with Google Cloud, Capgemini can help companies move into the agentic future. We can help companies build a competitive edge with agents to drive real customer service transformation. Google Cloud's advanced AI capabilities enable businesses to build and deploy intelligent virtual agents easily. It is time to create a frictionless environment for scaling agents, where everything supports the needs of the organization and its customers.

Join us at Google Cloud Next to discover how we’re helping companies embrace the agentic era and benefit from the intersection of innovation and intelligence.

Author

James Housteau

Head of AI | Google Cloud Center of Excellence
Over two decades in the tech world, and every day feels like a new beginning. I’ve been privileged to dive deep into the universe of data, transforming raw information into actionable insights for B2C giants in retail, e-commerce, and consumer packaged goods sectors. Currently pioneering the application of Generative AI at Capgemini, I believe in the unlimited potential this frontier holds for businesses.


    Welcome to the agentic era

    Herschel Parikh
    21 Mar 2025

    Forget chatbots. The age of the agent is here. Imagine a digital workforce that understands, empathizes, and anticipates customer needs as a trusted advisor – a network of AI agents collaborating to deliver truly human-centric experiences.

    This isn’t science fiction; it’s the dawn of the Agentic AI era, and it’s poised to revolutionize customer interactions. Market.us is projecting the global Agentic AI market will be valued at $196.6 billion by 2034, a dramatic leap from $5.2 billion last year. This exponential growth is not just exciting; it signals a fundamental shift. While the possibilities are vast, companies must move beyond simply creating “cool agents” to building robust, collaborative systems.

    Agentic AI is rapidly evolving, and the conversation needs to shift towards building networks of interconnected AI agents. This next stage, focusing on multiagent systems, is where real value will be unlocked. 

    Next-level hyper-personalization: The game changer 

    The true power of multiagent systems lies in their ability to deliver hyper-personalized experiences. Imagine AI agents seamlessly orchestrating across different business areas, instantly accessing client information to tailor interactions in real-time. This level of hyper-personalization, incorporating individual preferences, creates a genuine sense of personal connection. 

    Multiagent systems represent the next evolution in personalized interactions. We’ve moved beyond deterministic chatbots and automated processes to a realm where embedded generative AI enables faster, more personalized interactions that build loyalty and connection. The impact is already evident: according to the Capgemini Research Institute, 31 percent of organizations using generative AI see faster response times, and 58 percent anticipate further improvements. 

    Efficiency and beyond: Connecting agents across departments 

    Beyond enhancing customer experience, connecting agents across departments drives efficiency and productivity through automated, complex workflows. The ability for agents to communicate and operate seamlessly at faster speeds across departments unlocks significant potential. 

    This also expands service capabilities. For example, overcoming language barriers in global call centers becomes possible with multilingual digital agents. Research indicates that 60 percent of consumers would pay more for premium customer service, highlighting the value of these enhanced capabilities. Google's Customer Engagement Suite (CES) provides the AI technology and natural language processing (NLP) that can deliver enhanced customer experiences.

    Connecting agents and data: Unlocking deeper insights 

    Multiagent systems generate valuable data on information and conversations, which, when shared, provides a deeper understanding of customer behavior and trends. 

    This data spans various departments – sales, order management, supply chain, ERP, and marketing – highlighting that inquiries rarely fit neatly into departmental silos. The ability for agents to access data across these silos is crucial for providing cohesive responses to complex customer questions.

    This is why cross-department collaboration is crucial. Agents need seamless handoffs and access to different departments so that when a person engages with them, the conversation continues without waiting for the next agent to be updated. 

    However, simply opening up data is not enough. Robust security protocols are necessary to ensure that not all information is accessible to every agent. Agents must pull information in a way that maintains visibility, requiring a deep understanding of systems for effective deployment. Data security and privacy are paramount. Accessing various data sources requires clear guidelines and governance to ensure compliance with existing data rules. 

    Agentic change management: Blending the human workforce with the “digital workforce” 

    Ideally, digital and human workforces will seamlessly blend, working in unison on daily tasks and customer interactions. Generative AI will continuously learn from feedback and algorithms, while large language models adapt. However, potential biases must be addressed to ensure fairness. 

    Companies must also address the impact of multiagent systems on the human workforce. Clear communication early in the process can prevent resentment toward AI agents. Reassuring employees is a crucial part of change management. If employees fear job losses, they will be less inclined to engage with companies using AI agents. Multiagent systems offer exciting possibilities, but everyone must be part of the solution to maximize the benefits. 

    Building a resilient agentic infrastructure 

    Agentic AI does not mean creating a single, all-encompassing agent. Companies must prioritize resilience. Humans have bad days, and so can AI agents. If a single agent fails, the entire operation can grind to a halt. A multiagent system allows agents to focus on specific areas, ensuring that if one fails, others remain unaffected. 

    The challenge for companies lies in the complexity of the infrastructure required for seamless agent communication. While technology is increasingly sophisticated, the talent to make it work is scarce. Companies need the right skills to build and effectively operate these agentic systems. 

    Google’s Agentspace is an orchestration platform that allows companies to deploy agents easily. The Google ecosystem integrates seamlessly with any system, ensuring smooth information flow, regardless of whether a company is using Google applications and infrastructure. 

    Working with Google Cloud, Capgemini can support customer service transformation that creates seamless, quality interactions and delivers an exceptional level of service, support, and delight to all stakeholders. Advanced AI capabilities and scalable infrastructure mean Google Cloud can build and deploy intelligent virtual agents, enhance agent productivity, and personalize customer experiences easily. We can leverage the power of Google's Customer Engagement Suite to innovate for growth and reinvent business models to unleash what is possible.

    Join us at Google Cloud Next to discover how we’re helping companies embrace the agentic era and benefit from the intersection of innovation and intelligence.

    Author

    Herschel Parikh

    Global Google Cloud Partner Executive
    Herschel is Capgemini’s Global Google Cloud Partner Executive. He has over 12 years’ experience in partner management, sales strategy & operations, and business transformation consulting.


      Gen AI for Intelligent Industry: a new revolution for R&D and operations

      Capgemini
      Charlotte Pierron-Perlès, Alex Marandon, Hugo Cascarigny, Yasmine Oukrid
      Jun 14, 2024

      Generative AI’s (Gen AI) power to transform every aspect of our lives is now common knowledge

      But business leaders are still unsure exactly how to make this revolutionary technology an integral part of their activities. The truth is that the integration of generative AI for operations requires a very different approach than the integration of traditional AI. This is evident in the sphere of R&D and operations.

      In recent years, AI has demonstrated concrete impact across the entire operations value chain. However, the most common and widespread use cases consistently focus on optimizing core processes. One notable development is the way industry leaders use AI to optimize the “physical” manufacturing and delivery of goods. Significant changes include time series analysis to improve process yield or scrap rate, operational research optimizing goods inventory norms, flows of goods or transportation, and computer vision to detect non-conformities.

      With its unmatched ability to navigate, digest, and interact with unstructured information and documentation, Gen AI has reoriented the focus from the “physical” to the “information” world. This means a shift from sensor and connected-asset data to documents, which in turn shifts attention from core manufacturing and supply chain processes to R&D and industry-enabling functions (e.g., sourcing, maintenance, quality and regulatory).

      With this new paradigm in mind, leaders in operations see a new world of opportunities opening up before them, all powered by Gen AI.

      Key capabilities for R&D and operations

      Generative AI for research and operations is a gamechanger for all industries worldwide, optimizing organizations by automating data analysis and customer service tasks. Right now, many organizations are rightly experimenting, developing best practices, and identifying scalable solutions. We at Capgemini Invent were some of the first movers in this space, using our expertise to envision applications.

      Infographic: Six Gen AI capabilities that will make a difference in operations

      Even though we are only at the dawn of this transformation, several concrete use cases are already shining bright with more emerging every week. All that remains is to explore the untold opportunities.

      Gen AI for operations: the use cases already identified span the whole operations value chain

      Smart products

      The ability of Gen AI to emulate a human conversation makes it a prime candidate for enriching user experience with “companion apps”. These apps leverage data collected by connected devices and expose it through a virtual assistant, enabling interactions in natural language and access to insights generated in real time. In the near future, we foresee the emergence of autonomous, edge-deployed large language models (LLMs) embedded directly within products, enabling a new range of uses.

      Engineering and R&D

      Engineering and R&D is probably the most document-intensive area of industry, so Gen AI offers a multitude of opportunities here. Its ability to digest and synthesize complex information, combined with Retrieval-Augmented Generation (RAG), enables engineers to easily search and query knowledge bases or technical documentation using natural language. LLMs have multiple applications across the technical documentation lifecycle, accelerating the creation of drafts, proofreading, and consistency checks against existing standards.

      From the innovative use of AI algorithms for automated molecule design in drug discovery to fragrance formulation generators streamlining perfume creation, the applications are far-reaching and go well beyond LLMs, extending to Gen AI's ability to craft detailed 3D simulation scenarios.

      Manufacturing quality and maintenance

      Leveraging its abilities to extract, synthesize, and classify information, Gen AI can optimize and turbocharge several manufacturing, quality, and maintenance processes. For instance, it can accelerate classification, summarization, search and analysis of quality incidents. It can also automate generation of quality documentation in domains with heavy compliance requirements (e.g., life sciences) and automate the documentation of non-conformities.

      Gen AI-powered software enables human operators in maintenance or engineering to rapidly navigate the documentation of assets in a targeted way. Furthermore, generative AI in manufacturing and operations will be instrumental in consolidating information from other external sources, such as weather forecasts, market insights, and assessments of geopolitical risks. This capacity to integrate data from multiple sources is why Gen AI improves maintenance planning, accelerates operations, and increases efficiency.

      This is all the truer in the case of distributed operations, where data coming from different systems is often heterogeneous and sometimes inconsistent. Gen AI's ability to automatically harmonize and reprocess data from distributed environments, ensuring it meets quality standards, will be key to enabling seamless utilization of these data flows and to de-risking associated field operations.

      Supply chain and purchasing

      Gen AI can also significantly increase productivity of interactions across a network of partners, assets, and inventory. More specifically, Gen AI can boost the resiliency and efficiency of supply chains in the following ways:

      • Enriched demand planning: Complementary to standard ML-based forecasting, by analyzing various exogenous data sources (customer reviews, social-media trends, articles, etc.) to better understand demand drivers.
      • Efficient sourcing: For instance, through improved intelligence on upstream suppliers, by cross-referencing external information about the upstream supply chain with analysis of internal supplier documentation and deriving insights from these data. Gen AI can also improve procurement process efficiency by automating transactional tasks involving interactions with suppliers or the processing of external documentation (contract analysis, CSR compliance checks, automated review of tenders, etc.).
      • Improved customer interactions: Automating and augmenting customer service and back-office operations by accelerating the search, summarization, classification, and processing of trade, logistics, and customer claims.

      Three detailed use cases

      The following three case studies provide an idea of how AI will change R&D and operations in the near future.

      1. Search, synthesis, and reconciliation of engineering documentation

      In the engineering industry, accelerating development processes while securing quality on the most complex products and major industrial projects is a key challenge.

      The main difficulties usually encountered are:

      i) Limited access to engineering knowledge, and/or inefficiencies inherent to collaboration across various business entities involved in one given project.

      ii) Time-consuming – or occasionally even unattainable – compliance requirements (e.g., traceability demonstration standards), which generate significant complexity in retrieving and matching key information from contracts, engineering specs, test procedures, and results.

      Generative AI in research is proving to be profoundly transformative. It can be used to research and analyze extensive amounts of documentation, both internal and external, such as engineering standards, design practices checklists, papers, and industry benchmarks, encompassing various formats, languages, and structures. It can identify and extract the relevant information and find the best way to display the requested information. It can also document user feedback, measuring engineers’ satisfaction and contributing to enhanced model performance.

      Gen AI can also support the reconciliation of information across the V-cycle, by extracting key elements, such as specifications and technical requirements, from technical documents (even when these are not properly referenced). Once retrieved, Gen AI correlates the data to improve requirements traceability (e.g., high-level requirements with low-level specifications).

      Finally, Gen AI can accelerate the generation of test procedures and reports.

      2. Smart search for technical documents

      Engineering teams face the rising complexity of new or changing regulations, extended commercial and industrial ecosystems, technological constraints, customer expectations, and even their own organizational structure and business process management systems. As such, relevant information is often managed by different stakeholders, widely dispersed, and non-homogeneous in various regards (e.g., format, granularity, languages, etc.).

      Engineering teams struggle to manage this complexity. Despite being a critical task, information management is highly time-consuming and potentially a source of new risks or missed opportunities.

      Gen AI can help human operators face these challenges by leveraging its ability to research and analyze the extensive amounts of documentation required for maintenance and engineering, both internal and external, such as maintenance reports and operating protocols.

      Additionally, it can facilitate interactions between human operators and databases by translating natural-language requests into queries and executing them, which can prove decisive in accessing certain information.
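
      As a simple, hypothetical illustration of this pattern, an LLM could translate a maintenance question into a read-only SQL query that is then executed against the database. The schema, the ask_llm() placeholder, and the example question below are all assumptions for the sketch; in practice the generated SQL would come from a real LLM call and be validated before execution.

# Minimal sketch of natural-language access to a maintenance database.
# The schema, the ask_llm() placeholder, and the sample data are hypothetical.
import sqlite3

SCHEMA = "maintenance_reports(asset_id TEXT, reported_on DATE, summary TEXT, severity INTEGER)"

def ask_llm(prompt: str) -> str:
    # Placeholder for a real LLM call that turns the question into SQL.
    return ("SELECT asset_id, summary FROM maintenance_reports "
            "WHERE severity >= 4 ORDER BY reported_on DESC LIMIT 5;")

def answer(question: str, conn: sqlite3.Connection) -> list:
    sql = ask_llm(f"Schema: {SCHEMA}\nWrite one read-only SQL query for: {question}")
    assert sql.lstrip().upper().startswith("SELECT"), "only read-only queries are allowed"
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute(f"CREATE TABLE {SCHEMA}")
conn.execute("INSERT INTO maintenance_reports VALUES ('PUMP-7', '2024-05-02', 'Seal leak detected', 4)")
print(answer("What are the most recent high-severity incidents?", conn))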

      3. Customer service efficiency

      Within the consumer products industry, customer service faces several recurring challenges related to searching, summarizing, and classifying client claims, as well as processing logistics:

      i) Customer service processes involve interacting with multiple IT systems and require communication with various stakeholders to respond to customers' requests, making it complex and tedious to gather relevant information.

      ii) These processes are still largely manual, which is time-consuming and increases the risk of errors. As a result, end-to-end claims resolution can take months to complete, impacting both customer satisfaction and cash flow optimization.

      To address these challenges, Gen AI can be used at several levels:

      • Data retrieval and synthesis: To search for and retrieve relevant information related to the claim from various data sources, such as invoices, delivery receipts, and contracts, all of which exist in different formats.
      • Proposition of insights and validation recommendations: Comparing collected information and received claims, Gen AI can swiftly detect inconsistencies, highlight discrepancies, provide insights, assess whether the customer’s claim is well-founded, and make recommendations on potential outcomes to suggest to customers.
      • Documenting processed claims: Capturing knowledge is of the essence. Documenting processed claims – connecting the claim, its outcome, and the evidence used – paves the way for easier information retrieval and decision-making, should similar cases arise.

      From patterns to trends: Key Gen AI considerations

      We expect the use of Gen AI in all industries to scale up at an accelerated pace in the coming months. Below are some of the more exciting zones of development:

      Multimodality refers to the capability of Gen AI models to process and generate outputs across multiple types of data, such as text, images, and audio. It facilitates more comprehensive and integrated interactions with human users or with other software, enabling the AI to understand and respond seamlessly to complex inputs combining different modalities.

      With the release of GPT-4o, multimodality is set to augment and empower human operators not specifically trained to interact with Gen AI, such as many blue-collar workers. It will radically improve training, upskilling, safety, and eventually the optimization of processes.
      More importantly, multimodality paves the way for intelligent systems to interact autonomously with the physical world. There is currently a major push in R&D to develop a new generation of robots able to communicate with humans via speech and imitated gestures.
      In short, multimodality is key to unlocking the full potential of Gen AI on the shopfloor, having a major impact on the deployment of Industry 4.0 use cases as it matures.

      As of today, Gen AI for R&D and operations is still essentially addressed as standalone custom use cases, handled separately from the “legacy” systems that support R&D, Manufacturing, and operations – namely, from Enterprise Resource Planning (ERP), Manufacturing Execution Systems (MES), Advanced Planning and Scheduling (APS), and Product Lifecycle Management (PLM) systems, which are key to efficiently manage the shopfloor and connect it to the rest of the business. But the first initiatives to bridge the two worlds are emerging.

      Early initiatives, for example, include the generation of content or of code interpretable by MES systems or Programmable Logic Controllers (PLCs). Another example is the generation of component designs, sub-assemblies, or complex systems, all based on human guidance communicated through prompts. However, these initiatives still lack full and seamless integration within the core systems.
      In the coming months, we can expect IT vendors in operations (ERP, PLM, MES, APS, etc.) to progressively integrate embedded Gen AI features into their solutions, as Microsoft has done with Copilot in its Office suite. Considering how widespread these solutions are, this may in turn fuel a wider and more systemic adoption of Gen AI within operations – even if standalone custom deployment of Gen AI will probably remain a frequent pattern for client-specific use cases.

      With applications in many industries, Gen AI can supercharge design generation in many different contexts: molecule generation, product design, chipset conception, or designing parts and components. For instance, in the automotive industry, Gen AI models can support the creation of tire design, considering performance requirements and engineering constraints. The consumer products sector is another interesting example, where combinations of Gen AI models are used to accelerate the discovery and selection of optimal formulas for new fragrances.

      Gen AI can completely automate the creation of 2D or 3D designs, concepts, and product architectures. Moreover, it can supercharge computer-aided design or computer-aided engineering models based on requirements and constraints, decide to launch relevant simulations, analyze their results, and adjust the simulations if needed. More importantly, Gen AI can do all this while also automatically creating design models, suggesting suppliers and logistics schemes, and drafting documentation for in-service, end-of-life, and redesign loops. It can also boost eco-design by providing designers with the latest international regulations and compliance requirements.

      RAG: a powerful tool to improve accuracy and limit hallucinations

      Retrieval-Augmented Generation (RAG) is an approach combining two basic capabilities of Gen AI – information retrieval and content generation – that has proven very effective at producing highly accurate responses.

      To put it simply, RAG essentially consists of restricting the search field in which a Gen AI model will look for relevant information to answer users’ requests to a given, predefined, and limited set of documents. By doing so, this ensures the model will only look within a curated collection of information, the accuracy and quality of which can be guaranteed. This targeted retrieval is then used to generate a more precise and informative response.

      RAG is particularly helpful in limiting or eliminating hallucinations, which are instances where the model might generate incorrect or nonsensical information. This is achieved by grounding responses in data coming from reliable and relevant documents. This approach is currently implemented in a vast range of use cases, in various industries where the reliability of responses is particularly important.
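
      To show what this looks like at its simplest, here is a minimal sketch of the RAG pattern: retrieval is restricted to a small curated document set, and the retrieved passages are assembled into a grounded prompt. The documents, the naive word-overlap scoring, and the prompt wording are illustrative; a production system would use embeddings, a vector store, and an actual LLM call.

# Minimal sketch of Retrieval-Augmented Generation over a curated document set.
# Scoring is naive word overlap for readability; the documents and prompt are illustrative.
CURATED_DOCS = {
    "modem-reset": "To reset the X200 modem, hold the recessed button for 10 seconds until the LED blinks.",
    "billing-cycle": "Invoices are issued on the 1st of each month and payment is due within 21 days.",
    "warranty": "The standard hardware warranty covers manufacturing defects for 24 months.",
}

def retrieve(question: str, k: int = 2) -> list:
    """Rank curated documents by how many question words they share."""
    q_words = set(question.lower().split())
    ranked = sorted(
        CURATED_DOCS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Assemble a prompt that grounds the model in the retrieved passages only."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return ("Answer using ONLY the context below. If the answer is not there, say so.\n"
            f"Context:\n{context}\n\nQuestion: {question}")

print(build_prompt("How do I reset the modem?"))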

      3 vectors of success for Gen AI in Research, Development, and Operations

      To unlock the potential of generative AI in operations, as with any other digital innovation, companies must integrate the technology into their digitalization strategy, build scalable tools, and upskill their staff on how best to use these AI tools in their data ecosystem. But Gen AI also comes with its specificities. So, how do you avoid the pitfalls and find the stepping stones?

      Even more than standard AI, Gen AI is often easy to implement in a Proof of Concept (PoC) but very challenging to scale up: the most advanced customer use cases are specific, with Gen AI deeply integrated with legacy IT systems at the core of operations processes. Thus, business leaders looking at the potential of Gen AI need to be very pragmatic, adopting a “fail fast” mindset while being prepared to iterate. This is the most efficient way to reach achievable targets.

      Sandbox experimentation can be hugely beneficial on a limited operational scope, decoupled from legacy IT. Here, concepts can be tested without first needing to rethink the entire system and perhaps scrap the lot.

      We work with our clients to make digital continuity, standardization, documentation of projects, and unification of data a part of enterprise-wide models and infrastructure. And even though Gen AI can query and manipulate massive amounts of unstructured data, this should not be an excuse to curtail or stop investment in data quality and structure.

      As for traditional AI, large and high-quality datasets are needed to train Gen AI. The quality and accuracy of training datasets determine the models’ outputs, and here, robust data foundations remain a necessity. This is particularly true in distributed environments, where data models from different sites are generally heterogeneous, leading to difficulties in replicating experiments and scaling up.
      That is why we strongly recommend keeping data structured in organized semantic models, such as product lifecycle management models. We also believe in maintaining investments in digital continuity transformation. Building clean, structured, reliable, and federated manufacturing and operations data models will be instrumental in supporting the deployment of packaged and generic solutions combining AI and Gen AI.

      The Gen AI ecosystem is moving fast, with hyperscalers at the forefront. In this race for technology, no one can be the best in every category, and business leaders in the digital ecosystem must reflect deeply on where their added value lies. Developing technology internally for each use case may not always be the best solution. In some cases, the cost of developing a custom solution, scaling it up, and maintaining it in the long run will simply be too high compared to the additional value it brings. In this scenario, many look to integrate an off-the-shelf solution developed by a third party.

      One year ago, “buy” was not always an option, as software vendors were not always able to meet demand for Gen AI-powered solutions, and there was no other way than “make”. But things are changing fast: vendors are increasingly including Gen AI-based features in their solutions. This trend will only gain momentum, as most suppliers of existing solutions dedicated to operations are working to enhance their products with Gen AI features. In the short term, we even expect vendors to create environments enabling the construction of customized Gen AI solutions.

      Final thoughts:

      The Gen AI landscape is evolving at a phenomenal pace – too great a pace for some. Barely had GPT-3.5 dropped from media headlines when GPT-4o was released and made free to the general public. With this in mind, it is vital that business leaders stay up to date on the technological roadmaps of software providers. This is the only way to know whether solutions already exist or should be customized for a specific need. Additionally, be sure to systematically assess and then monitor the value creation of any custom solution developed in-house, all the while asking yourself one simple question: is it worth it?

      Authors

      Charlotte Pierron-Perlès

      EVP, Managing Director of Intelligent Industry, Capgemini Invent
      Charlotte is the Managing Director of Capgemini Invent's Intelligent Industry global practice. She drives the CxO agenda on R&D and operations transformation. Charlotte has 20 years of experience and acts as a trusted advisor, leading large-scale end-to-end transformations for global companies where data, Gen AI, and advanced technologies help drive significant revenue growth and enhance competitiveness, while meeting sustainability imperatives.

      Alex Marandon

      Vice President & Global Head of Generative AI Accelerator, Capgemini Invent
      Alex brings over 20 years of experience in the tech and data space. He started his career as a CTO in startups, later leading data science and engineering in the travel sector. Eight years ago, he joined Capgemini Invent, where he has been at the forefront of driving digital innovation and transformation for his clients. He has a strong track record in designing large-scale data ecosystems, especially in the industrial sector. In his current role, Alex crafts Gen AI go-to-market strategies, develops assets, upskills teams, and assists clients in scaling AI and Gen AI solutions from proof of concept to value generation.

      Hugo Cascarigny

      Vice President & Global Head of Data & AI for Intelligent Industry, Capgemini Invent
      Hugo Cascarigny has been passionate about AI, data, and analytics since he joined Invent 12 years ago. As a long-time member of the industries and operations teams, he is dedicated to transforming AI into practical efficiency levers within Engineering, Supply Chain, and Manufacturing. In his role as Global Data & AI Leader, he spearheads the development of AI and generative AI offerings across Invent.

      Yasmine Oukrid

      Senior Manager, Intelligent Industry, Capgemini Invent
      Yasmine is a key member of the Intelligent Industry Group Accelerator, where she focuses on defining and executing Intelligent Industry strategies to establish a unique market positioning. She is involved in CxO-level business development, strategic deal shaping, and partnership building. Yasmine supports companies in accelerating their Intelligent Industry digital transformation, addressing challenges related to scaling Smart Factory implementations, software-driven transformation, and utilizing Data and generative AI for operations. Her expertise spans across various industries, with a specific focus on Life Sciences, Automotive, Telco, and High-tech sectors.


        The next industrial revolution – Multi-agent systems and small Gen AI models are transforming factories

        Jonathan Kirk, Data Scientist, I&D Insight Generation, Capgemini’s Insights & Data
        Jonathan Aston
        Jan 23, 2025

        Factories are transforming and becoming smarter through the introduction of powerful multi-agent AI systems.

        In this blog, we’ll take a close look at how these revolutionary AI-powered systems can help drive the factories of tomorrow. 

        A lesson from history 

        The industrial revolutions of the past can be described in two ways: firstly, as the emergence of new types of power. The transition from using humans and animals to using steam power in the 18th century was a significant revolution that enabled huge productivity gains as well as transportation innovations and urbanization. Secondly, the industrial revolutions marked the emergence of specialization: splitting up work into smaller tasks, with dedicated humans or machines for each part of the process. This enabled standardization and mass production. 

        Coinciding with this, education and knowledge became specialized as well – people were only trained on their individual part of a process. Eventually, the innovation of machinery introduced automated reactivity to factory processes. Machines could now use condition-based “if this, then that” actions to complete a task. 

        In today’s factories, we are seeing the emergence of innovative multi-agent AI systems, which reflect the above themes in many ways, while also exhibiting some differences. In this blog, we’ll take a closer look at some of these new developments. 


        What are multi-agent AI systems? 

        Multi-agent AI systems consist of autonomous agents or bots equipped with AI capabilities that work together to achieve a desired outcome. An agent in this context can be defined as  “an entity which acts on another entity’s behalf.” In these multi-agent systems, AI agents cooperate to achieve the goals of people who own certain processes and tasks. 

        Multi-agent systems can be thought of as having five dimensions of complexity when compared to a single-agent system:

        1. Single to multi – adding more agents.
        2. Homogenous to differentiated – having fundamentally different roles between agents.
        3. Centralized to decentralized – removing the need for a single/central point of orchestration.
        4. Generic to specialized – adding in different backgrounds and knowledge to create different expert agents.
        5. Reactive to proactive – agents that can act independently in response to changes in the environment, without needing to be prompted.

        Are there parallels with the previous industrial revolutions that suggest agents might accelerate the next one? 

        Let’s take the principles of multi-agent AI systems and apply them to a smart factory.  

        • Each machine can have its own AI agent, while multiple machines or types of work can be managed by supervisor agents.  
        • Most industrial tasks require multiple machines to work together, either in a streamlined, one-piece flow or in batches. Even machines working in “islands” need to be coordinated for the work in progress to be controlled, with no idle time. This requires many different roles to be assigned to different agents.
        • Adding a decentralized AI management layer can be very beneficial for a factory. There are many advantages to having sub-teams of agents with the ability to act independently of each other and run different areas of a factory to meet objectives.  
        • Each machine works in a different way, and each area of a factory requires specialized knowledge. Therefore, each agent needs its own pertinent information to be able to act effectively. Higher levels of agent specialization would be very valuable to a smart factory. 
        • Agents would benefit from autonomously determining when and how they need to act, rather than waiting for permission or being told when to do so. If agents were connected to the market, they could independently decide what to do. For example, an agent might exhibit this reasoning: “although the plan says that we have to produce this mix, I will change it because I think that there will be an increase in that particular product due to X and Y.”  

        Multi-agent AI systems deliver clear improvements to factory processes and outcomes, including reduced downtime and increased optimization and efficiency. We also have the ability to add AI agents to data processing tasks, such as image and video analysis. This unlocks the potential of understanding input data in ways that were not possible before.  
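
        A minimal sketch of this division of labour is shown below, assuming a single supervisor coordinating a few machine-level agents. The machine names, telemetry, and decision rules are invented for illustration; a real deployment would sit on top of the plant's actual control and data systems.

# Minimal sketch of a supervisor agent coordinating specialized machine agents.
# Machine names, telemetry values, and decision rules are illustrative.
class MachineAgent:
    """Owns one machine and proposes local actions from its own telemetry."""
    def __init__(self, name: str, max_temp: float) -> None:
        self.name = name
        self.max_temp = max_temp

    def propose_action(self, temperature: float, queue_length: int) -> str:
        if temperature > self.max_temp:
            return f"{self.name}: slow down to cool below {self.max_temp} C"
        if queue_length > 10:
            return f"{self.name}: increase throughput, {queue_length} items waiting"
        return f"{self.name}: hold current settings"

class SupervisorAgent:
    """Coordinates a line of machine agents and collects their proposals."""
    def __init__(self, machines: list) -> None:
        self.machines = machines

    def plan(self, telemetry: dict) -> list:
        return [m.propose_action(*telemetry[m.name]) for m in self.machines]

line = SupervisorAgent([MachineAgent("press-1", 80.0), MachineAgent("oven-2", 220.0)])
print(line.plan({"press-1": (85.2, 3), "oven-2": (210.0, 14)}))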

        Unlocking new ways of understanding data in smart factories 

        In-line process control (IPC) is an approach that provides immediate feedback and adjustments based on real-time monitoring to maintain desired performance, quality, or output. If this is done well, it improves efficiency and reduces waste. However, the approach is difficult to implement, especially in systems based around humans. There are many data sources that need to be reviewed and understood in real time, and very experienced individuals tend to be the ones relied upon for this task. This experience is hard to acquire, potentially expensive, and still may not be sufficient to get the best results. This is, therefore, a great area of opportunity for multi-agent AI systems, which are very good at taking in lots of information, understanding what it means, and making real-time adjustments.  
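
        Stripped to its essentials, the kind of real-time adjustment an IPC agent makes is a feedback loop: read the process, compare against a target, nudge a setting. The simulated sensor, target, and gain below are illustrative placeholders.

# Minimal sketch of in-line process control: monitor a reading, adjust a setting.
# The simulated sensor, target, and gain are illustrative placeholders.
def read_sensor(setting: float) -> float:
    # Stand-in for a real-time measurement that responds to the current setting.
    return 1.8 * setting + 5.0

def control_loop(target: float, setting: float, gain: float = 0.2, steps: int = 10) -> float:
    for step in range(steps):
        reading = read_sensor(setting)
        error = target - reading
        setting += gain * error  # proportional correction toward the target
        print(f"step {step}: reading={reading:.1f}, setting={setting:.2f}")
    return setting

# Aim for a process reading of 95 (e.g., a line temperature), starting from setting 40.
control_loop(target=95.0, setting=40.0)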

        Let’s look at two examples of how this works. First, let’s say you are making potato crisps and need to understand how the cooking time of the crisps differs depending on the size and growing conditions of the potatoes. This can be a complex problem involving lots of disparate data sources that a multi-agent AI system could cope with well. The system could also help to determine the root cause of any problems that arise.

        A second example: if you are processing rubber in an extrusion line, the composition of the raw materials, their current mechanical and thermal characteristics, and the line parameters all influence the quality and speed of extrusion. This is a very complex problem, and in-line process control performed by an AI multi-agent system could add a lot of value. 

        Another advantage of this application is that it can be integrated into factories of varying levels of infrastructure quality. Sensors may not be perfect, and information from outside the factory may have data quality issues, but removing even some of the problems will give great productivity and quality benefits. This can be especially true if costly manual inspections could be streamlined, alongside the more obvious benefits of reduced waste.


        Multi-agent AI systems are revolutionary for factories 

        We see parallels between the industrial revolutions of the past and what we are seeing today as multi-agent AI systems are adopted in factories. The difference now is that we are not transitioning power sources from people or animals to steam, or substituting humans in physical parts of processes. Instead, we’re allowing AI to perform tasks where it is beneficial to do so, and where it can perform the task better than a human. It is also worth bearing in mind that the real world is messy, and multi-agent AI systems can help us be more resilient and flexible.

        New innovations like real-time AI processing on the edge can accelerate the next AI-powered industrial revolution and deliver productivity benefits similar to those seen in the first one. The edge component is critical, as it is more responsive than the cloud, permitting real-time control. It also offers higher levels of data security, enables offline operation (which is critical for factories), and significantly reduces the cost of the operation.

        However, AI will likely not be operating alone. I believe we will have human-AI hybrid systems for quite some time, and this is in no way a bad thing. It will be essential that humans and AI work effectively together – because for AI systems to bring value, they need to empower people, rather than replace them.  

        This blog article was written in collaboration with Ramon Antelo (Capgemini Engineering).

        About the Generative AI Lab 

        We are the Generative AI Lab, expert partners that help you confidently visualize and pursue a better, sustainable, and trusted AI-enabled future. We do this by understanding, pre-empting, and harnessing emerging trends and technologies. Ultimately, making possible trustworthy and reliable AI that triggers your imagination, enhances your productivity, and increases your efficiency. We will support you with the business challenges you know about and the emerging ones you will need to know to succeed in the future.  

        We have three key focus areas: multi-agent systems, small language models (SLMs), and hybrid AI. We create blogs like this one, points of view (POVs), and demos around these focus areas to start a conversation about how AI will impact us in the future. For more information on the AI Lab and more of the work we have done, visit this page: AI Lab

          

        Meet the author

        Ramon Antelo

        CTO Manufacturing and Industrial Operations, Capgemini Engineering

        Jonathan Aston

        Data Scientist, AI Lab, Capgemini Invent
        Jonathan Aston specialized in behavioral ecology before transitioning to a career in data science. He has been actively engaged in the fields of data science and artificial intelligence (AI) since the mid-2010s. Jonathan possesses extensive experience in both the public and private sectors, where he has successfully delivered solutions to address critical business challenges. His expertise encompasses a range of well-known and custom statistical, AI, and machine learning techniques.

          Top five key trends shaping employee experience in 2025

          James McMahon
          Jan 28, 2025

          2024 was a year where fresh emerging technologies highlighted the potential to change the way people work. At Capgemini we believe 2025 will be a breakthrough year, where promising technologies mature and proofs of concept evolve, making step changes in digital experiences possible.

          2025 will be all about how organizations use technology to maximize the potential of their employees, focusing on delivering improvements in both workplace efficiency and employee performance.

          By eliminating friction and enabling people to focus on making a difference as key contributors and team players, organizations can excel at what they do, in a sustainable way.

          1. AI and Gen AI will personalize the way we all work

          How could we start with anything but Gen AI? As we look ahead to 2025, Gen AI will continue to transform the way we work. Use cases and improvement measurement must be the focus in order to unlock value. Our research shows that 82 percent of companies plan to integrate AI agents in the next one to three years. We will begin to see employees working alongside their Gen AI counterparts – AI-powered robots and cobots – on a day-to-day basis, to drive business outcomes.

          Gen AI will become increasingly embedded, getting further integrated into platforms such as Microsoft 365, Google Workspace, and ServiceNow. Tools such as Copilot, for instance, will become a more targeted, integral part of the workplace, with a clearer role as the modern-day interface for AI. They will be increasingly personalized and integrated to meet specific business and user needs, enhancing employee experience.

          The maturing of agentic AI and multi-agent collaboration will make it easier for employees to turn their ideas into reality. Every employee interaction will be different. Assistive technologies and inclusive design will be key factors in the deployment of modern technologies, ensuring an intuitive, context-driven, and personalized experience for all preferences and work styles.

          To build this modern human-centric and AI-powered workplace, organizations will need to focus on training and upskilling their employees. Furthermore, the establishment of Centers of Excellence (CoEs) will facilitate the alignment of AI-ready data with specific use cases, governance frameworks, knowledge sharing, and overall adoption.

          2. Support will never be the same again: AI agents will transform how we get help

          AI agents are set to transform support services by offering hyper-personalized, fast, and reliable assistance at any time, in any location, and on any device. Employees will be able to easily obtain information or request help in their preferred language, thereby making support more accessible and efficient, and improving employee engagement. Insight and analytics will make support proactive, personal, and employee-centric.

          AI agents can effectively address common concerns of employees across various functions, including IT, HR, and finance, thereby ensuring a consistent experience throughout the entire employee journey. They will elevate IT service management (ITSM) by automating and streamlining processes for everyday tasks and requests.

          AI-powered virtual assistants will continue to evolve, helping employees work smarter and better. They will be able to handle increasingly complex requests and offer improved quality for real-time voice interactions. Real-time translation for certain use cases is anticipated to become a reality this year.

          3. Is your workforce ready? Digital adoption and coaching will unlock productivity gains

          The business case for adopting new technology in the workplace hinges on improvements in productivity and performance. It’s not just about adding another tool. Emphasis must also be placed on helping employees embrace new ways of working to unlock experience and productivity enhancements. For instance, most employees only utilize basic features of Copilot, such as transcribing calls, when the technology can do so much more with the right coaching.

          Within today’s multi-generational workforces, there is a mix of aptitudes for technology and differing work styles, with return-to-office mandates dominating the headlines. Leadership must seek change management techniques, and training customized to user profiles, to remove barriers to technology adoption.

          New standards will emerge to measure employee experience more intelligently and proactively. Experience metrics and experience level agreements (XLAs) will become increasingly linked to the performance of the individual, with well-being and perception of service quality remaining core tenets of the approach. 

          At the same time, we anticipate an emphasis on usage (FinOps for productivity tools) in conjunction with a focus on adoption and optimization of value. Organizations will strive to understand and manage employee experience to maximize the value of their software investments.

          4. Re-engineering processes and journeys is the need of the hour, to support new ways of working

          Both efficiency and performance are underpinned by underlying processes and the experience journey of the employee. To fully exploit the potential of Gen AI and other emerging technologies, business leaders will need to systematically prioritize and re-evaluate existing processes, and establish policies and guidelines for Gen AI, AI agents, and other workplace tools to ensure data security, ethical practices, and robust governance. This is nothing new, but it must take center stage once again.

          Organizations will look to unify employee services and ensure that their enterprise service management approach is a fit for the future workplace environment. This is pivotal for delivering impactful moments and experiences for employees. This experience-centric approach will require crossover conversations across departments and functions such as IT, finance, HR, and facilities, and this will be enabled by layers of technologies like agentic AI. Deploying circularity and other sustainable practices will also require transformation of the entire ecosystem and its underlying processes. Optimizing processes will be key to delivering a seamless experience throughout the employee journey.

          5. Embedded sustainability equals efficiency. It must be part of the overall efficiency drive

          More organizations must integrate sustainable practices into offices and workplace device supply chains, using unified endpoint management (UEM) tools and device-as-a-service models to improve efficiency and meet net-zero goals. The principle of circularity will be integrated across the value chain, extending device lifespans, reducing material usage, and yielding efficiency gains and savings.

          This will necessitate active employee participation. To engage and empower employees, organizations must incorporate technology into their sustainability initiatives. For example, applications equipped with real-time environmental tracking will enable employees to monitor how their actions contribute to the organization’s sustainability goals, fostering a sense of accountability and ownership. Gamification methods such as scoring systems, leaderboards, and badges can serve as effective tools to promote environmentally friendly behaviors. Learning must be fun and seamless across platforms to keep employees informed and motivated.

          Leadership must leverage data and analytics to analyze employee behaviors and take steps to fine-tune their sustainability programs. Linking sustainability to efficiency and savings will compel organizations to take action, even if they operate in regions of the world where sustainability is not high on the agenda.

AI agents, Gen AI, sustainability, and unlocking tangible value from an experience-centric workplace will all require end-to-end workplace transformation. 2025 can be the year when promise becomes reality: the technologies and data that can make a real difference are all in play. Focus, prioritization, and integration will now determine how we all work, regardless of role and location in today’s dynamic work environment.

Are you looking to support employees with an experience-centric, AI-driven workplace?

          Talk to us.

          Meet our expert

          James McMahon

          Global Head of Employee Experience – Cloud Infrastructure Services
          Global Head of Employee Experience at Capgemini, James has over 20 years’ experience in the field of employee experience and digital workplace services.

            Can nuclear provide the power that drives the AI revolution?

            Paul Shoemaker
            Mar 10, 2025

            The race to develop and exploit the extraordinary capabilities of AI and other breakthrough technologies is accelerating at a dizzying pace. But while governments, businesses and citizens are scrambling to take advantage of the seemingly limitless ability of AI to transform almost every aspect of our lives, there’s another challenge looming on the horizon.

            As economies in general, and tech companies in particular, are striving to transition to renewable energy sources and to reduce carbon footprint, the boom in AI-related data processing is producing a huge surge in demand for power. But, as the need for clean and secure electricity supplies soars, could nuclear be set to play a vital role in bridging the potential energy gap?

            Powering AI will require 9% of US grid capacity by 2030

            Powering the world’s rapidly expanding network of data centres has already had significant impacts on society and public policy. In Europe, major data centre clusters, around Dublin and Amsterdam for example, require so much electricity that further data centre expansion in those cities is on hold until new, additional sources of energy come on stream.

            As recently as 2020, UK data centres used just over 1% of the nation’s electricity. By 2030 this figure is forecast to reach 7%. Demand is set to be even greater in the US, the global centre of AI innovation, with predictions that, by 2030, 9% of all grid capacity will be used to power AI technologies alone. It’s a monumental challenge that traditional energy utility organisations cannot meet alone.

            SMRs will change the game for businesses transitioning to low carbon energy

            New research published by Capgemini to coincide with the 2025 World Economic Forum in Davos reveals that 72% of business leaders say they will increase investment in climate technologies, including hydrogen, renewables, nuclear, batteries, and carbon capture, with nuclear energy in their top three climate technology investment priorities for 2025.

            This direction of travel chimes with statements made during Davos by the International Energy Agency (IEA). The IEA heralds “a new era for nuclear energy, with new projects, policies and investments increasing, including in advances such as small modular reactors (SMRs)”.

According to IAEA Director General Rafael Mariano Grossi: “one after another, technology companies looking for reliable low-carbon electricity to power AI and data centres are turning to nuclear energy, both in the form of traditional large reactors and SMRs.”

Around 60 new reactors are currently under construction in 15 countries around the world, with 20 more countries, including Ghana, Poland, and the Philippines, developing policies to enable construction of their first nuclear power plants. The US Energy Information Administration (EIA) estimates that, by 2050, global nuclear capacity could increase by up to 250% compared to the end of 2023.

            Clean, reliable, available – and safe

            It’s easy to understand why nuclear is set to play an increasingly significant dual role in both powering the AI revolution and decarbonising industry. Its 99.999% guarantee of stable energy availability compares with just 30-40% from weather-dependent wind or solar generation.

Decades of continuous improvements in reactor design and operation make nuclear the second safest source of energy in the world after solar, according to the International Atomic Energy Agency (IAEA), although the Agency also points out that large-scale solar power systems need 46 times as much land as nuclear to produce one unit of energy.

            But it’s the potential to rapidly deploy SMRs that could have the most significant impact in preventing the looming energy gap, as AI-driven data processing requirements grow exponentially. It’s important to remember that most light-water SMRs are simply smaller versions of the large-scale GEN III+ technology with proven safety and operational records, with small generally defined as having a maximum output of 300 MWe. The underlying scientific and operational principles are not technologically new in themselves.

As the name suggests, SMRs’ modular design enables major components to be constructed at speed in a factory environment, for bespoke assembly on site, located flexibly close to consumers. With a footprint around the size of a sports stadium, they can easily be placed near demand, such as data centres or industrial estates.
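As a rough, back-of-the-envelope illustration of that siting argument, the sketch below shows the simple arithmetic involved. The 300 MWe cap comes from the SMR definition above; the per-site load and capacity factor are assumptions made for the example, not figures from this article.

```python
# Illustrative only: rough sizing of SMR output against data-centre demand.
# The 300 MWe cap reflects the SMR definition above; the per-site load and
# capacity factor are assumed values, used purely for the sake of the example.

smr_output_mwe = 300          # upper bound for a "small" modular reactor
capacity_factor = 0.9         # assumed availability of a nuclear unit
data_centre_load_mw = 80      # assumed average draw of one hyperscale site

effective_output = smr_output_mwe * capacity_factor
sites_supported = effective_output / data_centre_load_mw

print(f"One SMR could supply roughly {sites_supported:.1f} hyperscale sites")
# -> One SMR could supply roughly 3.4 hyperscale sites
```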

Reduced construction times, lower investment and running costs, and the ability to add or reduce capacity as demand changes are just some of SMRs’ obvious advantages. They’re ready-made replacements for fossil-fuel based generation and, because nuclear is less vulnerable to price fluctuations, owners and consumers of SMR-generated power have more budget certainty and can plan more accurately for the long term.

            SMRs, specifically the advanced reactor designs, can also be adapted to supply heat for industrial applications, district heating systems and the production of hydrogen, and are increasingly regarded as catalysts for economic development and job creation.

            Tech giants at the front of the queue

            Many of the global tech giants are actively working on plans to develop their own SMR-based generating capabilities, to provide their own independent sources of safe, stable, low-carbon power, protected from the increasingly volatile open market.

It’s a race they must win, not only to fuel the AI revolution, but because doing so will accelerate the transition to a low-carbon world economy.

            Author

            Paul Shoemaker

            Director of Nuclear Transformation, North America
            Evangelist for a clean energy future powered by safe, reliable, nuclear energy.

              The Grade-AI Generation:
              Revolutionizing education with generative AI

              Dr. Daniel Kühlwein
              March 19, 2025

              Our Global Data Science Challenge is shaping the future of learning. In an era when AI is reshaping industries, Capgemini’s 7th Global Data Science Challenge (GDSC) tackled education.

              By harnessing cutting-edge AI and advanced data analysis techniques, participants, from seasoned professionals to aspiring data scientists, are building tools to empower educators and policy makers worldwide to improve teaching and learning.

              The rapidly evolving landscape of artificial intelligence presents a crucial question: how can we leverage its power to solve real life challenges? Capgemini’s Global Data Science Challenge (GDSC) has been answering this question for years and, in 2024, it took on its most significant mission yet – revolutionizing education through smarter decision making.

The need for innovation in education is undeniable. Understanding which learners are making progress, which are not, and why is critically important for education leaders and policymakers to prioritize interventions and education policies effectively. According to UNESCO, a staggering 251 million children worldwide remain out of school. Among those who do attend, the average annual improvement in reading proficiency at the end of primary education is alarmingly slow: just 0.4 percentage points per year. This represents a stark challenge for global foundational learning, hampering efforts to achieve the education goal set out in the Sustainable Development Agenda.

              The Grade-AI Generation: A collaborative effort

              The GDSC 2024, aptly named “The Grade-AI Generation,” brought together a powerful consortium. Capgemini offered its data science expertise, UNESCO contributed its deep understanding of global educational challenges, and Amazon Web Services (AWS) provided access to cutting-edge AI technologies. This collaboration unlocks the hidden potential within vast learning assessment datasets, transforming raw data into actionable insights for decision making that could change the future of millions of children worldwide.

At the heart of this year’s challenge lies the PIRLS 2021 dataset – a comprehensive global survey encompassing over 30 million data points on 4th grade children’s reading achievement. This dataset is particularly valuable because it provides rich, standardized data that allows participants to identify patterns and trends across different regions and education systems. By analyzing factors such as student performance, demographics, instructional approaches, curriculum, and home environment, an AI-powered education policy expert can offer insights that would take far more time and resources to gain through traditional methods. Participants were tasked with creating such an expert, capable of analyzing this rich data and providing data-driven advice to policymakers, education leaders, and teachers, as well as parents and students themselves.

              Building the future: Agentic AI systems

              The challenge leveraged state-of-the-art AI technologies, particularly focusing on agentic systems built with advanced Large Language Models (LLMs) such as Claude, Llama, and Mistral. These systems represent a significant leap forward in AI capabilities, enabling more nuanced understanding and analysis of complex educational data.

              “Generative AI is the most revolutionary technology of our time,” says Mike Miller, Senior Principal Product Lead at AWS, “enabling us to leverage these massive amounts of complicated data to capture for analysis, and present knowledge in more advanced ways. It’s a game-changer and it will help make education more effective around the world and enable our global community to commit to more sustainable development.“

              The transformative potential of AI in education

              The potential impact of this challenge extends far beyond the competition itself. As Gwang-Chol Chang, Chief, Section of Education Policy at UNESCO, explains, “Such innovative technology is exactly what this hackathon has accomplished. Not just only do we see the hope for lifting the reading level of young children around the world, we also see a great potential for a breakthrough in education policy and practice.”

              The GDSC has a proven track record of producing innovations with real-world impact. In the 2023 edition, “The Biodiversity Buzz,” participants developed a new state-of-the-art model for insect classification. Even more impressively, the winning model from the 2020 challenge, “Saving Sperm Whale Lives,” is now being used in the world’s largest public whale-watching site, happywhale.com, demonstrating the tangible outcomes these challenges can produce. 

              Aligning with a global goal

              This year’s challenge aligns perfectly with Capgemini’s belief that data and AI can be a force for good. It embodies the company’s mission to help clients “get the future you want” by applying cutting-edge technology to solve pressing global issues.

              Beyond the competition: A catalyst for change

The GDSC 2024 is more than just a competition; it’s a global collaboration that brings together diverse talents to tackle one of the world’s most critical challenges. By bridging the gap between complex, costly-to-collect learning assessment data and actionable insights, participants have the opportunity to make a lasting impact on global education.

A glimpse into the future

The winning team ‘insAIghtED’ consists of Michal Milkowski, Serhii Zelenyi, Jakub Malenczuk, and Jan Siemieniec, based in Warsaw, Poland. They developed an innovative solution aimed at delivering actionable insights using advanced AI agents. Their model leverages the PIRLS 2021 dataset, which provides structured, sample-based data on reading abilities among 4th graders globally. However, recognizing the limitations of relying solely on this dataset, the team expanded their model to incorporate additional data sources such as GDP, life expectancy, population statistics, and even YouTube content. This multi-agent AI system is designed to provide nuanced insights for educators and policymakers, offering concise answers, data visualizations, more detailed explanations, and even a fun section to engage users.

The architecture of their solution involves a lead data analyst, data engineer, chart preparer, and data scientist, each contributing to different aspects of the model’s functionality. The system is capable of querying databases, aggregating data, performing internet searches, and preparing detailed answers. By integrating various data sources and employing state-of-the-art AI technologies like LangChain and crewAI, the insAIghtED model delivers impactful, real-world, actionable insights that go beyond the numbers, helping to address complex educational challenges and trends.
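The team’s code is not reproduced here, but a minimal sketch of how such a role-based crew can be wired up with the crewAI library might look like the following. The agent prompts, tasks, and the underlying LLM are placeholder assumptions for illustration, not the team’s actual configuration.

```python
# Minimal multi-agent sketch inspired by the insAIghtED architecture.
# Roles mirror the description above; prompts, tasks, and the LLM backend
# (configured separately, e.g. via an API key) are placeholders.
from crewai import Agent, Task, Crew

analyst = Agent(
    role="Lead data analyst",
    goal="Turn PIRLS 2021 findings into advice for policymakers",
    backstory="Expert in international learning assessments.",
)
engineer = Agent(
    role="Data engineer",
    goal="Query and aggregate the PIRLS 2021 database",
    backstory="Knows the dataset schema and SQL.",
)
charter = Agent(
    role="Chart preparer",
    goal="Produce clear visualizations of the aggregated results",
    backstory="Specialist in data visualization.",
)

question = "Visualize the number of students who participated in PIRLS 2021 per country"

tasks = [
    Task(description=f"Fetch the data needed to answer: {question}",
         expected_output="A table of participant counts per country",
         agent=engineer),
    Task(description="Chart the participant counts per country",
         expected_output="A bar-chart specification",
         agent=charter),
    Task(description="Summarize the result for a policymaker audience",
         expected_output="A short, plain-language explanation",
         agent=analyst),
]

crew = Crew(agents=[analyst, engineer, charter], tasks=tasks)
result = crew.kickoff()
print(result)
```

In a real system, the data engineer agent would additionally be given database query tools and the chart preparer a plotting tool, along the lines described above.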

              Example:

Figure 1: Example output from the winning model, answering the prompt “Visualize the number of students who participated in the PIRLS 2021 study per country”.

              As we stand on the brink of an AI-powered educational revolution, the Grade-AI Generation challenge serves as a beacon of innovation and hope. It showcases how the combination of data science, AI, and human creativity and passion can pave the way for a future where quality education is accessible to all, regardless of geographical or socioeconomic barriers.

              Start innovating now –

              Dive into AI for good
              Explore how AI can be applied to solve societal challenges in your local community or industry.

              Embrace agentic AI systems
              Start experimenting with multi-agent AI systems to tackle complex, multi-faceted problems in your field.

              Collaborate globally
              Seek out international partnerships and datasets to bring diverse perspectives to your AI projects.

Interesting read? Capgemini’s Innovation publication, Data-powered Innovation Review – Wave 9, features 15 captivating innovation articles with contributions from leading experts from Capgemini, with a special mention of our external contributors from The Open Group, AWS, and UNESCO. Explore the transformative potential of generative AI, data platforms, and sustainability-driven tech. Find all previous Waves here.

              Meet our authors

              Dr. Daniel Kühlwein

              Managing Data Scientist, AI Center of Excellence, Capgemini

              Mike Miller

              Senior Principal Product Lead, Generative AI, AWS

              Gwang-Chol Chang

              Chief, Section of Education Policy, Education Sector, UNESCO


              Use LeanIX to transform the digital nervous system

              Amit Bhattacharya
              Mar 11, 2025

              Improve usage, boost productivity, and deliver significant cost savings

LeanIX, recently acquired by SAP, is used by architects and data stewards to manage enterprise application portfolios. It incorporates a meta model representing the enterprise architecture and the relationships between the various elements and artifacts.

LeanIX is hosted on a SaaS platform and so integrates easily with single sign-on products such as Okta or PingID. Its meta model is highly customizable and configurable, although it is better to limit customizations. Built-in connectors also integrate it with popular platforms such as Signavio, ServiceNow, and Apptio, and it supports pre-configurable outputs to create client-specific operational reports. Architects can use the Draw.io editor to create solution diagrams and link solution shapes to LeanIX fact sheet records, such as applications and IT components.

LeanIX transformations can be challenging. Users may struggle to maintain data quality, integrated process change management, and architecture governance. They may focus too heavily on tool implementation and overlook the need for a dedicated workforce and process engineering. A client’s overall enterprise architecture (EA) vision may add to the complexity of the transformation, but several solution approaches can be designed to realize that vision.

A good implementation partner will work to understand the overall EA vision; assess the current state of the LeanIX tool, the workforce, and the processes around it; and deliver phased solutions aligned to the client’s vision. This generates cost savings, improves productivity, adoption, and data quality, and enables the digital nervous system.

              Meeting the client’s vision

              Clients can use LeanIX in different ways. Some put LeanIX at the center of enterprise architecture, driving application portfolio management (APM) to become a centralized digital nervous system. Others want LeanIX to act as an enabler, supporting the digital nervous system.

              For example, a client who wants to use LeanIX as an enabler might connect it with a data integration layer. These are often complex implementations, given the need to synchronize data between LeanIX and the host, which acts as a data integration layer. This approach is usually chosen if the client maintains APM data in LeanIX, monitors the data quality, and wants to integrate the different functional data entities into a common data integration layer, and use that as the single source of truth. This has the advantage of allowing applications in the EA to direct calls to the data layer, instead of engaging the individual enablers and creating performance issues.

If a client wants to use LeanIX as the centralized digital nervous system, the remaining publishing applications can be integrated with it, giving full adoption across the different EA layers and among various actors, including architects. This approach also supports solution diagrams in Visio and raw fact sheet data in Excel.

              Lastly, if the client’s vision is to move away from LeanIX and use a home-grown APM tool, the transformation approach would be slightly different, with opportunities to finalize the meta model and critical data elements, and ensure the data and solution diagrams can be migrated to the new APM tool.

LeanIX as an enabler
Pros:
1. Single source of truth for APM data
2. Improvement in adoption by architects and non-architects
3. Productivity improvements
Cons:
1. Invest in a data lake (cost)
2. Sync data with LeanIX (effort)

LeanIX as a centralized digital nervous system
Pros:
1. Single source of truth for all data
2. Improves adoption across the organization
3. Accessible to all three company layers
Cons:
1. May need to customize the meta model (effort)
2. Invest in integrating LeanIX with other non-APM data (cost)

Make LeanIX ready to move to a home-grown APM
Pros:
1. Single source of truth for APM or all data
2. Broad adoption and license cost savings
3. Aligned with a larger corporate strategy
Cons:
1. Invest significantly to build an APM platform from the ground up (cost and effort)

              The value proposition is consistent across the options. The common deliverables are:

• A single source of truth
• Improved adoption by the architect and non-architect community
• Productivity improvements, cost savings, and accessibility to all layers of the organization, through full integration of LeanIX with other platforms in the EA

              The business value of transforming LeanIX is in generating a positive net present value (NPV) from the transformation and, ideally, a positive internal rate of return (IRR).
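For readers who want to sanity-check such a business case, a minimal NPV sketch is shown below. The cash flows and discount rate are invented for illustration only and are not Capgemini benchmarks.

```python
# Illustrative NPV calculation for a transformation business case.
# The cash flows and discount rate are made up for the example.

def npv(rate, cash_flows):
    """Net present value of cash_flows, where cash_flows[0] is year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year 0: implementation cost; years 1-3: savings from adoption,
# productivity, and retired tooling (hypothetical numbers).
flows = [-500_000, 220_000, 260_000, 300_000]

print(f"NPV at 10%: {npv(0.10, flows):,.0f}")   # positive -> the case holds
# The IRR is simply the discount rate at which this NPV falls to zero.
```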

              Find the right partner

              Capgemini understands that the LeanIX transformation journey is not easy. It requires experience in these transformations and understanding of the nuances of the client’s vision, business model, and IT capabilities.

              Capgemini is the right partner. We offer a “value assessment” framework that helps articulate the value of the LeanIX transformation, to justify the business case. Capgemini offers unique differentiators in terms of a thorough understanding of the client’s operating model, alliances with LeanIX and popular middleware solution providers, a dedicated Insights and Data service line, and a solid value assessment methodology. “One Capgemini” is ready to deliver customized solutions for the client.

              Contact me to learn about Capgemini’s LeanIX transformation and how you can apply it.

              Meet our experts

              Amit Bhattacharya

              Senior Enterprise Architect Director
              Amit leads various Enterprise Architecture engagements with multi-national clients based in the United States and has extensive experience in the areas related to architecture modeling platforms, consulting services, architecture assessments, product selection, scoring models, and business value articulation.

                Building a data-driven future: A comprehensive approach to data democratization and organizational growth

                Sharat Bangera
                22 November 2024

                Evolution of data platforms

                During the past few decades, data platforms have evolved to meet the challenges posed by the exponential growth of data and the dynamic demands of data consumers seeking deeper insights. The rise of cloud platforms has accelerated this transformation, offering scalable storage and compute capabilities that make it easier than ever for organizations to harness massive amounts of data for competitive advantage. Today, companies across all sectors are striving to become data-driven to enhance both strategic and operational outcomes. However, despite these efforts, few have fully achieved the status of a truly data-driven organization.

                Data maturity levels

                As data platforms have evolved, so has the level of data maturity within organizations. Data maturity reflects an organization’s ability to effectively integrate and leverage data for informed decision-making.

                Capgemini’s data maturity model provides a structured framework for assessing an organization’s capabilities in gathering, processing, and utilizing data:

                • Informal: Practices are ad hoc, inconsistent, or nonexistent. Data governance (DG) processes lack structure, and there is limited awareness or enforcement of standards, definitions, or policies. Data management is basic, with minimal documentation and no formal accountability.
                • Recognizing: At this level, organizations start recognizing the need for structured data governance. Initial discussions and planning for standards, processes, and accountability begin. Efforts are underway to create baseline policies, identify data stewards, and establish frameworks, though practices are not yet formalized or consistent.
                • Defined: Formal standards, policies, and processes are implemented. Data governance roles and responsibilities are clearly defined, with structured documentation and basic data quality and security measures in place. A data catalog may be introduced, and data governance initiatives are aligned with business objectives, though they are still primarily reactive.
                • Controlled: Data governance is well-established, monitored, and proactive. Comprehensive frameworks and tools are used to manage data consistently across the organization. Data quality is actively tracked, compliance is embedded, and data management processes are aligned with industry standards. There is greater integration of DG into business workflows, ensuring consistent application.
                • Innovative: DG practices are industry-leading, highly automated, and continuously optimized. Governance is proactive, leveraging advanced analytics, AI, and automation. Standards are embedded deeply within the organization’s processes, with DG frameworks driving strategic insights and business value. Practices support innovation and adaptive improvements, setting benchmarks in the industry.

Advancing through these maturity levels enables organizations to transform into data-powered, intelligent enterprises while building trusted data assets that drive informed decisions and foster innovation.

                Becoming truly data-driven requires empowering not only data experts but all employees to work effectively with data within a collaborative ecosystem. This transformation involves democratizing data access, enabling agility, and promoting data-driven decision-making at every level across the organization.

                Data democratization

                Data democratization is an organization’s ability to motivate and empower a broad range of employees—not just data experts—to understand, locate, access, use, and share data securely and in compliance with standards.

                By ensuring that the right individuals have access to the right data at the right time and for the right purpose, using approved tools and receiving necessary training, data democratization enables employees to make informed decisions, anticipate challenges, and identify growth opportunities. Achieving this requires an organization-wide cultural shift, transforming how data is stored, accessed, and shared.

                Data democratization is an ongoing process that must continuously evolve to meet the organization’s emerging data needs. Here are some best practices for organizations beginning their data democratization journey.

                Data maturity assessment

                Before initiating a transformation, it is essential to assess the current state of the data landscape, including data collection (sources), storage (on-premises or cloud), management, and usage. This assessment should also evaluate employees’ data literacy levels (by persona) to identify necessary training. Additionally, it should review the organization’s security framework and compliance protocols to ensure safe data democratization.

                Setting data democratization goals

                A clear definition of data democratization goals helps shape the roadmap for implementation. Aligning data democratization with business objectives—such as providing service agents with 360-degree customer profiles to enhance customer support, boost brand value, and drive revenue—ensures that data initiatives directly support organizational priorities.

                Enabling data accessibility through a data marketplace

                Following the data maturity assessment and goal-setting, the next step is to establish a framework to eliminate data silos. This framework should enable employees to:

                • Search for required data intuitively through user-friendly interfaces
                • Discover information about datasets before requesting access
                • Access the data needed for analysis and insights

                To achieve this, organizations can deploy platforms like data catalogs or metadata hubs that allow users to explore data before shopping for it. Large enterprises increasingly implement self-service platforms, or data marketplaces, that cater to diverse users and use cases, enabling data owners to offer datasets and data consumers to browse and access them. A well-governed data marketplace promotes awareness of available datasets while ensuring compliant, secure access.
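As a rough illustration of the search-and-discover pattern described above, the sketch below models a tiny catalog with keyword search and a governed access policy. The dataset entries and the policy field are invented for the example and do not represent any specific catalog product.

```python
# Minimal sketch of a searchable data catalog with governed access.
# Dataset entries and the access-policy field are illustrative only.
from dataclasses import dataclass, field

@dataclass
class DatasetEntry:
    name: str
    description: str
    owner: str
    tags: set = field(default_factory=set)
    access_policy: str = "request-approval"   # access remains governed by data owners

CATALOG = [
    DatasetEntry("customer_360", "Unified customer profiles", "CRM domain",
                 {"customer", "support", "marketing"}),
    DatasetEntry("claims_history", "Insurance claims, 2015-2024", "Claims domain",
                 {"claims", "risk"}),
]

def search(keyword: str):
    """Discover datasets by keyword before requesting access."""
    kw = keyword.lower()
    return [d for d in CATALOG
            if kw in d.name.lower() or kw in d.description.lower() or kw in d.tags]

for hit in search("customer"):
    print(hit.name, "-", hit.description, f"[{hit.access_policy}]")
```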

                Building a data trust framework

                1. Data governance: Establishes data as a business asset, defines data ownership, ensures engagement of business and IT, develops the data organization and operating model, enforces data policies and processes, and promotes data literacy and culture.
                2. Data catalog: Creates and maintains an inventory of data and their relationships, enabling data stewards, data/business analysts, data engineers, and other data consumers to find and understand relevant data.
3. Data quality: Defines data quality rules to cleanse, enrich, and improve the quality of data to make it fit for purpose. These rules also help measure the Data Quality (DQ) score that provides confidence in the data (a minimal scoring sketch follows this list).
                4. Data protection and privacy: Regulations such as GDPR and HIPAA require organizations to apply specific controls to personal information regarding protection, consent, and disposal. This component ensures security of such sensitive data through classification, masking, and encryption.
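As flagged in the data quality component above, a DQ score can be as simple as the share of rules a dataset passes. A minimal sketch, with records and rules invented purely for the example:

```python
# Minimal sketch of a rule-based Data Quality (DQ) score.
# Records and rules are invented; real frameworks apply many more checks.

records = [
    {"customer_id": "C001", "email": "a@example.com", "age": 34},
    {"customer_id": "C002", "email": "",              "age": -1},
]

rules = {
    "customer_id is present": lambda r: bool(r["customer_id"]),
    "email is non-empty":     lambda r: bool(r["email"]),
    "age is plausible":       lambda r: 0 < r["age"] < 120,
}

checks = [rule(r) for r in records for rule in rules.values()]
dq_score = 100 * sum(checks) / len(checks)
print(f"DQ score: {dq_score:.0f}%")   # 4 of 6 checks pass -> about 67%
```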

                Establishing business ownership of data

                Instead of relying solely on technical teams to manage and distribute data, business domains should own their datasets, managing data end-to-end through a federated governance framework. This framework provides transparency and control, ensuring compliance both within and across domains.

                Self-service tools for insights delivery

                Traditionally, data reports and models were developed by IT specialists, hindering self-service capabilities. With modern tools like Tableau and Power BI, data consumers can now create their own reports and dashboards for data-driven insights. Expanding these self-service tools to cover data acquisition and distribution with proper access controls empowers users and promotes a culture of data-driven decision-making. Organizations must also provide training to maximize the effectiveness of these tools.

                Enhancing data literacy through targeted training

                Data democratization requires not only technology changes but also a cultural shift. Organizations should identify data consumers based on their data needs and design tailored training programs to build foundational data literacy, empowering employees to confidently interpret and utilize data.

                Leveraging AI and Gen AI for data exploration

                AI and Gen AI have transformed how people analyze and gain insights. For data democratization, these technologies can recommend datasets for specific business use cases, enabling users to prompt the Gen AI model to suggest relevant data rather than performing keyword searches. From there, users can explore recommended datasets within the data marketplace to deliver insights aligned with business objectives.
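A hedged sketch of that prompting pattern is shown below. The catalog metadata is invented, and `complete` stands in for whichever LLM client the organization actually uses rather than any specific vendor API.

```python
# Sketch of Gen-AI-assisted dataset discovery: the model is asked to map a
# business question onto entries from the (illustrative) catalog metadata.
# `complete` is a placeholder for whatever LLM client the organization uses.

CATALOG_METADATA = """
customer_360: unified customer profiles, owned by the CRM domain
claims_history: insurance claims 2015-2024, owned by the Claims domain
branch_footfall: daily visits per branch, owned by the Retail domain
"""

def recommend_datasets(business_question: str, complete) -> str:
    prompt = (
        "You are a data marketplace assistant. Given the catalog below, "
        "recommend the datasets most relevant to the question and explain why.\n"
        f"Catalog:\n{CATALOG_METADATA}\n"
        f"Question: {business_question}"
    )
    return complete(prompt)

# Usage (with any callable that sends a prompt to an LLM and returns text):
# print(recommend_datasets("Which customers are likely to churn?", complete=my_llm))
```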

                The path to strategic and operational transformation

                The journey to becoming a truly data-driven organization involves much more than adopting advanced data platforms or deploying new technologies. It requires a comprehensive approach that combines data maturity assessment, clear democratization goals, accessible data marketplaces, and a robust data trust framework. Empowering employees at all levels to access, understand, and leverage data—supported by strong governance, training, and self-service tools—enables an organization to fully realize the value of its data assets. With AI and Gen AI playing an increasingly pivotal role in data exploration, organizations can provide intuitive, context-aware data insights that align with real business needs. Ultimately, a well-rounded approach to data democratization can transform how an organization operates, setting the foundation for agility, informed decision-making, and long-term growth in today’s data-centric world.


                Meet our experts

                Sharat Bangera

                Senior Director, Financial Services Insights & Data 

                  AI Integration Platform as a Service (aiPaaS)

                  Andy Forbes
                  Sep 11, 2023

In future enterprise IT landscapes, where each system is represented by an Artificial Intelligence Entity (AIE) and the AIEs continuously engage in negotiations over the sharing of organizational Data, Information, Knowledge, and Wisdom, a reengineering of integration tools and services is needed – AI Integration Platform as a Service (aiPaaS).

Integration in an Artificial Intelligence Entity-based enterprise

                  The development of a modular and scalable aiPaaS based architecture will play a significant role in managing the complexities of integrating AIEs. By breaking down these complexities into manageable components, a streamlined workflow design process will be created. This approach will allow for increased collaboration between different teams and skill levels, encompassing both human and AI-driven participants. The flexibility inherent in this architecture will foster a more efficient and cohesive design environment, adaptable to various needs and objectives.

                  Automation and machine learning will also be integral to the transformation of the AIE integration development process. Utilizing AI-driven automation tools will not only simplify the process but also make it more accessible to a broader range of developers. Machine learning algorithms will further enhance this accessibility by aiding in identifying patterns, making predictions, and generating work products. These advanced technologies will guide the development process, bringing forth a new level of intelligence and adaptability that aligns with the rapidly evolving demands of the industry and allowing human developers to do what they do best – making judgements about the optimal solutions.

                  The emergence of natural language low-code and no-code platforms will mark another significant advancement, particularly in the realm of AI-based integration. These platforms, capable of understanding natural language directions, will enable those without extensive technical expertise to actively participate in integration development. The result will be a democratization of the integration design and development process, allowing for greater inclusivity. By expanding the range of contributors, these platforms will foster innovation and diversity of thought, reflecting a more holistic approach to technological advancement. The combination of these three elements—modular architecture, AI-driven automation, and natural language based low-code/no-code platforms—will offer a compelling vision for the future of aiPaaS, one that is both inclusive and innovative.

                  Specific to Salesforce

                  In the contemporary technological landscape, the utilization of AI Integration Platforms as a Service (aiPaaS) is growing, with a robust market including players such as Mulesoft, Informatica, and Boomi. These products and services offer a variety of tools that simplify and accelerate the delivery of integrations. As these platforms evolve to aiPaaS, they can be expected to take natural language direction and require far less manual configuration and custom coding than today’s platforms. The transformation from traditional methods to AI-driven platforms represents a significant shift in how integrations will be designed and developed, heralding a more efficient and user-friendly era.

Alongside these advanced platforms, the collaboration between AI Assistants and human developers will become an essential aspect of integration development. AI Assistants will work hand-in-hand with human developers, providing real-time prediction, guidance and feedback, and automated configuration and code production. Humans will complement this technical prowess with contextual understanding, creativity, and strategic thinking – qualities they will use to form a symbiotic relationship with AI capabilities. Together, they will work as a team when engaging aiPaaS platforms to build integrations, combining the best of human judgement and AI prediction and production.

                  The concept of continuous and just-in-time learning and adaptation adds another layer of sophistication to this new model of development. AI Assistants will likely possess the ability to learn and adapt from previous integration experiences, continuously improving and streamlining future integration tasks. This continuous learning process enables a dynamic and responsive approach to development, where AI systems not only execute tasks but also grow and evolve with each experience, leading to a perpetually enhancing and adapting system.

                  The convergence of these factors—aiPaaS utilization, human-AI collaboration, and continuous learning—paints a promising picture for the future of integration development. This multifaceted approach combines technological innovation with human creativity and ethical responsibility, forming a comprehensive and forward-thinking model that will define the next generation of integration development and delivery.

                  The role of developers

                  In the realm of integration development, human developers will continue to play a crucial role in strategic planning and decision-making. Their expertise and insight into the broader business context are essential in crafting strategies and making key decisions that align with both business goals and program impacts beyond just technology. While automation and AI-driven tools can offer efficiency and precision, the human capacity to understand and act upon complex business dynamics remains vital. Humans’ ability to navigate the multifaceted landscape of organizational needs, politics, and market opportunities will ensure that delivered features align with organization objectives.

                  In addition to their strategic roles, human developers also bring an irreplaceable creative and empathetic approach to problem-solving. While AI can handle complex computations and process large data sets with remarkable speed, it cannot replicate the human ability to think creatively and apply empathetic judgement. Human developers possess the innate ability to see beyond the data, considering the subtleties of human behavior, emotions, and relationships. This creative problem-solving skill is a powerful asset in designing solutions that are not only technically sound but also resonate with end-users and stakeholders.

                  Monitoring and oversight will remain firmly in the human domain. Human oversight ensures that the integration adheres to ethical standards and societal values and aligns with the unique business culture and customer needs. In an increasingly automated world, the importance of ethical consideration, cultural alignment, and a deep understanding of customer requirements cannot be overstated. Human developers act as stewards, maintaining the integrity of the system by ensuring that it reflects the values and needs of the people it serves.

                  Together, these three elements—strategic planning, creative problem-solving, and human oversight—highlight the enduring importance of human involvement in aiPaaS integration development. They underscore the idea that while technology continues to advance, the human touch remains indispensable. It is this harmonious interplay between human ingenuity and technological prowess that promises to drive innovation, efficiency, and success in the future of integration development.

                  Actions for developers to prepare

                  In the rapidly evolving aiPaaS landscape, developers must embrace new technologies and methodologies to remain at the forefront of their field. This includes becoming familiar with AI-driven automation tools, machine learning, and other emerging technologies that are transforming the way integrations are developed and delivered. Understanding how these cutting-edge technologies can be utilized within platforms like Salesforce will be vital. The ability to harness these tools to enhance efficiency, drive innovation, and meet unique business needs will position developers as key players in the digital transformation journey.

                  Investing in continuous learning is another essential step for developers to stay competitive and relevant. Keeping abreast of changes in regulations, best practices, and technological advancements will require a commitment to ongoing education. Pursuing certifications, attending workshops, and participating in conferences will keep skills up-to-date and ensure that developers are well-equipped to adapt to the ever-changing environment. This investment in learning will not only nurture professional growth but also foster a culture of curiosity, agility, and excellence.

                  Monitoring the development of aiPaaS platforms will be an integral part of this ongoing learning process. Gaining proficiency in these platforms will broaden the scope of development opportunities and allow for quicker and more agile integration within Salesforce. As aiPaaS platforms continue to mature and become more pervasive, they will redefine how integrations are conceived and implemented. Understanding these platforms and becoming adept at leveraging their capabilities will enable developers to deliver more innovative and responsive solutions.

                  Collaboration skills will also be paramount in the future landscape of integration development. The emerging paradigm involves close collaboration between humans and AI, where AI assistants augment human abilities rather than replace them. Developing the ability to work synergistically with AI assistants and human colleagues alike will be a valuable asset. Cultivating these collaboration skills will not only enhance individual effectiveness but also contribute to a more cohesive and innovative development ecosystem.

                  Finally, focusing on strategic and creative problem-solving skills will distinguish successful developers in an increasingly automated world. While certain tasks may become automated, the ability to strategize, creatively problem-solve, and think outside of the box will remain uniquely human. These skills will define the role of developers as visionaries and innovators, empowering them to drive change, inspire others, and create solutions that resonate with both business objectives and human needs.

                  Together, these five areas of focus form a roadmap for developers to navigate the exciting and complex world of modern integration development. Embracing new technologies, investing in continuous learning, understanding aiPaaS platforms, cultivating collaboration skills, and nurturing strategic and creative thinking will equip developers to thrive in this dynamic environment. These strategies align perfectly with a future where technology and humanity converge, creating a rich tapestry of possibilities and progress.

                  Conclusion

                  The evolving landscape of aiPaaS within Salesforce represents both challenges and opportunities. Salesforce developers should view this as a chance to grow and contribute uniquely to the organization’s goals. By embracing new technologies, investing in continuous learning, and honing both technical and collaborative skills, Salesforce developers can position themselves at the forefront of this exciting era of technological advancement. This preparation will enable them to continue to be vital contributors to their organizations’ success in an increasingly interconnected and dynamic world.

                  Author

                  Andy Forbes

                  Capgemini America Salesforce CTO
                  Andy is an energetic, results-driven Program Manager and Information Technology Architect who is passionate about ensuring business value drives technology development and operations. He has used waterfall, agile, and scaled agile to successfully manage development and operations for systems based on Salesforce.com’s Sales Cloud, Service Cloud, and Experience Cloud, as well as Microsoft, Oracle, and open-source technologies. He is experienced with extracting value from emerging technologies, with outsourcing/offshoring, and with performing multiple concurrent roles and tasks. He thrives in fast-paced environments and enjoys working with teams that share his commitment to quality and customer satisfaction.