
Redefine your business operations with an award-winning service provider

CLEVELAND SELLERS - Head of Americas for Capgemini’s Business Services Global Business Line
Cleveland Sellers
12 Apr 2023

Capgemini has been recognized as an award-winning service provider for the second consecutive year due to our ability to deliver intelligent, frictionless business operations that drive a higher level of service for shared service organizations.

As the geopolitical upheavals and the pressures of inflation continue to impact business operations across industries, it’s critical for organizations to reimagine their shared services operating model and the technology supporting them.

Indeed, a recent report by the Shared Services & Outsourcing Network (SSON) states that most shared service organization (SSO) leaders have taken steps to pre-empt further fallout by evolving a robust global business services (GBS) model. Combined with a partnership approach, shared services can drive the resiliency and digital transformation needed for businesses to stay ahead of their competition.

But what are some of the key challenges faced by SSOs? And how is Capgemini overcoming them to drive value for its clients?

Mitigating attrition and driving retention

Attrition management directly impacts client success. With employee expectations changing, employee programs now need to evolve constantly to keep pace with market demands.

We take great pride in our ability to retain our employees, and we’ve developed a number of tools to identify attrition and drive retention. These include:

  • A digital early warning system that identifies dissatisfied employees
  • An employee vulnerability data tool that helps keep employee morale high
  • Skip-level meetings that collect unbiased employee feedback, which can be used to strengthen team spirit across the organization.

Innovation drives delivery excellence

Effective service delivery requires innovation and transformation to remove friction from business operations, optimize processes, minimize risk, reduce cost, and drive efficiency.

Over the years, we’ve offered a portfolio of innovative solutions and services to help our clients successfully make the jump into the digital age. Our innovative tools, industry-leading processes, controls, and transparent procedures are recognized by the industry and our clients as key differentiators.

We also benefit from our extensive experience in partnerships and developing solutions for the mutual benefit of all parties.

Putting continuous improvement front and center

Continuous improvement transforms the way an organization operates and drives ever-higher performance levels through realizing incremental improvements.

Implementing it, however, comes with its own set of challenges. These include creating the right culture, overcoming resistance to change, identifying opportunities, and capturing return on investment.

Continuous improvement is part of our natural, day-to-day company culture at Capgemini. We’ve implemented a number of collaborative delivery excellence initiatives to improve delivery processes and operating models, and to drive value to the end customer through innovative technologies such as AI, chatbots, robotics, and blockchain.

Capgemini is a winner!

Our approach to mitigating the challenges of attrition, effective delivery, and continuous improvement is the reason why Capgemini has recently been named Service Provider of the Year in North America in SSON’s Impact Awards 2023. This award is a testament to our teams’ relentless focus on process improvement, value creation, and delivering measurable success.

We are thrilled to be recognized again for our efforts and look forward to driving even more innovative, impactful outcomes in the future!

To learn more about how Capgemini can help drive intelligent, frictionless business operations across your shared service organization, contact: cleveland.sellers@capgemini.com

Cleveland Sellers leads the overall P&L and all aspects of Capgemini’s business services division in the Americas and is a strong proponent of customer success and digital transformation for clients’ enterprise operations.

Author

Cleveland Sellers


Global Head of Americas Business Area, Capgemini’s Business Services
Cleveland, as Head of Americas for Capgemini’s Business Services Global Business Line, oversees P&L operations. He advocates for customer success and digital transformation. With a background in leading digital services for major firms like IBM, Salesforce, and Capgemini, he excels in driving growth, managing alliances, and advising on digital operations.

    Sustainable mobility 

    Klaus Feldmann
    10 April 2023

    Tens of millions of cars are sold every year. That means every increase in a vehicle’s emissions is multiplied by millions, but equally, so is every reduction. We must therefore make vehicles as sustainable as possible.

    But what does maximum sustainability look like? What fuel and propulsion methods should you use? What raw materials should you pursue? Where should you manufacture?

    These big decisions will set corporate direction for years. Companies must properly analyse the full life-cycle impact of any choice, whilst also considering systems outside their control, from land, to energy infrastructure, to competition from other industries.

    To take a top-level example, what is the most sustainable vehicle propulsion method – electric, hydrogen, or e-fuels? We need to understand the full life cycle – by performing an integrative Life Cycle Assessment – in order to reliably make the comparison.

    So we would need to look at the original fuel (e.g., the energy mix of the grid, the power source for an electrolyser, or biomass) and its emissions profile. Then we would need to look at the energy efficiency of each step between the energy inputs and the vehicle’s propulsion. Then we can compare how much of each we need to produce the same amount of propulsion.

    We must also look at the inputs of creating the propulsion system itself – such as battery or engine components and materials.

    We can then combine these to work out the most sustainable option. Maximum sustainability will need to address the fuel, the vehicle design and the energy systems that power it. The results will of course vary in different scenarios.
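    To make the final comparison concrete, here is a minimal sketch (in Python, not taken from the whitepaper) of the underlying arithmetic: multiply the efficiency of every conversion step in a pathway to see how much primary energy each option needs per unit of propulsion. The pathway names and efficiency figures are hypothetical placeholders, not assessment results.

    ```python
    # Illustrative well-to-wheel comparison of propulsion pathways.
    # Every efficiency figure below is a hypothetical placeholder - a real
    # Life Cycle Assessment would derive them from the actual grid mix,
    # electrolyser, fuel-synthesis, and drivetrain data.
    PATHWAYS = {
        # chain of conversion efficiencies from primary energy to the wheel
        "battery_electric":   [0.95, 0.90, 0.85],  # e.g. transmission, charging, motor/drivetrain
        "hydrogen_fuel_cell": [0.70, 0.55, 0.85],  # e.g. electrolysis, compression/transport, fuel cell + motor
        "e_fuel_combustion":  [0.70, 0.50, 0.30],  # e.g. electrolysis, fuel synthesis, combustion engine
    }

    def wheel_share(steps):
        """Fraction of the primary energy that ends up as propulsion."""
        share = 1.0
        for efficiency in steps:
            share *= efficiency
        return share

    for name, steps in PATHWAYS.items():
        share = wheel_share(steps)
        print(f"{name}: {share:.2f} kWh at the wheel per kWh of primary energy "
              f"-> {1 / share:.1f} kWh of input per kWh of propulsion")
    ```

    A real assessment would replace these placeholders with region-specific data and add the embedded impact of producing the propulsion system itself, as described above.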

    Making good decisions needs highly sophisticated system-of-systems modeling, combining your own engineering and supply chain models with climate, energy, demographic and macroeconomic models.

    In our new whitepaper, we offer an introduction to planning strategic decisions for a sustainable transition, and provide top-level worked examples of propulsion and battery choices, alongside some initial answers.

    Author

    Klaus Feldmann


    CTO for Automotive Sustainability and e-Mobility, Capgemini Engineering
    Klaus Feldmann is the Chief Technical Officer of our sustainability and e-Mobility offers and solutions for the automotive industry. He supports our customers on their path to carbon neutrality across their products, footprint, and services, helping them fight climate change and contribute to a decarbonized economy.

      Capgemini’s Quantum Lab participates in BIG Quantum-AI-HPC Hackathon

      Kirill Shiianov
      5 Apr 2023

      Capgemini’s Quantum Lab participates in BIG Quantum-AI-HPC Hackathon and wins the Technical Phase together with students from Ecole Polytechnique de Paris, and the Technical University Munich (TUM)!

      At the beginning of March, Capgemini’s Quantum Lab participated in the BIG HPC-AI-QC Hackathon organized by QuantX in collaboration with PRACE, GENCI, and BCG, under the high patronage of Neil Abroug (Head of the French National Quantum Strategy), in Paris, France. Leading players of the international quantum computing, HPC, and AI ecosystems (e.g., industrial companies, quantum hardware and software providers, HPC centers, VC/PEs and consulting groups, and representatives of academia and government) gathered to accelerate the transfer of competencies and advance hybrid HPC-AI-QC solutions and their practical application. The hackathon consisted of two parts: the technical phase and the business phase.

      The solution co-crafted by our Capgemini team on a technical use case provided by BMW Group was crowned winner of the technical phase! The team consisted of Capgemini employees (Camille de Valk, Pierre-Olivier Vanheeckhoet and Kirill Shiianov) and students from Ecole Polytechnique de Paris (Bosco Picot de Moras d’Aligny), and Technical University Munich (TUM) (Fiona Fröhler). They were assisted by technical mentors Elvira Shishenina (BMW Group), Jean-Michel Torres and Elie Bermot (IBM), and Konrad Wojciechowski (PSNC).

      The solution they proposed is a first step toward improving the acoustics of BMW cars. The international jury of experts was enthusiastic about the team’s technical solution, as well as their excellent presentation. The French minister, Jean-Noël Barrot, and Nobel prize winner, Alain Aspect, joined the awards ceremony to hand out the prizes to the winners.

      Camille de Valk, one of Capgemini’s Quantum Lab Specialists, on the Technical Phase:

      “BMW Group provided us with a challenging use-case for the technical phase of the hackathon. It’s all about optimizing the design of cars to have less irritating sound in the cabin. This involves complicated physics and mathematics, but luckily our team had both physicists and computer scientists. The teamwork was one of the best parts of the hackathon for me.

      We created a toy-demonstration of a differential equation solver using variational quantum circuits and we explored its scaling in an artificial intelligence (AI), high performance computer (HPC) and quantum computing (QC) workflow. This was the first step to experiment with the efficiency of complex simulations around sound propagation, to improve the cabin’s acoustics by optimizing the design of the car. Working in this hackathon with such a talented team and great mentors was a great experience for me!”
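      For readers wondering what a variational quantum circuit is, the sketch below (written with Qiskit and purely illustrative, not the team’s actual solver) shows the basic building block: a small circuit with a free rotation parameter that a classical optimizer would tune against a cost function.

      ```python
      # Minimal parameterized ("variational") quantum circuit in Qiskit.
      # Purely illustrative of the building block mentioned above, not the
      # team's differential-equation solver.
      from qiskit import QuantumCircuit
      from qiskit.circuit import Parameter

      theta = Parameter("theta")

      qc = QuantumCircuit(2)
      qc.ry(theta, 0)   # rotation angle left as a free parameter
      qc.cx(0, 1)       # entangle the two qubits
      qc.measure_all()

      # A classical optimizer would repeatedly bind new values of theta,
      # run the circuit, and adjust theta to minimize a cost function.
      bound = qc.assign_parameters({theta: 0.42})
      print(bound.draw())
      ```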


      Kirill Shiianov, Consultant at Capgemini Engineering, about the Business Phase:

      “In the business phase of the BIG Hackathon, Capgemini’s Quantum Lab team took on a challenge to build a business case around one of the solutions from the participants of the technical phase. The team showed its best at developing the business case: uniting people from different backgrounds and business units. The team consisted of people with different areas of expertise, which helped us understand different aspects of the problem and come up with creative solutions.

      The use case was intended to augment Natural Language Processing (NLP) models with a quantum approach. The use-case provider was Merck Group, and the real-world application of the technology was to investigate the promise of Symbolic AI and, as a concrete example, to detect differences between Adverse Event reports (AE, event and drug exposure) and causal Adverse Drug Reactions (ADR, event due to drug exposure) mentioned in textual sources, like medical reports or social networks.

      During two intensive days, we fully immersed ourselves in the technology, built a complete business case, and presented it to a jury consisting of technology VPs from high-tech companies such as Quantinuum.

      Interaction with the use-case provider (Thomas Ehmer from Merck Group) and technical people from Quantinuum helped us gain unique insights into the respective domains. It was a unique experience, and I am already looking forward to participating in the next editions of the hackathon!”

      Kirill Shiianov


      Junior Consultant at Capgemini Engineering
      Kirill has a background in experimental physics and quantum systems. He is part of Capgemini’s Quantum Lab and investigates industrial applications of quantum technologies, working on projects such as EQUALITY. In Capgemini’s Quantum Lab Kirill has explored applications of quantum computing for optimization problems, working with different providers, such as IBM Quantum, AWS and D-Wave.
      Camille de Valk


      Quantum optimisation expert
      As a physicist leading research at Capgemini’s Quantum Lab, Camille specializes in applying physics to real-world problems, particularly in the realm of quantum computing. His work focuses on finding applications in optimization with neutral atoms quantum computers, aiming to accelerate the use of near-term quantum computers. Camille’s background in econophysics research at a Dutch bank has taught him the value of applying physics in various contexts. He uses metaphors and interactive demonstrations to help non-physicists understand complex scientific concepts. Camille’s ultimate goal is to make quantum computing accessible to the general public.

        ChatGPT and I have trust issues

        Tijana Nikolic
        30 March 2023

        Disclaimer: This blog was NOT written by ChatGPT, but by a group of human data scientists: Shahryar Masoumi, Wouter Zirkzee, Almira Pillay, Sven Hendrikx, and myself.

        Stable diffusion generated image with prompt = “an illustration of a human having trust issues with generative AI technology”

        Whether we are ready for it or not, we are currently in the era of generative AI, with the explosion of generative models such as DALL-E, GPT-3, and, notably, ChatGPT, which racked up one million users in one day. Recently, on March 14th, 2023, OpenAI released GPT-4, which caused quite a stir, with thousands of people lining up to try it.

        Generative AI can be used as a powerful resource to aid us in the most complex tasks. But like with any powerful innovation, there are some important questions to be asked… Can we really trust these AI models? How do we know if the data used in model training is representative, unbiased, and copyright safe? Are the safety constraints implemented robust enough? And most importantly, will AI replace the human workforce?

        These are tough questions that we need to keep in mind and address. In this blog, we will focus on generative AI models, their trustworthiness, and how we can mitigate the risks that come with using them in a business setting.

        Before we lay out our trust issues, let’s take a step back and explain what this new generative AI era means. Generative models are deep learning models that create new data. Their predecessors are chatbots, VAEs, GANs, and transformer-based NLP models. These architectures can fantasize about and create new data points based on the original data that was used to train them — and today, we can do this all based on just a text prompt!

        The evolution of generative AI, with 2022 and 2023 bringing about many more generative models.

        We can consider chatbots as the first generative models, but looking back we’ve come very far since then, with ChatGPT and DALL-e being easily accessible interfaces that everyone can use in their day-to-day. It is important to remember these are interfaces with generative pre-trained transformer (GPT) models under the hood.

        The widespread accessibility of these two models has brought about a boom in the open-source community where we see more and more models being published, in the hopes of making the technology more user-friendly and enabling more robust implementations.

        But let’s not get ahead of ourselves just yet — we will come back to this in our next blog. What’s that infamous Spiderman quote again?

        With great power…

        The generative AI era has so much potential in moving us closer to artificial general intelligence (AGI) because these models are trained on understanding language but can also perform a wide variety of other tasks that in some cases even exceed human capability. This makes them very powerful in many business applications.

        Starting with the most common — text applications, which are fueled by GPT and GAN models. These include everything from text generation to summarization and personalized content creation, and can be used in education, healthcare, marketing, and day-to-day life. The conversational component of text applications is used in chatbots and voice assistants.

        Next, code-based applications are fueled by the same models, with GitHub Copilot as the most notable example. Here we can use generative AI to complete our code, review it, fix bugs, refactor, and write code comments and documentation.

        On the topic of visual applications, we can use DALL-E, Stable Diffusion, and Midjourney. These models can be used to create new or improved visual material for marketing, education, and design. In the health sector, we can use these models for semantic translation, where semantic images are taken as input and a realistic visual output is generated. 3D shape generation with GANs is another interesting application in the video game industry. Finally, text-to-video editing with natural language is a novel and interesting application for the entertainment industry.

        GANs and sequence-to-sequence automatic speech recognition (ASR) models (such as Whisper) are used in audio applications. Their text-to-speech application can be used in education and marketing. Speech-to-speech conversion and music generation have advantages for the entertainment and video game industry, such as game character voice generation.

        Some applications of generative AI in industries.

        Although powerful, such models also come with societal limitations and risks, which are crucial to address. For example, generative models are susceptible to unexplainable or faulty behavior, often because the data can have a variety of flaws, such as poor quality, bias, or just straight-up wrong information.

        So, with great power indeed comes great responsibility… and a few trust issues

        If we take a closer look at the risks regarding ethics and fairness in generative models, we can distinguish multiple categories of risk.

        The first major risk is bias, which can occur in different settings. An example of bias is the use of stereotypes around race, gender, or sexuality. This can lead to discrimination and unjust or oppressive answers generated by the model. Another form of bias is the model’s word choice; its answers should be formulated without toxic or vulgar content or slurs.

        One example of a language model that learned a wrong bias is Tay, a Twitter bot developed by Microsoft in 2016. Tay was created to learn, by actively engaging with other Twitter users by answering, retweeting, or liking their posts. Through these interactions, the model swiftly learned wrong, racist, and unethical information, which it included in its own Twitter posts. This led to the shutdown of Tay, less than 24 hours after its initial release.

        Large language models (LLMs) like ChatGPT generate the most relevant answer within their constraints, but that answer is not always 100% correct and can contain false information. Currently, such models present their answers as confident statements, which can be misleading when they are not correct. Events where a model confidently makes inaccurate statements are also called hallucinations.

        In 2023, Microsoft released a GPT-backed model to empower their Bing search engine with chat capabilities. However, there have already been multiple reports of undesirable behavior by this new service. It has threatened users with legal consequences or exposed their personal information. In another situation, it tried to convince a tech reporter that he was not happily married, that he was in love with the chatbot (it also proclaimed its love for the reporter), and that he should consequently leave his wife (you see why we have trust issues now?!).

        Generative models are trained on large corpora of data, which in many cases are scraped from the internet. This data can contain private information, causing a privacy risk, as it can unintentionally be learned and memorized by the model. This private data does not only concern people; it can also include project documents, code bases, and works of art. When using medical models to diagnose a patient, it could also include private patient data. This also ties into copyright when this private, memorized data is used in a generated output. For example, there have even been cases where image diffusion models have included slightly altered signatures or watermarks learned from their training sets.

        The public can also maliciously use generative models to harm/cheat others. This risk is linked with the other mentioned risks, except that it is intentional. Generative models can easily be used to create entirely new content with (purposefully) incorrect, private, or stolen information. Scarily, it doesn’t take much effort to flood the internet with maliciously generated content.

        Building trust takes time…and tests

        To mitigate these risks, we need to ensure the models are reliable and transparent through testing. Testing AI models comes with some nuances compared to testing traditional software, and these need to be addressed in an MLOps setting with data, model, and system tests.

        These tests are captured in a test strategy at the very start of the project (problem formulation). In this early stage, it is important to capture key performance indicators (KPIs) to ensure a robust implementation. Next to that, assessing the impact of the model on the user and society is a crucial step in this phase. Based on the assessment, user subpopulation KPIs are collected and measured against, in addition to the performance KPIs.

        An example of a subpopulation KPI is model accuracy on a specific user segment, which needs to be measured on data, model, and system levels. There are open-source packages that we can use to do this, like the AI Fairness 360 package.
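        A minimal sketch of what measuring such a subpopulation KPI can look like in plain Python/pandas is shown below; the column names and segments are hypothetical, and toolkits like AI Fairness 360 provide much richer metrics on top of the same idea.

        ```python
        # Sketch: per-segment accuracy as a subpopulation KPI.
        # Column names ("segment", "label", "prediction") are hypothetical.
        import pandas as pd

        results = pd.DataFrame({
            "segment":    ["A", "A", "B", "B", "B", "C"],
            "label":      [1, 0, 1, 1, 0, 1],
            "prediction": [1, 0, 0, 1, 1, 1],
        })

        overall = (results["label"] == results["prediction"]).mean()
        per_segment = (
            results.assign(correct=results["label"] == results["prediction"])
                   .groupby("segment")["correct"]
                   .mean()
        )

        print(f"overall accuracy: {overall:.2f}")
        print(per_segment)  # flag segments that fall well below the overall KPI
        ```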

        Data testing can be used to address bias, privacy, and false information (consistency) trust issues. We make sure these are mitigated through exploratory data analysis (EDA), with assessments on bias, consistency, and toxicity of the data sources.

        The data bias mitigation methods vary depending on the data used for training (images, text, audio, tabular), but they boil down to re-weighting samples from the minority group, oversampling the minority group, or under-sampling the majority group. The sketch after this paragraph illustrates two of these moves.
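        The example below shows oversampling and per-sample re-weighting on a toy tabular dataset; the group labels and counts are made up purely for illustration.

        ```python
        # Sketch: two simple mitigation moves on an imbalanced toy dataset.
        # The "group" column and the 90/10 split are made up for illustration.
        import pandas as pd

        df = pd.DataFrame({"group": ["majority"] * 90 + ["minority"] * 10,
                           "label": [0] * 90 + [1] * 10})

        # 1) Oversample the minority group until both groups are the same size.
        minority = df[df["group"] == "minority"]
        oversampled = pd.concat(
            [df, minority.sample(len(df) - 2 * len(minority), replace=True, random_state=0)]
        )

        # 2) Or keep the data as-is and attach per-sample weights that a model
        #    can consume (e.g. scikit-learn's fit(..., sample_weight=...)).
        weights = df["group"].map(1.0 / df["group"].value_counts())

        print(oversampled["group"].value_counts())
        print(weights.groupby(df["group"]).sum())  # each group now carries equal total weight
        ```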

        These changes need to be documented and reproducible, which is done with the help of data version control (DVC). DVC allows us to commit versions of data, parameters, and models in the same way “traditional” version control tools such as git do.

        Model testing focuses on model performance metrics, which are assessed through training iterations with validated training data from previous tests. These need to be reproducible and saved with model versions. We can support this through open-source MLOps packages like MLflow.
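        As a minimal sketch, logging a training iteration with MLflow so that the parameters, metrics, and resulting model version stay reproducible can look like this (the run name, parameter, and metric values are placeholders):

        ```python
        # Sketch: tracking one training iteration with MLflow.
        # Run name, parameters, and metric values are placeholders.
        import mlflow

        with mlflow.start_run(run_name="toxicity-filter-v2"):
            mlflow.log_param("data_version", "dvc:a1b2c3")    # ties back to the DVC data commit
            mlflow.log_param("learning_rate", 3e-4)
            mlflow.log_metric("validation_accuracy", 0.91)
            mlflow.log_metric("subgroup_accuracy_gap", 0.04)  # fairness KPI from the data tests
            # mlflow.sklearn.log_model(model, "model")        # persist the model version itself
        ```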

        Next, model robustness tests like metamorphic and adversarial tests should be implemented. These tests help assess if the model performs well on independent test scenarios. The usability of the model is assessed through user acceptance tests (UAT). Lags in the pipeline, false information, and interpretability of the prediction are measured on this level.
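        To make the metamorphic idea concrete before moving on to UAT: such a test encodes a relation that should hold regardless of the model internals, for example that a meaning-preserving paraphrase must not flip a classification. The sketch below is hypothetical, with `classify` standing in for whichever model is under test.

        ```python
        # Sketch of a metamorphic robustness test: a meaning-preserving change
        # to the input should not change the prediction. `classify` is a
        # placeholder for the real model under test.
        def classify(text: str) -> str:
            # toy stand-in model: flags text that mentions "refund"
            return "complaint" if "refund" in text.lower() else "other"

        def test_paraphrase_invariance():
            original = "I want a refund for this order."
            paraphrase = "Please give me my money back and refund this order."
            assert classify(original) == classify(paraphrase), \
                "metamorphic relation violated: paraphrase changed the prediction"

        test_paraphrase_invariance()
        print("metamorphic test passed")
        ```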

        In terms of ChatGPT, a UAT could be constructed around assessing if the answer to the prompt is according to the user’s expectation. In addition, the explainability aspect is added — does the model provide sources used to generate the expected response?

        System testing is extremely important to mitigate malicious use and false information risks. Malicious use needs to be assessed in the first phase and system tests are constructed based on that. Constraints in the model are then programmed.

        OpenAI is aware of possible malicious uses of ChatGPT and has incorporated safety as part of its strategy. It has described how it tries to mitigate some of these risks and limitations. In a system test, these constraints are validated in real-life scenarios, as opposed to the controlled environments used in previous tests.

        Let’s not forget about model and data drift. These are monitored, and retraining mechanisms can be set up to ensure the model stays relevant over time. Finally, the human-in-the-loop (HIL) method is also used to provide feedback to an online model.
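        A minimal sketch of one common drift check compares a feature’s training-time distribution against live data with a two-sample Kolmogorov-Smirnov test; the data and alert threshold below are illustrative only.

        ```python
        # Sketch: flagging data drift on one numeric feature with a two-sample
        # Kolmogorov-Smirnov test. Data and threshold are illustrative.
        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(0)
        reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # feature at training time
        live = rng.normal(loc=0.3, scale=1.0, size=5_000)       # feature in production (shifted)

        result = ks_2samp(reference, live)
        if result.pvalue < 0.01:
            print(f"drift detected (KS statistic {result.statistic:.3f}); review and consider retraining")
        else:
            print("no significant drift on this feature")
        ```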

        ChatGPT and Bard (Google’s chatbot) offer the possibility of human feedback through a thumbs up/down. Though simple, this feedback is used to retrain and align the underlying models to users’ expectations, so that they provide more relevant responses in future iterations.

        To trust or not to trust?

        Just as on the internet, truth and facts are not always a given — and we’ve seen (and will continue to see) instances where ChatGPT and other generative AI models get it wrong. While it is a powerful tool, and we completely understand the hype, there will always be some risk. It should be standard practice to implement risk and quality control techniques to minimize the risks as much as possible. And we do see this happening in practice — OpenAI has been transparent about the limitations of its models, how it has tested them, and the governance that has been set up. Google also has responsible AI principles that it abided by when developing Bard. As both organizations release new and improved models, they also advance their testing controls to continuously improve quality, safety, and user-friendliness.

        Perhaps we can argue that using generative AI models like ChatGPT doesn’t necessarily leave us vulnerable to misinformation, but more familiar with how AI works and its limitations. Overall, the future of generative AI is bright and will continue to revolutionize the industry if we can trust it. And as we know, trust is an ongoing process…

        In the next part of our Trustworthy Generative AI series, we will explore testing LLMs (bring your techie hat) and how quality LLM solutions lead to trust, which in turn, will increase adoption among businesses and the public.

        This article first appeared on SogetiLabs blog.

        Understanding 5G security

        Aarthi Krishna
        29 Mar 2023

        5G powers the new era of wireless communication, and to unleash its potential it must be secure. To better understand its security challenges and how to conduct a risk assessment, it’s important to know why 5G and its security ecosystem differ from its predecessor.

        Why 5G security?

        5G is the fifth generation of cellular technology, offering faster speeds and lower latency compared to 4G. It makes the connected era and Internet of Things (IoT) possible, and whether it’s smart cities, steelmaking, or healthcare, few industries will be untouched by its capabilities.

        There are two types of 5G networks: public and private –

        • Public 5G networks are primarily used by retail customers for smartphones and other day-to-day devices connected to the internet. Owned and operated by mobile carriers, public networks are available to anyone who subscribes to their service. Because these networks are established by telco providers, security rests with them for the most part.
        • Private 5G networks are not accessible to the public. They are owned and operated by a single entity, such as a company or government agency, and are used to connect devices within a specific location or facility. For example, a factory might set up a private 5G network to connect its machines and other equipment to streamline operations and improve efficiency.

        Most companies using 5G for manufacturing and operations will need to build a private network or employ a hybrid model of public and private, fitted to the requirements. Whichever model a company uses must be underpinned by robust security frameworks.

        5G security is complex because, unlike 4G, it operates outside the perimeter of dedicated equipment, servers, and protocols. Instead, a highly vulnerable software ecosystem of virtualized RAN and cloud-forward services constitutes its core network. The concept of 5G security is new and evolving, which is why it’s essential to be alert to the challenges and develop and deploy new security measures in response.

        5G security challenges

        The introduction of new use cases, new business models, and new deployment architectures makes securing 5G networks more challenging. But without a cohesive approach to mitigating the security risks, it can be difficult to ensure that all potential vulnerabilities are identified and addressed.

        These are the key security challenges for 5G as we see them:

        • Increased attack surface: Millions of new connected devices are entering the digital ecosystem, which increases the attack surface exponentially. Many IoT devices are vulnerable and unprotected and typically operate with lower processing power, making them easy targets for attackers. This makes implementing zero-trust frameworks with true end-to-end coverage critical for protection against threats.
        • New paradigms for telco: With 5G, the telco ecosystem is essentially inheriting IT challenges requiring a software security mindset. Whether public or private, 5G’s virtualized network architecture creates a new supply chain for software, hardware, and services, and this “virtualization” of traditional single-vendor hardware is a major security challenge. It’s time for professionals to acquaint themselves with network function virtualization (NFV), virtualized network functions (VNFs), service-based architectures (SBAs), software-defined networks (SDNs), network slicing, and edge computing.
        • Operational challenges: The requirements and capabilities needed to monitor a 5G network are different from those for IT and OT. This means that the tools used for monitoring IT and OT networks cannot be retrofitted or scaled for the cellular world, so 5G requires new tools and new capabilities. It also involves training people to understand the new protocols and use cases.
        • The complexity of implementation: There is no one way to build 5G architecture. It depends on the requirement of the organization and, as a result, the specification range can be extensive. Trying to bring these models together and manage them is one part of the challenge; the other is finding skilled professionals who know how to do it. Consequently, the margin for human error is another factor to bear in mind.
        • Increased number of stakeholders: Finally, the industry recognizes that the success of building 5G networks is dependent on the entire ecosystem of hardware and software vendors spanning multiple suppliers, from chip vendors to cloud providers. Coordinating new stakeholders and their security efforts while ensuring that all potential vulnerabilities are covered is likely to be challenging. Note that different stakeholders may have different levels of knowledge and expertise when it comes to security.

        Introducing 5G risk assessment

        5G security is extensive, and there are multiple parts to be cognizant of to understand where the risks and vulnerabilities lie when running a network. These can be mapped out into horizontal and vertical layers. To conduct a comprehensive risk assessment of 5G, both axes need to be secured. Knowing where to start involves understanding what constitutes each layer:

        • 5G horizontal security is the sum of five parts: user equipment, radio access, edge/multi-access edge computing, core network, and the cloud. Due diligence is necessary in every area to ensure assets are protected against attacks on confidentiality, integrity, and availability.
        • 5G vertical security is the sum of four layers: the product, the network, the applications, and the security operation layer on top. This is generally referred to as “chip to cloud” security, particularly in the context of IoT devices.

        A risk assessment, therefore, has to be holistic in nature, covering every aspect of the horizontal and vertical layers with due consideration of the threats, vulnerabilities, and assets that touch each of the specific components in the architecture. Such a risk assessment must also address any regional and industrial compliance requirements, and we will discuss this later in the series.
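        One simple way to keep track of that coverage is to enumerate the horizontal and vertical layers against the classic confidentiality, integrity, and availability criteria and work through the resulting checklist. The sketch below is purely illustrative, not a Capgemini assessment tool.

        ```python
        # Sketch: enumerating the horizontal x vertical assessment scope so
        # that no layer is skipped. Purely illustrative structure.
        HORIZONTAL = ["user equipment", "radio access", "edge/MEC", "core network", "cloud"]
        VERTICAL = ["product", "network", "applications", "security operations"]
        CRITERIA = ["confidentiality", "integrity", "availability"]

        checklist = [
            {"horizontal": h, "vertical": v, "criterion": c, "status": "not assessed"}
            for h in HORIZONTAL
            for v in VERTICAL
            for c in CRITERIA
        ]

        print(f"{len(checklist)} assessment items to cover")  # 5 x 4 x 3 = 60
        ```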

        At Capgemini, we know that building and securing a 5G network is complex. We also know that everything must be protected end-to-end and in unison for it to work effectively. With deep technology, business, and engineering expertise, Capgemini has the unique capability to guide you on the 5G security journey end-to-end.

        Security today adds value to a business tomorrow, and realizing the possibilities of a new, truly Intelligent era relies on it. Our experts can help you maximize the benefits.

        The next blog in the series will consider how to conduct a robust risk assessment and monitoring in more detail. 


          Serendipity systems: Building world-class personalization teams

          Neerav Vyas
          29 March 2023

          The last best experience we have anywhere sets the bar for all experiences everywhere. Consumers don’t want just personalization – they’re demanding it. Delivering personalization is no longer bar-raising. Organizations need to move from providing personalization as a feature to delivering serendipitous experiences. The challenge then is serendipity at scale or obsolescence with haste. Without the right teams, organizations are speeding toward obsolescence.

          Great basketball teams and great personalization teams have a lot in common.

          Imagine a shopping experience that’s completely generic. Worse than generic, it goes out of its way to recommend things you don’t want. It recommends actions that are the opposite of what you’re looking to do. It’s perfectly frustrating. How long will a business based on that sort of experience last?

          Now imagine a personalization experience that knows you so well it’s constantly providing you with serendipitously delightful experiences. You’re discovering things you never knew you wanted. But you’re never allowed to use it because the experience never sees the light of day. The MVP never becomes an available product.

          Both scenarios are terrible. Unfortunately, a variation of the second is more common. 77% of AI and analytics projects struggle to gain adoption. Fewer than 10 percent of analytics and AI projects make an impact financially because 87 percent of these fail to make it into production. What if we could flip the odds? What if rather than most recommendation projects failing, most of them succeeded? Cross-functional, product-centric, teams can do just that. It’s how innovators like Amazon and Netflix were able to succeed so quickly and so often in their personalization programs. It’s also been critical for the dozens of successful personalization programs we’ve delivered at Capgemini.

          Recommendation experiences

          Everything is a recommendation. That insight came from Netflix: “the Starbucks secret is a smile when you get your latte, ours is that the website adapts to the individual’s taste,” said Reed Hastings, co-founder of Netflix. Recommendations weren’t features or algorithms. They were the experience; the means to delight, surprise, frustrate, or anger customers. At Amazon, Jeff Bezos’ original goal was a store for every customer. This wasn’t AI for the sake of AI. Both companies made personalization central to their experiences, and personalization enabled Amazon and Netflix’s visions for more innovative, delightful, and serendipitous experiences. Recommendation experiences (RX) were critical to customer experiences (CX). Experiences were the product.

          Building products is hard. Josh Peterson co-founded the P13N (personalization) team at Amazon. He described the early days of Amazon as challenging because the company was siloed. Design, editorial, and software engineering were fragmented. “It was really hard to ever get anything all the way out to the site without begging and borrowing people from silos. The one time it was always different was when we did a product launch… So, if there was a big enough effort like launching music or auctions then you had permission to borrow everyone to put together your team.”

          In the early days of Amazon, there were many engineering efforts around personalization. Even though these efforts were led by brilliant engineers, they saw limited success. It wasn’t until after the launch of Amazon Auctions that personalization made a real impact.

          After Amazon Auctions, Peterson and Greg Linden looked to make Bezos’ vision for a personalized store for every customer a reality. The goal was a team that could “own its whole space,” to break silos to create a cross-functional team to rapidly experiment and deliver. This was the first team, outside of the design organization, to have designers in their team embedded with web developers and technical project managers. This enabled a higher number of launches compared to other teams. The impact of their model was so successful that it became the basis of Amazon’s famous “Two Pizza Team” approach – essentially a team small enough that they could be fed with two pizzas. Small teams that were decentralized, autonomous, and were “owners” of the business could move faster and launch more experiments. More experiments would enable them to have more successful innovations.

          Experimentation

          Successful personalization teams foster a culture of experimentation. Creating a culture of experimentation requires diverse, multi-disciplinary teams. Below we show the various skillsets and domains that are required for modern personalization teams. The circles don’t represent people, they represent skills. Great basketball teams and great personalization teams have a lot in common. In basketball, you need defense. You need offense, both close to the rim and from afar. You need diversity in skillsets. You could get lucky and find a unicorn but fielding multiple teams of unicorns is not practical. Creating a team of all-stars sounds good on paper, but there are plenty of examples where those super teams fail to live up to expectations. A team without a diverse set of skills is unlikely to be very successful, and almost certainly not great.

          “Experimentation requires blending creativity and data. Practically, this becomes a blend of statistics, behavioral economics, psychology, marketing, and expertise in experience design.”

          Small teams with most of the skills above are more likely to do end-to-end personalization well. No one person will have all the skills needed, but together they’ll bring more experiments to the table. Early Amazon teams were engineering and data-science heavy. It wasn’t until the addition of design, business expertise, and a product-centric approach that they were able to execute end-to-end and achieve Bezos’ vision.

          Velocity is a leading indicator. Successful personalization teams test many ideas. They break experiments into small chunks so no one failure is large enough to disrupt the business. They test and learn quickly. Testing a dozen ideas and refining them will be more efficient than trying to make one idea “perfect.” Our intuition on what is going to work is often wrong. Testing many ideas allows the data and results to guide us, rather than intuition. This requires personalization teams to develop many ideas end-to-end quickly.

          What does the future hold? Cross-functional, product-centric teams are the beginning, not the end. Experimentation requires blending creativity and data. Practically, this becomes a blend of statistics, behavioral economics, psychology, marketing, and expertise in experience design.

          These teams need to track which features drive results to understand what is working and what is not. The goal is to achieve consistent and reliable serendipity from personalization efforts. The obvious is not serendipitous. Experimentation is needed to discover that which is not obvious and that which drives business outcomes. Without that, we can’t scale serendipity.

          INNOVATION TAKEAWAYS

          DIVERSITY LEADS TO SPEED

          Speed leads to innovation. Diversity leads to innovation. End-to-end cross-functional teams with dedicated resources are more likely to successfully implement personalization programs and innovate faster than their peers.

          A CULTURE OF EXPERIMENTATION IS CRITICAL

          Velocity, variety, and volume of experiments are leading indicators of innovation. “Our success at Amazon is a function of how many experiments we do per year, per month, per week, per day.” – Jeff Bezos

          SPEED IS A COMPETITIVE ADVANTAGE

          Testing and learning iteratively as well as being able to deploy quickly contribute to faster speed to market. “Companies rarely die from moving too fast, they frequently die from moving too slowly.” – Reed Hastings

          Interesting read?

          Capgemini’s Innovation publication, Data-powered Innovation Review | Wave 5, features 19 such articles crafted by leading Capgemini and partner experts about looking beyond your usual surroundings and being inspired by new ways to elevate data and AI. Explore the articles on serendipity, data like poker, the circular economy, or data mesh. In addition, several articles are written in collaboration with key technology partners such as AWS, Denodo, Databricks, and Dataiku. Find all previous Waves here.

          Author:

          Neerav Vyas


          Head of Customer First, Co-Chief Innovation Officer, Insights & Data, Global
          Neerav is an outstanding leader, helping organizations accelerate innovation, drive growth, and facilitate large-scale transformation. He is a two-time winner of the Ogilvy Award for Research in Advertising and an AIconics 2019 and 2020 finalist for Innovation in Artificial Intelligence for Sales and Marketing.
          Chloe Cheau 


          Customer First Head of CDP and Experience Engineering
          Chloe drives strategy and delivery of innovative Data and Analytics solutions for her clients by leveraging her expertise in Data Engineering, Machine Learning, and AI. She leads beta programs for partners, delivers proof-of-concepts, and provides technical points of view and thought leadership for offerings and solutions.

            The 11 ways in which the metaverse is shifting software development  

            Gunnar Menzel
            28 Mar 2023

            Over the past 70 years, we have seen many technology disruptions that impacted the way we design, develop, and deploy software. The invention of C, the emergence of the personal computer, the rise of the internet, and the move from waterfall to agile to name but a few.

            However, nothing compares to what might be about to happen – the convergence of artificial intelligence (AI), blockchain, and 6G/satellite connectivity combined with concepts like the metaverse will change the way we design, develop, and deploy software. For the purpose of this short blog, I will focus on the metaverse and the effect it might have on software developers. 

            What is the metaverse? 

            The metaverse is a virtual reality that allows us to interact with a fully virtual (and immersive) environment just as we do in real life, doing the same things we would do in the physical world. According to Wikipedia, the metaverse “is a hypothesized iteration of the internet, supporting persistent online 3-D virtual environments through conventional personal computing, as well as virtual and augmented reality headsets.” A Capgemini publication focusing on the metaverse in healthcare defines it as “a container of 2D or 3D virtual spaces, a persistent place parallel to the physical world, aiming to combine online digital and real-time interactions with the sense of presence.”

            An immersive experience

            For years, games like Roblox and Fortnite, but also older games like World of Warcraft, Minecraft, or Second Life, have developed a parallel virtual world where players can engage and connect with others in a mostly fantasy-like landscape. To illustrate the concept, one could also draw parallels with the film The Matrix; in the film, the main character “moves” between two reality-like parallel worlds.  

            Many consider the metaverse as the internet V3, with V1 back in the 1990s, and the emergence of social media at the start of the 2000s as V2. Several use cases for the metaverse exist: for example, in the smart city space and in healthcare. However, there are also some who are more skeptical, who believe that the metaverse is already part of the past. The truth might lie somewhere in between. What seems clear, however, is that either the metaverse or part of the various metaverse concepts will impact the way we develop software: 

            1. Moving away from mouse and keyboard 

            When Apple unveiled the iPhone in 2007, it heralded the beginning of the end of mobile phone keyboards. With the emergence of the metaverse, we might see the same happening to our PCs. The mouse, invented by Douglas Engelbart back in 1964 and still the de facto PC input device next to the keyboard, might be slowly replaced by gesture, speech, and movement for end users (some state that mind-control devices might also become more mainstream). Of course, VR has been around since the mid-1990s, after its invention in 1968, but due to various factors it has not quite hit the mainstream. This might change now that developments in the metaverse have started, with more vendors announcing they are developing MR devices – Apple started production in March 2023.

            For developers, the shift away from using text for coding is still a big unknown. If the shift occurs, then text input devices will slowly disappear. If it does not, then developers will have to deal with both traditional physical and new virtual ways of working. In any case, designing and developing software that supports different data input devices will require different skills, techniques, and tools compared to relying on mouse and keyboard only. It seems most likely that we will see a convergence in which developers use a mixture of traditional physical and new virtual ways of working.  

            2. The move from 2D monitor screen interactions to full 3D with the use of VR, AR, MR, and XR 

            It is not just our traditional user input devices that might change. We might also see our traditional user interaction devices change. Over the past 30 years, the PC monitor only changed in terms of resolution and size, but not really in concept: a screen that displays data in visual and text form, projected on a two-dimensional screen. The touch screen tried to allow for a better experience but failed to really take off. Driven by the metaverse, we might see a shift from today’s PC-based fixed and two-dimensional monitors to the use of mixed physical and virtual reality devices. Using virtual reality (VR) headsets or mixed reality (MR) glasses for user interaction combined further with either smartphones, gesture, or even mind reading might fundamentally change the way we design and develop applications. It is very likely that the shift will be a gradual process. The emergence of MR for both end users interacting with applications and for developers designing, developing, and delivering code might still be a way off. However, software developers must master the new (and currently various) software development kits (SDKs) to ensure that they can establish fully seamless and fluid interactions.    

            3. New development platforms  

            With the advent of the metaverse, organizations and communities are also starting to develop new programming languages. For instance, in December 2022, Epic Games launched the metaverse programming language Verse. Verse is focused on making it possible to create social interactions in a shared three-dimensional (3D) world in real time. The web3 programming language family now includes Verse along with others like Clarity, Solidity, Curry, Mercury, and Rust. Verse also aims to support interoperable content by utilizing operational standards from several game engines, such as Unity, and live upgrades of active code. Another example is Solidity. Created by the Ethereum project, Solidity is a statically typed, object-oriented programming language designed for developing smart contracts that run on the Ethereum blockchain and other blockchain systems. The question with all new programming languages is whether they will become more dominant or widespread. Clearly, only time will tell.

            4. Testing  

            The quality of applications will be as important as in today’s applications. However, with MR as well as digital twin type environments, testing the use of both physical as well as virtual devices will be different as new testing facilities are needed to avoid manual interventions that might read “put the headset on, run the app, and see if it works.” The integration of MR and/or different VR devices as well as the use of different platforms might require different testing regimes.  

            5. Being more aware of non-functional aspects like latency, security, and safety  

            Walking around with Google Glass or any other VR or MR devices could pose various risk profiles, and developers must consider this when designing and developing metaverse-based solutions. In addition, latency – the time it takes for a service to respond (also sometimes referred to as “lag”) – is another aspect developers will have to consider more than in our current “traditional” 2D environment. User experience will be a key critical aspect in the metaverse, and a fully immersive experience can only be achieved if the rendering is fully fluid and seamless. With the end user being mobile or stationary with various data transfer opportunities (currently 5G, but soon 6G or even via low orbit satellites) it is important to ensure the developed metaverse solution fully considers that. With these requirements, more “traditional” aspects, like writing efficient netcode (referring to synchronization issues between clients and servers) and 3D engines, will become even more important.  
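            A quick back-of-the-envelope budget shows why this matters. The roughly 20 ms motion-to-photon target used below is a commonly cited rule of thumb for comfortable immersive experiences, and the component figures are placeholders rather than measurements.

            ```python
            # Back-of-the-envelope motion-to-photon budget for an immersive app.
            # The ~20 ms comfort target is a commonly cited rule of thumb; the
            # component figures are placeholders, not measurements.
            BUDGET_MS = 20.0

            components_ms = {
                "sensor sampling + fusion":  2.0,
                "network round trip (edge)": 8.0,
                "simulation / game logic":   3.0,
                "rendering (90 Hz frame)":   1000.0 / 90.0 / 2,  # roughly half a frame on average
                "display scan-out":          2.0,
            }

            total = sum(components_ms.values())
            print(f"total: {total:.1f} ms of a {BUDGET_MS:.0f} ms budget "
                  f"({'OK' if total <= BUDGET_MS else 'over budget'})")
            ```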

            6. The move from two dominant mobile platforms (Android and Apple) to multiple platforms 

            The metaverse will require massive 3D content to engage users, and 3D is expensive to make, to understand, to store, and to transport. Developing a metaverse application involves creating a virtual experience for platforms such as HTC Vive, Oculus Quest, and other VR or MR systems. Popular developer tools for the metaverse focused on 3D creation include Epic’s Unreal Engine, Unity, Amazon Sumerian, Autodesk’s Maya, and Blender. And then there are the various (at the time of writing) development platforms that cover metaverse-related tools and accelerators like Webaverse, Ethereal Engine, JanusWeb, WebXR, Open Metaverse, Nvidia’s Omniverse, Hadea’s metaverse infrastructure, and the Microsoft metaverse stack.

            7. Increased importance of application programming interfaces (APIs)  

            Interoperability (getting systems to talk to each other) will be one of the main challenges for developers writing metaverse applications. As with the advent of the internet in the mid-1990s, where multiple vendors as well as open communities developed and released new standards, the metaverse is also triggering numerous, and sometimes conflicting, standards. How it will all pan out is still open. However, what is clear is that software developers must have an excellent appreciation of data integration, particularly as data is being exchanged in real time between different platforms.   

            8. Greater emphasis on real-time collaboration 

            As applications in the metaverse will be used in an interactive and real-time manner, applications written for the metaverse will have to be able to respond to unpredictable events in a real-time manner, providing a seamless user experience. This means that software developers will have to use statistical techniques like deep learning on provided data and real-time user interactions to predict a response or next step, without the software having been specifically programmed for that task. 

            9. Security and trust will be critical elements  

            The success of the metaverse will also depend on users trusting the virtual counterparts; this means active and passive security will be a critical element. As the metaverse will evolve around the real-time exchange of virtual assets, new ways of securing and controlling virtual assets and interactions in real time will be needed. This will include authentication and access control, data privacy, securing interactions and transactions, and protecting virtual assets. In addition, passive security-related aspects, like strong network security protecting from cyberattacks, hacking, and other security threats, will be needed.  

            10. The further use of tech like blockchain and NFTs 

            One of the main use cases in the metaverse is the trading of goods and services. Therefore, it is likely that technologies like blockchain and non-fungible tokens (NFTs) will be supporting the exchange of virtual assets. And this means that software developers should have an understanding of how to manage NFTs as well as distributed ledgers like blockchain.  
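            As a toy illustration of the underlying idea (and nothing more: real asset exchange would use an actual blockchain platform and smart contracts), a ledger is simply an append-only chain of records in which each entry embeds a hash of the previous one, making tampering detectable.

            ```python
            # Toy hash-linked ledger: each record of a virtual-asset transfer
            # embeds the hash of the previous record, so tampering is detectable.
            # Purely illustrative; not a real blockchain or NFT implementation.
            import hashlib
            import json

            def add_block(chain, transfer):
                prev_hash = chain[-1]["hash"] if chain else "0" * 64
                payload = {"transfer": transfer, "prev_hash": prev_hash}
                payload["hash"] = hashlib.sha256(
                    json.dumps(payload, sort_keys=True).encode()
                ).hexdigest()
                chain.append(payload)

            ledger = []
            add_block(ledger, {"asset": "avatar-skin-001", "from": "alice", "to": "bob"})
            add_block(ledger, {"asset": "avatar-skin-001", "from": "bob", "to": "carol"})

            # Any edit to an earlier block breaks every later prev_hash link.
            print(json.dumps(ledger, indent=2))
            ```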

            11. AI will impact software development  

            Another technology that will be part of the metaverse is AI. AI will be a key element in supporting the metaverse: it will help with end-user personalization, with content creation to build more immersive and engaging virtual environments, and with analysis of user behavior to identify trends and patterns, enabling developers to optimize the virtual world and provide a better user experience.

            Even without the emergence of the metaverse, AI will impact software development significantly. AI is positively impacting the way we design and develop software in these areas:  

            1. Generating code: several AI tools can generate code, including DeepCoder (developed by Microsoft), Kite, TabNine, GitHub Copilot, etc.
            2. Automation: AI can automate repetitive and time-consuming tasks in software development, such as testing, debugging, and code optimization.
            3. Quality: AI can improve the accuracy of software development by identifying potential bugs and vulnerabilities in code before it is deployed.
            4. Efficient resource utilization: AI can help software developers optimize resource utilization, such as server capacity and memory usage, to ensure that applications are running efficiently.
            5. Increasing immersion: for instance, by making aspects of the environment more dynamic and immersive.
            6. Creating virtual worlds: through, for instance, “text-to-environment” or “text-to-world.” Instead of placing assets using a mouse and keyboard, a developer could describe the environment instead.

            Today, many use cases exist where AI is aiding the entire software development process. The possible advent of the metaverse, or aspects of it, will further impact and change the way software developers work.  

            Summary 

            It is anyone’s guess as to whether the metaverse will indeed be the next incarnation of the internet. I remember an interview with David Bowie in 1999 in which he accurately predicted the impact the internet would have. He might have said the same about the metaverse today. In any case, technologies like VR, AR, MR, and AI will drive more and more user interactions into the virtual world, and software developers must deal with the shift in technology and the change in user experience.

            Special thanks to: Stuart Williams, Simon Spielmann and some support from ChatGPT 

             

            Gunnar Menzel


            Chief Technology Officer North & Central Europe 
            “Technology is becoming the business of every public sector organization. It brings the potential to transform public services and meet governments’ targets, while addressing the most important challenges our societies face. To help public sector leaders navigate today’s evolving digital trends we have developed TechnoVision: a fresh and accessible guide that helps decision makers identify the right technology to apply to their challenges.”

              Deliver a seamless sales experience across the lead-to-order lifecycle

              Deepak Bhootra
              28 Mar 2023

              Frictionless, digitally augmented, data-driven sales operations drive operational excellence, increased value and competitive advantage across your business.

              Just as professional rally drivers rely on a navigator to get them from A to B, so the sales function depends on strong sales operations support.

              It’s the role of the sales operations team to generate, track, and progress sales leads; to capture, validate, and track opportunities as part of sales forecasting; to move those opportunities forward to the offer stage with a configured and competitive quote; and, when the sale is made, to convert the purchase order into a valid sales order for fulfilment.
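
              As a rough illustration of that lead-to-order lifecycle, the sketch below models the stages as a simple state machine; the stage names and the `Deal` class are assumptions made for the example, not a description of any particular sales platform.

```python
# Illustrative only: a minimal sketch of the lead-to-order stages described
# above, with allowed transitions enforced. Stage names and the Deal class
# are invented for illustration.
from enum import Enum, auto


class Stage(Enum):
    LEAD = auto()
    OPPORTUNITY = auto()
    QUOTE = auto()
    SALES_ORDER = auto()


# Each stage may only move forward to the next one in the lifecycle.
ALLOWED = {
    Stage.LEAD: Stage.OPPORTUNITY,
    Stage.OPPORTUNITY: Stage.QUOTE,
    Stage.QUOTE: Stage.SALES_ORDER,
}


class Deal:
    def __init__(self, customer: str):
        self.customer = customer
        self.stage = Stage.LEAD
        self.history = [Stage.LEAD]

    def advance(self) -> Stage:
        nxt = ALLOWED.get(self.stage)
        if nxt is None:
            raise ValueError(f"{self.stage.name} is the final stage")
        self.stage = nxt
        self.history.append(nxt)
        return nxt


deal = Deal("Acme Ltd")
deal.advance()  # LEAD -> OPPORTUNITY (captured, validated, tracked for forecasting)
deal.advance()  # OPPORTUNITY -> QUOTE (configured, competitive quote)
deal.advance()  # QUOTE -> SALES_ORDER (purchase order converted for fulfilment)
print(deal.stage.name, [s.name for s in deal.history])
```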

              These responsibilities are beset by all kinds of challenges. Sales operations teams frequently find that their data is not sufficiently accurate or easy to access, that their forecasting is not insight-driven, that their sales technology is outdated, and that resources are inadequate and roles not clearly defined. At the same time, teams constantly need both to recruit and retain talent, and to adapt to changing business models.

              All these challenges often mean that sales operations teams spend much of their time dealing with day-to-day tactical issues when they would rather be thinking and acting strategically – looking ahead, developing plans, testing them, and then putting them to work.

              Design, build – and transform

              What’s needed is a smart, seamless sales operations model (think of this as a sales operations-as-a-service concept) that can be tailored to the culture, practices, and needs of the individual organization – and that empowers the people who use it.

              It’s the bespoke nature of the model that makes the design stage so important. If a service provider is involved, it’s our view that the best approach is for that provider to work closely with its client organization, designing and mapping processes based on lived experience within the sales operations function, and also on relevant personas.

              What should emerge from this deep dive into future aspirations and current practices is a target operating and service model. The organization and its service partner work together to design and set up services including policies, process rules, a control framework, and new ways of supporting sales operations team members.

              The final stage in the transition is to move from current processes to a more streamlined and coherent smart digital model. Technology collapses processes and creates a tremendous opportunity to eliminate drag in a process and improve how an internal or external user experiences it. Focusing on customer experience not only delivers hard gains (ROI, margins, etc.), but also qualitative benefits such as CSAT/NPS that translate into stickiness, repurchase, loyalty, and “mind-share.”

              What does success look like?

              At Capgemini, our digital sales solutions take advantage of innovative technologies and sales systems to integrate, streamline, and optimize sales touchpoints and processes across the lead-to-order lifecycle – delivering accurate, easy-to-access data, enhanced sales support, and data-driven sales analytics.

              The aim is to enrich our clients’ digital sales strategy with relevant insights and data that drive operational excellence and efficiency across the sales function. And we’ve seen some truly transformative business outcomes, including 15–25% reductions in turnaround time, 3–5% improvements in win-rate, 15–25% increases in time returned to sales, and 10–20% improvements in net promoter score.

              Everybody wins

              Intelligent, integrated sales operations of this kind not only address those organizational challenges I outlined earlier in this article – they also provide increased value for a company’s customers and business partners.

              When sales processes are efficient and cost-effective, and when sales operations teams are well informed and in control, everyone is happy.

              To learn how Capgemini’s Empowered Sales Operations solution delivers frictionless, digitally augmented, data-driven sales operations that drive competitive advantage across your business, contact: deepak.bhootra@capgemini.com


              Author

              Deepak Bhootra

              GTM Lead, Empowered Sales Operations, Capgemini’s Business Services
              Deepak Bhootra is an established executive with two decades of global leadership experience. He delivers process excellence and sales growth for clients by optimizing processes and delivering seamless business transformation.

                Closing the climate-disclosure gap

                Farah Abi Morshed
                22 Mar 2023

                Companies need to provide reliable information on climate-related risks and their impact on operations, but the lack of consistency and standardization in data collection, the absence of the right infrastructure to record data, and insufficient expertise in financial and climate analytics pose significant challenges.

                The Securities and Exchange Commission (SEC) has proposed new rules that would require companies to disclose certain information about the impact of climate change on their business, including greenhouse-gas emissions and climate-related risks that are likely to have a material impact on their operations or financial condition. The proposed rules are intended to provide investors with consistent, comparable information to help them make informed investment decisions, and to provide companies with clear guidelines for disclosing such information.

                The changes would require companies to:

                1. Disclose information about their governance and risk-management processes related to climate change
                2. Report the impact of climate-related risks on their business, financial statements, and strategy

                The proposed rules would also include a safe harbor from liability for Scope 3 emissions disclosures and an exemption for smaller reporting companies.

                Under the proposed rule changes, larger companies would be required to have their greenhouse gas emissions disclosures attested to by an independent service provider to increase the reliability of the disclosures for investors.

                Implementation challenges to achieving SEC compliance

                The SEC’s policy has been welcomed by many as an important step towards addressing the growing concern about climate change and its impact on businesses. However, several challenges are impeding organizations from taking action.

                1. The proposed rule will require publicly traded companies in the United States to disclose information on their governance, risk management, targets, and goals related to climate change in their financial filings – the first time such requirements have been implemented in the country. While some data and methodologies for assessing climate-related hazards exist, there is a lack of consistency in the data and methods used to assess the severity of climate-related risks (physical and transition), which makes it difficult to determine the potential damages or losses from such risks.
                2. Substantial effort will be needed to acquire the right data across clients’ assets, sites, and systems due to inconsistency and a lack of standardization. Organizations are not yet collecting all the information their systems create, or don’t have the proper infrastructure in place to record the right data, and many lack master data management practices, resulting in low data quality. Overall, setting GHG emission-reduction goals means collecting data and benchmarking emissions to show changes year over year, which is proving challenging (the sketch after this list illustrates the basic arithmetic).
                3. Companies may use one or several third-party reporting frameworks, including the Global Reporting Initiative (GRI) standards, Greenhouse Gas (GHG) Protocol standards, Task Force on Climate-related Financial Disclosures (TCFD) framework, and Sustainability Accounting Standards Board (SASB) standards, for voluntary reporting on climate data. Transparency is obscured by the use of multiple measures and the fact that some frameworks may paint a company in a more positive light than others.
                4. Performing risk calculations requires solid financial and climate acumen and analytics – expertise that is often missing. This results in difficulty assessing risks and opportunities across a range of climate-change scenarios. Additionally, organizations do not have the right processes in place for climate-risk approval and oversight.
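
                To illustrate the benchmarking point in challenge 2, the sketch below shows the basic arithmetic of activity data multiplied by an emission factor, aggregated and compared with a baseline year. The factor values and activity figures are placeholders for illustration, not published emission factors.

```python
# Illustrative only: the basic GHG arithmetic of
# emissions = activity data x emission factor, aggregated per year and
# compared with a baseline. Factor and activity values are placeholders.
ILLUSTRATIVE_FACTORS = {
    # activity unit -> kg CO2e per unit (placeholder numbers)
    "electricity_kwh": 0.4,
    "natural_gas_kwh": 0.18,
    "diesel_litre": 2.7,
}


def total_emissions_tonnes(activity: dict) -> float:
    """Sum activity data x factor across sources, returned in tonnes CO2e."""
    kg = sum(ILLUSTRATIVE_FACTORS[k] * v for k, v in activity.items())
    return kg / 1000.0


baseline_2022 = total_emissions_tonnes(
    {"electricity_kwh": 1_200_000, "natural_gas_kwh": 300_000, "diesel_litre": 40_000}
)
year_2023 = total_emissions_tonnes(
    {"electricity_kwh": 1_050_000, "natural_gas_kwh": 290_000, "diesel_litre": 35_000}
)

change_pct = (year_2023 - baseline_2022) / baseline_2022 * 100
print(f"2022 baseline: {baseline_2022:.1f} tCO2e")
print(f"2023:          {year_2023:.1f} tCO2e ({change_pct:+.1f}% year over year)")
```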

                Assessing the impact of the SEC’s climate disclosure policy

                Capgemini worked with the Wharton School’s Venture Lab to investigate the impact of the SEC’s climate-disclosure policy on organizations.

                First, the Capgemini team and the students from The Wharton School conducted a deep-dive into the SEC’s climate disclosure policy. This involved reviewing the policy in detail, as well as researching best practices for climate-related disclosure and assessing the potential impact of the policy on companies across different industries.

                Next, the team mapped the gaps and risks identified in the SEC policy to the solutions offered in the market. This analysis involved a comprehensive review of existing climate-related disclosure frameworks and standards, as well as an assessment of the strengths and weaknesses of each approach.

                The team also conducted research on recent trends in ESG and “green” dealmaking. This involved analyzing data on recent deals, identifying key players in the market, and assessing the impact of climate-related risks and opportunities on dealmaking.

                Finally, the team reviewed the variation in reporting requirements between different organizations, including EFRAG, SEC, and ISSB. This also involved an assessment of the potential challenges and opportunities associated with each approach.

                Managing SEC climate-related challenges

                To address the challenges posed by the SEC’s climate disclosure policy, businesses need to take a comprehensive approach to collecting and managing their climate-related data. This involves addressing several key areas, including data availability, quality, access, and governance.

                1. Businesses can start by collecting all the information their systems are creating and building the right infrastructure to record it and ensure high-quality data. This involves creating a centralized database that can be accessed by all relevant stakeholders and ensuring that the data is accurate, complete, and up to date (a minimal record schema is sketched after this list).
                2. To ensure that the data is being used effectively, businesses also need to focus on data access and governance. This involves centralizing data, especially when it is available but siloed in a way that limits overall access. A well-designed enterprise data model and governance principles can help ensure that data is used effectively and efficiently.
                3. Another key area of focus is climate-related data monitoring. Businesses need to define clear ESG metrics and gather baseline data for comparison against future goals. This will allow them to track their progress over time and identify areas where they need to improve.
                4. To make informed decisions about climate-related risks and opportunities, businesses also need to focus on quantifying the impacts of climate change on their business models for different climate scenarios and decarbonization pathways. This will help them develop effective strategies for managing climate risks and identifying new opportunities for growth.
                5. Finally, businesses need to focus on developing ESG strategies that are closely connected to their overall business strategy. This involves assessing the impact of emission reduction projects and deciding which low-cost and high-impact solutions to pursue.
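
                As a minimal illustration of the centralized, governed data store described in points 1 and 2 above, the sketch below defines one possible record schema with lineage and quality fields; the field names are assumptions made for the example, not a reporting standard.

```python
# Illustrative only: a minimal record schema for a centralized climate-data
# store, showing the kind of lineage and quality fields that make later
# assurance and disclosure easier. Field names are assumptions, not a standard.
from dataclasses import dataclass
from datetime import date
from enum import Enum


class Scope(Enum):
    SCOPE_1 = 1  # direct emissions
    SCOPE_2 = 2  # purchased energy
    SCOPE_3 = 3  # value chain


@dataclass(frozen=True)
class EmissionRecord:
    site: str                 # asset / site the activity belongs to
    period: date              # reporting period end
    scope: Scope
    activity: str             # e.g. "electricity_kwh"
    quantity: float
    unit: str
    source_system: str        # where the figure came from (lineage)
    estimated: bool           # measured vs estimated, for data-quality review
    verified: bool = False    # set once an independent reviewer signs off


record = EmissionRecord(
    site="Plant-07",
    period=date(2023, 12, 31),
    scope=Scope.SCOPE_2,
    activity="electricity_kwh",
    quantity=1_050_000,
    unit="kWh",
    source_system="meter-feed",
    estimated=False,
)
print(record)
```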

                By taking a comprehensive approach to managing their climate-related data, businesses can not only comply with the SEC’s climate disclosure policy but also make more informed decisions about how to manage climate-related risks and identify new opportunities for growth.

                At Capgemini, we understand the challenges organizations face in managing their climate-related data and complying with the SEC’s climate disclosure policy. That’s why we offer a range of solutions designed to help businesses address the key areas of data availability, quality, access, and governance, as well as climate-related data monitoring, risk and opportunities insights, ESG strategies, and emission-reduction steps.

                Author:

                Farah Abi Morshed

                Manager, Energy, Utilities, Chemicals and Natural Resources
                Farah is a subject matter expert in energy and sustainability who helps large organizations make strategic decisions to transition towards a more sustainable future. Throughout her career, she has worked in various parts of the energy/utility sector. She has performed analysis on energy projects, analyzed and published on energy macro trends and supported organizations with trading services and technology to accelerate their energy transition.

                  Why bother with an OMS, I’ve already got an ERP?

                  Capgemini
                  24 Mar 2023

                  Key considerations when thinking about your future fulfilment strategy

                  The demand for fully flexible, customer-first experiences is increasingly hard to meet – especially as customer needs are constantly evolving.
                   
                  This is prompting customer-focused businesses to look at their technology stacks and assess whether they can keep up with such expectations, and one of the most frequent questions they ask themselves is: do I need an order management system (OMS), or can my existing eCom/ERP/CRM do the job?
                   
                  The answer certainly isn’t one-size-fits-all. It will vary significantly between businesses based on myriad requirements. So how should you determine what’s right for you?
                   
                  Taking it back to basics, let’s consider the four main roles of an OMS. Then we can look at how these inform the key considerations when deciding if your existing systems fulfil your needs. Read on to see if a dedicated order management system will benefit your business.
                   
                  At its heart, an OMS has four roles (sketched in code after the list below):

                  • Offer – The availability offer – It manages how the availability of products and services is consistently and reliably displayed to end customers through whichever channel they choose to shop.
                  • Promise – The customer promise – As a customer, when I show intent to purchase, the system should calculate a specific fulfilment promise which lets me know exactly when my items will be fulfilled or ready to collect, based on accurate, trusted availability.
                  • Fulfil – Fulfilment of the order – As the order moves out of the checkout and into the process of picking, packing, and shipping, the OMS should maintain the consolidated, master view of order status and be responsible for orchestrating any customer communications or interactions between different fulfilment nodes and final-mile shippers.
                  • Management – Management throughout the order lifecycle – There may be requests to modify the order, such as changing a delivery address, dates, cancelling items, cancelling a whole order, etc. These changes could come from customers themselves or from the business in the event of supply difficulties. The OMS should be the gateway that these requests are processed through, as well as handling post-order processing such as returns and exchanges to streamline, simplify, and take cost out of these interactions.
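
                  To make the first two roles concrete, the sketch below derives a consistent availability figure (the offer) and an earliest fulfilment lead time (the promise) from inventory spread across two fulfilment nodes; the node names, lead times, and promise rule are invented for the example.

```python
# Illustrative only: how an OMS-style "offer" and "promise" might be derived
# from inventory across several fulfilment nodes. Node names, lead times, and
# the promise rule are invented for illustration.
from dataclasses import dataclass


@dataclass
class Node:
    name: str
    stock: dict          # sku -> units on hand
    lead_time_days: int  # days from order to customer for this node


NODES = [
    Node("central-warehouse", {"SKU-1": 40, "SKU-2": 0}, lead_time_days=3),
    Node("city-store", {"SKU-1": 2, "SKU-2": 5}, lead_time_days=1),
]


def availability(sku: str) -> int:
    """Offer: one consistent availability figure across every node/channel."""
    return sum(node.stock.get(sku, 0) for node in NODES)


def promise(sku: str, qty: int):
    """Promise: earliest lead time among nodes that can fulfil the full quantity."""
    options = [n.lead_time_days for n in NODES if n.stock.get(sku, 0) >= qty]
    return min(options) if options else None


print(availability("SKU-1"))   # 42, shown consistently in every channel
print(promise("SKU-1", 1))     # 1 day, from the city store
print(promise("SKU-1", 10))    # 3 days, only the warehouse can cover 10 units
```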

                  Based on the above, it’s perhaps easy to ask “Well, why can’t my ERP do all that?”, and to a certain extent it is possible for an ERP, or the combination of an ERP and eCom platform, to cover some of the functions described. But the challenge should not be “can my ERP do this?”; it should be “is my ERP the right system to do this both today and in the future?” If all your experience is with an ERP, then it’s tempting to see it as the solution to all types of problems; in the words of Abraham Maslow, “If the only tool you have is a hammer, you tend to see every problem as a nail.” But an OMS should be seen as another tool in your belt – perhaps not right for every task, but certainly an option to be considered in the right circumstances.

                  So, what could be some of the reasons for using an OMS rather than an ERP or a commerce platform?

                  • In many businesses, the ERP which manages back-end functions around supply chain or finance is not the same solution managing customer-facing or in-store functions. A customer offer doesn’t care about these boundaries (e.g. click and collect needs to know inventory, which may be in the ERP, but the store systems are critical in enabling the pick-up process), so strategy should be driven firstly by customer experience, customer-focused use cases and value drivers, and only then should existing organizational or systemic constraints be considered. A modern OMS connects one or many customer-facing front ends via APIs or built-in apps to back-end supply chain & finance processes, acting as a reliable bridge between channels, stores, warehouses, ERPs, and more.
                  • Customer service – as above, the ERP is usually not the system that powers customer service call-center tools. An OMS can easily integrate through enterprise APIs with whichever systems the call centers are using, reducing complexity and connecting the customer journey.
                  • Returns – returns are a huge cost driver for virtually all D2C businesses, and many resort to implementing a specific returns solution separate to their channels and/or ERP. Whilst this can enable more customer functionality, it is often at the expense of being able to tie the returns flow directly in with the outward fulfillment, often making the customer journey disjointed. An OMS enables retailers to automate and coordinate the return process to decrease cycle times and handling costs – all while simplifying the customer journey.
                  • ERP order fulfillment flows are typically aligned to an order-to-cash process: a limited number of customers ordering large quantities of products frequently, via Electronic Data Interchange (EDI) or even through dedicated account managers, and making limited changes to those orders. A consumer-focused fulfillment flow is the opposite: large numbers of customers ordering small baskets of products infrequently across a large number of self-service channels, often wanting real-time changes to those orders during or after fulfilment. The differences between these two approaches are considerable – almost like speaking two different languages – so an OMS in the middle can act as the translator.
                  • As fulfillment networks grow and become more complex, with options to fulfill not only from owned warehouses but also from stores, 3rd-party logistics providers, retail partners, drop-ship vendors, and other locations, it’s likely an ERP simply cannot master all inventory or location details (or perhaps would not want to, given that doing so may impact financial calculations). That makes it difficult to present a consolidated supply view, let alone one accurate enough for a customer promise.
                  • ERPs are typically designed to mandate best practice flows to business processes, and deviation from those flows is often costly or not possible, so implementing new logic or processes to handle the ever-changing world of customer fulfillment can be disruptive and costly. By contrast, an OMS is set up to expect ongoing change in logic, new workflows, new offers, and new capabilities.
                  • Lastly, a modern OMS is highly modular and designed for agility – Fluent Commerce order management, for example, is event-based, so inputs trigger outputs and operations all happen in real time. A MACH-architected application (Microservices, API-first, Cloud-native, Headless) like this means updates can be made often, and changes are much easier to test, deploy, assess, and iterate, generally without the continual need to re-test end-to-end functionality (a simple event-driven sketch follows this list).
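
                  As a generic illustration of the event-based approach mentioned in the last point, the sketch below registers handlers that react to an order event in real time. It is not Fluent Commerce’s actual API or data model – just the general shape of an event-driven order flow, with event names and handlers invented for the example.

```python
# Illustrative only: the shape of an event-driven order flow, where each
# incoming event triggers handlers in real time. This is a generic sketch,
# not Fluent Commerce's actual API or data model.
from collections import defaultdict
from typing import Callable

_handlers = defaultdict(list)  # event type -> list of handler callables
_orders = {}                   # order_id -> current status


def on(event_type: str):
    """Register a handler for an event type; new logic means adding handlers."""
    def register(fn: Callable):
        _handlers[event_type].append(fn)
        return fn
    return register


def emit(event_type: str, payload: dict) -> None:
    """Dispatch an event to every registered handler, in real time."""
    for handler in _handlers[event_type]:
        handler(payload)


@on("order.picked")
def update_status(payload: dict) -> None:
    _orders[payload["order_id"]] = "PICKED"


@on("order.picked")
def notify_customer(payload: dict) -> None:
    print(f"Notify {payload['customer']}: order {payload['order_id']} is being packed")


emit("order.picked", {"order_id": "1001", "customer": "alice@example.com"})
print(_orders)  # {'1001': 'PICKED'}
```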

                  Where could an ERP be suitable for enabling OMS capabilities?

                  All of the above is intended to point out some of the difficulties of trying to manage OMS functionality inside an ERP, but that doesn’t mean it’s always the wrong decision. Modern ERPs do allow more flexibility in their operations, and if you’re a D2C business that has grown up with the capability to deliver a customer-centric offer via digital channels, then many of the points above have probably already been factored in. In such cases, the key question is perhaps not whether an ERP is right for my business today, but whether my operations are agile enough to adapt to new channels, new offers, new products, etc. without disrupting wider ways of working each time.

                  So, in summary, what are some of the initial questions I need to consider when looking at my existing systems vs. a new OMS?

                  • What are the customer features/functions/offers which are going to add value to my business in the near term and long term?
                  • How feasible is it to adapt my current systems to handle these new customer offers/requirements? And how feasible is it given whatever else is already in the pipeline for these systems?
                  • If my requirements change, can my current system keep up with frequent changes? Is it future-proof? Scalable?
                  • Can I add additional value by decoupling my customer offer from my core processes through use of an OMS?
                  • How much would implementing an OMS cost vs. adapting my current ERP?

                  These aren’t easy questions and there will always be good arguments on both sides, so please reach out to the experts at Capgemini.

                  Author

                  Leo Muid

                  Consumer-Centric Grocery Fulfillment Offer Lead
                  Leo is Capgemini’s Global Offer Lead for Order Management. He has 20 years’ experience working with retail and CPG firms as an architect and CTO adviser in digital order management, omnichannel order fulfillment, and customer supply chain. He has worked extensively with leading OMS technologies and delivered some of the largest global implementations.