
Why sustainable IT is the backbone of a greener future

Ygor Zeliviansky
Apr 11, 2023

As organizations begin seriously considering their post-pandemic futures, they now face the challenge of walking the tightrope between meeting growth objectives and building sustainable businesses.

Over the last year, promises of long-term sustainability agendas have more than tripled, with pledges of zero carbon and carbon neutrality abounding. While many organizations are turning to technology to meet these targets and solve environmental issues, enterprises need to ensure their IT does not become a part of the problem.  

The era of sustainable tech is here. Companies are leveraging innovative, data-driven technology to streamline operations and cut carbon emissions at the same time. Yet while technology is often perceived as a savior rather than a sinner, its production, use, and disposal carry an overlooked environmental cost: an estimated 57.4 million tonnes of e-waste were generated worldwide in 2021 alone, a total that is growing by an average of 2 million tonnes a year.

Accelerated by fiercely dynamic and competitive markets, more organizations are embracing digital transformation across their business. As a result, demand for computing power and data storage is rising, and so is the energy required to build and run the underlying infrastructure. Curating sustainable, enterprise-wide digital systems will be crucial for any business trying to balance growth and sustainability objectives post-pandemic.

But with awareness of the issue still limited, the road to successful, sustainable IT needs a clear and rigorous roadmap. Our research has identified crucial factors to consider when building and implementing sustainable IT strategies – let’s look at each one.

  •  Understanding the task  

When it comes to strategy, half of firms have defined an enterprise-wide sustainability approach, yet fewer than one in five (18%) have a comprehensive sustainable IT strategy with well-defined goals and target timelines.

Before a clear and robust framework can be rolled out, organizations need to get clued in on what they are dealing with. Our research revealed an alarming gap in awareness regarding the overall environmental impact of IT, with fewer than half of executives (43%) globally aware of their organization’s IT footprint. Many are confused about the true impact of IT. Only 34%, for example, know that the production of mobiles and laptops has a higher carbon footprint than the usage of these devices over their lifetime.  

Getting a clear understanding of the issue is the first critical stage for firms looking to develop sustainable strategies. Once the baselines and benchmarks of an enterprise’s environmental footprint have been marked out, organizations can then look to establish, implement, and monitor key performance indicators, targets, and frameworks. 

  •  Engaged and informed employees  

Employees and leaders who are engaged with sustainability agendas drive greater progress. Even the most thorough strategies can come undone when those involved are not committed to the cause. Taking things one step further by developing a specialist sustainable IT team can provide streamlined purpose and coherence. Organizations must adopt the same mindset with employees as with consumers. People want to buy from companies with sustainable products and services. Likewise, employees want to work for such organizations. People are a critical component of sustainability transformation. Therefore, you must foster a culture that celebrates and promotes environmentalism, while trusting and empowering your people to contribute their own ideas. Those who have made sustainability a pillar of the organizational culture have seen greater progress.
Our research found that 60% of organizations are adopting sustainability to align with the demands of potential employees.

  •  Sustainable software architecture 

Sustainability needs to be at the very center of an organization’s business. Alongside careful scrutiny of emissions and output, developing a sustainable software architecture is imperative.

Understanding the environmental consequences of software deployment and making decisions based on the carbon cost of infrastructure will ingrain sustainability into the foundations of an enterprise. Once the architecture is in place, specific software modules within the structural design must be viewed from a sustainability perspective. For instance, organizations should empower their developers to understand the carbon cost of their software modules and use green coding to produce algorithms that consume as little energy as possible.
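
To make this concrete, here is a minimal back-of-envelope sketch of how a developer might compare the estimated carbon cost of two implementations of the same function. The power draw and grid-intensity figures are illustrative assumptions, not measurements; dedicated tooling or hardware energy counters would give more accurate numbers.

```python
import time

# Illustrative placeholder assumptions (replace with measured values for your environment)
ASSUMED_AVG_POWER_WATTS = 65.0          # assumed average power draw of the machine running the code
ASSUMED_GRID_INTENSITY_G_PER_KWH = 300  # assumed grid carbon intensity in gCO2e per kWh

def estimate_carbon(func, *args, **kwargs):
    """Run a function and return (result, estimated grams of CO2e) as a rough proxy."""
    start = time.perf_counter()
    result = func(*args, **kwargs)
    elapsed_s = time.perf_counter() - start
    energy_kwh = ASSUMED_AVG_POWER_WATTS * elapsed_s / 3600 / 1000
    return result, energy_kwh * ASSUMED_GRID_INTENSITY_G_PER_KWH

def naive_sum_of_squares(n):
    return sum([i * i for i in range(n)])   # builds an intermediate list in memory

def leaner_sum_of_squares(n):
    return sum(i * i for i in range(n))     # generator avoids the intermediate list

for impl in (naive_sum_of_squares, leaner_sum_of_squares):
    _, grams = estimate_carbon(impl, 10_000_000)
    print(f"{impl.__name__}: ~{grams:.4f} gCO2e (rough estimate)")
```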

Upskilling developers in circular design will help product and design teams lessen their waste and thus their environmental footprints.

Sustainable IT can play a critical role in creating a circular economy by reducing waste, maximizing resource efficiency, and promoting more sustainable production and consumption practices. By introducing sustainability into the company’s value chain, you will drive the whole organization toward new efficiencies and a circular economy.

Moving forward, sustainability must be at the core of all our efforts. While many organizations have begun to focus on their overall sustainability agenda, the critical issue of sustainable IT has been overlooked. To give sustainable IT the attention it deserves, organizations need to understand the carbon cost of our digital world and accelerate the move to sustainable systems with engaged and dedicated teams. In this way, sustainable IT can play a central part in tackling climate change, promoting a circular economy, driving innovation, and moving the world to a more resilient and sustainable future.  

Meet the author

Ygor Zeliviansky

Head of Global Portfolio, Cloud Infrastructure Services, Capgemini 
I am a Solutions Consultant with a demonstrated history of working in the information technology and services industry and have delivered business value for global clients in service delivery, enterprise software, HP products, enterprise architecture, and storage.

    Transitioning to sustainable mobility 

    Klaus Feldmann
    10 April 2023

    Tens of millions of cars sell every year. That means every increase in a vehicle’s emissions is multiplied by millions, but equally, so is every reduction. We must therefore make vehicles as sustainable as possible.

    But what does maximum sustainability look like? What fuel and propulsion methods should you use? What raw materials should you pursue? Where should you manufacture?

    These big decisions will set corporate direction for years. Decision-makers must properly analyse the full life-cycle impact of any choice, whilst also considering systems outside their control, from land, to energy infrastructure, to competition from other industries.

    To take a top-level example, what is the most sustainable vehicle propulsion method – electric, hydrogen, or e-fuels? We need to understand the full life cycle – by performing an integrative Life Cycle Assessment – in order to make the comparison reliably.

    So we would need to look at the original fuel (e.g., the energy mix of the grid, the power source for an electrolyser, or biomass) and its emissions profile. Then we would need to look at the energy efficiency of each step between the energy inputs and the vehicle’s propulsion. We could then compare how much of each input is needed to produce the same amount of propulsion.

    We must also look at the inputs of creating the propulsion system itself – such as battery or engine components and materials.

    We can then combine these to work out the most sustainable option. Maximum sustainability will need to address the fuel, the vehicle design and the energy systems that power it. The results will of course vary in different scenarios.
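
As a rough illustration of how such a comparison can be structured, the sketch below chains assumed stage efficiencies and an assumed fuel carbon intensity into a single well-to-wheel figure per pathway, then adds assumed embedded emissions for the propulsion system. Every number is a placeholder for demonstration only; a real Life Cycle Assessment would substitute validated data for each stage.

```python
# Illustrative well-to-wheel comparison. All numbers are placeholder assumptions, not real LCA data.

pathways = {
    "battery_electric": {
        "grid_intensity_g_per_kwh": 300,           # assumed carbon intensity of the electricity used
        "stage_efficiencies": [0.95, 0.90, 0.85],  # e.g. charging, battery, drivetrain (assumed)
        "embedded_kg_co2e": 8000,                  # assumed emissions from making the propulsion system
    },
    "hydrogen_fuel_cell": {
        "grid_intensity_g_per_kwh": 300,
        "stage_efficiencies": [0.70, 0.60, 0.85],  # e.g. electrolysis, fuel cell, drivetrain (assumed)
        "embedded_kg_co2e": 6000,
    },
}

LIFETIME_PROPULSION_KWH = 30_000  # assumed useful energy delivered to the wheels over the vehicle's life

def lifecycle_emissions(p):
    overall_eff = 1.0
    for eff in p["stage_efficiencies"]:
        overall_eff *= eff                              # multiply stage efficiencies
    energy_input_kwh = LIFETIME_PROPULSION_KWH / overall_eff  # upstream energy needed
    use_phase_kg = energy_input_kwh * p["grid_intensity_g_per_kwh"] / 1000
    return use_phase_kg + p["embedded_kg_co2e"]

for name, p in pathways.items():
    print(f"{name}: ~{lifecycle_emissions(p):,.0f} kg CO2e over the assumed lifetime")
```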

    Making good decisions needs highly sophisticated system-of-systems modeling, combining your own engineering and supply chain models with climate, energy, demographic and macroeconomic models.

    In our new whitepaper, we offer an introduction to planning strategic decisions for a sustainable transition, and provide top-level worked examples of propulsion and battery choices, alongside some initial answers.

    Author

    Klaus Feldmann

    CTO for Automotive Sustainability and e-Mobility, Capgemini Engineering
    Klaus Feldmann is the Chief Technical Officer of our Sustainability & e-Mobility offers and solutions for the automotive industry, supporting our customers on their path to carbon neutrality across their products, footprints, and services, helping them fight climate change and contribute to a decarbonized economy.

      Capgemini’s Quantum Lab participates in BIG Quantum-AI-HPC Hackathon

      Kirill Shiianov
      5 Apr 2023

      Capgemini’s Quantum Lab participates in the BIG Quantum-AI-HPC Hackathon and wins the Technical Phase together with students from Ecole Polytechnique de Paris and the Technical University of Munich (TUM).

      At the beginning of March, Capgemini’s Quantum Lab participated in the BIG HPC-AI-QC Hackathon organized by QuantX in collaboration with PRACE, GENCI, and BCG, under the high patronage of Neil Abroug (Head of the French National Quantum Strategy), in Paris, France. Leading players from the international quantum computing, HPC, and AI ecosystems (e.g., industrial companies, quantum hardware and software providers, HPC centers, VC/PEs and consulting groups, and representatives of academia and government) gathered to accelerate the transfer of competencies and advance hybrid HPC-AI-QC solutions and their practical application. The hackathon consisted of two parts: the technical phase and the business phase.

      The solution co-crafted by our Capgemini team on a technical use case provided by BMW Group was crowned winner of the technical phase! The team consisted of Capgemini employees (Camille de Valk, Pierre-Olivier Vanheeckhoet and Kirill Shiianov) and students from Ecole Polytechnique de Paris (Bosco Picot de Moras d’Aligny), and Technical University Munich (TUM) (Fiona Fröhler). They were assisted by technical mentors Elvira Shishenina (BMW Group), Jean-Michel Torres and Elie Bermot (IBM), and Konrad Wojciechowski (PSNC).

      The solution they proposed is a first step toward improving the acoustics of BMW cars. The international jury of experts was enthusiastic about the team’s technical solution, as well as their excellent presentation. The French minister Jean-Noël Barrot and Nobel prize winner Alain Aspect joined the awards ceremony to hand out the prizes to the winners.

      Camille de Valk, one of Capgemini’s Quantum Lab Specialists, on the Technical Phase:

      “BMW Group provided us with a challenging use-case for the technical phase of the hackathon. It’s all about optimizing the design of cars to have less irritating sound in the cabin. This involves complicated physics and mathematics, but luckily our team had both physicists and computer scientists. The teamwork was one of the best parts of the hackathon for me.

      We created a toy-demonstration of a differential equation solver using variational quantum circuits and we explored its scaling in an artificial intelligence (AI), high performance computer (HPC) and quantum computing (QC) workflow. This was the first step to experiment with the efficiency of complex simulations around sound propagation, to improve the cabin’s acoustics by optimizing the design of the car. Working in this hackathon with such a talented team and great mentors was a great experience for me!”

      Kirill Shiianov, Consultant at Capgemini Engineering, about the Business Phase:

      “In the business phase of the BIG Hackathon, Capgemini’s Quantum Lab team took on a challenge to build a business case around one of the solutions from the participants in the technical phase. The team was at its best in developing the business case: uniting people from different backgrounds and business units. It brought together different areas of expertise, which helped us understand different aspects of the problem and come up with creative solutions.

      The use case was intended to augment natural language processing (NLP) models with a quantum approach. The use-case provider was Merck Group, and the real-world application of the technology was intended to investigate the promises of symbolic AI and, as a concrete example, to detect differences between adverse event reports (AE, event and drug exposure) and causal adverse drug reactions (ADR, event due to drug exposure) mentioned in textual sources, like medical reports or social networks.

      During two intensive days, we fully immersed ourselves in the technology, built a complete business case, and presented it to a jury consisting of technology VPs of high-tech companies, such as Quantinuum.

      Interaction with the use-case provider (Thomas Ehmer from Merck Group) and technical people from Quantinuum helped us gain unique insights into the respective domains. It was a unique experience, and I am already looking forward to participating in the next editions of the hackathon!”

      Meet the authors

      Kirill Shiianov

      Junior Consultant at Capgemini Engineering
      Kirill has a background in experimental physics and quantum systems. He is part of Capgemini’s Quantum Lab and investigates industrial applications of quantum technologies, working on projects such as EQUALITY. In Capgemini’s Quantum Lab Kirill has explored applications of quantum computing for optimization problems, working with different providers, such as IBM Quantum, AWS and D-Wave.

      Camille de Valk

      Quantum optimisation expert
      As a physicist leading research at Capgemini’s Quantum Lab, Camille specializes in applying physics to real-world problems, particularly in the realm of quantum computing. His work focuses on finding applications in optimization with neutral atoms quantum computers, aiming to accelerate the use of near-term quantum computers. Camille’s background in econophysics research at a Dutch bank has taught him the value of applying physics in various contexts. He uses metaphors and interactive demonstrations to help non-physicists understand complex scientific concepts. Camille’s ultimate goal is to make quantum computing accessible to the general public.

        SIAM co-design: When copy and paste just doesn’t cut it
         

        Ian Turner
        31 Mar 2023

        SIAM, standing for Service Integration and Management, should be a fundamental pillar of every CIO’s digital transformation strategy.  

        Separately contracted and supplied IT services are the norm within almost every organization’s portfolio, and it is a challenge to align all the ecosystem’s providers and services to address the evolving needs of the business. This is the landscape in which the SIAM approach generates high value. But to be successful, it must be tailored to an organization’s objectives and culture.

        One of the most common reasons organizations fail when implementing SIAM is an incomplete adoption of the associated changes. When talking to customers who have taken this path, it turns out that this is often an outcome of trying to impose a Copy and Paste SIAM approach. Little or no regard is given to managing the path to successful and relevant change for that customer. 

        It’s a somewhat obvious fact that is rarely recognized: not all customers are the same. Naturally, this means each one should be treated as a specific entity with individual circumstances, needs, and culture. You cannot simply take the approach used when engaging one and apply it to another. The problem is knowing which actions should be taken and when.

        The alternative to the Copy and Paste approach

        These days, everyone is looking for a templatized quick fix. But sometimes, this is simply not possible. You have to consider a myriad of variables to achieve optimal outcomes. The following ten actions will enable you to implement a successful SIAM approach.

        • Talk to the practitioners

        Engage real-world practitioners of SIAM to shape things – theorists do not tend to have the experience to mold the approach from client to client.

        • Get involved in shaping it

        You need to be involved in the blueprint design process, ideally as part of the design team, to validate and shape the operating model and to ensure it is aligned with your organizational objectives.

        • Build the guardrails for success from the get-go

        A set of tailored SIAM design principles, acting as guardrails, should be generated, tested, and socialized. To ensure the fundamental objectives and outcomes of the SIAM are shared and accepted widely, these principles should be put in front of diverse sounding boards within your organization very early on. These sounding boards will be invaluable ‘influencers’ for SIAM.

        • Make it about you and your customers

        It is vital that your customers, their diversity, their needs, and their culture are at the absolute forefront of any design. They should be involved in the sounding board discussions from the start. By making early moves here, your organization will be able to harness great momentum for change.

        • The hardest part of transformation is people

        There needs to be a dedicated focus on managing the change within both the IT organization and its customers. This is the number one takeaway from every SIAM implementation.

        • You say “SIAM,” I say “SI”

        IT has a lot of buzzwords and variations in how we describe the same things, which can cause confusion, lead to entrenched positions, and result in incorrect assumptions. Agree on the vernacular early in the design process, making it relevant and recognizable to your organization. There is no better way to smash barriers than when everyone speaks the same language.

        • We’re not all the same; deal with it

        Understand the variances and exceptions within the current ways of working and structures, and why they exist. Don’t assume variation is inefficient until you understand the reasons for it. Perhaps you will discover a new use case that can augment your design.

        • What’s in it for me?

        Wherever possible, design for automation. If the SIAM is associated with a removal of routine or inefficient manual processing, then you will have a ready-made cohort of change agents who will support and evangelize the transformation.

        • What’s in it for my customers?

        Focus on how the SIAM will drive a better user experience. Be sure to commit to this in the SIAM design principles. Position the SIAM to be the agent of implementing effective eXperience Level Agreements (XLA). It has a unique role in driving your IT supply chain end to end, and the best view of the user experience.

        • What’s in it for my CEO?

        Build a synergy model that shows and quantifies the benefits driven by SIAM and when they will be realized. If your CEO can see what the improvements are, the magnitude of their benefits, and the roadmap of realizing them, you will have a top-down agent for SIAM to complement those within your organization and around your customer base.

        Capgemini

        Talk to Capgemini about how our Digital SIAM Co-Design approach puts you in the driving seat for SIAM. Tailored to you and your business, our approach is distilled from our unparalleled, global SIAM experience, which is why analysts recognize us as an industry leader in this space.

        “Capgemini differentiates itself from other leaders with its sophisticated SIAM operations and SIAM managed services. Capgemini offers exceptional integration capabilities across technologies and platforms, driving business excellence,” says ISG.

        One of the key reasons for our success is that we do not treat SIAM as a commodity or a copy and paste service. From customer to customer, our SIAM approach is tailored to their specific situation, culture, and needs.

        We know that to transform into a more effective, globalized, and harmonized service provider, it takes both industry expertise and experience alongside a deep appreciation of the customer, its own customers, and their needs. 

        We’ll work in lockstep with you to build the SIAM outcomes you are looking for and get the buy-in from your organization. We’ve helped many clients of all sizes on the journey to SIAM. We’ve learned from each client – mainly that each one is different. So, why copy and paste?

        Author

        Ian Turner

        ESM Offer Lead & Solution Lead

          The secret to successful FinOps

          It’s time to go deep into FinOps and maximize its value

          Thomas Sarrazin
          3 Apr 2023

          In 2023, cloud experts and consumers understand the necessity for FinOps in any cloud undertaking. Now it’s time to go deeper.

          The great digital transformation is driving cloud adoption at an exponential rate. At the same time, controlling cloud consumption has never been more complex, making FinOps one of the top priorities for CIOs in 2023. What’s needed at this stage is a look below the surface, at the foundations of strong FinOps practices.

          This blog highlights the key priorities for FinOps experts in the coming months. It is the first in a series of blogs taking an end-to-end look at FinOps, and it offers a glimpse into the insights you will find in our new white paper, which is about to launch. The white paper shares comprehensive strategies that will maximize the success of your FinOps endeavor.

          So, what is the foundation of FinOps? We’re familiar with the FinOps triptych: skills, processes, tools. Going deeper, we need to figure out how to build and maintain this triptych. We come at this question from three angles, starting with governance.

          1. Achieving agile FinOps governance

          FinOps governance should be at the heart of any IT organization, fully integrated into the new cloud operational models. It must be an agile practice, connected with DevOps teams and integrated into the DevOps lifecycle of applications. This means all FinOps initiatives should be shared, understood, and deployed from the beginning of application development or cloud modernization projects. The ultimate goal is to align FinOps governance with cloud strategy.

          2. Improving operational efficiency through automation

          Despite increased efforts in previous quarters, FinOps teams are still facing operational difficulties in implementation and execution. Improving operational efficiency is one of the key challenges that FinOps governance must address in 2023. The solution is automation – including that of all FinOps events so that they can be reported without the need for human oversight, according to predefined rules. The goal should be a balanced strategy that benefits from the strengths of automation and human decision-making.
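
As a simple illustration of rule-based automation, the sketch below flags cost records that breach a predefined overspend threshold and emits alert events without human oversight. The data structures and thresholds are hypothetical; in practice the records would come from a cloud billing export and the events would feed a reporting or ticketing pipeline.

```python
from dataclasses import dataclass

@dataclass
class DailyCost:
    team: str
    service: str
    cost_usd: float
    budget_usd: float  # daily budget agreed with the team

# Hypothetical cost records; in practice these come from a cloud billing export
records = [
    DailyCost("payments", "compute", 950.0, 800.0),
    DailyCost("payments", "storage", 120.0, 200.0),
    DailyCost("web", "compute", 420.0, 500.0),
]

OVERSPEND_THRESHOLD = 1.10  # predefined rule: flag anything more than 10% over budget

def finops_events(costs):
    """Yield alert events for records breaching the predefined overspend rule."""
    for rec in costs:
        if rec.cost_usd > rec.budget_usd * OVERSPEND_THRESHOLD:
            yield {
                "team": rec.team,
                "service": rec.service,
                "overspend_pct": round(100 * (rec.cost_usd / rec.budget_usd - 1), 1),
            }

for event in finops_events(records):
    print(f"ALERT: {event['team']}/{event['service']} is {event['overspend_pct']}% over budget")
```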

          3. Leading a culture shift

          FinOps must bring together two different disciplines – technology and finance. The challenge here is joining these two disciplines into one seamless culture that optimizes costs while preserving business value.

          The full value of FinOps

          Once these foundations are laid, FinOps becomes a tool to achieve a number of benefits beyond financial savings. FinOps can be a powerful tool to help organizations reduce their carbon footprints and move closer to their sustainability goals. The practice, sometimes known as GreenOps, builds on the same foundations, such that FinOps and GreenOps reinforce each other. FinOps also helps streamline processes and strengthen collaboration between groups, both of which support multiple goals across teams. Ultimately, FinOps is a way to bring cloud and operations teams into alignment, helping organizations get the most out of their cloud strategies.

          Where do we stand today?

          FinOps teams are facing many challenges in 2023, but with the right priorities in place, they can overcome them and control cloud consumption effectively. Cloud providers are also playing a more proactive role in the growing maturity of FinOps by providing recommendations for the use of their services.

          In many ways, the technical challenges of FinOps are the easy part. The key lies in building the right organizational structures – and that’s where our next blog will turn.

          A leader in cloud and operational optimization, Capgemini is helping organizations around the world optimize their cloud services, saving money and lowering their carbon footprint.

          Looking to go deeper into FinOps? Stay tuned for our new white paper!

          Author

          Thomas Sarrazin

          Global FinOps Offer Leader, Cloud Infrastructure Services, Capgemini France
          Thomas is a seasoned leader in cloud infrastructure, with a proven track record of driving innovation in the information technology and services industry. As the Global FinOps offer leader, he leverages his extensive experience in engineering and cloud solution design to deliver robust, cost effective, and scalable solutions. Thomas is adept at aligning business processes with technology, conducting comprehensive requirements analysis, and implementing enterprise software and architecture.

            ChatGPT and I have trust issues

            Tijana Nikolic
            30 March 2023

            Disclaimer: This blog was NOT written by ChatGPT, but by a group of human data scientists: Shahryar Masoumi, Wouter Zirkzee, Almira Pillay, Sven Hendrikx, and myself.

            Stable diffusion generated image with prompt = “an illustration of a human having trust issues with generative AI technology”

            Whether we are ready for it or not, we are currently in the era of generative AI, with the explosion of generative models such as DALL-E, GPT-3, and, notably, ChatGPT, which racked up one million users in one day. Recently, on March 14th, 2023, OpenAI released GPT-4, which caused quite a stir, with thousands of people lining up to try it.

            Generative AI can be used as a powerful resource to aid us in the most complex tasks. But like with any powerful innovation, there are some important questions to be asked… Can we really trust these AI models? How do we know if the data used in model training is representative, unbiased, and copyright safe? Are the safety constraints implemented robust enough? And most importantly, will AI replace the human workforce?

            These are tough questions that we need to keep in mind and address. In this blog, we will focus on generative AI models, their trustworthiness, and how we can mitigate the risks that come with using them in a business setting.

            Before we lay out our trust issues, let’s take a step back and explain what this new generative AI era means. Generative models are deep learning models that create new data. Building on predecessors such as chatbots, VAEs, GANs, and transformer-based NLP models, they hold an architecture that can create new data points based on the original data used to train them. And today, we can do all of this from just a text prompt!

            The evolution of generative AI, with 2022 and 2023 bringing about many more generative models.

            We can consider chatbots as the first generative models, but looking back we’ve come very far since then, with ChatGPT and DALL-e being easily accessible interfaces that everyone can use in their day-to-day. It is important to remember these are interfaces with generative pre-trained transformer (GPT) models under the hood.

            The widespread accessibility of these two models has brought about a boom in the open-source community where we see more and more models being published, in the hopes of making the technology more user-friendly and enabling more robust implementations.

            But let’s not get ahead of ourselves just yet — we will come back to this in our next blog. What’s that infamous Spiderman quote again?

            With great power…

            The generative AI era has so much potential in moving us closer to artificial general intelligence (AGI) because these models are trained to understand language but can also perform a wide variety of other tasks, in some cases even exceeding human capability. This makes them very powerful in many business applications.

            Starting with the most common: text applications, which are fueled by GPT and GAN models. Covering everything from text generation to summarization and personalized content creation, these can be used in education, healthcare, marketing, and day-to-day life. The conversational component of text applications is used in chatbots and voice assistants.

            Next, code-based applications are fueled by the same models, with GitHub Copilot as the most notable example. Here we can use generative AI to complete our code, review it, fix bugs, refactor, and write code comments and documentation.

            On the topic of visual applications, we can use DALL-E, Stable Diffusion, and Midjourney. These models can be used to create new or improved visual material for marketing, education, and design. In the health sector, we can use these models for semantic translation, where semantic images are taken as input and a realistic visual output is generated. 3D shape generation with GANs is another interesting application in the video game industry. Finally, text-to-video editing with natural language is a novel and interesting application for the entertainment industry.

            GANs and sequence-to-sequence automatic speech recognition (ASR) models (such as Whisper) are used in audio applications. Their text-to-speech application can be used in education and marketing. Speech-to-speech conversion and music generation have advantages for the entertainment and video game industry, such as game character voice generation.

            Some applications of generative AI in industries.

            Although powerful, such models also come with societal limitations and risks, which are crucial to address. For example, generative models are susceptible to unexplainable or faulty behavior, often because the data can have a variety of flaws, such as poor quality, bias, or just straight-up wrong information.

            So, with great power indeed comes great responsibility… and a few trust issues

            If we take a closer look at the risks regarding ethics and fairness in generative models, we can distinguish multiple categories of risk.

            The first major risk is bias, which can occur in different settings. An example of bias is the use of stereotypes around race, gender, or sexuality, which can lead to discrimination and unjust or oppressive answers generated by the model. Another form of bias lies in the model’s word choice: its answers should be formulated without toxic or vulgar content or slurs.

            One example of a language model that learned a wrong bias is Tay, a Twitter bot developed by Microsoft in 2016. Tay was created to learn, by actively engaging with other Twitter users by answering, retweeting, or liking their posts. Through these interactions, the model swiftly learned wrong, racist, and unethical information, which it included in its own Twitter posts. This led to the shutdown of Tay, less than 24 hours after its initial release.

            Large language models (LLMs) like ChatGPT generate the most relevant answer given their constraints, but that answer is not always 100% correct and can contain false information. Currently, such models present their answers as confident statements, which can be misleading when they are wrong. Such events, where a model confidently makes inaccurate statements, are also called hallucinations.

            In 2023, Microsoft released a GPT-backed model to empower its Bing search engine with chat capabilities. However, there have already been multiple reports of undesirable behavior by this new service. It has threatened users with legal consequences or exposed their personal information. In another situation, it tried to convince a tech reporter that he was not happily married, that he was in love with the chatbot (it also proclaimed its love for the reporter), and that he should consequently leave his wife (you see why we have trust issues now?!).

            Generative models are trained on large corpora of data, which in many cases are scraped from the internet. This data can contain private information, creating a privacy risk, as it can unintentionally be learned and memorized by the model. This private data concerns not only people, but also project documents, code bases, and works of art. When medical models are used to diagnose patients, it could also include private patient data. This also ties into copyright when memorized private data appears in a generated output. For example, there have even been cases where image diffusion models have reproduced slightly altered signatures or watermarks learned from their training sets.

            The public can also maliciously use generative models to harm/cheat others. This risk is linked with the other mentioned risks, except that it is intentional. Generative models can easily be used to create entirely new content with (purposefully) incorrect, private, or stolen information. Scarily, it doesn’t take much effort to flood the internet with maliciously generated content.

            Building trust takes time…and tests

            To mitigate these risks, we need to ensure the models are reliable and transparent through testing. Testing of AI models comes with some nuances when compared to testing of software, and they need to be addressed in an MLOps setting with data, model, and system tests.

            These tests are captured in a test strategy at the very start of the project (problem formulation). In this early stage, it is important to capture key performance indicators (KPIs) to ensure a robust implementation. Next to that, assessing the impact of the model on the user and society is a crucial step in this phase. Based on the assessment, user subpopulation KPIs are collected and measured against, in addition to the performance KPIs.

            An example of a subpopulation KPI is model accuracy on a specific user segment, which needs to be measured on data, model, and system levels. There are open-source packages that we can use to do this, like the AI Fairness 360 package.
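
For illustration, the sketch below computes that kind of subpopulation KPI with plain pandas: accuracy per user segment compared against overall accuracy. The column names and values are hypothetical; toolkits such as AI Fairness 360 provide richer, standardized fairness metrics built on the same idea.

```python
import pandas as pd

# Hypothetical evaluation data: true labels, model predictions, and a user segment column
df = pd.DataFrame({
    "segment":    ["A", "A", "A", "B", "B", "B", "B"],
    "label":      [1,   0,   1,   1,   0,   0,   1],
    "prediction": [1,   0,   0,   0,   0,   1,   1],
})

df["correct"] = (df["label"] == df["prediction"]).astype(int)

# Subpopulation KPI: accuracy per segment, compared against the overall accuracy
per_segment = df.groupby("segment")["correct"].mean()
overall = df["correct"].mean()

print(per_segment)
print(f"overall accuracy: {overall:.2f}")
print(f"largest gap vs overall: {(per_segment - overall).abs().max():.2f}")
```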

            Data testing can be used to address bias, privacy, and false information (consistency) trust issues. We make sure these are mitigated through exploratory data analysis (EDA), with assessments on bias, consistency, and toxicity of the data sources.

            The data bias mitigation methods vary depending on the data used for training (images, text, audio, tabular), but they boil down to re-weighting the features of the minority group, oversampling the minority group, or under-sampling the majority group.
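
As an example of one of these mitigations, the sketch below oversamples an under-represented group with scikit-learn's resample utility until it matches the majority group's size. The dataset is hypothetical; the choice between re-weighting, oversampling, and under-sampling depends on the data and the model.

```python
import pandas as pd
from sklearn.utils import resample

# Hypothetical training data with an under-represented group
df = pd.DataFrame({
    "feature": range(10),
    "group":   ["majority"] * 8 + ["minority"] * 2,
})

majority = df[df["group"] == "majority"]
minority = df[df["group"] == "minority"]

# Oversample the minority group (with replacement) until it matches the majority size
minority_upsampled = resample(
    minority,
    replace=True,
    n_samples=len(majority),
    random_state=42,
)

balanced = pd.concat([majority, minority_upsampled])
print(balanced["group"].value_counts())
```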

            These changes need to be documented and reproducible, which is done with the help of data version control (DVC). DVC allows us to commit versions of data, parameters, and models in the same way “traditional” version control tools such as git do.

            Model testing focuses on model performance metrics, which are assessed through training iterations with validated training data from previous tests. These need to be reproducible and saved with model versions. We can support this through open-source MLOps packages like MLflow.
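
A minimal sketch of that kind of tracking with MLflow is shown below: parameters, metrics, and a tag pointing at the data version are logged for each run so results stay reproducible. The experiment name, values, and DVC revision tag are illustrative assumptions.

```python
import mlflow

# Illustrative values; in practice these come from the actual training and testing run
params = {"model_type": "gradient_boosting", "learning_rate": 0.05}
metrics = {"accuracy": 0.91, "subpopulation_accuracy_gap": 0.03}

mlflow.set_experiment("generative-model-testing")  # hypothetical experiment name

with mlflow.start_run(run_name="bias-mitigated-training"):
    mlflow.log_params(params)        # record the configuration used
    mlflow.log_metrics(metrics)      # record the test results for this model version
    mlflow.set_tag("data_version", "dvc:rev-abc123")  # hypothetical pointer to the DVC data revision
```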

            Next, model robustness tests like metamorphic and adversarial tests should be implemented. These tests help assess if the model performs well on independent test scenarios. The usability of the model is assessed through user acceptance tests (UAT). Lags in the pipeline, false information, and interpretability of the prediction are measured on this level.
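
The sketch below shows the shape of a simple metamorphic test: label-preserving transformations of the input (extra whitespace, lowercasing) should not materially change the model's score. The model object and its predict_proba interface are hypothetical stand-ins for whatever model is under test.

```python
def metamorphic_invariance_test(model, text, transforms, tolerance=0.05):
    """Check that label-preserving transformations do not materially change the model score.

    `model` is a hypothetical stand-in exposing predict_proba(text) -> float.
    """
    baseline = model.predict_proba(text)
    failures = []
    for name, transform in transforms.items():
        score = model.predict_proba(transform(text))
        if abs(score - baseline) > tolerance:
            failures.append((name, baseline, score))
    return failures

# Label-preserving transformations for a text classifier
transforms = {
    "extra_whitespace": lambda t: "  " + t.replace(" ", "  ") + "  ",
    "lowercase": lambda t: t.lower(),
}

# Hypothetical usage with a model under test:
# failures = metamorphic_invariance_test(sentiment_model, "The cabin is quiet", transforms)
# assert not failures, f"Metamorphic test failed: {failures}"
```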

            In terms of ChatGPT, a UAT could be constructed around assessing if the answer to the prompt is according to the user’s expectation. In addition, the explainability aspect is added — does the model provide sources used to generate the expected response?

            System testing is extremely important to mitigate malicious use and false information risks. Malicious use needs to be assessed in the first phase and system tests are constructed based on that. Constraints in the model are then programmed.

            OpenAI is aware of possible malicious uses of ChatGPT and has incorporated safety as part of its strategy. The company has described how it tries to mitigate some of these risks and limitations. In a system test, these constraints are validated in real-life scenarios, as opposed to the controlled environments used in previous tests.

            Let’s not forget about model and data drift. These are monitored, and retraining mechanisms can be set up to ensure the model stays relevant over time. Finally, the human-in-the-loop (HIL) method is also used to provide feedback to an online model.
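
One common way to monitor data drift is the population stability index (PSI), sketched below: it compares the distribution of a feature at training time with what the model sees in production, and a value above roughly 0.2 is often treated as a signal to investigate or retrain. The data and the threshold here are illustrative.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two samples of the same feature; PSI above ~0.2 is often treated as significant drift."""
    cuts = np.percentile(expected, np.linspace(0, 100, bins + 1))
    cuts[0] = min(cuts[0], actual.min()) - 1e-9   # widen edges so both samples are covered
    cuts[-1] = max(cuts[-1], actual.max()) + 1e-9
    exp_pct = np.histogram(expected, bins=cuts)[0] / len(expected)
    act_pct = np.histogram(actual, bins=cuts)[0] / len(actual)
    exp_pct = np.clip(exp_pct, 1e-6, None)        # avoid division by zero / log(0)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Hypothetical example: training-time feature values vs. values seen in production
rng = np.random.default_rng(0)
training_values = rng.normal(0.0, 1.0, 5000)
production_values = rng.normal(0.4, 1.2, 5000)    # the distribution has shifted

psi = population_stability_index(training_values, production_values)
print(f"PSI = {psi:.3f} -> {'retraining recommended' if psi > 0.2 else 'stable'}")
```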

            ChatGPT and Bard (Google’s chatbot) offer the possibility of human feedback through a thumbs up/down. Though simple, this feedback is used to retrain and align the underlying models to users’ expectations, producing more relevant responses in future iterations.

            To trust or not to trust?

            Just like the internet, truth and facts are not always given — and we’ve seen (and will continue to see) instances where ChatGPT and other generative AI models get it wrong. While it is a powerful tool, and we completely understand the hype, there will always be some risk. It should be standard practice to implement risk and quality control techniques to minimize the risks as much as possible. And we do see this happening in practice — OpenAI has been transparent about the limitations of their models, how they have tested them, and the governance that has been set up. Google also has responsible AI principles that they have abided by when developing Bard. As both organizations release new and improved models — they also advance their testing controls to continuously improve quality, safety, and user-friendliness.

            Perhaps we can argue that using generative AI models like ChatGPT doesn’t necessarily leave us vulnerable to misinformation, but more familiar with how AI works and its limitations. Overall, the future of generative AI is bright and will continue to revolutionize the industry if we can trust it. And as we know, trust is an ongoing process…

            In the next part of our Trustworthy Generative AI series, we will explore testing LLMs (bring your techie hat) and how quality LLM solutions lead to trust, which in turn, will increase adoption among businesses and the public.

            This article first appeared on SogetiLabs blog.

            Understanding 5G security

            Aarthi Krishna
            29 Mar 2023

            5G powers the new era of wireless communication, and to unleash its potential it must be secure. To better understand its security challenges and how to conduct a risk assessment, it’s important to know why 5G and its security ecosystem differ from its predecessor.

            Why 5G security?

            5G is the fifth generation of cellular technology, offering faster speeds and lower latency compared to 4G. It makes the connected era and Internet of Things (IoT) possible, and whether it’s smart cities, steelmaking, or healthcare, few industries will be untouched by its capabilities.

            There are two types of 5G networks: public and private –

            • Public 5G networks are primarily used by retail customers for smartphones and other day-to-day devices connected to the internet. Owned and operated by mobile carriers, public networks are available to anyone who subscribes to their service. Because these networks are established by telco providers, security rests with the providers for the most part.
            • Private 5G networks are not accessible to the public. They are owned and operated by a single entity, such as a company or government agency, and are used to connect devices within a specific location or facility. For example, a factory might set up a private 5G network to connect its machines and other equipment to streamline operations and improve efficiency.

            Most companies using 5G for manufacturing and operations will need to build a private network or employ a hybrid model of public and private, fitted to the requirements. Whichever model a company uses must be underpinned by robust security frameworks.

            5G security is complex because, unlike 4G, it operates outside the perimeter of dedicated equipment, servers, and protocols. Instead, a highly vulnerable software ecosystem of virtualized RAN and cloud-forward services constitutes its core network. The concept of 5G security is new and evolving, which is why it’s essential to be alert to the challenges and develop and deploy new security measures in response.

            5G security challenges

            The introduction of new use cases, new business models, and new deployment architectures makes securing 5G networks more challenging. But without a cohesive approach to mitigating the security risks, it can be difficult to ensure that all potential vulnerabilities are identified and addressed.

            These are the key security challenges for 5G as we see them:

            • Increased attack surface: Millions of new connected devices are entering the digital ecosystem, which increases the attack surface exponentially. Many IoT devices are vulnerable and unprotected and typically operate with lower processing power, making them easy targets for attackers. This makes implementing zero-trust frameworks with true end-to-end coverage critical for protection against threats.
            • New paradigms for telco: With 5G, the telco ecosystem is essentially inheriting IT challenges requiring a software security mindset. Whether public or private, 5G’s virtualized network architecture creates a new supply chain for software, hardware, and services, and this “virtualization” of traditional single-vendor hardware is a major security challenge. It’s time for professionals to acquaint themselves with network function virtualization (NFV), virtualized network functions (VNFs), service-based architectures (SBAs), software-defined networks (SDNs), network slicing, and edge computing.
            • Operational challenges: The requirements and capabilities needed to monitor a 5G network are different from those for IT and OT. This means the tools used for monitoring IT and OT networks cannot be retrofitted or scaled for the cellular world, so 5G requires new tools and new capabilities. This involves training new people to understand the protocols and use cases.
            • The complexity of implementation: There is no one way to build 5G architecture. It depends on the requirement of the organization and, as a result, the specification range can be extensive. Trying to bring these models together and manage them is one part of the challenge; the other is finding skilled professionals who know how to do it. Consequently, the margin for human error is another factor to bear in mind.
            • Increased number of stakeholders: Finally, the industry recognizes that the success of building 5G networks is dependent on the entire ecosystem of hardware and software vendors spanning multiple suppliers, from chip vendors to cloud providers. Coordinating new stakeholders and their security efforts while ensuring that all potential vulnerabilities are covered is likely to be challenging. Note that different stakeholders may have different levels of knowledge and expertise when it comes to security.

            Introducing 5G risk assessment

            5G security is extensive and there are multiple parts to be cognizant of to understand where the risks and vulnerabilities are when running a network. You’ll see this mapped out into horizontal and vertical layers in the diagram. To conduct a comprehensive risk assessment of 5G, both axes need to be secured. Knowing where to start involves understanding what constitutes each layer:

            • 5G horizontal security is the sum of five parts: user equipment, radio access, edge/multi-access edge computing, core network, and the cloud. Due diligence is necessary in every area to ensure assets are protected from confidentiality, integrity, and availability attacks.
            • 5G vertical security is the sum of four layers: the product, the network, the applications, and the security operations layer on top. This is generally referred to as “chip-to-cloud” security, particularly in the context of IoT devices.

            A risk assessment, therefore, has to be holistic in nature, covering every aspect of the horizontal and vertical layers with due consideration of the threats, vulnerabilities, and assets that touch each of the specific components in the architecture. Such a risk assessment must also address any regional and industrial compliance requirements, and we will discuss this later in the series.

            At Capgemini, we know that building and securing a 5G network is complex. We also know that everything must be protected end-to-end and in unison for it to work effectively. With deep technology, business, and engineering expertise, Capgemini has the unique capability to guide you on the 5G security journey end-to-end.

            Security today adds value to a business tomorrow, and realizing the possibilities of a new, truly Intelligent era relies on it. Our experts can help you maximize the benefits.

            The next blog in the series will consider how to conduct a robust risk assessment and monitoring in more detail. 

            Meet the authors

            Aarthi Krishna

            Global Head, Intelligent Industry Security, Capgemini
            Aarthi Krishna is the Global Head of Intelligent Industry Security with the Cloud, Infrastructure and Security (CIS) business line at Capgemini. In her current role, she is responsible for the Intelligent Industry Security practice, with a portfolio focused on both emerging technologies (such as OT, IoT, 5G, and DevSecOps) and industry verticals (such as automotive, life sciences, and energy and utilities), to ensure our clients can benefit from a true end-to-end cyber offering.

            Kiran Gurudatt

            Director, Cybersecurity, Capgemini

              Serendipity systems: Building world-class personalization teams

              Neerav Vyas
              29 March 2023

              The last best experience we have anywhere sets the bar for all experiences everywhere. Consumers don’t want just personalization – they’re demanding it. Delivering personalization is no longer bar-raising. Organizations need to move from providing personalization as a feature to delivering serendipitous experiences. The challenge then is serendipity at scale or obsolescence with haste. Without the right teams, organizations are speeding toward obsolescence.

              Great basketball teams and great personalization teams have a lot in common.

              Imagine a shopping experience that’s completely generic. Worse than generic, it goes out of its way to recommend things you don’t want. It recommends actions that are the opposite of what you’re looking to do. It’s perfectly frustrating. How long will a business based on that sort of experience last?

              Now imagine a personalization experience that knows you so well it’s constantly providing you with serendipitously delightful experiences. You’re discovering things you never knew you wanted. But you’re never allowed to use it because the experience never sees the light of day. The MVP never becomes an available product.

              Both scenarios are terrible. Unfortunately, a variation of the second is more common. 77% of AI and analytics projects struggle to gain adoption. Fewer than 10 percent of analytics and AI projects make an impact financially because 87 percent of these fail to make it into production. What if we could flip the odds? What if rather than most recommendation projects failing, most of them succeeded? Cross-functional, product-centric, teams can do just that. It’s how innovators like Amazon and Netflix were able to succeed so quickly and so often in their personalization programs. It’s also been critical for the dozens of successful personalization programs we’ve delivered at Capgemini.

              Recommendation experiences

              Everything is a recommendation. That insight came from Netflix: “the Starbucks secret is a smile when you get your latte, ours is that the website adapts to the individual’s taste,” said Reed Hastings, co-founder of Netflix. Recommendations weren’t features or algorithms. They were the experience; the means to delight, surprise, frustrate, or anger customers. At Amazon, Jeff Bezos’ original goal was a store for every customer. This wasn’t AI for the sake of AI. Both companies made personalization central to their experiences, and personalization enabled Amazon and Netflix’s visions for more innovative, delightful, and serendipitous experiences. Recommendation experiences (RX) were critical to customer experiences (CX). Experiences were the product.

              Building products is hard. Josh Peterson co-founded the P13N (personalization) team at Amazon. He described the early days of Amazon as challenging because the company was siloed. Design, editorial, and software engineering were fragmented. “It was really hard to ever get anything all the way out to the site without begging and borrowing people from silos. The one time it was always different was when we did a product launch… So, if there was a big enough effort like launching music or auctions then you had permission to borrow everyone to put together your team.” In the early days of Amazon, there were many engineering efforts around personalization. Even though these efforts were led by brilliant engineers, they saw limited success. It wasn’t until after the launch of Amazon Auctions that personalization made a real impact.

              After Amazon Auctions, Peterson and Greg Linden looked to make Bezos’ vision for a personalized store for every customer a reality. The goal was a team that could “own its whole space,” to break silos to create a cross-functional team to rapidly experiment and deliver. This was the first team, outside of the design organization, to have designers in their team embedded with web developers and technical project managers. This enabled a higher number of launches compared to other teams. The impact of their model was so successful that it became the basis of Amazon’s famous “Two Pizza Team” approach – essentially a team small enough that they could be fed with two pizzas. Small teams that were decentralized, autonomous, and were “owners” of the business could move faster and launch more experiments. More experiments would enable them to have more successful innovations.

              Experimentation

              Successful personalization teams foster a culture of experimentation. Creating a culture of experimentation requires diverse, multi-disciplinary teams. Below we show the various skillsets and domains that are required for modern personalization teams. The circles don’t represent people, they represent skills. Great basketball teams and great personalization teams have a lot in common. In basketball, you need defense. You need offense, both close to the rim and from afar. You need diversity in skillsets. You could get lucky and find a unicorn but fielding multiple teams of unicorns is not practical. Creating a team of all-stars sounds good on paper, but there are plenty of examples where those super teams fail to live up to expectations. A team without a diverse set of skills is unlikely to be very successful, and almost certainly not great.

              “Experimentation requires blending creativity and data. Practically, this becomes a blend of statistics, behavioral economics, psychology, marketing, and expertise in experience design.”

              Small teams with most of the skills above are more likely to do end-to-end personalization well. No one person will have all the skills needed, but together they’ll bring more experiments to the table. Early Amazon teams were engineering and data-science heavy. It wasn’t until the addition of design, business expertise, and a product-centric approach that they were able to execute end-to-end and achieve Bezos’ vision.

              Velocity is a leading indicator. Successful personalization teams test many ideas. They break experiments into small chunks so no one failure is large enough to disrupt the business. They test and learn quickly. Testing a dozen ideas and refining them will be more efficient than trying to make one idea “perfect.” Our intuition on what is going to work is often wrong. Testing many ideas allows the data and results to guide us, rather than intuition. This requires personalization teams to develop many ideas end-to-end quickly.
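
Letting the data decide typically comes down to simple statistics on each small experiment. The sketch below evaluates one hypothetical recommendation test with a two-proportion z-test on conversion rates; the traffic and conversion numbers are made up for illustration.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical results from one small recommendation experiment
control_users, control_conversions = 10_000, 520   # existing experience
variant_users, variant_conversions = 10_000, 570   # new recommendation idea

p_control = control_conversions / control_users
p_variant = variant_conversions / variant_users
p_pooled = (control_conversions + variant_conversions) / (control_users + variant_users)

# Two-proportion z-test: is the observed lift more than noise?
standard_error = sqrt(p_pooled * (1 - p_pooled) * (1 / control_users + 1 / variant_users))
z = (p_variant - p_control) / standard_error
p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided

print(f"lift: {100 * (p_variant - p_control) / p_control:.1f}%  z={z:.2f}  p={p_value:.3f}")
```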

              What does the future hold? Cross-functional, product-centric teams are the beginning, not the end. Experimentation requires blending creativity and data. Practically, this becomes a blend of statistics, behavioral economics, psychology, marketing, and expertise in experience design.

              These teams need to track which features drive results to understand what is working and what is not. The goal is to achieve consistent and reliable serendipity from personalization efforts. The obvious is not serendipitous. Experimentation is needed to discover that which is not obvious and that which drives business outcomes. Without that, we can’t scale serendipity.

              INNOVATION TAKEAWAYS

              DIVERSITY LEADS TO SPEED

              Speed leads to innovation. Diversity leads to innovation. End-to-end cross-functional teams with dedicated resources are more likely to successfully implement personalization programs and innovate faster than their peers.

              A CULTURE OF EXPERIMENTATION IS CRITICAL

              Velocity, variety, and volume of experiments are leading indicators of innovation. “Our success at Amazon is a function of how many experiments we do per year, per month, per week, per day.” – Jeff Bezos

              SPEED IS A COMPETITIVE ADVANTAGE

              Testing and learning iteratively as well as being able to deploy quickly contribute to faster speed to market. “Companies rarely die from moving too fast, they frequently die from moving too slowly.” – Reed Hastings

              Interesting read?

              Capgemini’s Innovation publication, Data-powered Innovation Review | Wave 5, features 19 such articles crafted by leading Capgemini and partner experts, about looking beyond the usual surroundings and being inspired by new ways to elevate data & AI. Explore the articles on serendipity, data like poker, the circular economy, or data mesh. In addition, several articles are in collaboration with key technology partners such as AWS, Denodo, Databricks, and Dataiku. Find all previous Waves here.

              Author:

              Chloe Cheau 

              Customer First Head of CDP and Experience Engineering
              Chloe drives strategy and delivery of innovative Data and Analytics solutions for her clients by leveraging her expertise in Data Engineering, Machine Learning, and AI. She leads beta programs for partners, delivers proof-of-concepts, and provides technical points of view and thought leadership for offerings and solutions.

                The 11 ways in which the metaverse is shifting software development  

                Gunnar Menzel
                28 Mar 2023

                Over the past 70 years, we have seen many technology disruptions that impacted the way we design, develop, and deploy software: the invention of C, the emergence of the personal computer, the rise of the internet, and the move from waterfall to agile, to name but a few.

                However, nothing compares to what might be about to happen – the convergence of artificial intelligence (AI), blockchain, and 6G/satellite connectivity combined with concepts like the metaverse will change the way we design, develop, and deploy software. For the purpose of this short blog, I will focus on the metaverse and the effect it might have on software developers. 

                What is the metaverse? 

                The metaverse is a virtual reality that allows us to interact with a fully virtual (and immersive) environment just as we do in real life, doing the same things we would in real life. According to Wikipedia, the metaverse “is a hypothesized iteration of the internet, supporting persistent online 3-D virtual environments through conventional personal computing, as well as virtual and augmented reality headsets.” A Capgemini publication focusing on metaverse in healthcare defines it as “a container of 2D or 3D virtual spaces, a persistent place parallel to the physical world, aiming to combine online digital and real-time interactions with the sense of presence.”

                An immersive experience

                For years, games like Roblox and Fortnite, but also older games like World of Warcraft, Minecraft, or Second Life, have developed a parallel virtual world where players can engage and connect with others in a mostly fantasy-like landscape. To illustrate the concept, one could also draw parallels with the film The Matrix; in the film, the main character “moves” between two reality-like parallel worlds.  

                Many consider the metaverse as the internet V3, with V1 back in the 1990s, and the emergence of social media at the start of the 2000s as V2. Several use cases for the metaverse exist: for example, in the smart city space and in healthcare. However, there are also some who are more skeptical, who believe that the metaverse is already part of the past. The truth might lie somewhere in between. What seems clear, however, is that either the metaverse or part of the various metaverse concepts will impact the way we develop software: 

                1. Moving away from mouse and keyboard 

                When Apple unveiled the iPhone in 2007, it heralded the beginning of the end for physical mobile phone keyboards. With the emergence of the metaverse, we might see the same happening to our PCs. The mouse, invented by Douglas Engelbart back in 1964 and still the de facto PC input device next to the keyboard, might slowly be replaced by gesture, speech, and movement for end users (some state that mind-control devices might also become more mainstream). Of course, VR has been around since the mid-1990s, following its invention in 1968, but due to various factors it has not quite hit the mainstream. This might change now that developments in the metaverse have started, with more vendors announcing they are developing MR devices – Apple started production in March 2023. 

                For developers, the shift away from using text for coding is still a big unknown. If the shift occurs, then text input devices will slowly disappear. If it does not, then developers will have to deal with both traditional physical and new virtual ways of working. In any case, designing and developing software that supports different data input devices will require different skills, techniques, and tools compared to relying on mouse and keyboard only. It seems most likely that we will see a convergence in which developers use a mixture of traditional physical and new virtual ways of working.  
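                To make this concrete, here is a minimal sketch of what speech-driven input could look like in a web-based tool, assuming a browser that exposes the (not yet fully standardized) Web Speech API; the voice commands and their handlers are purely hypothetical.

```typescript
// Minimal sketch: mapping spoken phrases to actions instead of keyboard shortcuts.
// The commands below are hypothetical; the Web Speech API is still prefixed in
// some browsers, hence the webkit fallback and the loose typing via `any`.
const commands: Record<string, () => void> = {
  "run build": () => console.log("Build triggered by voice"),
  "open scene": () => console.log("Scene opened by voice"),
};

const SpeechRecognitionCtor =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

if (SpeechRecognitionCtor) {
  const recognition = new SpeechRecognitionCtor();
  recognition.continuous = true; // keep listening rather than stopping after one phrase

  recognition.onresult = (event: any) => {
    // Take the transcript of the most recent recognition result.
    const phrase: string = event.results[event.results.length - 1][0].transcript
      .trim()
      .toLowerCase();
    commands[phrase]?.(); // run the matching command, if any
  };

  recognition.start();
} else {
  console.warn("Speech input not available; falling back to keyboard");
}
```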

                2. The move from 2D monitor screen interactions to full 3D with the use of VR, AR, MR, and XR 

                It is not just our traditional user input devices that might change. We might also see our traditional user interaction devices change. Over the past 30 years, the PC monitor only changed in terms of resolution and size, but not really in concept: a screen that displays data in visual and text form, projected on a two-dimensional screen. The touch screen tried to allow for a better experience but failed to really take off. Driven by the metaverse, we might see a shift from today’s PC-based fixed and two-dimensional monitors to the use of mixed physical and virtual reality devices. Using virtual reality (VR) headsets or mixed reality (MR) glasses for user interaction combined further with either smartphones, gesture, or even mind reading might fundamentally change the way we design and develop applications. It is very likely that the shift will be a gradual process. The emergence of MR for both end users interacting with applications and for developers designing, developing, and delivering code might still be a way off. However, software developers must master the new (and currently various) software development kits (SDKs) to ensure that they can establish fully seamless and fluid interactions.    
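                As a rough illustration of what such an SDK looks like today, the sketch below uses the browser-based WebXR API to start an immersive session and react to a controller or gesture “select” instead of a mouse click. It assumes a WebXR-capable browser and available type definitions (e.g., the @types/webxr package); placeObjectAt is a hypothetical scene function.

```typescript
// Minimal sketch: replacing a mouse click with a WebXR "select" gesture.
// Assumes WebXR type definitions are available; `placeObjectAt` is hypothetical.
async function enterImmersiveMode(placeObjectAt: (pose: unknown) => void) {
  if (!navigator.xr || !(await navigator.xr.isSessionSupported("immersive-vr"))) {
    console.warn("Immersive VR not available; keeping the 2D interface");
    return;
  }

  const session = await navigator.xr.requestSession("immersive-vr");
  const refSpace = await session.requestReferenceSpace("local");

  // "select" fires on the controller's primary action, a hand gesture,
  // or gaze, depending on the device.
  session.addEventListener("select", (event: any) => {
    const pose = event.frame.getPose(event.inputSource.targetRaySpace, refSpace);
    placeObjectAt(pose);
  });
}
```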

                3. New development platforms  

                With the advent of the metaverse, organizations and communities are also starting to develop new programming languages. For instance, in December 2022, Epic Games launched Verse, a programming language for the metaverse. Verse is focused on making it possible to create social interactions in a shared three-dimensional (3D) world in real time. It joins languages already used in the web3 space, such as Clarity, Solidity, and Rust, and draws on functional logic languages like Curry and Mercury. Verse also aims to support interoperable content by utilizing operational standards from several game engines, such as Unity, and live upgrades of active code. Another example is Solidity. Created by the Ethereum project, Solidity is a statically typed, object-oriented programming language designed for developing smart contracts that run on Ethereum and other blockchain systems. The question with all new programming languages is whether they will become more dominant or widespread. Clearly, only time will tell.  

                4. Testing  

                The quality of applications will matter just as much as it does today. However, with MR as well as digital-twin-type environments, testing across both physical and virtual devices will be different: new test automation will be needed to avoid manual interventions along the lines of “put the headset on, run the app, and see if it works.” The integration of MR and/or different VR devices, as well as the use of different platforms, might require different testing regimes.  
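                One way to avoid the “put the headset on and see” loop is to keep interaction logic behind a narrow interface that a fake device session can implement in tests. The sketch below is illustrative only: SceneController and FakeSession are hypothetical names, and the assertions assume a Jest-style test runner.

```typescript
// Minimal sketch: unit-testing headset interaction logic without a physical device.
interface SelectSession {
  addEventListener(type: "select", handler: () => void): void;
}

// Hypothetical application class: places an object each time the user "selects".
class SceneController {
  objectsPlaced = 0;
  constructor(session: SelectSession) {
    session.addEventListener("select", () => {
      this.objectsPlaced += 1;
    });
  }
}

// Fake session used only in tests; it lets us simulate gestures programmatically.
class FakeSession implements SelectSession {
  private handlers: Array<() => void> = [];
  addEventListener(_type: "select", handler: () => void): void {
    this.handlers.push(handler);
  }
  emitSelect(): void {
    this.handlers.forEach((h) => h());
  }
}

describe("SceneController", () => {
  it("places one object per select gesture, no headset required", () => {
    const session = new FakeSession();
    const controller = new SceneController(session);
    session.emitSelect();
    session.emitSelect();
    expect(controller.objectsPlaced).toBe(2);
  });
});
```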

                5. Being more aware of non-functional aspects like latency, security, and safety  

                Walking around with Google Glass or any other VR or MR device poses various risks, and developers must consider this when designing and developing metaverse-based solutions. In addition, latency – the time it takes for a service to respond (also sometimes referred to as “lag”) – is another aspect developers will have to consider more than in our current “traditional” 2D environment. User experience will be critical in the metaverse, and a fully immersive experience can only be achieved if the rendering is fully fluid and seamless. With end users being either mobile or stationary, and connecting over various transports (currently 5G, but soon 6G or even low-orbit satellites), the developed metaverse solution must take this fully into account. With these requirements, more “traditional” disciplines, like writing efficient netcode (the code that keeps clients and servers synchronized) and 3D engines, will become even more important.  
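                As a small example of what efficient netcode means in practice, the sketch below hides latency by rendering entity positions slightly in the past and interpolating between server snapshots; the snapshot shape and the 100 ms delay are illustrative assumptions, not fixed values.

```typescript
// Minimal sketch: snapshot interpolation to smooth out network latency and jitter.
interface Snapshot {
  timeMs: number; // server timestamp of this position update
  x: number;
  y: number;
  z: number;
}

const INTERPOLATION_DELAY_MS = 100; // render ~100 ms in the past (assumed value)
const buffer: Snapshot[] = [];

function onServerSnapshot(snapshot: Snapshot): void {
  buffer.push(snapshot); // a real client would also prune old snapshots
}

// Called every rendered frame: returns the position to draw at time `nowMs`.
function interpolatedPosition(nowMs: number): { x: number; y: number; z: number } | undefined {
  const renderTime = nowMs - INTERPOLATION_DELAY_MS;
  // Find the two snapshots that bracket the render time and blend between them.
  for (let i = buffer.length - 1; i > 0; i--) {
    const prev = buffer[i - 1];
    const next = buffer[i];
    if (prev.timeMs <= renderTime && renderTime <= next.timeMs) {
      const t = (renderTime - prev.timeMs) / (next.timeMs - prev.timeMs);
      return {
        x: prev.x + (next.x - prev.x) * t,
        y: prev.y + (next.y - prev.y) * t,
        z: prev.z + (next.z - prev.z) * t,
      };
    }
  }
  // No bracketing pair yet: fall back to the most recent known position.
  return buffer.length > 0 ? buffer[buffer.length - 1] : undefined;
}
```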

                6. The move from two dominant mobile platforms (Android and Apple) to multiple platforms 

                The metaverse will require massive amounts of 3D content to engage users, and 3D is expensive to make, to understand, to store, and to transport. Developing a metaverse application involves creating a virtual experience for platforms such as HTC Vive, Oculus Quest, and other VR or MR systems. Popular developer tools for metaverse-focused 3D creation include Epic’s Unreal Engine, Unity, Amazon Sumerian, Autodesk’s Maya, and Blender. And then there are the various (at the time of writing) development platforms that offer metaverse-related tools and accelerators, like Webaverse, Ethereal Engine, JanusWeb, WebXR, Open Metaverse, Nvidia’s Omniverse, Hadea’s metaverse infrastructure, and the Microsoft metaverse stack.   

                7. Increased importance of application programming interfaces (APIs)  

                Interoperability (getting systems to talk to each other) will be one of the main challenges for developers writing metaverse applications. As with the advent of the internet in the mid-1990s, where multiple vendors as well as open communities developed and released new standards, the metaverse is also triggering numerous, and sometimes conflicting, standards. How it will all pan out is still open. However, what is clear is that software developers must have an excellent appreciation of data integration, particularly as data is being exchanged in real time between different platforms.   
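                In practice, an appreciation of data integration often comes down to validating whatever another platform sends before trusting it. The sketch below is one hedged illustration: the AvatarAsset shape and the endpoint are invented for the example and do not reflect any published metaverse standard.

```typescript
// Minimal sketch: validating a payload exchanged between platforms before use.
interface AvatarAsset {
  id: string;
  format: "gltf" | "usdz"; // illustrative formats only
  uri: string;
}

// Runtime type guard: platforms may disagree on schemas, so never assume shape.
function isAvatarAsset(value: unknown): value is AvatarAsset {
  const v = value as Partial<AvatarAsset>;
  return (
    typeof v?.id === "string" &&
    (v.format === "gltf" || v.format === "usdz") &&
    typeof v.uri === "string"
  );
}

async function fetchAvatar(baseUrl: string): Promise<AvatarAsset> {
  const response = await fetch(`${baseUrl}/avatars/me`); // hypothetical route
  const body: unknown = await response.json();
  if (!isAvatarAsset(body)) {
    throw new Error("Payload does not match the expected avatar schema");
  }
  return body;
}
```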

                8. Greater emphasis on real-time collaboration 

                As applications in the metaverse will be used in an interactive and real-time manner, applications written for the metaverse will have to respond to unpredictable events in real time while providing a seamless user experience. This means that software developers will have to apply statistical techniques, such as deep learning, to available data and real-time user interactions in order to predict a response or next step, without the software having been specifically programmed for that task. 

                9. Security and trust will be critical elements  

                The success of the metaverse will also depend on users trusting the virtual counterparts; this means active and passive security will be a critical element. As the metaverse will evolve around the real-time exchange of virtual assets, new ways of securing and controlling virtual assets and interactions in real time will be needed. This will include authentication and access control, data privacy, securing interactions and transactions, and protecting virtual assets. In addition, passive security-related aspects, like strong network security protecting from cyberattacks, hacking, and other security threats, will be needed.  

                10. The further use of tech like blockchain and NFTs 

                One of the main use cases in the metaverse is the trading of goods and services. Therefore, it is likely that technologies like blockchain and non-fungible tokens (NFTs) will support the exchange of virtual assets. This means that software developers should understand how to manage NFTs as well as distributed ledgers like blockchain.  
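                As a small, hedged illustration of what managing NFTs can involve, the sketch below reads the owner of an ERC-721 token using the ethers.js library (v6 API assumed); the RPC endpoint, contract address, and token ID are placeholders.

```typescript
// Minimal sketch: looking up the owner of a virtual asset represented as an ERC-721 NFT.
// Assumes ethers.js v6; the endpoint and contract address below are placeholders.
import { Contract, JsonRpcProvider } from "ethers";

// Only the single view function we need from the standard ERC-721 interface.
const ERC721_ABI = [
  "function ownerOf(uint256 tokenId) view returns (address)",
];

async function ownerOfAsset(contractAddress: string, tokenId: bigint): Promise<string> {
  const provider = new JsonRpcProvider("https://rpc.example.invalid"); // placeholder endpoint
  const nft = new Contract(contractAddress, ERC721_ABI, provider);
  return nft.ownerOf(tokenId); // resolves to the current owner's address
}
```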

                11. AI will impact software development  

                Another technology that will be part of the metaverse is AI. AI will be a key element of the metaverse: it will help with end-user personalization, with content creation for more immersive and engaging virtual environments, and with analysis of user behavior to identify trends and patterns, enabling developers to optimize the virtual world and provide a better user experience.  

                Even without the emergence of the metaverse, AI will impact software development significantly. AI is positively impacting the way we design and develop software in these areas:  

                1. Generating code: several AI tools can generate code, including DeepCoder developed by Microsoft, Kite, TabNine, and GitHub Copilot. 
                2. Automation: AI can automate repetitive and time-consuming tasks in software development, such as testing, debugging, and code optimization. 
                3. Quality: AI can improve the accuracy of software development by identifying potential bugs and vulnerabilities in code before it is deployed. 
                4. Efficient resource utilization: AI can help software developers optimize resource utilization, such as server capacity and memory usage, to ensure that applications run efficiently. 
                5. Increasing immersion: for instance, by making elements of the environment more dynamic and immersive. 
                6. Creating virtual worlds: through, for instance, “text-to-environment” or “text-to-world.” Instead of placing assets with a mouse and keyboard, a developer could simply describe the environment. 

                Today, many use cases exist where AI is aiding the entire software development process. The possible advent of the metaverse, or aspects of it, will further impact and change the way software developers work.  

                Summary 

                It is anyone’s guess as to whether the metaverse will indeed be the next incarnation of the internet. I remember an interview with David Bowie in 1999 in which he accurately predicted the impact the internet would have. He might have said the same about the metaverse today. In any case, technologies like VR, AR, MR, and AI will drive more and more user interactions into the virtual world, and software developers must deal with the shift in technology and the change in user experience. 

                Special thanks to Stuart Williams and Simon Spielmann, with some support from ChatGPT. 

                 

                Gunnar Menzel

                Chief Technology Officer North & Central Europe 
                “Technology is becoming the business of every public sector organization. It brings the potential to transform public services and meet governments’ targets, while addressing the most important challenges our societies face. To help public sector leaders navigate today’s evolving digital trends we have developed TechnoVision: a fresh and accessible guide that helps decision makers identify the right technology to apply to their challenges.”

                  Deliver a seamless sales experience across the lead-to-order lifecycle

                  Deepak Bhootra
                  28 Mar 2023

                  Frictionless, digitally augmented, data-driven sales operations drive operational excellence, increased value and competitive advantage across your business.

                  Just as professional rally drivers rely on a navigator to get them from A to B, so the sales function depends on strong sales operations support.

                  It’s the role of the sales operations team to generate, track, and progress sales leads; to capture, validate, and track opportunities as part of sales forecasting; to move those offers forward to the offer stage with a configured and competitive quote; and when the sale is made, to convert the purchase order into a valid sales order for fulfilment.

                  These responsibilities are beset by all kinds of challenges. Sales operations teams frequently find they lack accurate, easy-to-access data and insight-driven forecasting; that their sales technology is outdated; and that they have inadequate resources and roles that are not clearly defined. At the same time, teams constantly need both to recruit and retain talent, and to adapt to changing business models.

                  All these challenges often mean that sales operations teams spend much of their time dealing with day-to-day tactical issues when they would rather be thinking and acting strategically – looking ahead, developing plans, testing them, and then putting them to work.

                  Design, build – and transform

                  What’s needed is a smart, seamless sales operations model (think of this as a sales operations-as-a-service concept) that can be tailored to the culture, practices, and needs of the individual organization – and that empowers the people who use it.

                  It’s the bespoke nature of the model that makes the design stage so important. If a service provider is involved, it’s our view that the best approach is for that provider to work closely with its client organization, designing and mapping processes based on lived experience within the sales operations function, and also on relevant personas.

                  What should emerge from this deep dive into future aspirations and current practices is a target operating and service model. The organization and its service partner work together to design and set up services including policies, process rules, a control framework, and new ways of supporting sales operations team members.

                  The final stage in the transition is to move from current processes to a more streamlined and coherent smart digital model. Technology collapses processes and creates a tremendous opportunity to eliminate drag and to improve how internal and external users experience a process. Focusing on customer experience not only delivers hard gains (ROI, margins, etc.), but also qualitative benefits such as CSAT/NPS that translate to stickiness, repurchase, loyalty, and “mind-share.”

                  What does success look like?

                  At Capgemini, our digital sales solutions take advantage of innovative technologies and sales systems to integrate, streamline, and optimize sales touchpoints and processes across the lead-to-order lifecycle – delivering accurate, easy-to-access data, enhanced sales support, and data-driven sales analytics.

                  The aim is to enrich our clients’ digital sales strategy with relevant insights and data that drive operational excellence and efficiency across the sales function. And we’ve seen some truly transformative business outcomes, including 15–25% reductions in turnaround time, 3–5% improvements in win-rate, 15–25% increases in time returned to sales, and 10–20% improvements in net promoter score.

                  Everybody wins

                  Intelligent, integrated sales operations of this kind not only address those organizational challenges I outlined earlier in this article – they also provide increased value for a company’s customers and business partners.

                  When sales processes are efficient and cost-effective, and when sales operations teams are well informed and in control, everyone is happy.

                  To learn how Capgemini’s Empowered Sales Operations solution delivers frictionless, digitally augmented, data-driven sales operations that drives competitive advantage across your business, contact: deepak.bhootra@capgemini.com


                  Author

                  Deepak Bhootra

                  GTM Lead, Empowered Sales Operations, Capgemini’s Business Services
                  Deepak Bhootra is an established executive with two decades of global leadership experience. He delivers process excellence and sales growth for clients by optimizing processes and delivering seamless business transformation.