
EDGEOPS: The future of edge computing

Himanshu Pant
17 August 2022
capgemini-engineering

Edge computing is a model in which data processing happens near the site where the data is generated and collected, rather than on a remote server or in the cloud. Sensors are synchronized with edge servers that securely process real-time information and connect devices, such as PCs and mobile phones, to the network.

Modern organizations need edge computing because it offers new and better ways to improve operational efficiency, boost performance, strengthen security, automate core business activities, and maintain constant availability. The approach is widely used to drive the digital transformation of business practices. The additional computing power it offers lets organizations build a foundation for more autonomous systems, improving performance and efficiency while freeing individuals to focus on higher-value tasks.

EdgeOps combines the benefits of edge processing with edge-enhanced AI/ML inferencing, execution, and control, offering three levels of value that build on each other:

  1. real-time data, virtualization, and analytics with EdgeOps
  2. fast, versatile deployment of sophisticated models and applications
  3. adaptive control that enables machines to develop self-healing and self-optimizing capabilities.

Figure 1: EdgeOps architecture (Source: Capgemini Engineering)

Challenges

Deploying modern workloads such as machine learning and AI applications at the edge brings several challenges, including:

Edge Machine Learning

Gartner expects the number of connected devices to grow by 15 billion by 2029. As the volume and use of collected data increase, machine learning could unleash the full potential of edge analytics. However, edge locations have limited resources for machine learning tasks: developing a model sometimes requires sending data to the cloud for additional validation, and the reduced computational power at the edge makes this hybrid environment harder to manage. Deploying models at locations with low network visibility and limited bandwidth can also be problematic in terms of latency and connectivity.

Machine learning algorithms are built on extensive linear algebra operations and vector and matrix data processing. A specialized processing flow is required to meet the low-latency requirements of use cases such as self-driving vehicles and Unmanned Aerial Vehicles (UAVs). But traditional architectures are not optimized for such edge intelligence, which requires customized hardware support to deploy machine learning workloads. In addition, these workloads need to store and access the set of parameters that describes how the model operates, and neural network architectures access a massive number of memory locations for each classification. Deploying machine learning algorithms on resource-constrained devices is therefore challenging: memory accesses must be minimized, and data kept local (e.g., through data reuse), to avoid costly reads and writes to external memory.
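To make the memory pressure concrete, here is a back-of-the-envelope sketch, using hypothetical layer sizes, of how many parameters even a small fully connected network must fetch per classification, and how far 8-bit quantization shrinks that footprint:

```python
# Illustrative only: layer sizes are made up, not taken from a real model.
layer_sizes = [(784, 256), (256, 128), (128, 10)]  # (inputs, outputs) per dense layer

# Every parameter (weights + biases) must be read from memory for each classification.
params = sum(i * o + o for i, o in layer_sizes)
bytes_fp32 = params * 4   # 32-bit floating point
bytes_int8 = params * 1   # 8-bit quantized weights

print(f"parameters:        {params}")
print(f"float32 footprint: {bytes_fp32 / 1024:.1f} KiB")
print(f"int8 footprint:    {bytes_int8 / 1024:.1f} KiB")
```

Even this toy network touches roughly a quarter of a million parameters per inference, which is why quantization and data reuse matter so much on devices with small on-chip memories.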

Power Efficiency

AI techniques such as neural networks normally come at the price of high compute and memory demands. In typical application scenarios today, these networks run on powerful GPUs that dissipate a huge amount of power. For practical deployment of neural networks on mobile devices, there is a significant need to improve not only the efficiency of the underlying operations they perform but also their structure. They must also be amenable to resource-efficient frameworks that maximize performance while reducing power consumption and minimizing the physical space required.
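As a rough illustration of the trade-off (all figures below are order-of-magnitude assumptions, not measurements), energy per inference is simply power draw multiplied by latency, so a slower but frugal edge accelerator can still come out far ahead on energy:

```python
# Hypothetical numbers for illustration: a server GPU vs. a low-power edge accelerator.
def energy_mj(power_w: float, latency_ms: float) -> float:
    """Energy per inference in millijoules (W * ms = mJ)."""
    return power_w * latency_ms

gpu = energy_mj(power_w=250.0, latency_ms=5.0)   # fast, power-hungry GPU
edge = energy_mj(power_w=2.0, latency_ms=30.0)   # slower, frugal edge device

print(f"GPU:  {gpu:.0f} mJ/inference")
print(f"Edge: {edge:.0f} mJ/inference")  # ~20x less energy despite higher latency
```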

Security and Accessibility

The rapid growth of IoT devices is modernizing the world, as seen in the automobile industry, construction, healthcare, and many other sectors. Gartner predicts that 75% of enterprise-generated data will be created and processed outside a traditional centralized data center or cloud. This proliferation can expose edge devices to security risks, as some of these devices are deployed outside the centralized infrastructure, making them harder to monitor and secure at both the software and hardware level. Security risks include:

  • Data storage and protection
  • Authentication and passwords
  • Data sprawl
  • Lateral attacks
  • Personal data and account theft
  • DDoS attacks and entitlement theft

Mitigation Approach

Modern organizations deploy technology to edge locations, which requires security solutions to keep data safe. The following approaches can improve EdgeOps capabilities and security:

Edge Machine Learning Security

EdgeOps combined with AI technology provides capabilities such as efficiency, adaptive control, and machine autonomy. Machine learning delivers highly accurate results when its models are trained on large datasets, and applying Continuous Integration and Continuous Deployment (CI/CD) to model training is one way to produce better algorithms. Acting on real-time insights, that is, monitoring and analyzing real-time data and applying machine learning to it, makes it possible to detect threats and attacks and respond to the resulting predictions. A distributed architecture for storing, processing, and analyzing the generated data in real time reduces latency and retains data at the edge cost-effectively.
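One minimal way to act on real-time data, sketched here with a simple rolling z-score rule rather than a full ML model, is to flag sensor readings that deviate sharply from recent history:

```python
from collections import deque
from statistics import mean, stdev

class StreamingAnomalyDetector:
    """Flags readings that deviate strongly from a rolling window of recent values."""

    def __init__(self, window: int = 20, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # bounded memory suits edge devices
        self.threshold = threshold

    def is_anomalous(self, value: float) -> bool:
        anomalous = False
        if len(self.history) >= 3:  # need a few samples before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        # For simplicity this sketch keeps every reading, even anomalous ones.
        self.history.append(value)
        return anomalous

detector = StreamingAnomalyDetector()
readings = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.1, 55.0]  # last value is a spike
flags = [detector.is_anomalous(r) for r in readings]
print(flags)
```

A production system would of course use trained models and richer features, but the shape is the same: decide locally, in real time, on data that never leaves the edge.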

Using container orchestration technology, we can limit resource usage and power consumption by scaling clusters up or down. We can also harden container security by controlling user access, granting root access only sparingly, reducing installed OS components, using namespaces, forbidding containers from running as root by default, adding container health checks, and monitoring metrics, logs, and container runtimes.
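The up-scaling and down-scaling mentioned above can follow a simple proportional rule; the sketch below implements the formula used by Kubernetes' Horizontal Pod Autoscaler, desired = ceil(current * currentUtilization / targetUtilization):

```python
import math

def desired_replicas(current_replicas: int, current_util: float, target_util: float) -> int:
    """Proportional scaling rule (as used by the Kubernetes HPA):
    grow or shrink the replica count so utilization approaches the target."""
    return max(1, math.ceil(current_replicas * current_util / target_util))

print(desired_replicas(4, 90, 60))  # load high -> scale up to 6
print(desired_replicas(4, 20, 60))  # load low  -> scale down to 2
```

Capping the maximum replica count (and the per-container CPU/memory limits) is what bounds resource and power consumption at a constrained edge site.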

Implement Zero Trust Edge Access

A key solution to the edge computing security problem is to apply a “zero trust” or “least access” policy to all edge devices. In this scenario, security professionals grant each device only the minimal access it needs to do its job. IoT devices typically serve a specific purpose and communicate with only a few servers or devices, so a narrow set of security rules is usually sufficient. Using an access control policy to manage the device network means users and devices get access only to the resources they need; if one device is compromised, an attacker is far less likely to damage additional resources.
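A least-access policy can be as simple as a default-deny allowlist per device; the sketch below (device names and endpoints are hypothetical) illustrates the idea:

```python
# Hypothetical per-device allowlists: each device may reach only the endpoints
# it needs for its job; everything else is denied.
POLICY = {
    "temp-sensor-01": {"mqtt://broker.local:8883"},
    "camera-07": {"https://video-ingest.local"},
}

def is_allowed(device: str, endpoint: str) -> bool:
    # Default-deny: unknown devices get no access at all.
    return endpoint in POLICY.get(device, set())

print(is_allowed("temp-sensor-01", "mqtt://broker.local:8883"))   # True
print(is_allowed("temp-sensor-01", "https://video-ingest.local")) # False
print(is_allowed("unknown-device", "mqtt://broker.local:8883"))   # False
```

Real deployments enforce this at the network layer (firewalls, service meshes, identity-aware proxies), but the logic is the same default-deny lookup.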

Ensure Physical Security of Connected Devices

Edge deployments typically reside outside the central data infrastructure, making physical security a crucial component. Organizations must implement controls to prevent attackers from physically tampering with devices, installing malware, swapping or substituting devices, or creating rogue edge data centers. Security personnel should know how to tamper-proof edge devices and employ measures such as a hardware root of trust, crypto-based identity, encryption for data in flight and at rest, and automated patching.

Conclusion

EdgeOps is the key to enabling AI at scale on embedded devices. It has the potential to provide enhanced security and reduced costs while maintaining performance compared to cloud-based processing, and it can enable new capabilities for companies and individuals alike. As the discussion above shows, significant changes are needed to improve not only the capabilities of the computing infrastructure but also the underlying architecture. Co-optimizing machine learning algorithms, DevSecOps security principles, and hardware architecture can yield the highly intelligent, resource-efficient systems needed to realize the vision of EdgeOps.

Authors

Himanshu Pant

Product Services and Support team, Capgemini Engineering GBL
Himanshu is part of the Product Services and Support team at Capgemini Engineering GBL. He focuses on developing DevOps and Cloud solutions and delivering them to customers.
Panigrahi Prasad

DevOps Engineer, Capgemini Engineering 
Panigrahi Prasad is part of the Product Services and Support team at Capgemini Engineering GBL. He focuses on developing DevOps and Cloud solutions and delivering them to customers.

    What part of your intelligent product is worst for the environment?

    Martine Stillman
    17 Aug 2022
    capgemini-invent

    What is the most environmentally damaging part of an intelligent product’s lifecycle? The plastic packaging? The disposal of electronic waste? Shipping?

    Whilst each of these has hit the headlines as the scourge of a product’s climate impact, the real – and perhaps unsatisfactory – answer is: ‘it depends’.

    It can be easy to focus a sustainability drive on the latest consumer pressure point. But in reality every product is different, and the most environmentally beneficial changes will be unique to your business.

    Companies that genuinely want to make the most meaningful changes to their product’s environmental impact should rise above the day-to-day narrative, put aside their gut feelings, and instead do a data-driven assessment of their product’s lifecycle impact.

    It’s almost never what you think

    If there’s one thing we can say with certainty about a product’s environmental footprint, it’s that the biggest contributors are almost always unexpected.

    For example, we worked with a wearables company to assess their product’s lifecycle emissions. We found that one of the biggest contributors was the size of the box: the too-big box meant fewer items per freight load, which meant more emissions per unit. They were also using air freight rather than sea shipping due to historical supply chain arrangements. By changing these two areas, they knocked 10% off lifetime emissions.

    These changes may not have been intuitive to someone sitting in an American HQ far away from the factories, but they were what the data showed.

    However, this was specific to this case. In another wearables example, the device was smaller, and changes from air to shipping made no significant difference.

    Then, in another example, a coffee machine, by far the biggest impact had nothing to do with production: the energy the machine used during its lifetime was about 30 times greater than its manufacturing emissions and 300 times the packaging impact.

    Data, not instinct, is the key to sustainability

    To understand a product’s lifecycle emissions, and so make the best decisions, we need a lifecycle assessment (LCA) of its entire footprint, spanning raw material extraction, manufacture, transportation, use, and disposal. This includes everything from packaging to parts to recycling, and it is the best holistic assessment of a product’s true impact, which can be measured with a variety of metrics. Intelligent products present particular challenges, since they often involve complex, hard-to-recycle components or hazardous materials (especially in batteries) and require energy to function. But the essential process is the same for all products.

    The key is to take a data-based approach – not to jump to conclusions. Once you know where the biggest impacts are, you can make informed decisions about the most effective changes.

    A simple exercise can tell you quite a lot. A lifecycle analysis/sustainable design expert can derive a huge amount of insight from quite basic data on size, volume, main materials, and the location of manufacturing and markets. Even a product photo can give us a lot to get started with. This data can then be put into models to identify the ‘hotspots’ – the pieces of the product’s life that are creating the biggest environmental impact.
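    Such a hotspot model can be sketched in a few lines; the per-phase figures below are purely illustrative, not taken from any real assessment:

```python
# Illustrative lifecycle model with made-up figures (kg CO2e per unit);
# real LCAs use measured data and standardized emission factors.
phases = {
    "materials": 12.0,
    "manufacturing": 8.0,
    "packaging": 1.5,
    "transport": 4.5,
    "use (energy over lifetime)": 60.0,
    "end of life": 2.0,
}

total = sum(phases.values())
# Rank phases by impact to surface the 'hotspots'.
hotspots = sorted(phases.items(), key=lambda kv: kv[1], reverse=True)
for phase, kg in hotspots:
    print(f"{phase:28s} {kg:6.1f} kg CO2e  ({kg / total:5.1%})")
```

    With these (invented) numbers, use-phase energy dominates, echoing the coffee-machine example above: the ranking, not the absolute figures, is what directs design effort.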

    Of course, with more detailed data from both manufacturing and use, more sophisticated analyses are also possible. Connected products in particular present opportunities to perform real-time assessment of energy usage and longevity. This can be fed back into design, and even used to make changes after the product is sold via over-the-air software updates – in our coffee maker example, a simple software change to reduce the default ‘keep warm’ time could have been implemented remotely and had a huge impact on power consumption.

    But that all depends on how sophisticated your data collection is, and your resources for emissions reductions. A long-term sustainability program is great, but it is better to do something with what you have, than spend a year gathering perfect data then run out of time, money, or momentum to implement the actual changes.

    What we would say is: the earlier you start in product development, the better. Too many LCAs are done after the product is made. By performing a series of small LCAs as you progress through design and scale-up, you can make a much bigger impact, catching the greatest hotspots while it is still possible to change them and saving your company money by doing it right the first time.

    Better for the environment, better for your bottom line

    Doing all this may sound like a high-cost endeavor. But, in truth, sustainable design changes often reduce the cost of the project. Software changes, for example to improve energy efficiency, are usually simple and low cost. Reducing the amount of material you use or switching to more sustainable transport is usually a significant money-saver. Most of the time, sustainable design focuses on reducing the resources needed to make, ship, use, and dispose of a product — and when you aren’t using resources, you don’t pay for them.

    Even where changes do not save money, they can usually be done without great cost. We estimate we can reduce most companies’ product lifetime emissions by 20-50% without significantly impacting the user experience or the bottom line, and often improving both. All of which gives you a great sustainability message to your customers, which is genuine, credible and data-driven.

    Synapse, Part of Capgemini Invent, is a product development consultancy. We have developed an iterative sustainable design process which provides a formal structure for designing and evaluating against sustainability objectives, and which fits within a typical product development process.

    Author

    Martine Stillman

    Vice President of Engineering, Synapse Product Development
    I lead a team of engineering experts at Synapse, whose mission is to develop innovative solutions to our clients’ toughest problems. Using multidisciplinary teams, we build products and complex systems that have a positive impact on people and the planet. I’m an architect in Synapse’s Sustainable Design and Lifecycle Assessment practice and am deeply involved in our Climate Tech and CDR portfolio.

      What do you need to consider before adopting a hybrid workplace?

      Laura Sophie Mina
      16 Aug 2022
      capgemini-invent

      How does a company create a hybrid workplace?

      Even before the COVID pandemic, many organizations were asking this question, having decided to get ahead of the curve and embrace more flexible working conditions. But the spread of the coronavirus greatly accelerated the transition. Now the question is bigger than ever, as nearly 3 in 10 employers expect that more than 70% of their workforce will be fully remote within 2-3 years.

      However, while companies broadly understand the importance and future impact of the hybrid working model, many are not certain how to go about launching this kind of transition. It is not enough to identify and supply employees with the hardware needed to support remote working. Businesses also need to prepare their workforce for new ways of collaborating while having a plan for how they will handle their office space. Ultimately, it comes down to three pillars: people, space, and technology.

      Preparing people for the hybrid workplace

      For now, let’s focus on the people aspect of the hybrid workplace, as the one constant across all businesses, no matter their preparedness for a hybrid workplace, size, or ambition, is a need to support the workers through any change.

      Getting the hybrid workplace right requires more than equipping employees with new tools. A company looking to make such a substantial transition needs to undergo an entire shift in its working paradigm. Though the specific details change depending on circumstance and an organization’s particular goals for the future, a successful hybrid workplace is built with four universal levers: organization design, digital leadership and talent, the digital workplace, and real estate management.

      Organization design

      To adopt a new paradigm, a company will need to redefine its target operating model, which means defining the setup needed in terms of people, places, and structures. This involves enhancing digital leadership and fostering a remotely connected community within the organization, recognizing the changes that must be made to processes to fit the new structure, and identifying new ways of working as well as personal and team rituals.

      There are a variety of approaches that can be utilized to fully understand the gap between the existing and future needs of the workforce, which is essential when preparing to adopt a new operating model. One option is to create personas related to different groups of employees and the unique needs that new processes will need to address. To do so, a company must identify ideal representatives for different worker groups before following a chosen interview process to gather the needed information.

      With a substantial knowledge base to draw upon, it becomes substantially easier for any company to clarify its roadmap to the hybrid workplace and fully understand the actions it must take to get across the finish line.

      Digital leadership

      In any transition from a traditional to a hybrid work environment, it is essential to recognize the fundamental change to team structures and coordination that will occur. When leaders can no longer rely on face-to-face interactions on a daily basis, they will need new options for guiding their teams and maintaining cultural ties that direct people towards the fulfillment of a shared goal. Part of this process focuses on new leadership KPIs, including a different set of expectations, such as valuing the wellbeing of employees and fostering a culture of trust.

      The challenge leaders have to face in a hybrid environment is recognizing and rewarding their team members in a virtual setting, which can change both the available incentives and the metrics by which those benefits are awarded. Due to the lack of human interaction and visibility when working from home, employees can experience a sense of detachment and disengagement from their manager. Therefore, it is critical for leaders to recognize this challenge and reassure their teams that their efforts and commitment are seen and valuable to the organization. Regular one-on-one check-ins, for instance, are an effective way for leaders to both express recognition of employees and share feedback on their ongoing performance but have decreased in frequency with the expansion of hybrid working.

      With people working both in and out of the office, it’s critical that a hybridizing organization finds ways to keep its teams connected and to ensure that they are part of a cohesive culture. Whether by setting up new, online social events, extracurricular activities, or simply regular checkpoints to maintain essential channels, all of this work will go towards adapting teams to the new work structure and enabling productivity.

      An effective tool that can help employees cope with a digital transition and adaptation process is mindfulness, which represents the ability to be fully present at the moment whether working in or out of the office. By encouraging mindful practices like breathing methods, gratitude greetings, calendar blockers, or wellbeing breaks, leaders can contribute to creating an environment fostering growth and collaboration. In particular, they can help their team members build resilience, increase their productivity, learn and develop their individual purpose, and improve their overall wellbeing and sense of belonging to the company.

      The digital workplace and real estate

      Unsurprisingly, a hybrid workplace requires a digital environment in which employees can connect both from inside and outside of the office. A customizable equipment catalog represents an essential technical element that enables flexible work and communication. A wide variety of potential solutions, such as a virtual campus, VR, and booking tools that can create an office that is connected across any mixture of workspaces, enable any company to create a digital environment suited to its particular needs.

      Of course, so much technical change will require a great deal of communication with employees. Throughout the digital transition, it is crucial to provide them with the right coordination material, support, and a strong change management plan; this will enable team members to more easily shift their ways of working, thus reducing any potential confusion or resistance.

      In addition, while this may seem to add nothing but challenges, a truly agile, innovative company can find an opportunity for overall improvement. As we rethink the workplace and how we work together, we have a unique opportunity to rework shared spaces, both physical and digital, to support a more sustainable approach.

      Of course, the introduction of a digital workplace to support a hybrid structure does not mean that physical infrastructure ceases to exist. Fortunately, real estate has a major role to play in the hybrid workplace, though as with each of the levers, it requires a thorough understanding of the transition’s challenges and a plan to fully realize the potential value. A company can truly maximize the benefits derived from its offices by launching a redesign that better aligns physical space with the new digital setup.

      In addition, a number of logistical questions have to be asked. Which employees need to come into the office regularly and how often? Who needs a dedicated workspace as opposed to more flexible arrangements, such as hot desks? And, in a hybrid workplace, to what degree does travel remain essential and for which employees?

      Every company will likely have its own answers based on the existing balance between office and remote work, its vision for the future, and the sensitivity of its work.

      A path forward

      If all of this sounds complicated or even mystifying, don’t worry. You’re not alone.

      But there is more good news for businesses everywhere: at Capgemini Invent, we have been asking these exact questions and have developed an approach for answering them. Our expertise covers a lot of ground and helps when it comes to selecting solutions and a methodology for implementing a change and undergoing a transformation.

      Education, communication, and adaptation require a finer touch, however. If a business wants its hybrid workplace to succeed, it needs to put in the work beforehand to ensure that it understands the exact requirements a solution needs to fulfill. This can include gamification or interactive learning, depending on the audience in question and the organization’s culture.

      Finally, keep in mind that hybrid workplaces are likely to continue evolving, meaning that a business needs to be ready for changes to technology, security, and employee expectations. Fortunately, organizations can learn to be adaptive and replicable processes can be put in place to ensure that, in the future, internal reviews can be performed again with a great deal more clarity and preparedness.

      While hybrid workplaces pose a great number of challenges and questions to the organizations who wish to establish them, the answers do exist and the opportunities awaiting those companies that make the effort outweigh the short-term costs. Capgemini Invent is ready for a future of work built on more flexible conditions.


      About Author

      Laura Sophie Mina

      Senior Consultant, Workforce & Organisation

        Everybody is a CMO in the future organization

        Tim van der Galiën
        12 Aug 2022
        capgemini-invent

        Marketing leaders are always on the lookout for the latest trends and are increasingly focusing on continuously changing consumer patterns. In essence, we try to predict the future. But are we able to predict the future and create a future-proof organization?

        Who could have imagined hundreds (if not thousands) of delivery drivers on the streets, racing against time to deliver your groceries as fast as possible? Consumer expectations are rising, and so is the demand for engaging, contextualized content across channels. This means that marketing organizations need to play a different role to keep up with this pace. Organizations must become completely customer-centric; marketing should be embedded in the whole operating model. To meet the golden customer-centric standard, all employees – regardless of role – must have the customer knowledge that a marketing leader has. So, does this mean that everyone must become a CMO to create a future-proof organization?

        What we see at companies

        As a marketing organization, it is challenging to keep up with expectations and it means that you must rethink your modus operandi. What we see in different organizations is that functional structures have created siloed, unharmonized departments. Today’s marketing teams are often organized on a channel or category-based structure. However, because of the increasing need to adapt to customer needs and serve them with consistent and relevant content, there is a visible shift in marketing operating models. Looking at firms running at the forefront of marketing, we see the ongoing pursuit of centralizing local marketing operations to drive operational efficiency.

        Tempted by the typical benefits of centralizing any business function (e.g., economies of scale, greater control), B2C CMOs in particular forego the creative power embedded within local offices. Moreover, by transforming traditional marketing departments into customer data hubs that continuously act on new insights or value pools, the responsibilities of the marketing department are fundamentally changing. This comes as more and more firms formalize customer experience into official roles and functions, transferring traditional marketing responsibilities throughout the organization while building on the marketer’s biggest asset: real-time data usage. Today’s CMO (and therefore, the marketing organization) will become more purpose-led, data-driven, human-centered, and collaborative than ever before.

        The role of a CMO in the new marketing organization

        The role of the CMO has evolved in new directions and expanded beyond traditional brand-building. 90% of CMOs have some level of responsibility for business strategy, its tactical execution, and business-model innovation. With the right digital tools and digitalized processes, the modern CMO can take over as orchestrator of the Connected Marketing ecosystem to drive a truly value-adding customer experience.

        Understanding marketing from an ecosystem perspective reduces complexity and supports the CMO in managing in four areas:

        • Data-driven – Creation of benefits beyond brand values
        • Responsive – Collaboration between departments
        • At scale – Services rely on business and IT interplay
        • Personalized – Unified and trusted data

        “Connecting the dots on the journey towards becoming a future-ready organization requires following a new path”

        Connecting the dots on the journey towards becoming a future-ready organization requires following a new path. Managing the customer journey within a connected ecosystem is about asking relevant questions to ensure interactions with the brand can be put into context.

        The six critical focus areas for a data-driven marketing environment

        Keeping up with complex future marketing trends is not enough: CMOs must address the need for restructuring within their organization, driven by the necessity of data-driven skills, collaboration, and automation. Most firms struggle to transform into these new-age marketing powerhouses. Therefore, we identified six focus areas critical to a CMO’s preparation for a data-driven marketing environment:

        1. Create a clear vision for the marketing strategy
          • Ensure data-driven capabilities are at the core of the strategy
          • Define the roadmap for transformation
        2. Reimagine the customer journey with real-time engagement
          • Implement a customer-data platform
          • Utilize customer-listening tools to understand intent
          • Have a clear content-management strategy and solutions
          • Use automation tools for delivery
        3. Ensure talent is equipped with a baseline of data and creative skills while allowing for specialists
          • Recruit or upskill marketing talent
          • Focus on developing an analytical mindset
          • Upskill on digital and performance marketing
          • Develop a learning curve
          • Establish a center of excellence
        4. Accelerate collaboration across the marketing ecosystem
          • Collaborate with key functions (IT, sales, finance) and external partners
        5. Implement a framework-driven data collection process
          • Create a data collection framework
          • Consider data from emerging digital touchpoints
          • Unify internal data silos
        6. Integrate long-term brand building and short-term marketing engagements
          • Combine brand building with short-term marketing initiatives
          • Allocate separate budgets for long- and short-term marketing engagements

        What are four ways to help your organization in building a future-proof marketing organization?

        As traditional organizational structures within the marketing ecosystem must be reinvented to keep up with the content explosion, fast customer reactions, and technological advancements, management consultancy becomes crucial. CMOs need to train and structure their teams differently, collaborate with external partners, and find the right balance between consistency and independence. The organization needs to be structured so that it supports the marketing ambitions. But how do you do that?

        “Unifying your marketing organization is the first step in building a future marketing ecosystem”

        As CMO, it is key to adopt a new end-to-end marketing organizational model. And as CMO, you can take the first steps to build that organization, where your customers are at the center of everything. This means that all people in your organization should think, act, and feel from the customer’s perspective. Let’s not just say that everybody is a CMO; rather, everybody should have the customer-centric view of a CMO.

        There is a lot for marketing organizations to do, and we are more than happy to help you make a start; feel free to get in contact with us or one of our colleagues. If you want more information about this subject, read our blog series here or watch our Invent talks episode here.


        Our Experts

        Tim van der Galiën

        Senior Manager, Connected Marketing at frog, part of Capgemini Invent
        Tim is responsible for the strategic marketing offering within frog, part of Capgemini Invent. He is an expert in marketing transformation & customer strategy and helps brands build bridges between people, data and technology.
        Richard Christophersen

        Consultant, Customer Transformation at frog, part of Capgemini Invent
        Richard is a data-driven sales & marketing expert who specializes in the design and growth of customer-facing organizations. He has worked with global brands in consumer products, financial services and SaaS.
        Frederike van de Water

        Consultant, Customer Transformation at frog, part of Capgemini Invent
        Frederike is a marketing and Customer Experience expert. She is passionate about helping organizations switch from traditional ways of working to a focus on customer-centricity, and about how that shift impacts an organization’s governance, processes, culture, and mindset.

          The future of CRM: Exploiting the full potential of fleet data to foster B2B relationships with fleet managers

          Thomas Ulbrich
          10 Aug 2022
          capgemini-invent

          In our first blog series, we explored the status quo of customer relationship management (CRM) for OEMs and took a glimpse into the future.

          In the coming years, CRM will be characterized by a central unit within global CRM hubs, car data integration, and innovative sales models with a strong focus on B2C clients. These predicted trends have held true. Consequently, those OEMs that anticipated the shift in daily operations have fared well; however, even many of these forward-thinking organizations have struggled to give B2B relationships the attention they deserve. Today, with the pace of change accelerating, many business leaders are beginning to rethink the ways they engage with clients.

          With this in mind, we decided to make the pivotal subject of B2B and OEM relationships the theme of this CRM blog series. We will examine all the challenges and the many opportunities such a shift in focus presents for those prepared to embrace change. As part of this analysis, we will explore the impact of collected fleet data, how it is processed, and how it is used. We will also look at the emerging “consumerized” experience, a factor that aligns with the growing focus on customer-centricity now characterizing many, if not all industries. And finally, we will offer insight on how OEMs can build a strong foundation that enables them to tap into fleet data’s potential. Improving these core components will profoundly improve B2B relationships and establish a more immersive CRM.

          B2B fleet managers become targets for OEMs finessing their CRM activities

          CRM provides great opportunities for business customers and Original Equipment Manufacturers (OEMs) operating within the automotive industry. We will explore the substantial potential that smart fleet solutions offer in the passenger car and van segments, a potential that will revolutionize the B2B sector in the upcoming years.

          We distinguish between white and black fleet business customers. White fleet purchases are made by the fleet manager for the whole company, including pool and multipurpose vehicles, which don’t belong to a specific user. Black fleet purchases, also referred to as “user chooser,” are mainly decided by the actual driver of the vehicle.

          Figure 1: Black vs. white fleets

          Even though the market share of white fleets is noticeably smaller than that of black fleet clients (20% in Germany, 2020) [1], it is an immensely interesting market for OEMs and one that is constantly growing.

          While financial aspects like Total Cost of Ownership (TCO) remain key determining factors for fleet clients, the fleet manager’s perception of, experience with, and resulting relationship to the OEM is becoming an increasingly important decision criterion. What makes this so relevant is that fleet managers are a very limited group of decision makers controlling significant budgets, compared to the wide group of user choosers with their personal budgets. OEMs can increase their standing by turning fleet clients into brand ambassadors who embody a positive image through their large functional fleets.

          In this blog post, we will demonstrate the importance of appropriate CRM measures to address decision-makers in B2B business. We will tackle the “user chooser/black fleet” subject in our next blog post.

          Agency sales, functions-on-demand offers and a consumerized experience

          The needs of B2B fleets are changing: with shorter leases and fewer service demands, Functions-on-Demand (FoDs) is becoming another major value stream, while fleet managers also demand a “consumerized” experience.

          In the coming years, the relationship between OEMs and B2B clients will be increasingly driven by fleet agency sales within direct sales models, as already highlighted in our agency sales POV. The fleet agency model turns offline salespeople into direct agents of the OEM, thus making use of existing dealer structures and enabling a connected use of customer data. This makes it possible for OEMs to build a 360-degree view and offer an end-to-end experience. With fleet business accounting for a major percentage of sales, targeting B2B and white fleet customers in particular through an agency model is becoming a priority for OEMs.

          In addition, there is a necessity to engage more intelligently with B2B clients: shorter leasing cycles and subscription models will increase churn, while the electrification of fleets decreases the need for aftersales services. New opportunities for revenue streams, like on-demand fleet-wide functions and disruptive shared-mobility offers, are developing rapidly.

          Demand from B2B clients tends towards “consumerized,” frictionless, B2C-style customer experiences. Exposure to digital information only intensifies this demand, leading fleet managers to continuously evaluate their set of OEMs and offers. Consequently, OEMs will have to interact considerably more often with fleet managers in a highly personalized and seamless way, predicting their needs to avoid churn and amplify loyalty.

          Fleet data and targeted B2B relationships

          OEMs need to use the full potential of fleet data for targeted B2B relationship management. Integrated technologies within fleet vehicles will provide a huge variety of new data points in the coming years. OEMs need to connect all new sources, including fleet management solutions, telematics, data analytics, smart surveillance, and radio frequency identification. As already outlined in our previous blog post, “The Intelligent Combination of Vehicle Customer Data,” by around 2030, Capgemini experts expect growth from the current 100 data points to over 10,000.

          Before long, based on available fleet vehicle data, we anticipate that OEMs will be able to gather a comprehensive overview of their customers’ fleet assets and desires – aggregated fleet driver as well as fleet manager needs. Knowing and addressing these particular requirements results in a major opportunity for OEMs. They will be able to improve their CRM fleet activities and provide tailored offerings to a target group, one that OEMs and NSOs are still trying to win by price rather than experience. We predict the following use cases:

          1. Usage-based FoD offers:

          By analyzing fleet data, OEMs can identify and even predict usage patterns in order to derive and communicate certain offers that add benefit to fleets and therefore fleet managers. The intelligent combination of that knowledge with the rise of Functions-on-demand that can be added to existing fleet vehicles gives OEMs a real difference-maker.

          If the analysis of fleet data shows, for instance, an above-average number of accidents, CRM could automatically propose additional FoDs. For example, driving-assistance functionalities that lower the number of accidents could be extended, thereby reducing operational costs for fleet managers. Additionally, OEMs that recognize that a large number of fleet vehicle drivers activate certain FoDs could offer a package price for the whole fleet, thus creating a win-win-win situation for fleet users, fleet managers, and the OEM.
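
Expressed as code, this kind of rule could look like the following minimal sketch. It is a purely hypothetical illustration: the data fields, thresholds, and offer names are assumptions, not an actual OEM CRM API.

```python
from dataclasses import dataclass

@dataclass
class FleetStats:
    """Aggregated telematics for one fleet (hypothetical fields)."""
    fleet_id: str
    accidents_per_vehicle: float  # accidents per vehicle per year
    drivers_with_fod: int         # drivers who activated a given FoD
    total_drivers: int

# Illustrative thresholds only; real values would come from market data.
MARKET_AVG_ACCIDENT_RATE = 0.05
FOD_ADOPTION_THRESHOLD = 0.30

def propose_fod_offers(stats: FleetStats) -> list[str]:
    """Derive FoD proposals from fleet-level usage patterns."""
    offers = []
    # Above-average accident rate -> propose driving-assistance FoDs.
    if stats.accidents_per_vehicle > MARKET_AVG_ACCIDENT_RATE:
        offers.append("driving-assistance-package")
    # High individual adoption -> propose a fleet-wide package price.
    if stats.total_drivers and stats.drivers_with_fod / stats.total_drivers > FOD_ADOPTION_THRESHOLD:
        offers.append("fleet-wide-fod-bundle")
    return offers
```

In practice, such rules would be learned or tuned from historical fleet data rather than hard-coded, but the shape of the logic stays the same: aggregate fleet telemetry in, ranked offer proposals out.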

          2. Need-based leads:

          The fleet sales department is usually in frequent contact with its business clients and the corresponding fleet managers, who are constantly looking for the best offer to beat the competition. Analyzing and aggregating fleet driver behavior can provide all the relevant information needed to work out best-fit offers (including a pre-calculation of all benefits) or recommend the best fleet composition (share of a fleet). This helps fleet managers find the most suitable vehicles for the company’s purposes, minimize the cost of vehicle ownership (including repairs and maintenance), and improve the user journey. Ideally, a lead is automatically generated based on the gathered insights. But certain conditions must first be met, such as fleet usage, available offers, and the timeframe of strategic targets.
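
The gating conditions mentioned above could be sketched as a simple check. This is hypothetical: the field names, thresholds, and the twelve-month window are illustrative assumptions, not part of any real fleet-sales system.

```python
from dataclasses import dataclass

@dataclass
class LeadContext:
    """Hypothetical inputs for automatic lead generation."""
    monthly_fleet_km: float          # observed fleet usage
    matching_offers: list[str]       # offers that fit the usage profile
    months_to_strategic_target: int  # time left to the client's target

def should_generate_lead(ctx: LeadContext, min_km: float = 10_000) -> bool:
    """Generate a lead only when usage, offers, and timing all align."""
    has_usage = ctx.monthly_fleet_km >= min_km       # enough fleet activity
    has_offer = bool(ctx.matching_offers)            # something to propose
    in_window = 0 < ctx.months_to_strategic_target <= 12  # actionable timeframe
    return has_usage and has_offer and in_window
```

The point of the gate is that an auto-generated lead is only as valuable as the conditions behind it; a lead fired without a matching offer or outside the client’s planning window erodes trust with fleet managers.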

          3. Predictive fleet maintenance:

          When it comes to service resolution, the proactive resolution of cases can be a great approach to establishing a more convenient experience for fleet managers. In the short term, telematic sensors will be continuously improved, making it possible to track parameters like humidity, light exposure, and temperature, all of which influence the intensity of wear and tear. OEMs should react by offering predictive service packages coordinated across the fleet. This will enable them to address peak loads and unforeseeable circumstances, for instance when the majority of a fleet needs service at the same time.

          Exploiting the potential of fleet data

          As already touched upon, fleet data has the potential to transform an OEM. But getting the data is only the first step. In fact, to exploit the potential of fleet data and realize the outlined use cases, OEMs need to find answers to the following three questions:

          • How do you gain access to fleet data?
          • How do you process fleet data?
          • How do you make use of fleet data?

          Fleet data availability

          Technically, fleet data should be collected firsthand, either by integrating the OEM’s CRM systems into its own fleet solutions (e.g., Mercedes-Benz Connect: Your Fleet and business) or by connecting them with independent fleet management solutions (e.g., Verizon Connect). Market-leading CRM systems already offer out-of-the-box integrations with various fleet management solutions. The pros and cons are evident: using your own data is most likely going to be less complex but, of course, limited to the corresponding brand and its market penetration. Alternatively, OEMs can make use of the rising number of connected vehicles, as outlined in our previous blog post, The Intelligent Combination of Vehicle and Customer Data. However, this leads to a complex allocation within the set CRM system and is only available for the OEM’s own brand of vehicles.

          The biggest challenge is not a technical one but a legal and psychological one: users need to provide consent, and will only do so if they trust the OEM or benefit from data sharing (e.g., through reduced cross-fleet FoD prices).

          Fleet data processing

          Processing the connected data is as important as integrating and making use of it. Consequently, two factors are vital for successful implementation. First, a CRM solution with AI functionalities enables the evaluation and derivation of insights and next best actions. Second, as already outlined in our first CRM blog post, Customer Relationship Management in 2030, a CRM Service Hub with a centrally bundled variety of services can exploit the power of data from customers and cars and provide customized and aggregated overviews of relevant data.

          Fleet data usage

          To add actual value, the technical and procedural set-up needs to support concretely defined goals and use cases, with continuous measurement of clearly defined KPIs. This will put the “consumerized” experience ahead of the lowest price and significantly boost the loyalty and lifetime value of fleet managers. All client-facing entities need access to insights and the ability to embed them in their daily client conversations. Currently, the pure leasing price dominates the white fleet business. OEMs should compete on experience rather than price, with tailored offers and next-best-action recommendations representing a better overall package throughout the lifecycle, including respective FoDs or tailored leasing and consumption.

          Thinking ahead, CRM managers should always look at bi-directional integrations. This will enable them to also use data-generating systems as communication tools; for example, they can push insights and offers into the fleet management solution used by the fleet manager.

          Laying the foundation early is the key to success

          To prepare for the upcoming challenges and fully exploit opportunities, in addition to a sharp CRM strategy, integrated CRM processes, an intelligent CRM system, and suitable operating models, there has to be a direct, data-driven offering to B2B fleet managers. To make use of the tremendous amount of data for campaigns, leads, and case management, we recommend five keys to success:

          Figure 2: Success factors
          • CRM SERVICE HUB:

          Establish a CRM Service Hub to combine and process data centrally, and be able to apply smart analytics to personalize communications and offerings.

          • DATA COLLECTION & AI:

          Exploit the full potential of data from all sources, including smart fleet solutions and telematic systems, by storing large amounts to be analyzed automatically.

          • WILLINGNESS TO SHARE DATA:

          Increase end customers’ data trust by providing transparency about the purpose of data sharing and pointing out the benefits and use cases.

          • CONSENT MANAGEMENT:

          Set up a resilient consent structure to centrally collect and store data, thus enabling the enrichment of your CRM with customer, vehicle usage, and contract data.

          • INTEGRATION INTO FLEET MANAGEMENT SOLUTION:

          Implement bi-directional integration with both the OEM’s own and independent fleet solutions to gather valuable fleet data firsthand.

          Which priorities and which additional use cases do you see? We are looking forward to hearing your thoughts!

          This blog has been co-authored by Thomas Ulbrich, Christopher Rose and Lorenz Finsterhoelzl. Please get in touch if you have questions or need further information. We look forward to exchanging ideas on this particular current topic. 

          Source[1]: Dataforce, 2020


          Meet our experts

          Thomas Ulbrich

          Director, Customer Transformation, Capgemini Invent
          Christopher Rose

          Manager, Customer Transformation, Capgemini Invent
          Lorenz Finsterhoelzl

          Consultant, Customer Transformation, Capgemini Invent

            The rise of impact-driven L&D: How to bridge the gap between learning and business

            Sabrina Rubruck
            9 Aug 2022
            capgemini-invent

            Companies invest large amounts in comprehensive Learning and Development (L&D) measures to ensure the continuously updated qualifications of their workforce. But the question of whether the investment is worthwhile, i.e., the success of learning and the impact of its measures on businesses, is rarely asked.

            Could it be because L&D is assumed to have a general raison d’être, or because L&D is seen as a kind of reward for the workforce? The truth, however, is that only the joint consideration of L&D measures and their learning impact at both the individual and organizational level enables the sustainable positioning of L&D.

            Return-on-Learning (RoL): The missing business case consideration

            With the rapid changes in the world of work, companies are faced with a qualification imperative: by 2030, one billion people globally will have to undergo further training in order to successfully meet their daily tasks [1]. The new work paradigm, driven by Covid-19, advancing digitalization, and general skills shortages, demands efficient skilling strategies from companies to empower their workforce. What we often encounter is that companies react with extensive investments in L&D measures but neglect continuous consideration of the business case: how effective are the learning measures, and what is the impact of L&D on individuals, teams, and businesses? Yet transparency about the business impact of L&D is essential to secure further investments. L&D-focused KPIs are needed, along with the ability to measure and evaluate the impact of L&D measures at both an individual and an organizational level. This is the basis for sustainable L&D design and investment.

            You cannot manage what you do not measure

            In response to the qualification imperative, companies purchase expensive state-of-the-art technologies (e.g., SABA or degreed) to centrally deliver digital learning experiences. Despite the high costs and implementation effort, the benefits of the technology most often remain untapped and unseen. Typically, content richness coexists with insight poverty: a broad mass of learning content is offered and presented online, but only simple data, like participant numbers, is collected, ruling out deeper analysis. Thus, neither HR nor business stakeholders can draw conclusions about individual learning success or learning effectiveness. Like any other return-on-investment analysis, the return-on-learning calculation needs a reliable data basis and systematic collection.

            Define your L&D KPI system to gain accurate insights

            One rather simple but effective way to measure L&D effectiveness goes back to Kirkpatrick’s four-stage model [2] (see Figure 1). The model introduces four stages that build upon each other:

            Figure 1: The four stages of the Kirkpatrick model to evaluate the impact of learning on various levels within an organization.

            In order to derive suitable KPIs for your organization, we recommend a combined top-down and bottom-up evaluation. As a starting point, a learning maturity assessment collects ideas for improving the L&D environment and candidate KPIs for measuring L&D impact. In the next step, a compiled selection of suggested KPIs is evaluated with representatives of the target group (the learners), leading to valuable insights into which KPIs are suitable for the organization.

            Two aspects must be considered when defining an organization-specific metrics system: the way learning is perceived and the acceptance of KPIs. This way, L&D ensures the KPI system is in line with the cultural characteristics of the organization. From our experience, the KPI system only has significance for an organization if employees understand, trust, and participate in the continuous data collection process. In order to sustainably anchor the new KPI process in an organization, we support the implementation with a people-centered behavioral change approach.

            The described approach ultimately results in an L&D KPI system that can provide information about individual learning success (e.g., knowledge acquisition and sharing rate), individual behavioral changes (e.g., application of what has been learned, or the takeover of new tasks and roles), and business results (e.g., customer satisfaction values, sickness rate, and sales figures). The KPI system is individual to each organization: in a client case where we supported digital skill building within procurement, negotiation success was an important L&D KPI, whereas in other departments or organizations the focus might differ.

            Prioritize your KPIs and start implementing

            In terms of implementation effort and business impact, the prioritization of the defined KPIs indicates which metrics should be implemented first. KPIs with low implementation effort and high business impact are low-hanging fruit, predestined to deliver results quickly. For example, the recommendation of a training by participating learners (“On a scale from 0 to 10, how likely are you to recommend this training to a colleague?”) represents such a quick win: it is a single question at the end of the training session, yet it provides significant information about quality for the L&D department. Metrics such as shifts in learners’ competency profiles are particularly good at reflecting the impact of learning interventions, but they require more effort and are typically planned for the long term.
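
A 0-to-10 recommendation question of this kind is commonly scored like a Net Promoter Score. Here is a minimal sketch, assuming the standard NPS bands (promoters rate 9-10, detractors 0-6); the text itself does not prescribe this scoring, so treat it as one possible convention:

```python
def recommendation_score(ratings: list[int]) -> float:
    """Score 0-10 recommendation answers NPS-style:
    percentage of promoters (9-10) minus percentage of detractors (0-6)."""
    if not ratings:
        raise ValueError("no ratings collected")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Example: 5 promoters, 3 passives, 2 detractors -> score of 30.0
print(recommendation_score([10, 9, 9, 10, 9, 8, 7, 8, 4, 6]))
```

A score above zero means promoters outnumber detractors; tracking it per training over time is exactly the kind of quick win described above, since the data collection is a single question.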

            The technical deployment requires translating KPIs into technical requirements: what aspects (e.g., methods of data collection, sources of KPIs) need to be stored in the system to provide the automated presentation of results? Systems such as SABA and degreed offer corresponding functions that can be configured accordingly.

            Return-on-Learning: What to expect

            If you dare to measure your L&D’s business impact with a clear RoL, you will be able to design learning around effectiveness in the first place. By measuring the RoL, you lay the foundation to continuously improve the L&D portfolio in a data-driven way, provide learner-centric experiences, and evolve into a learning organization.

            Given the urgency of upskilling strategies to establish a continuously qualified workforce, it is imperative that organizations and teams focus on learning effectiveness. We support you as a partner from planning to implementation to make L&D impact-driven, and we look forward to sharing our experience across industries with you.

            Let’s get in touch and discuss how we can bridge the gap between L&D and business.

            We look forward to working with you.


            Sources:

            1. Moritz, R. E. (2020), World Economic Forum, How do we upskill a billion people by 2030? [online]
            2. Kirkpatrick, J. and Kayser-Kirkpatrick, W. (2014). The Kirkpatrick four levels: A fresh look after 55 years. Ocean City: Kirkpatrick Partners.

            About the Authors

            Sabrina Rubruck

            Senior Manager, Workforce & Organization
            9+ years of experience within the HR context, in various project management roles as well as in focus areas like HR transformation, change & communication, and selection & assessment. Experienced in handling strategic-conceptual tasks as well as the operational implementation of a great number of processes at the same time. Diploma in psychology with a specialization in Industrial/Organizational psychology.
            Ali Erguen

            Senior Consultant, Workforce & Organization
            Cologne-based consultant with a passion for personnel and organizational questions. Since graduating in business psychology, he has accompanied individual and organization-wide transformation processes, focusing on identifying and promoting individual strengths and potential with the help of people diagnostics and development programs. He also supports the sustainable anchoring of new ways of working and digital innovations in organizations through systematic, needs-based change management, and has gained international experience in the conception, implementation, and execution of training courses and the management of workshops.
            Marie Steffens

            Senior Consultant at Capgemini Invent | Systemic Coach | New Work & Change Management | Coaching, Leadership & Development

              Industry collaboration is key to transform the vision of cloud-native, AI-optimized 5G Open RANs

              Shamik Mishra
              8 Aug 2022
              capgemini-engineering

              Rome wasn’t built in a day. The same can be said of cloud-native, AI-optimized 5G open RANs (O-RANs), which are so fundamentally different from any previous mobile generation that they’re not just evolutionary: they’re a step change in how radio networks are designed and built.

              Figure 1 illustrates this evolution, including the dauntingly steep climb that the industry faces today as it migrates from virtualized RANs (vRANs) to O-RANs. For a long time, radio access networks (RANs) remained outside the boundaries of the transformation achieved through virtualization. That changes with vRAN. With the O-RAN Alliance defining open interfaces, interoperable cloud-native RAN is now a reality. And like all cloud-native workloads, it will require massive, intelligent automation at scale to realize the full economic potential of the reduced TCO offered by such open networks. In fact, the step up from cloud-native to AI-optimized networks requires a team effort by vendors, operators, and systems integrators, because no single company has all of the necessary technologies and expertise to do it alone.

              Figure 1

              Open, cloud-native networks have six major focus areas:

              1. A cloud-centric architecture for vertical applications and for transforming the operator’s BSS.
              2. Cloud-native edge compute (MEC).
              3. A 5G standalone (SA) core.
              4. Disaggregated O-RAN and cloud RAN.
              5. A data-driven autonomous network.
              6. A sustainable cloud-native network.

              Areas 4-6 require broad industry collaboration to create the common standards, architecture and models that will be critical for test and automation. The i14y Lab is an example of how this collaboration is already well underway. Backed by Capgemini, Deutsche Telekom, Nokia, Rohde & Schwarz, Telefónica, Vodafone and other major companies, the i14y Lab is focused on identifying and closing gaps in the test and automation space to verify functionality and multi-vendor interoperability.

              Currently, there are three major gaps:

              • An open approach to testing that spans all of the domains. This would be automated and API-based, with standardized testbenches and performance benchmarks. Collaboration initiatives such as the i14y Lab also could provide test tools such as robust, 3GPP- and O-RAN-compliant simulators that can emulate Layer 1, user equipment or base stations.
              • Standardized automation platforms for network operations. These would automate both the network and the alerts that it generates. Besides providing the NOC staff with greater visibility, automation also would maximize their productivity and make the network more predictable, which aligns with one of O-RAN’s major goals: a lower TCO than traditional networks can achieve. For example, automation can enable the use of digital twins for multiple RANs, where the NOC monitors the twins rather than the real RAN, thus providing better observability and manageability of the network.

                Automation platforms require enormous amounts of data so they can be trained to handle a wide variety of real-world scenarios. Industry collaboration labs such as i14y can play an important role by providing that data.
              • An open orchestration architecture. Driven by open APIs, this architecture would include common data platforms and models. The collected data also would be open so developers can build use cases around it.
              Figure 2

              When it comes to industry collaboration to address these gaps, each type of member faces its own unique set of challenges and opportunities. For example, figure 3 summarizes those for systems integrators.

              Figure 3

              Achieving Automation at Scale for Cloud-Native O-RAN

              Although the migration toward cloud-native, open networks includes the use of commercial off-the-shelf (COTS) IT hardware, that hardware can’t simply be used as-is. To meet stringent telecom requirements such as five-nines reliability and RAN functionality, COTS gear will need to be beefed up with hardware accelerators and other components. The radio unit also needs to be managed through the same automation framework. In vRAN, a large number of edge data centers will host the baseband functionality of the RAN as software on such COTS gear. This kind of robust, highly distributed infrastructure, supported by tools for monitoring the network and predicting failures, is key to getting systems integrators and infrastructure and cloud vendors interested in the mobile market. Their participation also ensures that systems integrators don’t have to take on all of the responsibilities and liabilities associated with implementing cloud-native, open networks.

              Cloud-native, open networks can generate enough telemetry to be leveraged through machine learning operations (MLOps) at scale, developing and testing network AI algorithms to ensure that they’ll work in a real-world environment. But like COTS IT equipment, the MLOps practices currently used for cloud workloads will need to be modified to meet telecom’s unique requirements.

              A third example is industry collaboration projects like Nephio, which provides Kubernetes-based, intent-driven automation of network functions and the infrastructure that supports them. A Linux Foundation project backed by Deutsche Telekom, Google, and others, Nephio would help the mobile industry meet challenges such as predicting resource usage in O-RANs and standardizing an orchestration model for cloud-native networks.

              All of this is a lot to think about, which goes back to the point that cloud-native, open networks require broad industry collaboration to become reality and live up to their potential. The good news is that many operators, vendors and other stakeholders have recognized that need and are now collaborating in initiatives such as the i14y Lab.

              Shamik Mishra

              CTO of Connectivity, Capgemini Engineering
              Shamik Mishra is the Global CTO for Connectivity, Capgemini Engineering. An experienced technology and innovation executive, he drives growth through technology innovation, strategy, roadmaps, architecture, and R&D in the telecommunications and software domains. He has rich experience in wireless, platform software, and cloud computing, leading offer development and new product introduction for 5G, edge computing, virtualization, and intelligent network operations.

                Prepare your business to leverage the potential of open source

                Capgemini
                5 Aug 2022

                What comes to mind when you think about open-source software (OSS)?

                Depending on your history in the IT industry, the answer might be very different – ranging from “I won’t trust my business to fuzzy stuff that people plug together in their free time” to “paid software is overpriced stuff, I only trust OSS.”

                Now, you might be upset by the fact that I mixed OSS and paid vs. free software in one sentence. And you are totally right – OSS is about licensing and your access to the source code or even “freedom,” and not about a pricing model. In fact, in my opinion there are very good reasons to pay for OSS.

                In this post I will share my experience of over 15 years in software development and software architecture using OSS.

                What is open source?

                According to Gartner, “Open source describes software that comes with permission to use, copy and distribute, either as is or with modifications, and that may be offered either free or with a charge. The source code must be made available. Open-source software may be developed in a collaborative public manner.”

                Several direct and indirect characteristics are important for OSS:

                • Community
                  • Open-source solutions geared toward the enterprise often have thriving communities around them, bound by a common drive to support and improve a solution that both the enterprise and the community benefit from (and believe in).
                • Transparency
                  • Open-source code means that all have full visibility into the code base, as well as all discussions about how the community develops features and addresses bugs.
                • Fast patching
                  • As there are “more eyes on it,” the bugs are identified and fixed quickly. You can even do it yourself.
                • Trust
                  • Because the code is open and can easily be vetted by the community, trust will be high.
                • Avoid vendor lock-in
                  • OSS gives you the freedom of choice. You may use and adopt products independently of any vendor.

                Your IT is running on open source

                Linux is probably one of the most famous OSS projects. On top of that, some of the most discussed topics in the IT industry are data, AI, and cloud. If you have a look at the technologies that drive these topics, you will find out that many are famous OSS projects, like Spark, Cassandra or Kafka for data, TensorFlow for AI, and a myriad of OSS technologies for cloud, with Kubernetes being one of the most famous of these. If you look further, you might stumble upon the Cloud Native Computing Foundation, which lists over 120 open-source projects and has over 800 members. So, you can clearly see that your business is already “running on open source” or will be in the near future.

                Why open source?

                Open source has become a huge topic in the enterprise IT industry in recent years. Many traditionally closed-source companies have invested in OSS; for example, Microsoft bought GitHub.com for USD 7.5 billion in stock. Much of the world’s software infrastructure is now based on open-source software, and Gartner advises application and software engineering leaders to track innovations that facilitate the use of, or are powered by, OSS. According to a Gartner survey from 2021, 75% of successful digital businesses used cloud-native OSS stacks and OSS-powered cloud services to build their digital platforms. In 2020, 60 million new OSS repositories were created on GitHub by the platform’s 56 million developers.

                The COVID-19 pandemic has driven renewed interest in OSS, primarily in search of cost-cutting strategies. While OSS reduces licensing fees, it doesn’t always reduce total cost of ownership, because support subscriptions are often comparable in price to license subscriptions. Organizations that opt out of support subscriptions must bear the risks and costs of self-support. Gartner’s strategic planning assumptions predict that through 2025, more than 70% of enterprises will increase their IT spending on open-source software compared with their current IT spending. Gartner additionally expects software as a service to become the preferred consumption model for OSS, due to its ability to deliver better operational simplicity, security, and scalability.

                Capgemini itself has also invested in OSS, for example with its own OSS project devonfw and through its contributions to Drupal. In the next sections, I will expand on some of the reasons why OSS is so well known and important.

                Openness

                During my career in the IT industry, I have worked a lot with closed- and open-source products and vendors. Working with vendors of OSS products often felt much easier than working with those focused on closed-source products. Perhaps what I like better is the culture of openness, sharing, and trust that drives the way open-source companies work.

                There are also some practical reasons why working with OSS is often more efficient in IT projects. Using an OSS product during development often reduces the hassle of procurement and license management. Closed-source vendors sometimes offer demo versions, but often you must make a request first, provide your contact details, and then get contacted constantly afterwards. Just hitting the download button for a full-featured version with full access to the documentation is much more convenient. When it comes to a problem with a product, having the source code available is a great help in many situations, whether you are hunting for a bug or something just doesn’t work as expected. Just being able to see (e.g., in a debugger) what is really going on has helped me more than once in recent years. For sure, sometimes it is more efficient to contact the vendor (or some other expert) about a problem with a product, but that is also possible for OSS. So, with OSS you get more options to help yourself.

                Portability

                The availability of the source code allows OSS to be ported to many different hardware platforms. In some cases, there is a very passionate community which ports OSS to many different hardware platforms. Even if it is not very “enterprise,” the success of “Doom ports” shows this very impressively. After the computer game “Doom” was made open source, many groups of people tried to run it on each and every hardware platform they could find. Today, “Doom” even runs on TI-84 calculators. For sure I don’t expect too many advantages for enterprise environments from that, but it clearly demonstrates the concept. From an enterprise point of view, portability is also very relevant. Open source allows you to port required tools, libraries, and other products to specialized hardware platforms and leverage their benefits. Being able to use (battery) optimized IoT hardware and not having to develop the software from scratch but having it built on well-proven OSS could be a huge advantage. Linux is probably the most famous example for portable OSS. I think a huge driver of its success is its availability on so many platforms.

                Support

                Often, clients argue that they need support for a product and use this as an argument against open source. But for many important OSS products, support is available. And in my experience, this support is often of very high quality. The reason might be that support is often key to the business models of open-source companies, and they are often really good at it. On the other hand, there are OSS products with no dedicated support available. I would not say that you need dedicated support for each and every product; for small developer-oriented libraries, for example, support could be covered by the vendor of the OSS solution or by yourself. But for major building blocks, such as OSS databases, you will probably need support. So, if you want support, you can get it in good quality!

                Attracting talent

                For sure there is a war for talent in the IT market. Young professionals especially are used to working with open source; they often used these products during their education or even in private projects. So, these people already know OSS products and love to use them in their early professional lives. When it comes to recruiting, I have often noticed that it is a very convincing point to tell candidates that they will work with open source in the company.

                Digital sovereignty

                “Digital sovereignty” is a term mostly used in the public sector, but I think the idea behind it is very relevant for other sectors, too. It refers to an organization’s “ability to act independently in the digital world” (see the European Parliament’s “Digital sovereignty for Europe”). You could also call this “avoiding vendor lock-in 2.0.” There are many discussions around what this means on a political dimension, for example for Europe. Looked at more broadly, in other sectors it means that you should take measures and develop a strategy to keep control over your data and digital products. OSS can really help you in this area. The strategy is also compatible with cloud-native development: you could, for instance, prefer cloud services that are based on an open-source product to foster sovereignty, store your data in a standardized format, or at least ensure that a cloud service provides means to move the data to another service (or vendor) if required.

                Costs

                If you think about OSS, cost is always a big topic. Even though many OSS products are free, you should not overlook that there are many very good open-source products available that are not free, and I strongly recommend having a close look at these products and paying for them if they meet your requirements. But there is a difference in what you pay for. Pricing for closed-source products often depends on usage (e.g., number of CPUs, installed memory, number of installations), and this is very often different for OSS: often enough you do not pay for the product itself but for support, services, or enterprise features. You have more control over what you pay for and what you don’t – for example, licensing the enterprise edition for production and selected testing environments (with support) while using a free version in development and most testing environments. This gives you lower costs and more flexibility. Flexibility is key here, since you might create additional testing environments without having to deal with licensing.

                How about security?

                If you read about open source and security, you will find many people arguing that open source is more secure than closed source. But this is not generally true. One major argument is that open source is more secure because people could look into the source code and find security issues. But this is only true if people really do look into the source code and are capable of evaluating the quality or identifying potential security issues. Often this is not the case. The recent log4shell security disaster is a very good example of this.

                So, what to do? First, don’t feel safe just because you are using open source. You should take the right measures for your context to improve your security, whether for OSS or closed-source products. Since this blog post is about OSS and not about security, I will focus here on one relatively new approach to security that involves OSS.

                In typical software development projects based on open source, dozens of different libraries and other products are included. So, it is a very good idea to introduce a vulnerability scan into your DevOps pipeline to be aware of these potential issues and take the appropriate measures. Closely coupled to this, I’d like to bring your attention to the so-called software bill of materials (SBOM). SBOMs are becoming more and more popular these days, and they are a very good measure to cope with security incidents like log4shell. OWASP provides a standard for SBOMs, CycloneDX, together with some nice tooling. This standard allows you to list all dependencies of your product, whether software libraries or other products; even SaaS dependencies are supported. The standard is independent of any programming language. With it in place, it becomes feasible to create a central inventory of all of the software components you use. If you were involved in the log4shell incident, you might already guess the huge benefit of this: one of the core problems with that incident was that the library is mostly used under the hood in many products, and finding out which products were affected and which were not was one of the hardest parts.

                There are already many tools available around this standard. These tools automatically generate SBOMs as part of DevOps pipelines and support the analysis and management of vulnerabilities. Many of those tools are OSS themselves, for example DefectDojo from OWASP.
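As a concrete illustration of how a central SBOM inventory supports incident response, the sketch below builds a minimal CycloneDX-style document in Python and queries it for a vulnerable component. The component names, versions, and the advisory version set are hypothetical examples for illustration, not real advisory data:

```python
# A minimal, illustrative SBOM in the CycloneDX JSON format.
# bomFormat/specVersion/components are CycloneDX fields; the component
# names, versions, and purls below are examples only.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {"type": "library", "name": "log4j-core", "version": "2.14.1",
         "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
        {"type": "library", "name": "jackson-databind", "version": "2.13.3",
         "purl": "pkg:maven/com.fasterxml.jackson.core/jackson-databind@2.13.3"},
    ],
}

def affected_components(sbom_doc, name, vulnerable_versions):
    """Return the components that match a (hypothetical) security advisory."""
    return [c for c in sbom_doc["components"]
            if c["name"] == name and c["version"] in vulnerable_versions]

# Which of our components ship a Log4j version in the advisory's range?
hits = affected_components(sbom, "log4j-core", {"2.14.0", "2.14.1", "2.15.0"})
for c in hits:
    print(c["purl"])
```

With such an inventory aggregated across products, the question that made log4shell so painful – which products are affected? – becomes a simple query rather than a manual hunt through build files.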

                How to select open source?

                Above, I wrote about the benefits of the ease of access to OSS. This ease of access – just downloading it – can be dangerous when it is not well managed. Without a product selection strategy or management, there is a high risk of your solutions being built on an uncountable number of components with quality issues, driven by single developers in their spare time, and including incompatible licenses and other legal pitfalls. Therefore, I suggest setting up a lightweight selection process. The goal of this process is not to fully evaluate (complex) products but to assure a minimum quality for each and every product you use. At least the following criteria should be considered:

                Documentation: Is there documentation available for the product? Is the documentation detailed enough? Is it up-to-date?

                Availability of support: What support is available? This covers commercial and community support, e.g., discussion groups. How active are these groups and how many companies offer support at what costs?

                Licensing: There are a couple of OSS licenses with different rights and obligations. A special case is so-called copyleft or viral licenses, e.g., the GPL. These licenses force you to put “derived” works under the same license as the original work. Together with other obligations, this might force you to put your product under the same open-source license as the component you use. Key here is the judgement on whether you have produced a derived work or not. In my experience, solutions running on top of Linux (which is GPL), for example, are not derived works. But prepackaging a specially modified Linux together with your solution has a high chance of being counted as a derived work. You also must be aware that, because of this, some OSS licenses are not compatible; products with these licenses must not be combined. Luckily, there are OSS tools available that automatically detect these kinds of licensing issues and give you transparency.

                Future-proofness: What is the expected lifecycle of the component? Is it already end-of-life? There often is no clear answer to that. But there are indicators that allow you to make a qualitative assumption. How many contributions from how many contributors does the project have? How many releases were there in the last couple of months? How active is the community around the products, and how big is this community? How large and stable is the user base?
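To illustrate how a lightweight selection process might automate the licensing criterion above, here is a minimal, hypothetical sketch. The license classifications are deliberately simplified and the package names invented; real projects should rely on dedicated license-scanning tools and legal review:

```python
# Simplified, illustrative license sets keyed by SPDX identifiers.
# This is NOT legal advice and not an exhaustive classification.
COPYLEFT = {"GPL-2.0-only", "GPL-3.0-only", "AGPL-3.0-only"}
PERMISSIVE = {"MIT", "Apache-2.0", "BSD-3-Clause"}

def review_licenses(dependencies):
    """Split {package: license} into copyleft hits and unknown licenses."""
    copyleft = sorted(p for p, lic in dependencies.items() if lic in COPYLEFT)
    unknown = sorted(p for p, lic in dependencies.items()
                     if lic not in COPYLEFT and lic not in PERMISSIVE)
    return copyleft, unknown

# Hypothetical dependency list of a project under review.
deps = {"some-kernel-module": "GPL-2.0-only",
        "http-client-lib": "Apache-2.0",
        "odd-utility": "Custom-1.0"}
copyleft, unknown = review_licenses(deps)
print("needs derived-work assessment:", copyleft)
print("needs manual license review:  ", unknown)
```

A check like this can run in a CI pipeline, so that a newly added dependency with a copyleft or unknown license triggers a review before it reaches production.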

                Take-away

                There are many very innovative and battle-proven OSS products available that will benefit your business. But do not confuse open source with free of charge: there are many very good companies in the OSS space that offer great services which are really worth paying for. While OSS has many inherent advantages, like openness and attractiveness for young talent, there are also a few things to bear in mind. So, create an open-source strategy for your business and select OSS products carefully. This strategy should include at least the following safeguards against possible pitfalls with open source:

                • A suitable selection process for OSS products, which prevents selecting OSS of low quality or limited future-proofness
                • License management, which prevents unwanted obligations arising from OSS licenses
                • Security measures, since open source by itself guarantees neither more nor less security than closed source.

                About the author

                Simon Spielmann

                Solutioning Head Cloud Custom Applications, Capgemini
                Simon has nearly 20 years of experience in the software development industry, mainly focusing on the design of software architecture for large software systems in security-critical environments. He develops innovative solution concepts, advises on the selection of suitable products, and supports architecture management. He also manages Capgemini agile development teams in the role of an architect.

                  Is the metaverse inching closer to reality?

                  Alan Connolly, Global Head – Employee Experience and Digital Workplace, Capgemini
                  2 August 2022

                  Imagine a world where you can have a meeting in a space station and be home in time for dinner. This might sound like fantasy, but in the metaverse you could even meet up afterwards with someone on another continent.

                  By 2026, it’s anticipated that 25% of people will spend at least one hour per day in the metaverse.1 Whether for work, shopping, education, or entertainment, it is set to take our experiences online to the next level.

                  Sceptical? It is right to question technologies that promise so much when many have not lived up to the hype. Since Facebook became Meta, the metaverse has received an avalanche of media attention that can sometimes do more harm than good. But with elaborate virtual worlds long established in video gaming and engaged with by millions of people every day, the metaverse has in many ways already proven successful.

                  What the metaverse represents for the world today is essentially the evolution of the internet to a 3D immersive platform. Like the internet, the metaverse will be universally accessible and – through developments in virtual and augmented reality platforms, gaming, machine learning, blockchain related technologies, 3D content, digital currencies, sensors, and AR/VR-enabled headsets – its possible applications will extend far beyond entertainment.

                  But there is another reason why the metaverse is receiving more attention than ever. The pandemic reshaped the working world and while hybrid working has been successful, it is by no means a complete experience. To go beyond the convenience factor, employers should start thinking about the metaverse’s capability to deliver employee experiences that are more authentic, cohesive, and interactive.

                  A new era of collaboration

                  Some of the most immediate opportunities for the metaverse are in the workplace. They will be realized in industrial and office-based use cases, where collaboration, creativity, and productivity can be improved dramatically.

                  Distance negatively impacts collaboration. Many of us will be overly familiar with the 2D environment of a video call, where some people prefer to stay off-camera and social cues can be limited by technical issues. As soon as a call ends, a conversation with a colleague might take place over Teams or Slack, but remote working has effectively cut out “water cooler” chat. This should not be trivialised as there is still real value in the informal opportunities to chat with colleagues or networks outside of the call agenda.

                  In many ways, the metaverse can address this issue. By replicating an office environment, people can come together in a shared space that can be both informal and formal. Whether to relax in a breakout space or to present at meetings, employees can use their digital avatar to immerse themselves in a new virtual environment with colleagues.

                  The level of customization is unlimited: employers will be able to give their virtual space the shape of a familiar office just as easily as that of a spaceship or a beach. This counters social disconnectedness and even gives people the opportunity to spend time with colleagues outside of their teams.

                  Beyond the office environment, the metaverse will enhance the connection between engineer and designer, or consultant and doctor. Teams that tend to operate from a distance can work from the same space to address issues collaboratively, whether that is refining a new car part or advising on an operation in a clinical setting. With virtual-reality technologies already being used to test learners’ skills in scenarios via interactions with 3D models, use cases are off the ground – but we can expect them to improve as the technology becomes more ubiquitous in years ahead.

                  An all-inclusive experience

                  The metaverse has the potential to deliver a more human experience in many ways. Take the onboarding process as one example – in the hybrid working era, people joining companies wade through unengaging documents and only meet with managers or specific team members to discuss day-to-day activity and expectations. Would it not be far more personal to enable people to explore the company interactively in a 3D hall or gallery? Would the subsequent training not be more interesting and more effective if it was truly “hands-on”? What if a candidate could even try the job before they buy in?

                  There are so many exciting avenues to explore, and if handled correctly, the net impact will improve inclusion, productivity, and collaboration. Gartner research into hybrid working over the pandemic found that it can boost inclusion by 24%, in part due to a widening of the candidate pool by extending the geographical range. But to take the next step, employers should start exploring different technologies to deliver a truly inclusive experience for employees.

                  Securing a virtual world

                  While we could spend days talking about the potential benefits, we can’t and must not forget security. As the metaverse evolves to a more advanced stage, the security challenges multiply. In the real world, cybercrime is becoming more rampant, with a reported 50% increase in overall attacks per week on corporate networks last year compared with 2020. The metaverse is another avenue for cybercriminals to explore, and as more of our working lives take place in virtual worlds, there are serious challenges to consider.

                  It is almost too early to say how metaverses will be secured. Big Tech organizations are responsible for making their environments safe to use, but users are often the weak point in cyber defences and so must also be vigilant by learning what to look out for and how to protect themselves from a potential attack or avoid deepfakes. In the near future, we can also expect wider issues like the cost of access through immersive headsets and its monetary value to be ironed out, but security has to become a primary consideration.

                  Employers thinking about the metaverse should be excited. It can transform operations entirely or simply augment experiences on a case-by-case basis. We are still adjusting to the new working world, but we must now be thinking: how can we preserve real human values and connections at the heart of our online experience? By addressing the challenges ahead, there are so many reasons why the metaverse can be the answer.

                  Contact us for additional information about our employee experience services. We’d be happy to help you explore new options for keeping your employees satisfied and productive—and keeping your business stronger than ever.

                  “Gartner Predicts 25% of People Will Spend At Least One Hour Per Day in the Metaverse by 2026”

                  Alan Connolly

                  Global Head of Portfolio – ESM, SIAM, and ServiceNow
                  Alan is a visionary leader with a deep passion for collaborating with customers, partners, and industry experts to address complex challenges within the workplace and enterprise service management portfolio. With over 20 years of experience, he combines creativity and analytical prowess to craft comprehensive strategies that align with organizational goals and enhance productivity.

                    Respectful personalization turns engagement into delight

                    Padmashree Shagrithaya
                    27 July 2022

                    AI enables enterprises to interact with customers one-on-one, at scale

                    Imagine discovering a boutique clothing store that completely understands you. The staff knows what you have in your closet, what you are missing, and what you’re willing to spend. They understand your style – the colors, patterns, and cuts you like but also what looks good on you and what you’re comfortable wearing. They go beyond selling you a shirt or a jacket to curating your wardrobe, making recommendations from socks to suits that they know complement the clothing you already have and that you will find delightful. And they understand how to communicate with you about those recommendations on your preferred channel – in a way that ensures you are actually excited to receive that email, text message, phone call, or invitation to a trunk sale.

                    Now imagine there was a way to deliver this level of service and sales recommendations at scale – enabling a multinational brand to provide that boutique level of personalized experience to its customers.

                    Doing so leverages data, AI, ML, and other advanced data-analytics technologies to engage with each customer 1:1 – with the right content, at the right time, on the right channel, and at the right frequency – in order to build satisfaction and loyalty. It does this while complying with all data-privacy laws and other regulatory requirements, and in a manner that doesn’t feel invasive, unsettling, or untrustworthy to the customer.

                    That’s what respectful personalization is all about.
                    In helping Capgemini’s clients leverage data and AI (advanced analytics) to improve how they manage and engage with their customers, I’ve identified some common attributes that characterize the most successful deployments.

                    While AI offers immense opportunities, it is important to ensure that our focus is not just on the AI algorithm but on the entire implementation ecosystem: the architecture, technology interfaces, change management, and the like. Policy implementations for key areas like privacy and security must be well established at all levels – technology, data, and algorithms.

                    Data sourcing strategy is key to “respectful personalization”. The following elements need to be carefully considered when developing such a strategy: Which ecosystems are tapped into? How was the data sourced? Does it have the customers’ explicit approval for the purpose for which it is being used? What is the law of the land? What are the desired outcomes?
                    Methodologies adopted for algorithm building may need special attention. Many successful deployments occur when people define goals – such as increasing loyalty in an environmentally conscious segment, boosting viewership, or minimizing inventory – and, once this is defined, allow AI to optimize accordingly. This comes more naturally to organizations that have fostered a culture of experimentation – one in which the enterprise tests engagements with customers, collects explicit and implicit feedback, learns from the experience, and modifies its strategies accordingly.

                    Equally, once AI makes recommendations, it’s important that teams share them across the organization. For example, if a company’s marketing team learns its customers are more concerned about sustainability, there are implications for the product design team – but also for the supply chain and sourcing teams. Insights must be embraced, enterprise-wide – across the value chain, to ensure they’re acted on effectively.

                    Successful implementations also recognize that context is key. Customers demand different things at different times of the day or at different stages of their lives. Their preferences may even change depending on the device they’re using. For example:

                    • A person visiting a website via a laptop may be open to exploration.
                    • If that same person connects via an app on a phone, they’re likely more interested in quickly completing a transaction.
                    • If they’re using a company computer, they may wish to receive communications about work-related products and services but not personal products and services.

                    Respectful personalization at scale has become a crucial component of any customer-engagement strategy – so much so that my team and I ensure it’s front and center when we work with our clients to deploy the Capgemini Data-Driven Customer Experience solution. If you have questions or comments about this, I would be delighted to hear from you.
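The device examples above can be sketched as a simple rules table that maps a customer's current context to an engagement style. The context names and engagement modes below are purely illustrative, not part of any real Capgemini solution:

```python
# Hypothetical context-to-strategy rules; names are illustrative only.
RULES = {
    "laptop": "exploration",                  # open to browsing and discovery
    "phone-app": "quick-transaction",         # optimize for fast completion
    "company-computer": "work-related-only",  # respect work/personal boundary
}

def engagement_mode(context):
    """Return the personalization strategy for a context, with a safe default."""
    # Fall back to a neutral, non-intrusive mode for unknown contexts,
    # which is the "respectful" default when the system lacks signal.
    return RULES.get(context, "neutral")

print(engagement_mode("phone-app"))
print(engagement_mode("smart-tv"))
```

In a real deployment, the rules would of course be learned and continuously refined from explicit and implicit feedback rather than hand-coded, but the principle of an explicit, auditable context-to-strategy mapping remains the same.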


                    Author

                    Padmashree Shagrithaya

                    EVP & Managing Director, Insights & Data, India
                    “Managing multiple machine learning models, built by varied teams is a huge challenge. MLOps is a powerful approach to bring all the pieces together and reap larger organization-wide value from AI at scale projects.”