
How are leading telecoms providers bringing over-the-top customer experience and service to the next generation of consumers?

Amar Misra
Aug 25, 2023

In today’s rapidly changing markets, Communication Service Providers (CSPs) face unprecedented challenges. Some of the most pressing include evolving customer preferences and increased demand for service personalization, operational efficiency issues, complex and multivariate charging and cost models, and stiff competition from both peers and hyperscalers, all heavily influenced by digital disruption.

Customized services that add value with every customer interaction

Additionally, shifting demands for seamless Voice to Data, Streaming, and Direct-To-Home (DTH) Television services, along with Smart Connected Home Appliance Monitoring (e.g., Thermostats, Surveillance Cameras, Security Systems, etc.), are forcing CSPs to adapt and adopt new intelligent operational support systems – so they can proactively cater to their customers’ ever-changing needs.

To satisfy and exceed these needs – while also effectively conquering all the challenges mentioned above – CSPs must be able to modify their business models and harness the power of digital accelerators and IT-OT synergy. This will enable them to better align themselves with evolving business and market dynamics as true Digital Service Providers (DSPs) that can simultaneously offer customized services that add value with every customer interaction.

Making the leap from CSP to DSP – and landing on over-the-top customer experience and service

Effective IT-OT integration is critical for CSPs to make a complete transformation into market-leading DSPs. DSPs can offer a full spectrum of services, which could include Cloud & Data, media, and enhanced customer experience services, along with Over the Top (OTT) offerings.

IT-OT integration also enables CSPs transitioning into DSPs to enhance their network services through a range of technological innovations like Network Functions Virtualization (NFV), Service Based Architectures (SBAs), and Software Defined Networks (SDNs). These innovations can ensure business-critical service features such as:

· Quality of Service (QoS) and Service Level Agreement (SLA) monitoring with contextual Business and IT KPIs to address network capacity, latency, and reliability challenges

· AI/ML-driven predictive assurance for enhanced customer experience and improved network operations through network slicing

· Closed-loop automation to reduce Mean Time to Repair (MTTR) and truck rolls.
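To make the closed-loop idea concrete, here is a minimal sketch of a single control-loop iteration: poll QoS KPIs against SLA thresholds and trigger an automated remediation before a truck roll is needed. The metric names, threshold values, and remediation hook are illustrative assumptions, not a real OSS API.

```python
# Hypothetical closed-loop automation sketch. Metric names, thresholds,
# and the remediation hook are assumptions for illustration only.

SLA_THRESHOLDS = {
    "latency_ms": 50.0,      # max acceptable round-trip latency
    "packet_loss_pct": 1.0,  # max acceptable packet loss
}

def check_sla(kpis: dict) -> list:
    """Return the list of KPIs currently breaching their SLA threshold."""
    return [name for name, limit in SLA_THRESHOLDS.items()
            if kpis.get(name, 0.0) > limit]

def closed_loop_step(kpis: dict, remediate) -> bool:
    """One control-loop iteration: detect breaches, invoke remediation."""
    breaches = check_sla(kpis)
    for kpi in breaches:
        remediate(kpi)  # e.g., reroute traffic or restart a virtual network function
    return bool(breaches)

# Example: a latency breach triggers the (stubbed) remediation action.
actions = []
closed_loop_step({"latency_ms": 72.0, "packet_loss_pct": 0.2}, actions.append)
```

In a production OSS, the remediation callback would of course drive real orchestration (NFV lifecycle actions, SDN reconfiguration) rather than append to a list; the loop structure is the point.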

In unlocking the potential of IT-OT integration while transforming from a CSP to a DSP, CSPs will gain multiple benefits in the form of enhanced operational efficiency, reduced costs, and improved predictive maintenance through autodetection and autocorrection of system anomalies. This means that OSS and BSS systems will be able to self-heal and auto-repair. Meanwhile, the ability to address complex business scenarios brings enhanced customer experience management, reduced systems/network downtime, decreased customer churn, and increased Average Revenue Per User (ARPU).

Becoming a true DSP with Capgemini’s Business Insightful Services (BIS)

Capgemini’s BIS, a transformational lever of our ADMnext strategic offering, can completely align your IT with your Business – and help you make the full transition to an agile DSP. With Capgemini’s BIS, we enable you to deliver enhanced customer experience and boost your CSAT (Customer Satisfaction Score) and NPS (Net Promoter Score) with personalized and compelling customer service, and improved service order provisioning and fulfillment.

Essentially, BIS brings a full-service DSP model through seamless IT-OT integration, which delivers comprehensive visibility across all customer touchpoints, automated problem resolution, and improved service quality management.

BIS’ automation enablers are grounded in Capgemini’s Telecoms sector expertise and have been specifically developed to help DSP-aspiring companies identify any degradation of network-critical quality parameters in real time, in accordance with specific threshold limits. These automation enablers can help you achieve:

· Early detection and prevention of network performance and service quality errors (traffic, bandwidth, packet loss, jitter, and latency)

· Seamless network change adoption that leverages AI/ML-driven predictive assurance

· Reduced maintenance efforts for improved operational rigor
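As a rough illustration of real-time degradation detection against threshold limits, the sketch below flags a quality parameter when its moving average crosses a configured limit. The parameter names, window size, and thresholds are assumptions for illustration, not part of BIS itself.

```python
# Illustrative sketch: flag a network quality parameter when its
# windowed mean breaches a configured threshold. All names and
# limits are hypothetical, not a real BIS interface.
from collections import deque

class DegradationDetector:
    def __init__(self, threshold: float, window: int = 5):
        self.threshold = threshold
        self.samples = deque(maxlen=window)

    def observe(self, value: float) -> bool:
        """Record a sample; return True if the windowed mean breaches the threshold."""
        self.samples.append(value)
        mean = sum(self.samples) / len(self.samples)
        return mean > self.threshold

# Example: jitter samples (ms) drifting upward past an assumed 30 ms limit.
jitter = DegradationDetector(threshold=30.0, window=3)
alerts = [jitter.observe(v) for v in (10.0, 20.0, 40.0, 50.0, 60.0)]
```

Averaging over a window rather than alerting on single samples is one simple way to avoid false alarms on transient spikes; real assurance systems typically layer AI/ML-driven prediction on top of such baseline checks.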

And – most importantly – BIS enhances your ability to achieve your ultimate objectives around fully autonomous digital service delivery with optimal service quality, complemented by speed, agility, scale, and efficiency.

To learn more about Capgemini’s Business Insightful Services and what we can do for your business, click on our expert profiles below and drop us a line.

Meet the authors

Biplab Biswas

Expert in Application Management, Business Process Automation, Business Process Management
“I am responsible for leading domain-centric solutions in the E&U sector, leveraging Capgemini’s next-generation tools and enablers such as Business Process Focus (BPF), Digital Readiness Assessment (Ready4D), and sector-flavored automation BOTs.”

Amar Misra

Director Transversal, GenAI at Capgemini
As the Director Transversal, GenAI at Capgemini, I lead the delivery and pre-sales of innovative and transformative solutions using Artificial Intelligence, Machine Learning, Natural Language Processing, and Process Mining technologies.

Yogeshwar Bhave

Director at Transversal, Capgemini
I am the Director at Transversal, Capgemini. I help clients with business insightful services, next generation ADM, strategic planning, innovation led transformation, etc.

    Generative AI: A powerful tool, with security risks

    Matthew O’Connor
    23rd August 2023

    Generative AI is a powerful technology that can be used to create new content, improve customer service, automate tasks, and generate new ideas. However, generative AI also poses some security risks, spanning data security, model security, bias and fairness, explainability, monitoring and auditing, and privacy. Organizations can mitigate these risks by following best practices to ensure that generative AI is used in a safe and responsible manner.

    Generative AI is a rapidly emerging technology that has the potential to revolutionize many aspects of our lives. Generative AI can create new data, such as text, images, or audio, from scratch. This is in contrast to discriminative AI, which can only identify patterns in existing data.

    Generative AI is made possible by deep learning, a type of machine learning that allows computers to learn from large amounts of data. Deep learning has been used to train generative AI systems to create realistic-looking images, generate human-quality text, and even compose music.

    There are many potential benefits to using generative AI.

    • Create new content: Generative AI can create new content, such as articles, blog posts, or even books. This can be a valuable tool for businesses that need to produce a lot of content regularly. The technology can also support the reduction in time it takes to generate work, enabling a steady stream of fresh content for marketing purposes.
    • Improve customer service: Generative AI can improve customer service by providing personalized assistance. Generative AI can create chatbots that can answer customer questions or resolve issues. These types of uses can support both an enterprise’s employees and customers.
    • Automate tasks: The technology can be used to automate tasks that are currently done by humans. This can free up human workers to focus on more creative or strategic work. The technology has the potential to eliminate a lot of toil in many standard business practices, such as data entry and workflow.
    • Generate new ideas: Generative AI can be used to generate new ideas for products, services, or marketing campaigns. This can help businesses stay ahead of the competition.

    “Generative AI is a powerful technology that can be used for good or evil. It is important to be aware of the potential risks and to take steps to mitigate them.”

    Generative AI provides a lot of potential to change the way businesses operate. Organizations are just beginning to leverage this power to improve their businesses. This is a very new area, and the market potential is just starting to reveal itself. Most of the current market is focused on startups introducing novel applications of generative AI technology.

    Enterprises are thus starting to dip their toes into this space, but the growing use of generative AI also presents security risks. Some of these risks are new for AI, some risks are common to IT security. Here are some considerations for securing AI systems.

    • Data security: AI systems rely on large amounts of data to learn and make decisions. The privacy and security of this data is essential. Protect against unauthorized access to the data and ensure it is not used for malicious purposes.
    • Model security: AI models are vulnerable to attacks. One example is adversarial attacks. An attacker manipulates the inputs to the model to produce incorrect outputs. This can lead to incorrect decisions, which can have significant consequences. It is important to design and develop secure models that can resist this.
    • Bias and fairness: If the training data in the models contains biased information, the resulting AI systems may have bias in their decision-making. This can produce discriminatory decisions, which can have serious legal and ethical implications. It is important to consider fairness to ensure that AI and ML system designs reduce bias.
    • Explainability: AI systems are sometimes opaque in their decision-making processes. This makes it difficult to understand how and why decisions are being made. Lack of transparency leads to mistrust and challenges the credibility of the technology. It is important to develop explainable AI systems that provide clear and transparent explanations for their decision-making processes.
    • Monitoring and auditing: Track and audit AI performance to detect and prevent malicious activities. Include logging and auditing of data inputs and outputs of the systems. Watch the behavior of the algorithms themselves.
    • Privacy: The use of private data in model building and/or usage should be avoided as much as possible, to prevent unintended consequences. Google’s Secure AI Framework provides a guide to securing AI for the enterprise.
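    The monitoring-and-auditing point above can be sketched very simply: wrap the model's prediction function so that every input and output is recorded with a timestamp for later audit. The model here is a stand-in, and a real system would log to durable, access-controlled storage rather than an in-memory list.

```python
# Hypothetical audit-logging sketch for an AI system: record every
# model input and output. The "model" is a toy stand-in.
import time

audit_log = []

def audited(predict):
    """Decorator that logs each call's input and output for auditing."""
    def wrapper(prompt: str):
        output = predict(prompt)
        audit_log.append({"ts": time.time(), "input": prompt, "output": output})
        return output
    return wrapper

@audited
def toy_model(prompt: str) -> str:
    # Stand-in for a real generative model call.
    return prompt.upper()

result = toy_model("hello")
```

    Logging both sides of every call gives auditors the raw material to detect drift, misuse, or adversarial probing after the fact, which is the essence of the monitoring-and-auditing consideration.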

    Securing AI systems is critical to effective deployment in various applications. By considering these issues, organizations can develop secure and trustworthy AI and ML systems that deliver the desired outcomes and avoid unintended consequences.

    In addition to security risks, there are also ethical concerns related to the use of generative AI. For example, some people worry that generative AI could be used to create fake news or propaganda, or to generate deep fakes that could damage someone’s reputation. It is important to be aware of these ethical concerns and to take steps to mitigate them when using generative AI. Organizations will want to enact policies on acceptable use of generative AI which appropriately support their business objectives.

    Overall, generative AI is a powerful technology with the potential to revolutionize many aspects of our lives. However, it is important to be aware of the security risks and ethical concerns associated with this technology and to use this technology responsibly. By taking steps to mitigate these risks, we can help to ensure that generative AI is used in a safe and responsible manner and supports your future business goals.

    INNOVATION TAKEAWAYS

    GENERATIVE AI IS INNOVATIVE

    It is a powerful technology that can be used to create new content, improve customer service, automate tasks, and generate new ideas.

    THERE ARE RISKS WITH THE USE OF GENERATIVE AI

    Generative AI also poses some security risks, such as data security, model security, bias and fairness, explainability, monitoring and auditing, and privacy.

    COMMON SENSE CAN HELP COMPANIES LEVERAGE GENERATIVE AI

    Organizations can mitigate these risks by following best practices, such as protecting data privacy and security, developing secure models, reducing bias in decision-making, making AI systems more explainable, monitoring, and auditing AI systems, and considering privacy implications.

    Interesting read?

    Capgemini’s Innovation publication, Data-powered Innovation Review | Wave 6, features 19 such fascinating articles, crafted by leading experts from Capgemini and key technology partners like Google, Starburst, Microsoft, Snowflake, and Databricks. Learn about generative AI, collaborative data ecosystems, and an exploration of how data and AI can enable the biodiversity of urban forests. Find all previous waves here.

    Matthew O'Connor

    Technical Director, Office of the CTO, Google Cloud
    Matthew specializes in Security, Compliance, Privacy, Policy, Regulatory Issues, and large-scale software services. He is also involved in emerging technologies in Web3 and Artificial Intelligence. Before Google, Matthew held product management and engineering roles building scaled services at Postini, Tellme Networks, AOL, Netscape, Inflow, and Hewlett-Packard. His career started as a US Air Force officer on the MILSTAR joint service satellite program. He has an executive MBA from the University of California and earned a bachelor’s degree in computer science engineering from Santa Clara University.

      Rethinking cost optimization: From cutting corners to fueling growth

      Mark Standeaven
      22 August 2023

      Ever wondered if the tried and tested way organizations navigate the volatile economic landscape – slashing budgets and cutting costs – is the most effective?

      The sentiment is understandable. We all grapple with surging costs – rising mortgages, soaring living expenses, and salaries that lag behind inflation. Add to that the relentless uncertainties of global macroeconomics – wars, cyber threats, and unstable supply chains. It’s no wonder cost-saving measures have become the dominant theme during my interactions with insurers.

      When organizations sense an economic pinch, they instinctively resort to familiar cost-cutting measures: downsizing the workforce, squeezing assets, and deferring new investments. While these provide momentary relief, they are not sustainable in the long run. A market upturn could be just around the corner in an era where financial cycles quickly adapt to our ever-changing circumstances. The adage “you can’t shrink to greatness” rings particularly true here. The pressing need to grow and expand soon eclipses any immediate cost relief. The task of rebuilding what you’ve lost can be daunting.

      It’s time we considered an alternative approach: a more comprehensive cost optimization strategy. This strategy provides quick fiscal relief while preserving and enhancing future growth opportunities.

      The proposed framework pivots around three fundamental dimensions:

      1. Operating Model: Focusing on optimal task performance and assigning the right people to ensure efficiency and effectiveness. Consider establishing a cost-efficient global IT supply chain centered on a strategic captive center.
      2. Modernization: Identifying the drag that certain assets, applications, and infrastructure impose on an organization’s ability to deliver value and exploring ways to modernize them. If your technical debt is so high that it’s inflating operational costs and impacting investment in new features, a modernization strategy based on domain-driven design, advanced AI automation, and cloud migration could be the solution.
      3. Waste Elimination: Pinpointing duplication and low-value tasks in the IT portfolio and rationalizing, automating, or eliminating them. Are there duplicate policy administration systems in your landscape that could be removed through rationalization?

      Blending these various strategies into a comprehensive initiative program can create a cycle of continual savings. An example is a well-executed FinOps initiative, which can generate initial savings that subsequently fund the commencement of a decommissioning factory, leading to significant savings by removing legacy and debt-ridden applications.

      So, let’s shift the conversation from cost-cutting to cost optimization, a way to pave the path for future growth. Are you ready to rethink how you approach your costs? I’d love to hear your thoughts and start a dialogue that could redefine your financial strategy.

      Meet the author


      Mark Standeaven

      Executive Vice President – Global Cost Optimization and Transformation Leader
      Mark is an Executive Vice President in our Insurance BU with over 36 years of experience in the IT industry, including 18 years in the Finance sector. During his career, he has held roles spanning the full range of IT disciplines, from engineering and architecture to, more recently, CxO advisory. For the past 7 years, he has advised board-level client teams and led transformation initiatives to optimize client operating models, drive sustainable cost reduction and increase organizational agility.

        The major trends in the semiconductor industry right now

        Brett Bonthron
        Aug 18, 2023

        Takeaways from the GSA European Executive Forum and SEMICON West 2023

        Introduction

        In the past months, we witnessed two major semiconductor events across the globe: The 2023 Global Semiconductor Alliance’s (GSA) European Executive Forum gathered leading global senior executives on June 14-15 in Munich to embrace the most pressing issues affecting an industry caught in the throes of change. SEMICON West 2023 took place in San Francisco on July 11-13 to discuss key challenges affecting the global microelectronics industry. In this article, we’ve distilled the major trends that arose during both events; trends that will continue to shape this industry in the foreseeable future. These include supply chain volatility, sustainability, government investments, generative AI, geopolitical tensions, equality, and the tremendous opportunities in automotive. We’ve also mapped out Capgemini’s role as an intermediary in building trust and understanding and helping to welcome new players to the market.

        Resilient supply chains require flexible production and shipments

        Semiconductors are pervasive and will only become more so. Semiconductors are the brain of digitization. It is not widely known that semiconductors are among the most traded goods in the world. Any disruption in the semiconductor supply chain can significantly impact the global economy.

        The first big trend centers around building resilience to the volatility of the semiconductor supply chain and ensuring end-to-end transparency to better predict forecasts and manage demand. Supply chain issues caused by the fragility of the supply chain and the incompatibility of production cycles have cost semiconductor customers, such as automotive companies, many billions of dollars in lost sales and profits. Automotive customers controversially asked that they be given control over the flow of chips from one Tier 1 to another. For semiconductor companies, it is imperative to build the resilience and process maturity that will enable them to switch easily between industries. GSA triggered a dialogue on how to be better prepared for whatever the future may hold by increasing inter-industry cooperation and building strategic relationships.

        Harnessing the transformational power of sustainability

        Another major trend broached at both events was the clear focus on sustainability and producing the products that drive it. As the earth’s ability to provide what we need decreases, the need to act on sustainability is increasing. Semiconductor companies have set out strategies and goals to become sustainable, and are launching initiatives focused on producing sustainable products that enable low power consumption or reduce the carbon footprint of their customers.

        Sanjiv Agarwal, Vice President Global Semiconductor Industry Leader, says, “Semiconductor companies need to embrace sustainability and aim to make technology sustainable. Sustainability is everyone’s responsibility.”

        With the semiconductor industry projected to double by 2030 and carbon emissions projected to quadruple over the same period, sustainability and government investments also dominated the agenda at SEMICON West. Five key messages emerged:

        • AI is extremely compute-intensive – for example, a ChatGPT search consumes thirteen times as much energy as a Google search
        • Companies should design for sustainability – there is a need for dedicated engineering teams to support sustainability goals (equipment, sub-fab, process recipes, and operations). Companies such as Intel and Applied Materials have engineers dedicated to sustainability as part of the engineering PODS
        • Every semiconductor company has thousands of suppliers – Intel, for instance, has 16K suppliers; however, most suppliers have yet to set their sustainability goals. Therefore, there is an urgent need to establish metrics and develop a measurable roadmap to achieve net zero. Sustainable procurement is gaining traction in the market.
        • Digital technologies can help reduce the carbon footprint and make fabs more sustainable – this can be achieved by optimizing efficiency through advanced analytics (ML, analytics, and AI); improving digital lifecycle collaboration within fabs (digital twin platform across the lifecycle of a fab can reduce production loss and energy waste); and ensuring enterprise-level collaboration across fabs.
        • Companies have made more progress on their US sites than in other regions – for example, Intel and STMicro are net positive water in the US but not in other regions.

        Generative AI – Need for extreme compute power and smaller suppliers

        GenAI is probably our generation’s most disruptive innovation, and it can potentially shape humanity’s future. From simple automation of tasks to writing codes to drug discovery, the scope of areas where it can find use is practically limitless, and the semicon industry is right at the forefront to enable this transformation journey. With such technologies that have the potential to impact so many different industries in a myriad of ways, there are always the early adopters, the ones who need a plan, the late risers, the ones with the FOMO, and the ones who choose to be in their state of inertia unless the market forces apply.

        Surprisingly, with Gen AI, no one wants to maintain the status quo. There is a clear indication that almost every industry is looking for ways to adopt Gen AI in its day-to-day operations, be it in Manufacturing, Sales, Marketing, IT, or customer service – and the High-tech segment is leading the pack in terms of adoption. As Gen AI-based applications and use cases for design and manufacturing support start to proliferate, they will transform how the current automation in factories functions. This will create a major shift in how the industry adapts and molds itself to this new reality. According to Vignesh Natarajan, Hi-Tech Segment Leader of Europe, Capgemini, “As generative AI becomes mainstream, the transformation of the data center space will be driven by semiconductor players, who will be the crucial building blocks in the power chain competence.”

        For Capgemini, the biggest trends are generative AI-based use cases, AI-based development use cases, AI-based joint design use cases, and foundry solutions. This will be the big wave as demand for consumer electronics continues to grow, albeit slower than during the Covid era. However, demand for electrification, sustainable solutions, and smart cities will soar. Government funding of large-scale projects will provide a floor for demand to produce the next “boom” cycle for semis.

        “Our ambition is to support the semiconductor ecosystem companies in scaling up to meet their market opportunity with solutions in Intelligent Industry and Enterprise Management,” says Shiv Tasker, Global Industry Vice President, Semiconductor and Electronics.

        Digital Twin offers fast scalability

        Digital Twin showcases huge potential in the semiconductor industry through its ability to simulate the entire fab, manufacturing processes, and various use cases and models to improve efficiency and productivity. Companies are looking to transform various aspects of the manufacturing processes. Some of the examples where semiconductor companies are focusing are:

        1. Device-scale twin – detailed visualization of a device to reduce cycles of silicon learning, thus reducing waste and resources,
        2. Process-scale twin – using simulation to streamline process development thus reducing chemicals and electricity usage,
        3. Equipment-scale twin – improving first time right from design through installation by finding issues before physical build or building equipment expertise faster and more effectively.

        Digital twin, or the digital omniverse, coupled with Generative AI, provides an incredible opportunity by providing millions of variations to the model, and through reinforcement learning, can change models for best-performing output or model. When implemented well, it can escalate product output at a speed that tests the laws of physics.
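        The "millions of variations" idea can be sketched in miniature: run a toy process twin across a grid of parameter variations and keep the best-performing setting. The yield model below is a made-up stand-in with an assumed optimum, not a real fab process model.

```python
# Illustrative digital-twin sketch: sweep parameter variations through a
# toy process model and select the best-performing setting. The yield
# function and its optimum (350 °C, 2.0 bar) are invented for illustration.

def process_twin(temperature: float, pressure: float) -> float:
    """Toy yield model peaking at an assumed optimum of 350 °C and 2.0 bar."""
    return 100.0 - (temperature - 350.0) ** 2 * 0.01 - (pressure - 2.0) ** 2 * 10.0

def best_variation(temps, pressures):
    """Evaluate every parameter combination in the twin; return the best."""
    candidates = ((process_twin(t, p), t, p) for t in temps for p in pressures)
    return max(candidates)

score, t, p = best_variation(temps=range(300, 401, 10), pressures=(1.5, 2.0, 2.5))
```

        A real equipment- or process-scale twin would replace the toy yield function with a physics-based or data-driven simulation, and reinforcement learning would replace the exhaustive sweep once the variation space grows too large to enumerate.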

        OEMs’ growing needs, especially in the automotive

        Automotive is a huge driver for many of the changes facing the semiconductor industry. In fact, there was palpable tension at the GSA event between the semiconductor representatives and auto manufacturers. The auto market is hard to resist for any semi-manufacturer due to its size, but the auto manufacturers will never forget the chip shortages of the Covid era and the tremendous damage that did to their business. The evolving supply chain relationships and the trust challenges were the subject of many formal and side-bar discussions.

        Sanjiv Agarwal adds: “At Capgemini, we work both sides of the equation, helping chip manufacturers ‘get to market’ and fit into the automotive ecosystem, working with the automotive manufacturers to create their chip strategy, selecting and working with foundries to manufacture those chips, and integrating chips into their designs.” We bring in the promise to create an affordable, ever-smarter, software-driven mobility ecosystem that’s centered around customer needs and protects them from both physical and digital threats.

        Geo-political tensions

        Geopolitical tensions are a shared concern rather than a trend, but they will have a large impact on the way semiconductor companies work since 60-70 percent of all chips are manufactured in Taiwan or South Korea, which are both relatively volatile. Divergent national approaches exacerbate these concerns. The US, for example, has shifted from outsourcing production to encouraging chip producers to transfer operations stateside. In general, the U.S. CHIPS Act and the European Chips Act will “onshore” more production and drive diversification of production geography.

        Brett Bonthron, Executive Vice President and Global High-tech Industry Leader, says, “Through the two Chips Acts, semiconductor companies see that governments understand the criticality of the industry.”

        The US Chips Act is a true public-private partnership model and probably the first proactive federal program to be executed jointly with the states, which will manage permits, labor, land, and other logistics.

        Statements of interest are currently being accepted for all direct funding opportunities (USD2B floor, no ceiling), and over 400 have already been received. The US Chips Act envisions success in four areas:

        • Leading-edge logic – at least two new large-scale clusters of leading-edge logic fabs wherein US-based engineers will develop the process technologies underlying the next-generation logic chips.
        • Memory – US-based fabs will produce high-volume memory chips on economically competitive terms and R&D for next-gen memory technologies critical to supercomputing and other advanced computing applications will be conducted in the US.
        • Advanced packaging – the US will be home to multiple advanced packaging facilities and a global leader in commercial-scale advanced packaging technology.
        • Current generation and mature – the US will have strategically increased its production capacity for current-gen and mature chips. Chipmakers will also be able to respond more nimbly to supply and demand shocks.

        Similarly, the European Chips Act enables the EU to address semiconductor shortages and strengthen Europe’s technological leadership. It will mobilize more than €43 billion of public and private investment through the Member States across five key areas:

        1. Strengthen Europe’s research and technology leadership towards smaller and faster chips,
        2. Put in place a framework to increase production capacity to 20% of the global market by 2030,
        3. Build and reinforce capacity to innovate in the design, manufacturing, and packaging of advanced chips,
        4. Develop an in-depth understanding of the global semiconductor supply chains,
        5. Address the skills shortage, attract new talent, and support the emergence of a skilled workforce.

        Diversity and workforce development

        Diversity, workforce development, and talent were major topics at both events, with the consensus being that inclusion must start at a much earlier age and that more women and minorities must be allowed to enter leadership positions. Considering the existing workforce, many companies are partnering with universities, granting scholarships, and launching apprenticeship programs so that when these fabs are ready, and the existing workforce is close to retirement, the new, more diverse talent will be ready.

        Conclusion

        The semiconductor industry is in a state of flux. This year’s European Executive Forum by GSA outlined the five major trends – supply chain resiliency, generative AI, geopolitical tensions, the impact of the automotive industry, and sustainability – to emerge from this transition. There are, of course, numerous other factors at play, including issues around inclusion or reducing barriers to entry within the industry. There are also several topics that remained unsaid, for example, shifting relationships between automotive OEMs and tier-one suppliers, or the evolution of the semiconductor company vis-à-vis the value chain. However, at the end of the day, semiconductors are fundamentally about propelling civilization forward and enabling the creation of better societies. As something that is also written into our raison d’être, Capgemini has substantial know-how and near-tech vision to drive this ultimate goal forward.

        Meet our experts

        Brett Bonthron

        Executive Vice President and Global High-tech Industry Leader
        Brett has over 35 years of experience in high-tech, across technical systems design, management consulting, start-ups, and leadership roles in software. He has managed many waves of technology disruption from client-server computing to re-engineering, and web 1.0 and 2.0 through to SaaS and the cloud. He is currently focusing on defining sectors such as software, computer hardware, hyper-scalers/platforms, and semiconductors. He has been an Adjunct Faculty member at the University of San Francisco for 18 years teaching Entrepreneurship at Master’s level and is an avid basketball coach.

        Vignesh Natarajan

        High-tech Segment Leader, North & Central Europe, Capgemini
        Vignesh has spent nearly two decades in the Consulting, Engineering, and IT services space with a specialized focus on Manufacturing organizations. He is passionate about Technology and digitalization, and how they can transform the human experience and enrich lives. In his current role, he helps our strategic customers realize their digitalization roadmap fueled by Innovation and state-of-the-art technologies with a strong focus on decarbonization. He strongly believes that unleashing human potential through technology is the only way to a sustainable future for humanity and that Semiconductor organizations will lead from the front in this transformational journey.

        Sanjiv Agarwal

        Global Semiconductor Lead, Capgemini
        With about 30 years of experience in the TMT sector, Sanjiv specializes in enabling digital transformation journeys for customers using best-of-breed technology solutions and services. In his current role as global semiconductor industry leader, he works closely with customers on their journeys toward sustainable technology, the use of AI/ML, digital transformation, and resilient global supply chains.

        Shiv Tasker

        Global Industry Vice President (ER&D), Technology, Media and Telecom at Capgemini
        With more than three decades of executive management, sales, and marketing experience in the hi-tech sector, Shiv possesses a proven track record of helping SaaS organizations scale by building high-performing sales teams. During the course of his career, he spearheaded the growth of a startup, elevating it to over $100 million in annual recurring revenue (ARR) within a four-year period.

          Welcome to where intelligence transforms everything at Google Cloud Next

          Genevieve Chamard
          18 Aug 2023

          Having a finger on the pulse of technology is critical. But with so much happening so quickly, keeping up can be a struggle. So, I’m truly excited for Google Cloud Next 2023.

          Google Cloud Next ’23 is the flagship annual conference where I’ll be joining my team at Capgemini and some of the brightest minds in the field. We’ll be both experiencing and sharing the latest innovations, technology, and trends from industry experts and global business leaders. 

          If you’re planning to attend, I invite you to join me there for exclusive insights and transformative opportunities tailored just for you. This includes immersive industry demos, live podcast episodes and speaking sessions with our clients – exploring how Capgemini and Google Cloud work together to transform businesses like yours every day.

          Here’s a look into two of the spotlight sessions:

          Elevate your possible with responsible Generative AI 

          More than just a buzzword, Generative AI (GenAI) is making a tangible impact on businesses across all industries. I’m delighted to share with you a glimpse of Capgemini’s journey with GenAI at Google Cloud Next.

          Get the inside story of how Capgemini specialists trained their teams on 250 use cases of generative AI and built 52 demos for clients – all in the span of two months. You’ll explore practical applications and best practices for tangible outcomes, giving you a glimpse into the adoption journey.

          A leading US bank builds a next-gen enterprise with Google Cloud 

          Explore the journey taken by a leading US bank to become a data-master enterprise. The company adopted a data-driven strategy and accelerated product and service development while capitalizing on market trends, better serving its customers, and getting an edge over the competition. 

          I’m excited to share with you three immersive experiences available at our booth, showcasing the advantage of leveraging Google Cloud’s innovations in three domains.

          Explore the combined power of human intelligence, cloud, and Generative AI 

          • Financial services: Witness how the home-insurance sector is adapting to Google Cloud technology, creating personalized payment models based on user behavior, regardless of their risk profile 
          • Retail: Experience how customers can receive tailored recommendations and interact with audio conversations to self-checkout smartly and conveniently – powered by generative and conversational AI 
          • Automotive: Drive your imagination, literally. With AD SHORTY you’ll explore how autonomous vehicles will be able to take drivers to new areas and terrains, and not just through common roads and freeways.  

          Capgemini’s booth will also host the Cloud Realities podcast; join over 100,000 listeners as Google experts discuss key trends, challenges, and opportunities for organizations, exploring sustainability, cybersecurity, and business transformation. Tune in to gain practical advice on navigating large-scale cloud transformations from our Chief Cloud Evangelist and Chief Architect for Cloud, Dave Chapman.

          So, if you’re attending, please drop me a message on LinkedIn, or find me at booth #1215 and let’s discuss the possibilities in the world of cloud technology.

          Author

          Genevieve Chamard

          Global AWS Partnership Executive
          Genevieve is an expert in partnership strategy at a global level, with 13 years of innovation and strategy consulting experience. Teaming up with partners and startups, Genevieve helps translate the latest bleeding-edge technologies into solutions that create captivating new customer experiences, intelligent operations, and automated processes. She specializes in global partnership strategy and management, go-to-market and growth strategy, industry vertical solution building, pilot definition and management, and emerging-technology and start-up curation.

            Migrating your SAP to the cloud?
            The most important step is before you begin

            Devendra Goyal
            11 Aug 2023

            Migrating SAP to the cloud can be a daunting task for any organization, and it requires a significant investment of time, resources, and expertise.

            As with any major undertaking, there are challenges at every step of the way, from planning and preparation to execution and beyond. That’s why it’s important to partner with an experienced team that has the SAP and cloud expertise needed to ensure a successful migration.

            Why is a partner necessary for an SAP cloud migration?

            Your company likely already has a team with SAP expertise, cloud expertise, automation experience, and project management skills. So why add the cost and hassle of an external partner? There are a few reasons. A good partner knows what to expect, and when. Your partner will keep this project on track, no matter what else is going on in your organization. You have many tasks; your partner has one – getting your SAP up on the cloud, and doing it as efficiently as possible.

            An SAP and cloud partner with experience

            When considering a partner for your SAP migration, there are several factors to keep in mind. One of the most important is SAP experience. You’ll want to work with a partner that has a proven track record in your specific industry and with the SAP products you use. Look for a partner that can provide references and case studies that demonstrate their ability to deliver successful SAP projects.

            An SAP and cloud partner with expertise

            Another key factor is cloud expertise. Your partner should have deep expertise in the cloud providers and native tools that you plan to use, as well as a good understanding of cloud best practices. They should be able to help you select the right cloud infrastructure for your needs and ensure that your SAP applications are optimized for performance and scalability.

            Automation and offers can also be important differentiators when choosing a partner for your SAP migration. Look for a partner who can offer automation tools to streamline your migration and reduce the risk of errors. Additionally, a partner who provides packaged offers or services will help simplify your migration and reduce costs.

            What else comes with experience and expertise?

            There are numerous other attributes that come with experience and expertise. One of these is sound competency. A well-rounded team should have diverse skills and experience in SAP, databases, operating systems, cloud, data migration, security, and compliance, ensuring that all aspects of your migration are addressed. Connected with competency is project management. Nothing is more frustrating and discouraging than a poorly managed project. Look for a partner who has a detailed plan and methodology in place, with a clear timeline and a risk-management strategy. A partner with strong quality-control processes can ensure that all aspects of your migration are thoroughly tested and validated early – when changes are still easy.

            The stamp of approval

            Finally, don’t overlook the value of certifications and partnerships when choosing a partner for your SAP to cloud migration. Certifications should be relevant, and partnerships should include hyper-scalers and SAP vendors. These certifications and partnerships can ensure that your migration is completed to the highest standards, and your partner has access to the latest tools and resources. (You do NOT want to finish your migration only to realize that it’s already a year behind the times.)

            Long-term success also depends on ongoing support and maintenance for your SAP applications in the cloud. Therefore, it’s essential to choose a partner with the capability to provide business-as-usual (BAU) support, ensuring that any issues are quickly addressed and your applications continue to run smoothly.

            Learn more about our cloud offers on our website or contact us here to share your experience and questions. 

            Author

            Devendra Goyal

            Head – Global S2C Offer & Transformation Delivery

              Unleashing the data mesh revolution: Empowering business with cutting-edge data products

              Dan O’Riordan
              9th August 2023

              The principles of data mesh have moved beyond being just theoretical concepts for data architects and forward-thinking executives. It’s time to start delivering on data mesh’s promise of exceptional data products. Data mesh principles can help us uncover the valuable insights that businesses need.

              Feedback Fusion: The power of continuous iteration for product success

              When building a product, it’s crucial to understand the utility of the product and how any changes to the product will impact its utility over time.

              If we consider building a mobile phone, or any other product, the cost of building a phone that turns out to be unusable is significant. Therefore, conducting thorough research at the outset to understand what the market wants is critical before the product-development process begins.

              Once we have built and distributed a phone, we need to continually consider feedback from different channels, including social media and online reviewers, to continuously iterate and improve the phone.

              This feedback loop is just as imperative for data; however, in the past, data developers have typically waited for feedback from data consumers and then reacted. This has introduced time delays and, ultimately, frustration for data consumers.

              With product thinking, this approach is turned on its head: data product developers continuously monitor both quantitative and qualitative feedback from consumers.

              This feedback allows data product teams to proactively evolve the data product to ensure that as data consumers need new capabilities, they are being built into the data product, thus avoiding delays and frustration, and enabling better outcomes for the organization.

              Data mesh dilemma: Embracing innovation amidst fear and uncertainty

              Data mesh principles, which focus on the notion of first-class data products among other factors, have gained an unprecedented amount of interest in the past eighteen months. The conversation in the data mesh community has largely focused on the principles of data mesh and what they mean for each organization. Most organizations have invested heavily in cloud but are still struggling to keep up with the pace the business requires. “Why does it take me three to six months to get a new or modified dataset? Who’s responsible for data governance? How can I trust the dataset?” The list of questions goes on.

              What we discovered during these conversations with clients is that there is broad acceptance that data mesh and its principles make good sense, but also a fear factor around the pain an organization must go through to reach the promised land of a truly federated data estate of quality, secured, discoverable data products. So most organizations have kicked the can down the road.

              Start small, think big, and design for industrialization

              Here are useful guidelines to help reduce this fear of failure.

              1. To effectively build data products, it’s crucial to identify the problem you’re trying to solve and determine why a data product is the appropriate solution from the beginning of the process. Taking the time to clarify the reasons behind your approach will ultimately save you a great deal of time, money, and effort. This fundamental step is applicable to any product-development process, and it’s no different when building data products.

              A simple data product canvas should be completed in this phase, together with business and domain experts who are committed to it. Note: forget about technology entirely during this phase.

              2. Many organizations have not changed their approach to data management in the last 30 years. It is commonly believed that all data must be centralized into a data warehouse or data lake before it can be analyzed, which is both difficult and costly in terms of human resources and technology. Today, decision-makers wait for data to be made available before it can be used. This means waiting for data pipelines to be specified and built; however, this is typically done without complete knowledge of the value of the data to a particular use case. This unnecessarily elongated process is fragile and has a negative impact on an organization’s ability to compete using data.

              Fortunately, solutions like Starburst/Trino offer intelligent connectors and a highly optimized federated MPP SQL engine that enables the creation of data products by analysts in the lines of business (domains) with no need for intimate knowledge of the source technology. Lines of business can quickly access data and determine its applicability to a use case without having to rely on central data teams.

              If we consider this in the context of cloud-data migrations, solutions like Starburst/Trino enable these data products to be created, managed, and retired while the underlying data platforms are migrated. The system administrators only need to update the connector to ensure uninterrupted service for business users. With Starburst we want to give the data-product teams the option to decide on what works best for them to deliver the best data product that will satisfy the requirements as outlined by the data product canvas.
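To make this concrete, here is a minimal sketch of what such a cross-source query for a data product might look like. The catalog and table names (`postgres_crm`, `s3_lake`) and the Python helper are purely illustrative assumptions, not part of any specific Starburst deployment; in Trino SQL, a federated join simply references tables in different catalogs by their fully qualified names:

```python
# Hypothetical sketch: composing a federated query in Trino SQL syntax.
# Catalog and table names ("postgres_crm", "s3_lake") are illustrative only.
def federated_query(left: str, right: str, join_key: str) -> str:
    """Build a cross-catalog join as a single Trino SQL statement."""
    return (
        f"SELECT c.customer_id, c.segment, o.order_total\n"
        f"FROM {left} AS c\n"
        f"JOIN {right} AS o ON c.{join_key} = o.{join_key}"
    )

sql = federated_query(
    "postgres_crm.public.customers",  # operational database catalog
    "s3_lake.sales.orders",           # data-lake catalog over S3
    "customer_id",
)
print(sql)
```

Because the sources are addressed only by catalog name, an administrator can repoint a catalog’s connector during a migration without changing the query text, which is the property described above.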

              3. Finally, to ensure that the quality of data products is maintained over time as business needs change, a continuous monitoring and feedback loop is key. Data-product producers need to understand who is using their data product, how, and for what purpose, so they can manage it proactively. This management requires technology capabilities to provide this insight, as well as an agile approach to streamline the pipeline from ideation to production and constantly improve efficiency. We look at this as building a factory-like model for data products.
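As an illustration of such a feedback loop, the sketch below aggregates access events for a data product by who, how, and for what purpose. All field values and team names are invented for the example; a real implementation would draw on query logs or platform telemetry:

```python
# Hypothetical sketch: a minimal usage log for a data product, supporting the
# who / how / purpose feedback loop. Field values are illustrative only.
from collections import Counter
from dataclasses import dataclass

@dataclass
class AccessEvent:
    consumer: str   # who accessed the data product
    interface: str  # how it was accessed (e.g. SQL, API)
    purpose: str    # the declared use case

def usage_summary(events):
    """Aggregate access events so the product team can spot demand trends."""
    return {
        "consumers": Counter(e.consumer for e in events),
        "interfaces": Counter(e.interface for e in events),
        "purposes": Counter(e.purpose for e in events),
    }

events = [
    AccessEvent("marketing", "SQL", "campaign-analysis"),
    AccessEvent("marketing", "API", "campaign-analysis"),
    AccessEvent("finance", "SQL", "revenue-reporting"),
]
summary = usage_summary(events)
print(summary["purposes"].most_common(1))  # most in-demand use case
```

Trends in these counts are what let the product team evolve the data product proactively rather than waiting for complaints.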

              Data mesh in action

              At online fashion retailer Zalando, various lines of business independently utilize Amazon S3 for storing and managing datasets, eliminating the need for a central data team. A central data “enabling team” oversees data-governance standards and identifies reuse opportunities, while a dedicated platform team supplies compute services including a distributed SQL Engine (Starburst) for analytics. This clear division of responsibilities – lines of business managing data, the enabling team governing it, and the platform team providing technology – prevents bottlenecks and centralization, fostering agility in leveraging data to maintain a competitive edge.

              A prominent French state organization has been devising its data-estate roadmap for 2025 over the past year. Its current extensive data platform comprises batch processing, streaming processing, AI, and use cases, with concerns about cloud readiness. With a complex data estate plagued by performance and monitoring issues, its goal is to streamline operations using a new data platform based on Starburst and Apache Iceberg. The primary objective is simplification and reduced complexity, achieved by focusing on business outcomes and scaling with data-mesh principles.

              “Start small, think big and design for industrialization.”

              Dawn of a new era

              The rise of data mesh and its principles plus the technical offerings from Starburst marks the dawn of a new era for data products. As businesses embrace the principles of data mesh, it’s essential to address the fear factor associated with adopting this approach. By following the guidelines outlined in this article – focusing on identifying the problem to be solved, leveraging modern solutions like Starburst/Trino for data management, and implementing continuous monitoring and feedback loops – organizations can confidently embark on their journey towards a truly federated data estate. Success stories like Zalando and the large French state organization demonstrate the transformative power of data mesh in improving efficiency, agility, and competitiveness. As we move forward, it’s crucial for businesses to embrace the promise of data mesh, shifting from theoretical discussions to real-world implementation. Only then will they be able to harness the full potential of exceptional data products and uncover the valuable insights needed for sustained success in an increasingly data-powered world.

              INNOVATION TAKEAWAYS

              OVERCOMING ADOPTION HURDLES IN A FEDERATED DATA ESTATE

              Data mesh principles enhance data-product creation, driving valuable insights and competitiveness, but adoption is slowed by perceived challenges in achieving a federated data estate.

              THE THREE PILLARS OF EFFECTIVE DATA MESH IMPLEMENTATION

              Implementing data mesh effectively involves problem identification, utilizing modern data-management solutions, and establishing continuous monitoring and feedback loops.

              DATA MESH IN ACTION

              Success stories like Zalando and a large French state organization showcase the benefits of data mesh, including improved efficiency, agility, and competitiveness.

              BRIDGING THE GAP, PRACTICAL STEPS TO DATA MESH SUCCESS

              Moving from theory to practice in data-mesh implementation allows organizations to better harness data-product power and succeed in a data-powered world.

              Interesting read?

              Capgemini’s Innovation publication, Data-powered Innovation Review | Wave 6, features 19 such fascinating articles, crafted by leading experts from Capgemini and key technology partners like Google, Starburst, Microsoft, Snowflake, and Databricks. Learn about generative AI, collaborative data ecosystems, and an exploration of how data and AI can enable the biodiversity of urban forests. Find all previous Waves here.

              Dan O’Riordan

              VP AI & Data Engineering, Capgemini
              A visionary with the architectural skills, experience, and insight to move any application, computing platform, infrastructure, or data operation to the cloud. He works regularly with the CxOs of large enterprises across different industries as they embark on digital transformation journeys. A key part of digital transformation requires an organization to be data-centric. Organizations on their cloud journeys have started to migrate applications, but are also looking at how to migrate their data operations and then build and deliver data services using the latest AI and ML services from the cloud service providers.

              Andy Mott

              Partner Solution Architect, Starburst
              With more than 20 years of experience in data analytics, Andy Mott is skilled at optimizing the utility of analytics within organizations. When determining how to generate value or fortify existing revenue through technology, Andy considers the alignment of an organization’s culture, structure, and business processes. He ensures that the strategic direction of the organization will ultimately enable it to outcompete its respective markets with data. Andy is currently EMEA head of partner solutions architecture and a Data Mesh lead at Starburst, and lives in the United Kingdom.

                Navigating the complexity of enterprise asset management in the energy and utilities sector

                Mark Hewett
                Aug 8, 2023

                Enterprise Asset Management (EAM) is a crucial component of any asset-intensive industry and as technology continues to evolve, managing assets effectively and efficiently is becoming increasingly complex.

                In this post, we’ll explore several key areas of EAM in energy and utilities, including connection volume, IT/OT convergence, commissioning and decommissioning, and the circular economy, as well as security considerations.

                Key Challenges:

                The sustained increase in connections to the grid poses a significant challenge in keeping a grid operational and balanced. It also poses a challenge for those who are responsible for tracking and maintaining these assets. The more assets there are to manage, the more difficult it becomes to ensure that they are all tracked, monitored, and maintained effectively. This leads to increased costs, reduced efficiencies, and potential downtime for critical infrastructure assets.

                How to address this challenge:

                To address this challenge, COOs and Operations Directors are turning to new technologies to collect real-time data about their infrastructure’s performance and health, which can then be analyzed to identify potential issues before they become major problems. These digital technologies can help to provide operations managers with insights and actionable information that they can use to optimize asset management and operational processes. Used intelligently, these platforms can “predict” failures and support interventions in the network that drive down cost while also reducing operational impacts (i.e., outages or leakage).
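As a toy illustration of the “predict failures” idea, the sketch below flags assets whose recent average sensor reading drifts above a tolerance limit. Asset names, readings, and the threshold are all invented for the example; a real platform would apply far richer models:

```python
# Hypothetical sketch: flag assets whose recent telemetry trends out of band.
# Asset IDs, readings, and the limit are illustrative only.
def flag_at_risk(readings, limit, window=3):
    """Return asset IDs whose average over the last `window` readings exceeds `limit`."""
    at_risk = []
    for asset_id, series in readings.items():
        recent = series[-window:]
        if recent and sum(recent) / len(recent) > limit:
            at_risk.append(asset_id)
    return at_risk

telemetry = {
    "transformer-07": [61.0, 63.5, 71.2, 74.8],  # trending hot
    "transformer-12": [58.9, 59.3, 60.1, 59.7],  # stable
}
print(flag_at_risk(telemetry, limit=65.0))  # → ['transformer-07']
```

The flagged asset becomes a candidate for a planned intervention before it causes an outage, which is the cost-avoidance mechanism described above.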

                By leveraging these digital technologies and exploiting an improved operational awareness of the network, asset and operations managers can improve efficiencies, reduce costs, and ensure that all assets are tracked and managed effectively.

                IT/OT convergence:

                The convergence of IT and OT in asset management refers to the integration of two distinct areas of technology, information technology (IT) and operational technology (OT), to create a consistent digital ecosystem that enables organizations to manage their infrastructure efficiently. IT and OT have traditionally been separate areas of technology (and under separate “management” through CIOs and engineering directors, respectively), with IT focused on managing data and information systems while OT focused on managing physical assets, infrastructure, and processes. As digital transformation has accelerated, there has been a growing recognition of the need to integrate and combine these two areas to create a more holistic view of assets and infrastructure across the organization and so drive organizational structures, governance, and processes to change.
                By integrating IT and OT systems, organizations gain a more complete picture of their capital and IT assets in the context of the operational needs of the business, including data on performance, maintenance, and utilization. This enables organizations to identify areas for improvement, optimize asset utilization, and reduce downtime and maintenance costs. Furthermore, IT/OT convergence also enables organizations to align their asset management goals with their overall business objectives. By connecting asset management to business objectives, organizations can ensure that assets are managed in a way that supports their strategy and objectives.

                Commissioning and decommissioning:

                Commissioning and decommissioning are critical processes within infrastructure operations management that can have a disproportionate impact on a company’s strategy, objectives, and ultimately their business results. Bringing new assets online, ensuring they are functioning correctly, and retiring assets at the end of their useful life is a demanding and involved operational procedure. These processes require careful planning and management to ensure that assets are correctly brought into operations and removed from operations effectively and efficiently without disrupting the rest of the network and are key considerations to ensure efficient operation of any grid infrastructure.
                Commissioning involves a series of tests and checks to ensure that new assets are functioning correctly and safely. This process can involve everything from checking electrical and mechanical systems to verifying that the asset meets regulatory and safety standards. The commissioning process is critical to ensuring that assets are safe to operate and will perform as expected in the operational environments they are intended to be used in. Digital system integration is a core enabler to the smooth running of an effective commissioning process.
                Decommissioning, on the other hand, involves retiring assets that are no longer needed or have reached the end of their useful life. This process can involve everything from removing equipment and disposing of hazardous materials to shutting down systems and securing the site. Proper decommissioning is critical because it ensures that assets are retired safely and efficiently, reducing the risk of accidents and the impact on the environment. And in a digitally enabled environment, this also reduces the impact on other operational assets as dependencies are easier to identify and manage accordingly.

                Operations managers leverage data and analytics to gain insights into their infrastructure assets’ performance and make informed decisions about the risks associated with commissioning and decommissioning. By tracking performance metrics and analyzing data on asset utilization, maintenance costs, and downtime, operational managers can identify opportunities to optimize the timeline for when an asset should be decommissioned from operation. To manage this, operational managers must balance the risk of keeping an asset operational against the cost of replacing the asset with newer, more efficient equipment, minimizing impacts on customers and business operations alike.
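The balancing act described above can be sketched as a simple expected-cost comparison. All figures and the one-year horizon are illustrative assumptions, not a real decommissioning model:

```python
# Hypothetical sketch: compare the expected annual cost of keeping an ageing
# asset (maintenance plus failure risk) with the annualized replacement cost.
# All numbers are illustrative only.
def keep_vs_replace(maintenance_cost, failure_prob, outage_cost,
                    replacement_cost, replacement_life_years):
    keep = maintenance_cost + failure_prob * outage_cost
    replace = replacement_cost / replacement_life_years
    return "replace" if replace < keep else "keep"

decision = keep_vs_replace(
    maintenance_cost=40_000,   # per year
    failure_prob=0.15,         # chance of failure this year
    outage_cost=500_000,       # cost of unplanned downtime
    replacement_cost=600_000,
    replacement_life_years=20,
)
print(decision)  # → replace
```

Even a crude model like this makes the trade-off explicit: as failure probability rises with asset age, the expected cost of keeping the asset eventually overtakes the annualized cost of replacing it.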

                Circular economy:

                Related to the decommissioning process, the circular economy is an emerging trend in EAM that involves designing products, solutions, and systems with a focus on sustainability and circularity. The idea is that operational assets can be retired to secondary or tertiary operational roles or returned to the manufacturer for reconditioning or material reuse. It aims to minimize waste and the consumption of natural resources by keeping materials in use for as long as possible, through reusing, repairing, refurbishing, and recycling.

                In a circular economy, assets are designed and managed to ensure their long-term value, with a focus on minimizing waste and reducing environmental impact. Asset managers play a key role in promoting the circular economy by commissioning assets that are designed for maintenance and that support circularity, and by implementing effective recycling and repurposing programs. If these requirements are not specified and driven hard through design and implementation, then the assets delivered and deployed are destined for landfill.
                Furthermore, asset managers can implement effective recycling and repurposing programs to ensure that materials are reused and recycled at the end of the asset’s lifecycle. This can involve everything from implementing a recycling program for electronic waste to repurposing old equipment for use in other applications.

                Security Considerations:

                Lastly, the most overlooked element in most operational environments is security. We have found it to be a critical consideration in EAM, particularly as assets become more connected and digitally enabled. COOs and operations directors need to ensure that all assets are protected from cyber threats and other security risks that could cause financial, reputational, and operational damage to their business.
                To ensure that assets are protected from security threats, operations managers should implement security protocols such as zero-trust security methodologies, which assume that all devices are potentially compromised and implement measures to verify the identity of users and devices before granting access to the wider infrastructure. Other security measures can, and should, include network segmentation, access control, data encryption, and intrusion detection and prevention systems.

                Operations managers can also incorporate regular security patching and maintenance procedures into asset management processes to ensure that vulnerabilities are addressed promptly. This can involve everything from updating firmware and software to conducting regular vulnerability assessments and penetration testing. They should also prioritize security by incorporating security considerations into asset design and asset selection processes.

                Conclusion:

                EAM is an increasingly complex and evolving field that requires careful planning and management to ensure that assets are used effectively, safely, and efficiently throughout their lifecycle. By leveraging technologies such as IoT sensors and advanced analytical tools, embracing IT/OT convergence, prioritizing security, and promoting supply chain circularity, COOs and operations directors can optimize asset management processes and achieve greater success in the digital age. If one or more of the topics we touched on in this blog is of interest to you, or you are curious to know more, please watch these videos, in which our SMEs Sven Strassburg (from our IBM partnership) and Mark Hewett of Capgemini discuss these very topics with a focus on the Energy Transition and Utilities sector.

                Co-authored by Mark Hewett, Sven Strassburg and Woody Falck.

                Authors

                Mark Hewett

                Vice President | Energy and Utilities
                As Vice President for our Energy Transition and Utilities team in the UK, I have a strong focus on energy networks and the intelligent transformation of network businesses across the UK to meet the challenges of the future. As a chartered engineer and former Army officer, I worked across several sectors, including global high tech, the public sector, and aviation, before finding my home in Energy Transition and Utilities.

                  Software-defined vehicles (SDV): The answer to truck driver shortages?

                  Fredrik Almhöjd
                  Aug 2, 2023

                  Although most truck OEMs acknowledge software-defined vehicles as a new norm for the commercial vehicle industry, they still need to convince their customers that these vehicles will add value to their businesses – especially around the top three objectives of improved uptime, productivity, and fuel efficiency.

                  SDVs have a major role to play in helping fleet operators overcome the international shortage of truck drivers, explain Fredrik Almhöjd and Jean-Marie Lapeyre, Chief Technology & Innovation Officer, Global Automotive Industry at Capgemini. That’s because SDVs can transform the driver experience, potentially attracting younger people and women who currently don’t see truck-driving as a career option.

                  “Without action to make the driver profession more accessible and attractive, Europe could lack over two million drivers by 2026, impacting half of all freight movements and millions of passenger journeys.” That is the stark prediction of the International Road Transport Union (IRU), commenting on a study it conducted in 2022. The outlook isn’t any more reassuring in other regions.

                  So what are transportation companies to do, and how can truck OEMs help? In this article, we’ll argue that software-defined vehicles (SDVs) could be a big part of the answer. We’ll be building on ideas from earlier blogs.

                  In the passenger car market, the concept of SDVs is often promoted on the basis that it will create a better customer experience for the driver. For commercial vehicle fleet operators, by contrast, the main focus has always been, and will continue to be, on total cost of ownership (TCO). Until recently, efforts to improve life for the driver, while important, have received less attention.

                  However, with driver shortages becoming critical, truck-driving needs to be made more attractive to jobseekers. The IRU suggests that attracting more women and young people is an important part of the solution – but current working conditions make that difficult.

                  SDVs can help with the challenge of recruiting and retaining staff.

                  SDV features can make drivers’ lives better

                  So what SDV features might improve driver experience? Truck drivers will enjoy many of the same benefits as car drivers, such as customized infotainment – though obviously, this must not distract them from the job.

                  Consumer-oriented SDV features can be tailored for trucks. For example, a framework for companion apps on smartphones could be adapted to support the needs of HGV drivers in finding places to stop, eat, and sleep, avoiding illegal and dangerous use of phones while driving. In addition, although software can’t improve the quality of facilities available to drivers, it can help direct them to the most satisfactory ones based on a driver’s personal preferences and ratings by other users.

                  With all the functionality they need integrated and automated (and configured for personal habits and preferences that they have already stored), the job can be done safely, easily, and legally. Similar technology could be used to help last-mile delivery drivers navigate between stops.

                  Integrate drivers’ digital lives

                  Many people, especially younger ones, now expect their digital lives to be streamlined and integrated across work and leisure. To appeal to these individuals, SDVs could be equipped to remember drivers’ preferences regarding infotainment modes and transfer them across trucks. Their preferred smartphone apps, or similar ones, could also be made available via the truck’s console.

                  By integrating various aspects of working life, we can make the driver’s job easier, as well as more pleasant. A common complaint from truck drivers is that they have to unload cargo themselves because there is nobody else to do it. An SDV can contact the destination to communicate the arrival time and nature of the cargo, increasing the chances of the relevant staff being on hand with the right equipment.

                  When a truck is an SDV, advanced driver assistance system (ADAS) features can easily be added. Some of these features can help to make truck-driving more attractive to younger people and women by allowing multiple tasks to be performed simultaneously. Stress levels for the driver are reduced significantly if they can organize their working day – including route optimization and scheduling of pickups and deliveries – while they’re on the road. This can be achieved through partial automation of driving tasks, whether via assistance systems or fully autonomous driving (say up to level 4), paired with services that help with routing and scheduling.

                  Overcome negative perceptions of truck-driving careers

                  For women in particular, personal safety issues can be a deterrent to working as a truck driver. Connected vehicle software can help here too. For example, AI-enabled services can monitor sensor data and warn when someone is approaching a stationary truck, and biometrics can control who has access to the cabin. Predictive maintenance can reduce or eliminate the risk of breaking down in a lonely spot. (And with SDVs, we can go beyond preventive maintenance via telematics and alerts to their natural successor, self-diagnosis by the vehicle.)

                  Thanks to SDV connectedness, dispatchers can more easily monitor drivers’ safety and send help if needed. The same communications facilities could streamline interaction between communities of drivers who can look out for one another, reducing any sense of isolation.

                  Long hours away from home are another turn-off for many potential drivers. SDVs’ communications technologies can improve their work-life balance, with social media style software, in-vehicle display screens, and cameras keeping the driver in touch with family or friends during stops.

                  Work-life balance can be further improved by advanced route optimization techniques. An SDV route can be automatically optimized to accommodate a driver’s personal preferences and constraints, as well as requirements such as refueling and rest stops. It can then be continuously adjusted to reflect the current circumstances such as weather and traffic conditions, helping drivers to finish work on schedule.
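The kind of preference-aware route selection described above can be sketched as a simple feasibility filter followed by a cost ranking. This is a deliberately minimal illustration – the route data, field names, and constraints are all hypothetical, and a production SDV routing service would handle far richer inputs (live traffic, driving-time regulations, charging needs):

```python
# Hypothetical sketch: score candidate routes against a driver's
# constraints (maximum shift length, required rest stop) and current
# conditions (estimated traffic delay), then pick the fastest option.

def pick_route(routes, max_hours, needs_rest_stop):
    """Return the fastest feasible route, or None if none qualifies."""
    feasible = [
        r for r in routes
        if r["hours"] + r["traffic_delay_h"] <= max_hours
        and (not needs_rest_stop or r["has_rest_stop"])
    ]
    # Among feasible routes, prefer the shortest effective duration.
    return min(feasible, key=lambda r: r["hours"] + r["traffic_delay_h"],
               default=None)

routes = [
    {"name": "A", "hours": 7.5, "traffic_delay_h": 1.5, "has_rest_stop": False},
    {"name": "B", "hours": 8.0, "traffic_delay_h": 0.5, "has_rest_stop": True},
]
best = pick_route(routes, max_hours=9.0, needs_rest_stop=True)
print(best["name"])  # "B": the only feasible route with a rest stop
```

Continuous adjustment, as mentioned in the text, would simply mean re-running this selection whenever traffic or weather updates change the delay estimates.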

                  Deliver better driver experience and financial benefits for fleet operators

                  Despite their urgent need to recruit more drivers, at the end of the day truck buyers are still likely to focus on the more tangible benefits of SDVs. The good news is that many of the features that give drivers a better experience simultaneously increase productivity, uptime, or fuel efficiency – for example, predictive maintenance and real-time route optimization, both mentioned above.

                  The same is true of services that address electric vehicles’ range limitations and shortages of charging stations (as discussed in our recent e-mobility blog). Suppose the truck’s battery is getting flat, and the nearest charging station has a long wait time. An SDV can save energy in various ways: for example by modifying engine parameters or environmental settings such as aircon, or by advising changes in driving behavior. With these adjustments, the driver can continue to a charging station with an acceptable wait time, improving productivity and likely reducing frustration too.
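The trade-off in that scenario – accept an eco mode that extends range in order to reach a charger with a shorter queue – can be illustrated with a toy decision function. The station data, range figures, and mode names below are invented for illustration only:

```python
# Hypothetical sketch: determine which charging stations a truck can
# reach at its current range, or with an energy-saving "eco" mode that
# extends range, then pick the one with the shortest wait.

def reachable_stations(stations, range_km, eco_range_bonus_km):
    """Return (name, wait_min, mode) tuples for every reachable station."""
    out = []
    for s in stations:
        if s["dist_km"] <= range_km:
            out.append((s["name"], s["wait_min"], "normal"))
        elif s["dist_km"] <= range_km + eco_range_bonus_km:
            out.append((s["name"], s["wait_min"], "eco"))
    return out

stations = [
    {"name": "near", "dist_km": 40, "wait_min": 90},  # reachable, long queue
    {"name": "far",  "dist_km": 70, "wait_min": 10},  # needs eco mode
]
options = reachable_stations(stations, range_km=50, eco_range_bonus_km=25)
best = min(options, key=lambda o: o[1])  # shortest wait wins
print(best)  # ('far', 10, 'eco')
```

Here the eco-mode adjustments (engine parameters, climate settings, driving advice) are abstracted into a single range bonus; the point is only that the vehicle can trade comfort settings for reach when that improves total trip time.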

                  Safer driving is yet another example of an SDV capability that benefits both employer and driver. Examples here include the use of sensors to detect when vehicles get too close to one another, or when drivers are tired and need a break. For example, a truck could raise an alert when its driver is blinking more frequently than is normal for them, indicating exhaustion.
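The blink-rate example can be reduced to a one-line check against the driver’s personal baseline. The numbers and threshold below are illustrative assumptions, not clinical values:

```python
# Hypothetical sketch: raise a fatigue alert when a driver's blink rate
# drifts well above their own personal baseline. The 1.5x threshold is
# an arbitrary illustrative choice.

def fatigue_alert(blinks_per_min, baseline, threshold_ratio=1.5):
    """Flag when the rate exceeds the driver's baseline by 50% or more."""
    return blinks_per_min > baseline * threshold_ratio

# Suppose this driver's normal rate is ~15 blinks/min.
print(fatigue_alert(26, baseline=15))  # True: well above normal
print(fatigue_alert(17, baseline=15))  # False: within normal range
```

Comparing against a per-driver baseline, rather than a fixed population-wide number, is what makes the alert "normal for them", as the text puts it.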

                  Make driver appeal part of the business case for SDVs

                  For truck OEMs and tier 1s, the case for SDVs is clear. They can enhance revenue flows via a shift from one-off purchases to full lifecycle engagement, and improve automotive sustainability performance, for example by reducing waste in R&D processes. Ultimately, SDVs can help to make the brand central to customers’ businesses. In addition, selling SDVs makes sense as part of the journey to autonomous driving and in the context of companies’ overall digital transformation.

                  Software-defined vehicles as passenger cars

                  SDVs are already proving their worth in the passenger car market, where improved driver experience is a more obvious selling point. (Read our “point of view” report on software-driven transformation for more.)

                  An excerpt from a recent Connected Mobility infographic

                  The question is how to demonstrate the value of SDVs to truck customers such as fleet operators. Industry concepts such as software-driven transformation are not always much help here. Instead, OEMs can point to the business benefits that result from SDV adoption. And right now, improved driver experience could be among the most important of those benefits because of its ability to help overcome driver shortages.

                  For more information, visit the commercial vehicles area of Capgemini’s website, and read the earlier articles in this blog series.

                  About Author

                  Fredrik Almhöjd

                  Director, Capgemini Invent
                  Fredrik Almhöjd is Capgemini’s Go-to-Market Lead for Commercial Vehicles in the Nordics, with 25+ years of sector experience plus extensive knowhow in Sales & Marketing and Customer Services transformation.
                  Jean-Marie Lapeyre

                  EVP and Chief Technology & Innovation Officer, Global Automotive Industry
                  Jean-Marie Lapeyre works with automotive clients to develop and launch actionable technology strategies to help them succeed in a data and software-driven world.

                    Expert Perspectives

                    How a system-based modular approach minimizes risk and accelerates SaMD go-to-market​

                    Capgemini
                    27 Jul 2023

                    In the field of healthcare systems compliance, the difference between 99% and 100% is a chasm.

                    “Software defect.” “False upstream alarms.” “Firmware error.” “Login error.” “Software bug.” “Sensor failure.” “Cybersecurity vulnerability.” The FDA list of software-related recalls is sobering reading. We see the story behind each event – the teams that worked overtime designing and building a new device, quality assurance professionals checking each and every vulnerability, last-minute adjustments, the elation of a seemingly successful release… And all the while a tiny flaw lay hidden from sight, with the power to derail everything, compromising safety and exposing liabilities.

                    As a legal manufacturer, we specialize in safety and compliance for Software as a Medical Device (SaMD). It’s a fast-growing field, with risks hiding in every nook and cranny. How do we make sure we catch and neutralize every risk? How do we ensure safety and compliance, without sacrificing speed? Let’s dive in.

                    Compliance and agility by design

                    There’s a common fear that safety and compliance slow down go-to-market. A justified concern? Yes and no. Of course, extra steps take extra time. That’s why, wherever possible, we work to ingrain safety and compliance considerations into every process. For example, a remote patient monitoring system that records patient data needs to do many things: some are basic functions; others cross over into the “safety” category. Safety and compliance are mandatory and must be integrated into the development process. Pharma and MedTech companies will speed development and improve quality if they adopt two changes: agile processes combined with modularity by design. To see how modularity adds value, let’s look closer at the challenges SaMD teams face.

                    Challenges of connected systems

                    With today’s more connected and complex systems, the boundaries of what constitutes “Software as a Medical Device” are often blurred. For example, when a system is distributed – with parts of an application running on a wearable, a mobile phone, or in the cloud – and combines medical and non-medical functions, is the whole system SaMD? How do we manage a combination of safety and non-safety, administrative, and other functions that all need to work together to fulfill a medical purpose? If the system is developed as a monolith, the effort to gain regulatory approval increases, and additional compliance effort is required to modify, improve, or add functions after the system has received initial market authorization. What’s the solution?

                    The benefits of modularization for connected health

                    A pragmatic approach to accelerating SaMD development, managing safety and regulatory complexity, and maintaining flexibility is modularization. We segregate SaMD products by function and by risk category (high, medium, and low risk). This makes it possible to apply the appropriate level of risk control and testing measures in each case. Using pre-built, ready-to-use SaMD modules developed under certified processes and qualified tools assures reliable, fast, and compliant software.
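The idea of segregating modules by risk category and deriving the verification rigor from that category can be sketched as a small data model. The module names, risk labels, and control lists below are illustrative assumptions, not a description of any actual regulatory control set:

```python
# Hypothetical sketch: tag each SaMD module with a risk class and derive
# the required verification measures from it, so high-risk functions get
# the heaviest controls while low-risk utilities can move faster.

from dataclasses import dataclass

# Illustrative control sets per risk tier (not a regulatory reference).
CONTROLS = {
    "high":   ["unit tests", "integration tests", "formal review",
               "clinical validation"],
    "medium": ["unit tests", "integration tests", "peer review"],
    "low":    ["unit tests"],
}

@dataclass
class Module:
    name: str
    risk: str  # "high", "medium", or "low"

    def required_controls(self):
        return CONTROLS[self.risk]

alarm = Module("arrhythmia-alarm", "high")   # safety-critical function
ui_theme = Module("display-theme", "low")    # cosmetic, non-medical
print(alarm.required_controls()[-1])   # clinical validation
print(ui_theme.required_controls())    # ['unit tests']
```

The benefit, as the text notes, is that a change to a low-risk module does not trigger the full high-risk verification burden, which is what keeps post-authorization changes fast.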

                    Modularization reduces regulatory complexity and – together with agile development models – speeds up time to market, while providing the flexibility we need to continuously adapt functions. Most critically, it reduces risk. Remember the “hidden flaw” we talked about earlier? In a modular system there’s no place to hide, and agile development makes it possible to detect and correct flaws early in the development cycle.

                    Reducing risk and regulatory complexity

                    We believe that risk is best managed when technical and regulatory responsibility go hand-in-hand. The closer a development team is to the consequences of success or failure – the more skin they have in the game – the more we can count on them to scrupulously manage risk. We have been working in the SaMD field from the start, at the intersection of software, life sciences, and regulatory affairs, so taking regulatory responsibility for our work was a natural step. How to manage regulatory compliance is an important question for every innovator – a far-reaching question with many dimensions. For us, the most crucial element is the link between technical and regulatory responsibility.

                    You can find more about our offer and our Legal Manufacturer capabilities on our webpage, and we’re also available to consult on any aspect of risk and compliance.

                    When your innovations hit the market, dozens of factors affect their success. Avoidable mistakes should not be one of them. Let’s make your products flawless.  

                    Meet our experts

                    Andrew Koubatis

                    Intelligent Medical Products and Systems Lead, Capgemini Engineering
                    Providing Pharma and MedTech with service offers to accelerate and de-risk product development. “Intelligent products and systems allow us to break the traditional boundaries of the healthcare ecosystem, providing greater patient insights through data, more effective, reliable and personalized treatments, driving better outcomes and supporting value-based care with connected and interoperable technologies.”

                    Frédéric Burger Ph.D.

                    CTO Life Sciences and Regulatory Affairs, Global Life Sciences Center of Excellence Leader, Capgemini Engineering
                    Leading the global life sciences portfolio and solutions in Pharma and Medical Devices “This is undoubtedly a new stage in the use of data in the life sciences industry. With the combination of Regulatory Sciences and a clear strategy on Digital implementation, the data is now at the core of any new journey. We support our clients with expertise, strong assets and methodologies for accelerating their transformation”.