
Site reliability engineering 101: Ensuring the reliability of your IT system

Aliasgar Muchhala
30th May 2024

In simple terms, reliability is defined as the probability of success. However, in the application world, reliability is talked about in terms of availability and measured in the context of the frequency of failures.

Reliability is important as it can help build or lose confidence in a product and an organization’s brand reputation.

In the current IT landscape, which is complex, multi-layered, and cloud-based, the traditional approach to preventing system failures doesn’t quite work.  

With so many moving parts, disruptions and failures are inevitable. This requires a change in mindset: expect failures, and build systems that are resilient to them. Site Reliability Engineering (SRE), also known as service reliability engineering, is the approach you need to anticipate and recover from failures.

SRE applies a software engineering mindset to system administration. As a software engineer, you look at the business requirements and develop the system aligned to those requirements. Likewise, a site reliability engineer looks at how each disruption can affect those business requirements and then finds a solution accordingly.

An Agile-focused, product-driven approach and IT/OT integration have been key drivers for the growing demand for SRE today. 

SRE began at Google around 2003 as a method to ensure Google’s website remained “as available as possible.” The team responsible for site availability applied software engineering concepts to system administration methods, which later formed the basic tenets of SRE, as described in an online book published by Google.

As with most enterprise constructs, businesses don’t need to mimic the exact methods used by Google. While these practices should be assessed in the context of the enterprise, there are certain basic tenets of SRE that must be followed: 

  • Agree upon a set of service-level indicators (SLIs) and service-level objectives (SLOs) to understand the targets and measures
  • Accept failure as normal and manage an “error budget” that is used to strike a balance between system updates and system stability
  • Understand that site reliability engineers are neither part of the development team nor the operations team. SRE needs a separate, central team that takes an end-to-end view across apps, infrastructure, backend, frontend, middleware, etc.
  • Automate processes. A key objective of SRE is to “reduce toil.”
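The error-budget tenet above can be made concrete with a small calculation. The sketch below is illustrative only (the 99.9% SLO target and 30-day window are example values, not a prescription): an availability SLO implies a fixed budget of allowable downtime per window, and spending against that budget is what balances release velocity against stability.

```python
# Illustrative error-budget arithmetic for an availability SLO.
# The SLO target and window are example values, not a standard.

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime in the window for a given SLO (e.g. 0.999)."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo)

def budget_remaining(slo: float, downtime_minutes: float, window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means overspent)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget

if __name__ == "__main__":
    budget = error_budget_minutes(0.999)  # a 99.9% SLO over 30 days
    print(f"99.9% SLO over 30 days allows {budget:.1f} minutes of downtime")
    print(f"After 20 minutes of outages, {budget_remaining(0.999, 20):.0%} of the budget remains")
```

A team might freeze risky releases once `budget_remaining` approaches zero, and resume them when a new window begins.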

Does this sound familiar? A bit like DevOps, perhaps? Read our next post to learn how SRE is different from DevOps.

Author

Aliasgar Muchhala

Global SRE Lead and Global Architects Lead
A strategic, focused, business-oriented leader and Capgemini Level 3 Certified Chief Architect, with an impressive record of architecting and building cutting-edge systems that leverage new-age technologies to enable clients to transform their business, reduce costs, and improve efficiency.

    Precision in personalization: The power of BPM platforms

    Dinesh Karanam
    29 May 2024

    In the ever-evolving financial landscape, personalization is no longer a luxury but a necessity for organizations seeking to retain and attract customers. Delivering the right message, tailored to individual needs and preferences, at the optimal time and channel, is the crux of this journey. It’s about transcending generic offerings and forging genuine connections with clients. As per the 2023 report on the State of Personalization Maturity in Financial Services from Dynamic Yield by Mastercard, 86% of financial institutions (FIs) stated that personalization is a clear, visible priority for the firm and its digital strategy, with 92% planning to invest further in the practice.

    While the destination is clear—enhanced customer satisfaction, loyalty, and business growth—the path towards achieving this is intricate. It demands a sophisticated incorporation of innovative technology, intense data analytics, and an intelligent understanding of customer behavior and preferences. Banks embarking on this journey must equip themselves with the right tools and knowledge to navigate this complex terrain.

    Setting the course: charting the path with personalization in financial services

    Personalization is the guiding light for banks towards a future where customer-centricity is paramount, and this journey can help across multiple avenues:

    • Enhanced Customer Experience: Personalization helps craft experiences that resonate with the client’s financial goals, challenges, and aspirations.
    • Unlocking Revenue Opportunities:  Clients who prefer the personal touch are more likely to engage with additional offerings, presenting lucrative cross-selling and up-selling opportunities.
    • Competitive Differentiation: In a crowded market, personalization can help be a differentiating factor, drawing in clients who seek a truly tailored experience.
    • Enhanced Operational Efficiency: Effective personalization helps to streamline processes, eliminate inefficiencies, and allocate resources with precision.
    • Risk Mitigation: Tailoring products and services to align with an individual’s risk profile allows banks to fine-tune their risk assessment and mitigation strategies.

    Implementation of these personalization initiatives relies heavily on innovative technology and advanced analytical skills, and Business Process Management (BPM) platforms are a great way to start your journey towards precision personalization. These platforms provide the necessary framework to seamlessly integrate personalization into every facet of customer interaction.

    Embarking on the journey: BPM platforms as your GPS for personalized experiences

    As banks prepare for this transformative journey, BPM platforms serve as the indispensable toolkit that help chart the way towards personalized client experiences through many key features:

    • Advanced Analytics: These tools analyze customer data and interactions to identify patterns and preferences, informing more personalized service delivery.
    • Customer Journey Mapping: BPM platforms allow businesses to create detailed customer journey maps, identifying key touchpoints for personalization.
    • Rule-Based Decisioning: Businesses can automate personalized responses or actions through rules, based on specific customer behaviors or attributes.
    • Integration Capabilities: The ability to integrate with CRM systems & other digital platforms helps ensure that data is effectively utilized for personalization.
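    The rule-based decisioning feature above can be sketched in a few lines. This is a minimal, hypothetical illustration of the pattern, not any specific BPM product’s API: each rule pairs a condition over customer attributes with a personalized next-best-action, and the first matching rule wins. The attribute names, thresholds, and actions are invented examples.

```python
# Sketch of rule-based decisioning: each rule pairs a condition over
# customer attributes with a personalized action. First match wins.
# All attribute names, thresholds, and actions are hypothetical.

RULES = [
    (lambda c: c["days_to_mortgage_renewal"] <= 90,
     "offer_mortgage_renewal_review"),
    (lambda c: c["savings_balance"] > 50_000 and not c["has_investment_account"],
     "suggest_investment_consultation"),
    (lambda c: c["missed_payments"] >= 2,
     "route_to_financial_wellness_advisor"),
]

def next_best_action(customer: dict, default: str = "send_generic_newsletter") -> str:
    """Return the action of the first rule the customer matches."""
    for condition, action in RULES:
        if condition(customer):
            return action
    return default

if __name__ == "__main__":
    customer = {"days_to_mortgage_renewal": 45, "savings_balance": 12_000,
                "has_investment_account": False, "missed_payments": 0}
    print(next_best_action(customer))  # offer_mortgage_renewal_review
```

    In a real BPM platform the rules would be authored declaratively and fed by the integrated CRM data described above, but the decisioning logic follows this shape.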

    It is evident that BPM platforms emerge as the trusted GPS as we navigate the intricate terrain of personalized experiences. Banks can use BPM platforms in many ways throughout their personalization journey:

    • Automation of Personalized Workflows: BPM platforms empower banks to create tailored workflows that respond to specific customer triggers, ensuring timely and relevant interactions.
    • Consistency Across Channels: Whether a client prefers online banking or branch visits, BPM platforms ensure a consistent and personalized experience at every touchpoint.
    • Data Integration and Analysis: By aggregating data from various sources, BPM platforms provide a comprehensive view of the client, enabling tailored solutions and targeted communication.
    • Dynamic Process Adaptation: As client needs evolve, BPM platforms allow for real-time adaptation of processes, ensuring that personalization remains dynamic and responsive.

    A remarkable example of BPM platform implementation is Wells Fargo, which employed the Pega Customer Decision Hub to enhance personalization through real-time modeling and adaptive machine learning for tailored interactions. This helped boost customer engagement rates by 3–10x and increase conversion rates across channels.

    Reaching the destination: the transformational impact of BPM-driven personalized customer experiences

    The impact of effectively implemented BPM-enabled personalization is substantial. Businesses can expect to see:

    • Increased Customer Engagement: Clients are more likely to engage with banks that demonstrate a genuine understanding of their needs and goals.
    • Enhanced Customer Satisfaction and Loyalty: Personalized experiences create a sense of loyalty and trust, leading to long-term client relationships.
    • Higher Conversion Rates: Personalization can lead to more effective marketing and sales strategies, resulting in higher conversion rates and increased revenues.
    • Operational Efficiency: Automation streamlines processes, reduces manual effort, and frees up valuable resources for strategic initiatives.
    • Competitive Differentiation: In a crowded market, personalization can be the key differentiator that sets a bank apart.

    This journey of precision personalization is not without its challenges. Although BPM platforms are the right way forward, choosing the right platform can be the differentiating factor in how clients resonate with their banks and drive positive engagement. Capgemini takes pride in partnering with industry leaders that can help banks achieve precision personalization and, ultimately, lasting success in the ever-evolving financial landscape.


    Join the Capgemini experience at PegaWorld iNspire 2024. Visit our booth #22 and explore a world where intelligent connections unlock personalized journeys. 

    Please contact our experts

    Dinesh Karanam

    Senior Director, Business Processes and Augmented Services Leader for North America, Financial Services
    Dinesh leads business and technology transformations for global organizations, using his 25 years of expertise in diverse industries to drive strategic innovation and impactful changes. He enhances operational efficiency and spearheads global teams to deliver significant business achievements, including profit growth and digital advancements. ​


      From Telco to Tech Co.: How the shift to software-driven business is unlocking a new era of innovation in telecoms

      Shamik Mishra
      May 28, 2024

      For the past several decades, telecom operators focused primarily on connectivity, with end-user devices consuming network resources. However, the landscape has since evolved, with a significant amount of content now generated by applications such as over-the-top (OTT) media services, direct-to-consumer platforms, and other media services.

      To stay relevant, operators must pivot towards providing value to content creators, which entails offering insights derived from the network to these applications. This shift requires implementing software-defined interfaces and adopting a consumption-based business model.

      Further, Telcos’ software strategies extend beyond network virtualization or softwarization. Rather, they encompass the entire stack, from networks all the way through the customer experience. By taking a more holistic approach, operators can benefit from faster time to market, proactive service provisioning, automation, autonomous networks, precise asset management, and digitization of customer services.

      In our most recent research, The art of software: The new route to value creation across industries, we explore the shift from Telco to Tech Co. and the specific role of software in this evolution. In this post, we present three key takeaways from our research, as well as our own expert insights, that highlight how Telco companies can prepare for and lead the way to a software-driven future.

      3 ways Telco organizations can prepare for a software-driven future

      1. Accelerating the shift to cloud native

      To be a software-driven business, Telcos must also be a cloud native business. This point has become non-negotiable as most, if not all, new network elements and equipment demand a cloud native foundation. This means that companies across all subsectors of the industry must possess the capability to develop cloud-native software, implement robust tooling, automation, CI/CD pipelines, and other essential components inherent to the cloud-native model.

      The cloud is also key to developing and scaling software-based services, a critical revenue driver in the telecoms industry. In fact, our research shows that Telco executives, as a group, expect to see the highest increase in the share of total revenue from software by 2030 (39% for telecom vs. a 29% average). Companies that expect to tap growth lines outside core telco services, whether through cybersecurity, IoT solutions, or edge services, must become cloud native.

      2. Leveraging the power of AI to scale

      Current network automation solutions often operate in silos, with custom automation resulting in a “spaghetti integration.” Intelligent inventory management and automation can address this challenge by effectively managing dependencies across network assets.

      Adopting a software-driven approach enables telcos to scale and extend automation capabilities, accommodating new generations of connectivity technology seamlessly. This approach offers various benefits, including reduced total cost of ownership (TCO), accelerated innovation, and avoidance of vendor lock-in.

      AI is instrumental in realizing this vision, alongside other software-based automation approaches such as rules-based, model-based, and ML-based automation. As such, Telco organizations are not only required to significantly expand their technical workforce but also build teams equipped with a profound understanding of AI, data consumption, and the ability to craft software snippets that facilitate automation.

      3. Creating a strong culture of collaboration and transformation

      Though it may seem counter-intuitive, at the heart of every successful software-driven business is not simply great technology, but an excellent experience. To deliver on this front, companies need to understand their customers, their needs, their challenges and the context in which they operate.

      For some Telcos, shifting to software means becoming a true customer-first business. This involves seamlessly integrating various aspects of the service, from network infrastructure and product development to service and marketing, all orchestrated with the primary goal of delivering an exceptional customer experience. This requires a fundamental reimagining of how teams operate, the talent they attract, retain, and nurture, and an ongoing focus on continuous product development and operation.

      Finally, it will be nearly impossible for Telco organizations to make this journey alone. Another key finding of our research is that success requires an ecosystem approach, bringing together multiple stakeholder groups, such as chipset providers, hardware and sensor vendors, cloud and platform providers, connectivity providers, testing and others, to assemble the necessary expertise and capabilities needed to take full advantage of the benefits of software-driven transformation.

      Reclaiming the innovation story: How Telcos can kickstart their softwarization journey

      While many Telco organizations have significant work ahead to become software-driven companies or build the maturity of their business models, the upshot of our research is that they are hardly alone in the struggle: Only 29 percent of organizations across industries have started to scale and utilize software to drive transformation – with only 5 percent fully scaling identified use cases.

      The rest of companies, and likely many Telco organizations, find themselves squarely in the experimentation stage, identifying application areas/use cases or implementing pilots/proofs of concept (PoCs). This means that there is still ample time to start the journey and shape the future.

      That said, we must remember that when it comes to technology, the future moves fast. Telco companies have time to act, but not time to wait.

      Are you ready to take the next step towards a software-driven future? Download our recent report, The art of software, and schedule a consultation with our authors to start your journey from Telco to Tech Co. today.

      TelcoInsights is a series of posts about the latest trends and opportunities in the telecommunications industry – powered by a community of global industry experts and thought leaders.

      Meet the author

      Shamik Mishra

      CTO of Connectivity, Capgemini Engineering
      Shamik Mishra is the Global CTO for Connectivity, Capgemini Engineering. An experienced technology and innovation executive, he drives growth through technology innovation, strategy, roadmaps, architecture, and R&D in the telecommunications and software domains. He has rich experience in wireless, platform software, and cloud computing, leading offer development and new product introduction for 5G, edge computing, virtualization, and intelligent network operations.

      Karl Bjurstroem

      EVP, Global Head of Tech & Telecom Industries, Capgemini Invent
      Strategy consultant and manager passionate about the use of digital technologies to gain strategic and operational advantages within customer experience, product development and marketing. Specific expertise in digital strategy formulation and realization, developed by working with CXO level clients in the high tech, telecom, media and banking industries across the globe.

        Unveiling the future with spatial computing

        Sven Boesen
        May 28, 2024

        Governments harness spatial computing for enhanced decision-making

        In today’s fast-paced digital era, governments worldwide constantly seek innovative ways to streamline operations, enhance public services, and make informed decisions. One groundbreaking technology paving the way for this transformation is spatial computing. By leveraging highly accurate virtual environments, governments can unlock many benefits, revolutionizing how they plan, manage, and engage with their constituents.

        At its core, spatial computing integrates the physical and digital worlds, providing immersive, interactive experiences that mirror reality. Whether simulating urban landscapes, modeling infrastructure projects, or analyzing complex data sets, this technology offers unparalleled insights and opportunities for governments at all levels.

        One notable example of this transformative power is the partnership between Capgemini and Unity, two industry leaders at the forefront of spatial computing innovation. Together, they have created a remarkable digital twin for the Orlando region, showcasing the immense potential of this technology.

        In Orlando’s regional digital twin, Capgemini’s expertise in digital transformation and Unity’s cutting-edge 3D visualization capabilities have converged to create a virtual replica of the city and its surroundings. This digital twin isn’t just a static model; it’s a dynamic, data-rich environment that enables real-time simulations, scenario planning, and predictive analytics.

        So, what are the benefits of governments tapping into the possibilities offered by highly accurate virtual environments like the Orlando regional digital twin?

        Firstly, enhanced decision-making becomes a reality. Policymakers can gain deeper insights into various scenarios and their potential outcomes by visualizing complex data in a spatial context. Whether it’s optimizing traffic flow, planning for natural disasters, or assessing the impact of new development projects, governments can make more informed decisions that benefit their constituents.

        Secondly, improved collaboration and stakeholder engagement are fostered. Virtual environments provide a common platform where diverse stakeholders can come together, visualize concepts, and co-create solutions. This promotes transparency and inclusivity, and ensures that decisions are made with the input of all relevant parties.

        Thirdly, virtual simulations enable governments to realize significant cost and time savings through early intervention. By simulating projects in a virtual environment, potential issues can be identified and addressed before they become costly problems. Whether it’s identifying design flaws, optimizing resource allocation, or minimizing construction delays, the benefits of early intervention are manifold, instilling confidence in the effectiveness of this technology.

        In conclusion, spatial computing represents a paradigm shift in how governments operate and engage with their communities. Governments can unlock new possibilities for innovation, efficiency, and collaboration by harnessing the power of highly accurate virtual environments. The partnership between Capgemini and Unity, exemplified by the Orlando regional digital twin, serves as a testament to the transformative impact of this technology. As we look to the future, the possibilities are limitless, and governments worldwide stand to reap the benefits of embracing spatial computing in their decision-making processes.

        Meet our expert

        Sven Boesen

        Director Experience Engineering, Digital Studio at Capgemini Engineering
        Sven Boesen is an expert in digital twins for industry, in particular using real-time 3D to create highly interactive and immersive experiences. He has extensive experience in providing geospatial solutions, often involving simulation and optimisation, to drive efficiencies across different industries.

          Resilient supply chains
          Supply chain quality management

          Gilles Bacquet
          24 May 2024

          How supply chain quality management (SCQM) can help suppliers in these increasingly uncertain times 

          In the first blog of this three-part series, you learned about the importance of order management to supply chains, and how the order management process can be improved.

          In this blog (part two), you will learn about Supply Chain Quality Management – what it is, how it works, and why it matters.

          Quality problems in the supply chain: a hypothetical example

          A retail company has been experiencing a surge in customer complaints about a particular smartphone model they sell. Customers report issues such as malfunctioning screens, battery failures, and overheating problems a few months into using the phones.

          Upon investigation, the company discovers that the defects stem from components supplied by one of their overseas vendors. The vendor, located in a different country, has been struggling with quality control issues in their manufacturing processes. However, due to the lack of robust supplier quality management procedures in place, the retail company failed to identify and address these issues promptly, and now struggles to find an alternative, more reliable source of components.

          The importance of supplier awareness

          In manufacturing, an average of 80% of a product’s value comes from suppliers.

          Mastering supplier management (and thus the quality of supplier goods) is critical for all organizations with a supply chain – especially in this era of global disruption and uncertainty. This involves mitigating supply risks, which is in the DNA of the Supply Chain Quality Management (SCQM) team – whose job it is to provide a situational awareness picture of the supply chain, as well as provide steps to solve the many problems that can occur.

          To this end, businesses need a robust action plan that contains, for example, a set of quick containment actions that support quality control or complaint management. One way to support this is through the Eight Disciplines (8D) approach. Originally developed at Ford Motor Company, this methodology can be used for supply chain problem identification and solving.

          For a company, being able to regularly assess its global supply chain is the first step in properly monitoring (and understanding) the global manufacturing capabilities of its suppliers. This understanding allows companies to, for example, pre-empt critical component shortages by changing the manufacturer for a specific part. To this end, we implement various specialized audit methodologies (eg. VDA6.3 and Aero Excellence) that provide detailed insights into the quality of your supply chain, and that leverage best practices from several industries.

          This methodology allows us to identify individual patterns – but also global ones. For example, we recently performed a complete assessment campaign for one of our clients, in which we audited more than 200 suppliers in 35 countries over 17 weeks.

          From this snapshot, we are able to create a supplier development program using robust methodologies initially developed for the automotive industry, but that are today widely applied in other sectors, such as Advanced Product Quality Planning (APQP). We can also create entirely bespoke audit methods for clients.

          ‘Rightshoring’ for your supply chain

          Rightshoring can be defined as locating a business’s manufacturing in areas that provide the best combination of cost and efficiency. To help our customers succeed with this approach, we rely on our ‘rightshore vision’. This leverages a network of 2500+ experts across the world.

          As part of this vision, our local consultants and audit teams quickly get to work on client premises, reducing travel time, costs, and project eCO2 emissions. We interact, where possible, with suppliers in the local language – streamlining remediation. Through this rightshoring approach, we have demonstrated a 60% reduction in eCO2 emissions compared to the traditional approach of European experts traveling overseas.

          As anyone who has bought an item that did not live up to expectations knows, quality control is essential. And it becomes more important as products grow more complex – more components mean more points of failure.

          As electronic goods become increasingly complex, and as supply chains continue to endure geopolitical instability – SCQM, and the people who do it, will be more important than ever.

          In the third and final part of this blog series, you will learn about the importance of sustainability in supply chains, and what steps you can take to make your supply chains more sustainable.

          If you are currently facing delivery disruptions, or if you need to ramp up your supply chain to meet changing demand, we can help. Capgemini has years of experience helping companies across sectors and countries with supply chain quality management, along with access to some of the world’s leading experts in the subject. To find out more, contact our expert.

          Author

          Gilles Bacquet

          Senior Portfolio & Product Manager, Resilient & Sustainable Supply Chain offers owner
          Gilles is a Production & Supply Chain engineer who joined the Capgemini group in 2001. Starting as a consultant expert in Supplier Quality Management for the automotive and aeronautics sectors, he extended his responsibilities by creating the Supply Chain offer and developing business overseas. He today leads the Resilient & Sustainable Supply Chain offers for Capgemini Engineering.


                Generative AI is making life easier for product support engineers

                Nikhil Gulati & Jalaj Pateria
                May 21, 2024

                Learn how Generative AI (GenAI) is revolutionizing Software Product Support and how to get started with this powerful technology in your business.

                Generative AI (GenAI) is beginning to transform many activities, and product support is no exception. Product support is vital for the ongoing function of all products, from Microsoft Office to niche robotics systems. Users need product support when installing systems, integrating with other software, working out how to use the product, and resolving issues when they arise. 

                Such work must be handled by experts who understand the product and its operation. The cost of this support must be factored into any product cost model, so improving the support process can unlock revenue by extending the life of products while reducing the costs of supporting them. This is particularly true as products reach “end of life”, when user numbers often shrink, and support costs relative to revenue can become problematic.

                The potential of GenAI in product support

                Because GenAI can process information and predict the answer to a question based on experience, it opens a world of possibilities for product support. Given sufficiently large training data of good quality, GenAI can be taught about the fundamental nature of systems and predict the most appropriate answers to questions about them. A few examples of GenAI’s potential uses in product support are developed below.

                • Tech support automation: GenAI’s ability to learn answers to common technical questions about problems and provide quick and detailed responses means such a service can be available 24/7. Further, GenAI responses can be adapted to the specific user query and context. This approach is an important improvement on the typical support model, based on asking a series of fixed questions and pointing the user to an off-the-shelf ‘how-to’ article.
                • Augmenting human support workers: GenAI can facilitate the work of human support workers by summarizing requests and providing these workers with the relevant information to solve these requests quickly. If support workers respond by email, GenAI can help them turn their response into text that will be easier for the user to follow, based on the GenAI model’s technical knowledge. It can also translate responses, allowing teams to offer support, even when they do not speak the user’s language.
                • Onboarding new hires in the support team: A support GenAI can be used to train new support engineers on common product issues.
                • Software product upgrades: Generative AI can be used by support engineers to facilitate software product upgrades, for example, translating software code into a newer language or modifying code to be more efficient as part of a green code sustainability initiative.
                • Streamlining processes: GenAI tools can automatically categorize emails and support tickets and learn to prioritize in order of importance, assigning these to the relevant experts or those with the most capacity.

                A well-composed suite of GenAI-powered tools can reduce time-to-solution, human error, and product support costs and so allow experts to focus on the more complex tasks that humans are best suited to.  

                GenAI in product support – the art of the possible

                Theoretical possibilities are all well and good, but what is happening in the real world? Capgemini is fortunate to have worked with multiple clients on projects to create value by harnessing GenAI in their product support processes and systems.

                In one example, a large computer hardware organization wanted a system to identify multiple ticket types, handle initial conversations with users, and respond in various languages. The GenAI system we developed provided the firm’s customers with step-by-step instructions on how to resolve their queries. These responses were based on information in product knowledge bases and user manuals. It also identified user queries that couldn’t be solved using this approach and then escalated them to human support engineers. Finally, the GenAI collated user feedback and used this to propose updates to the knowledge base. The outcome was considerably fewer tickets routed to human agents, saving time and money.
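The answer-or-escalate pattern behind this case study can be sketched as follows. This is a toy stand-in, not the actual client system: the knowledge-base entries, the word-overlap similarity, and the threshold are all invented simplifications of what would be embedding search plus an LLM-drafted reply in production.

```python
# Illustrative sketch of the escalation pattern: answer from the
# knowledge base when a sufficiently similar entry exists, otherwise
# hand the query off to a human support engineer.

KNOWLEDGE_BASE = {
    "printer not printing": "Check the spooler service, then reinstall the driver.",
    "reset my password": "Use the self-service portal and follow the email link.",
}

def similarity(a: str, b: str) -> float:
    """Crude word-overlap score in [0, 1], standing in for embedding search."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def handle(query: str, threshold: float = 0.3) -> tuple:
    best = max(KNOWLEDGE_BASE, key=lambda k: similarity(query, k))
    if similarity(query, best) >= threshold:
        return ("answered", KNOWLEDGE_BASE[best])
    return ("escalated", "Routing to a human support engineer.")

print(handle("my printer is not printing"))   # answered from the KB
print(handle("the build server is on fire"))  # escalated to a human
```

The same structure also supports the feedback loop described above: escalated queries and their human resolutions become candidate updates to the knowledge base.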

                In another case, we worked with a network equipment provider to develop a chat assistant that provides ‘human-like’ first-level responses and summarizes tickets for efficient handover to other support staff. Again, we saw reduced operational costs and improved service-level agreement (SLA) adherence in their 24/7 operations.

                In a final example, we built a do-it-yourself (DIY) tool and analytics generator for a leading telco. They needed to document the standard operating procedures (SOPs) of their support engineers for future training, and to generate role-based visualizations and predictions. The customer required a centralized management dashboard that unified all IT platforms on a single pane, plus a GenAI-based tech stack for predictive and preventive monitoring.

                The challenges of integrating GenAI in product support

                Developing, deploying, and running GenAI-powered systems is becoming ever more accessible, thanks to the increasing availability of large open-source language models. However, care needs to be taken when integrating AI into systems.

                Firstly, GenAI must be carefully crafted and trained for the specific use case – using up-to-date, high-quality data. The AI will be wrong if the user manual or knowledge base is wrong. This means that people who understand the product for which the GenAI support system is being developed must be involved in designing and testing it, and must ensure it has been trained correctly. Because GenAI is probabilistic, its outputs can occasionally be wrong; this is often described as a ‘hallucination’ in the GenAI community. Consequently, quality control is vital.

                Secondly, there are IT practicalities to consider. The IT infrastructure must offer sufficient computational power to run a GenAI model and provide the connectivity needed for the GenAI to interact with knowledge management databases and issue management systems (including email, WhatsApp, etc.). There must also be a single source of truth so that any updates to the knowledge base – by humans or AI – feed into the GenAI’s model of the world. Organizations must be willing to share this data, regardless of its sensitivity.

                Finally, GenAI project timescales need to be calibrated to the business case. Training takes time, but no business wants to wait a year for a perfect GenAI support system that will be obsolete when launched. An AI that can solve 50% of queries and refer the rest to humans but takes three months to build and deploy may offer better value than one that can solve 60% of queries but takes two years to deliver.
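The trade-off above can be made concrete with a back-of-envelope calculation. Every figure in this sketch (query volume, cost per human-handled ticket, deflection rates, planning horizon) is invented purely to illustrate the reasoning:

```python
# Back-of-envelope sketch: a faster, less capable GenAI assistant can
# out-earn a slower, better one over a fixed planning horizon, because
# value only accrues once the system is live. All figures are invented.

def savings(deflection_rate: float, months_to_deploy: int,
            horizon_months: int = 24, queries_per_month: int = 10_000,
            cost_per_ticket: float = 5.0) -> float:
    """Total cost avoided over the horizon by tickets the AI deflects."""
    live_months = max(horizon_months - months_to_deploy, 0)
    return deflection_rate * queries_per_month * cost_per_ticket * live_months

fast = savings(0.50, months_to_deploy=3)    # 50% solver, live for 21 months
slow = savings(0.60, months_to_deploy=24)   # 60% solver, live for 0 months
print(fast, slow)  # 525000.0 0.0
```

Under these (hypothetical) numbers, the 50%-accurate system delivered in three months is worth far more than the 60%-accurate system that arrives at the end of the horizon.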

                Ultimately, the recipe for success with GenAI support systems is the same as for most data projects. Set clear goals and expectations. Work with experts who know both the technology and the domain, and use frameworks that allow you to move efficiently through the development process.

                Capgemini has multiple software frameworks and project blueprints to accelerate the development, deployment, and operation of GenAI in product support systems. Contact our experts to learn more.

                Meet our experts

                Nikhil Gulati

                Head of Intelligent Support and Services
                Nikhil is a results-oriented professional with extensive experience in IT/telecom, project management, software development and support, client relationship management, business development and operations, and pre-sales.

                  Jalaj Pateria

                  Enterprise Architect
                  Jalaj is a Chief Automation Architect at Capgemini, Intelligent Support Services. He has over 16 years of experience working extensively on Digital Transformation Initiatives across BFSI, Health Care, Airlines, Industrial, and Telecoms. Currently working on next-gen initiatives in consulting, pre-sales, and solution phases, Jalaj’s research interests lie in Machine Learning, Explainable AI (XAI), Deep Learning, Sentiment Analysis, Digital Twins, AR/VR, and Automated Reasoning.

                    Digital continuity for the semiconductor industry
                    Why we need it, and how to build it

                    Capgemini
                    Ravindra Jadhav & Shekhar Burande
                    21 May 2024
                    capgemini-engineering

                    Learn about major semiconductor industry trends, why digital continuity is important to the sector and what companies can do to create this continuity.

                    “Good companies manage Engineering. Great companies manage Product”.

                    – Thomas Schranz

                    Semiconductors are the backbone of modern technology, playing a pivotal role in virtually every aspect of our daily lives. These tiny electronic components – which manage the flow of electric current in a device – can be found in everything from smartphones and LED bulbs, to cars, kitchen white goods and medical devices.

                    Looking ahead, the importance of semiconductors is only expected to grow as society becomes increasingly reliant on digital technologies. The rise of the Internet of Things (IoT), autonomous vehicles, artificial intelligence (AI), and 5G networks all require sophisticated semiconductor technology. These advancements require faster, more energy-efficient, and smaller semiconductors to accommodate the ever-expanding demands of a digitally connected world.

                    As such, semiconductor manufacturers must produce more efficiently, meeting the ever-increasing need for ‘smaller, faster, cheaper’, whilst maintaining margins and dealing with fluctuations of demand and uncertainty of supply. Because of this, it’s increasingly evident that these companies need to better manage the requirements of the semiconductor chip product lifecycle – e.g. the mix of chip complexity and the need for specialized ‘mission-specific’ chips, regulatory constraints, and various other challenges. This will allow companies to gain R&D, operational, and margin efficiency, and decrease their time to market – largely by enabling digital continuity across their systems.

                    Below we outline major trends affecting the semiconductor industry which, due to the impact of semiconductors, also have broader global significance.

                    • The growing importance of ecosystem partnering and selling across vertical industries: Proof of functionality and new next-gen technology, for example, AI, Metaverse, 5G, and Edge, are driving partnerships across the semiconductor ecosystem to address end markets with complete end-market platforms.
                    • 5G is accelerating the pace and possibilities of connectivity – and the use cases it enables: Next-gen connectivity is evolving, from wired and wireless networks to private 5G, which is revolutionizing use cases across a wide range of industries.
                    • Verticals are bringing chip design in-house: Semiconductor manufacturers are losing share to a growing number of product/system companies, which are designing chips in-house for use in their own products/services – allowing them to disrupt, differentiate and control the supply chain.
                    • The steady shift towards ‘Industry 4.0’ and fully automated manufacturing: Digitization is creating an array of challenges (and opportunities) related to collecting, managing, processing, analyzing, visualizing, and effectively utilizing data – a microchip-hungry endeavor.
                    • Increased product innovation and reimagined customer experiences: An increased focus on co-innovation and co-design, with the goal of establishing digital continuity and a single source of truth (SSoT) for all product data. The intent is to accelerate product innovation and consequently delight customers.
                    • The value of Moore’s Law diminishing: Semiconductor manufacturers competing on performance, power, and area (PPA) seek creative ways to achieve a competitive advantage, while others add value by producing customizable modular chips called ‘chiplets’ that can be combined to form a complete system-on-chip (SoC).

                    Why is digital continuity important to the semiconductor industry?

                    ‘Digital continuity’ is an organization’s ability to maintain (and put to use) important information, despite ongoing changes to the organization’s ways of storing data, and relentless evolutions in digital technology. This allows an organization to connect the ‘digital threads’ (information flows) of this data across its systems. Through this intelligent information sharing and monitoring, digital continuity helps the company and its ecosystem to operate more efficiently.   

                    The ability to manage information will be a competitive differentiator. Success for companies that produce these chips will depend on achieving a faster time to market with a ‘first time right’ approach. Geopolitical changes and challenges (e.g. certain countries ‘reshoring’ the production of core semiconductors for national security purposes) will continue to force the localization of production, and these new greenfield plants will only be able to meet the required pace with the right PLM backbone and digital continuity foundations.

                    The previous approach to development, i.e. using custom, homegrown, disconnected systems, results in misaligned technical investments and technical debt – namely, the accumulated cost of shortcuts taken during software development, which creates increased complexity and maintenance effort over time.

                    The traditional document-centric development approach often does not allow traceability between requirements, product design, or the front and back-end manufacturing of products. This loss of traceability creates additional costs, quality control issues, compliance risks and sustainability overheads for enterprises.

                    As such, there is a clear need for an industrialized PLM backbone that provides a single source of truth (SSoT), offering consistency, accuracy, and reliability in the use of data and information across all departments and processes. In addition, rapid integration into end-user product ecosystems (which is required to meet the competitive pace of business today) requires the use of simulation and model-based approaches.

                    What features would such a PLM backbone require?

                    • A complex hierarchical data model: Allowing oversight of variables like chip order part numbers and customer part numbers. It could also manage complex chip design, development and verification, tapeout, and die design finalization. Other capabilities would include register-transfer level (RTL) design, GDSII data management, mask set management, and control over reticles and die variants, wafer fabrication and sort – and, ultimately, the chip assembly process through to the final product.
                      • This data model would also address product complexity and technology development needs for the data models that are used by downstream systems to design and manufacture products. These include engineering bills of materials (EBOM), manufacturing bills of materials (MBOM), and production bills of materials (PBOM).
                    • Management of new product introduction (NPI) programs and product portfolios: Meeting the need for new markets, changing consumer demand and emerging technologies.
                    • Integrated fabless and foundry management: Once designs are ready to hand over to approved foundries or to outsourced semiconductor assembly and test (OSAT).
                    • IP management: Providing integrated IP reuse that is controlled and secured. Transparent IP management increases efficiency in R&D, operations, and margin efficiency – whilst avoiding the risk of IP infringement.
                    • Product sustainability and environmental compliance: Offering integrated material substance declarations and compliance checks for Integrated Device Manufacturer and Original Design Manufacturer compliance management. Ultimately, this can help to increase the effectiveness of semiconductor equipment, moving us towards the future of sustainability compliance in the factory.   
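The hierarchical data model and BOM roll-up described above can be illustrated with a minimal sketch. The part numbers and structure are invented, and a real PLM backbone manages far richer objects (revisions, effectivity, and separate EBOM/MBOM/PBOM views), but the parent-child linkage and quantity roll-up are the core idea:

```python
# Toy sketch of a hierarchical product data model: parts linked
# parent-to-child (SoC -> die -> reticle ...), with a roll-up that
# flattens the bill of materials into total leaf-part quantities.
from dataclasses import dataclass, field

@dataclass
class Part:
    number: str                                    # e.g. an order part number
    children: list = field(default_factory=list)   # (Part, quantity) pairs

    def add(self, child: "Part", qty: int) -> None:
        self.children.append((child, qty))

def flatten_bom(part: Part, multiplier: int = 1, totals=None) -> dict:
    """Roll up total quantities of every leaf part below `part`."""
    if totals is None:
        totals = {}
    for child, qty in part.children:
        if child.children:                         # assembly: recurse deeper
            flatten_bom(child, multiplier * qty, totals)
        else:                                      # leaf: accumulate quantity
            totals[child.number] = totals.get(child.number, 0) + multiplier * qty
    return totals

soc = Part("SOC-1000")
die = Part("DIE-200")
soc.add(die, 2)                                    # two chiplets per SoC
die.add(Part("RETICLE-9"), 1)
soc.add(Part("SUBSTRATE-5"), 1)
print(flatten_bom(soc))  # {'RETICLE-9': 2, 'SUBSTRATE-5': 1}
```

Traceability in the sense used above means every such link is versioned and navigable in both directions, from requirement to design to manufactured item.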

                    With all the above, the semiconductor industry is looking to build integrated closed-loop quality system tracking into its design and manufacturing processes.

                    An opportunity, for those with the means to seize it

                    Once the hard work has been done to establish digital continuity across an organization’s many systems, companies can expect increased efficiency in R&D, operations and margins, decreased time to market and, of course, a significant competitive advantage.

                    Semiconductor manufacturers today have a major opportunity to support the next generation of technology, but will only be able to properly exploit it with the kind of digital continuity that sophisticated PLM provides.

                    Capgemini’s world class team of semiconductor industry experts is ready to help you build a next-gen secure, intelligent PLM backbone for your semiconductor business.

                    Our VLSI practice experts can be an integral part of your team, working closely with your business product managers and fab leadership team. We also offer pre-configured semiconductor solutions and accelerators for the PLM, MES, and ERP triptych – to help you bring everything together.

                    Ready to progress, or want to learn more? Meet our experts.

                    Meet our experts

                    Ravindra Jadhav

                    Digital Continuity Presales and Delivery Director
                    Ravindra has 20+ years of experience, successfully delivering IT products and program solutions for a range of industries, including aerospace and defense, automotive and high-tech. He possesses deep knowledge of product lifecycle development, engineering, manufacturing and the supply chain – and has led many successful customer-centric programs

                      Shekhar Burande

                      Vice President, Digital Continuity & PLM
                      Shekhar is an expert in the digital continuity and digital twin domains and is responsible for Capgemini Engineering’s solution portfolio and Center of Excellence. Shekhar is also an active participant at events and has spoken in sessions on digital continuity subjects such as cloud, battery and gigafactory, and climate tech.

                        Towards a European GPU

                        Loïc Hamon & Jonathan Nussbaumer
                        20 May 2024
                        capgemini-engineering

                        Discover why a European GPU is so important, and how Europe can collaborate to make it happen.

                        What do these three very different technologies all have in common – generative AI, 6G mobile networks, and autonomous vehicles? Clearly they are all exciting, but they also all rely on the science – and the magic – of advanced processors. GPUs (Graphics Processing Units) are a member of this family; a family which boasts sophisticated architectures and exceptional miniaturization, enabling high performance for massive data processing while balancing size, speed, and energy efficiency. But are we in danger of taking this incredible technology for granted?

                        The journey of GPUs from the 1990s to today is a tale of innovation and competition. Initially designed to enhance gaming 3D graphics, GPUs have transcended their original purpose. Thanks to their powerful parallel processing abilities, they now play a pivotal role in artificial intelligence and deep learning. Today, GPUs have transformed from specialized gaming hardware to indispensable tools that enable cutting-edge technology and underpin critical day-to-day services.

                        Europe’s position and challenges

                        Despite the critical role of GPUs, Europe finds itself behind in the manufacture of these essential processors.

                        High costs of cutting-edge semiconductor manufacturing technology limit GPU production to just three global players, and Intel is the only one considering production in Europe by the end of the decade. However, the design of GPUs is as crucial as their production and offers a way for Europe to regain some sovereignty in the market. Once the design of a processor (its system architecture and key intellectual property blocks) is mastered, it is technically possible to have it produced anywhere. In terms of sovereignty, this offers freedom. That’s why Europe must design its own advanced processors, even if it must depend – and it has no choice in the short term – on non-European manufacturers.

                        Recognizing this, global tech giants and countries around the world are investing heavily to quickly design alternatives and take back control. Yet Europe is not currently part of this effort and until it enters the fray, the gap with the rest of the world will continue to widen.

                        Envisioning a European GPU

                        Designing a European GPU is therefore a strategic necessity. The European industrial fabric (health, defense, aeronautics, automotive, telecoms, data centers) cannot depend on a single source of supply – especially one outside the region. But it is a challenge that requires significant investment estimated at several billion euros over several years. This is not just to catch up, but to ensure a diverse supply chain for critical technology.

                        The focus of this investment is not only on creating a GPU, but on developing a complete advanced processor with a heterogeneous architecture combining different functionalities for efficiency (though the modular ‘chiplet’ approach also offers interesting possibilities). Progress has already started across Europe. France, Germany and the Nordics have already taken steps towards this goal, benefiting from collaborations with global tech firms like Thales, and smaller specialists such as Kalray, SiPearl, VSora, GreenYellow, Menta, Scalinx, GrAI Matter Labs, and many other leading lights in the world of CPU and accelerator technologies. Capgemini, following the acquisitions of Altran and HDL, is now Europe’s leading silicon engineering services company and can drive this project forward. Leading forces across the continent have never been brought together for a common sector project like this. Combined, they give Europe the foundational elements that, if properly assembled, could constitute the embryo of a sovereign advanced processor, potentially rivalling non-European tech giants in the long term.

                        A unified industrial vision

                        The realization of a European GPU hinges on aligning European stakeholders around a common industrial vision. This involves a coordinated effort to define a unique architecture, development roadmap, software environment and ecosystem, and market strategy, supported by both the state and private sector investment.

                        A rapid task force could kickstart this initiative, outlining a political and industrial framework within three months. This would catalyse collaboration among industrial clients and technology providers, addressing Europe’s pressing needs for digital sovereignty and technological independence.

                        In summary, developing a European GPU is not merely a technological endeavor but a strategic move towards securing Europe’s position in the global tech landscape, paving the way for technological sovereignty and innovation. This will ensure Europe can continue to enjoy the current and next wave of exceptional technologies that have the ability to change the world in which we live and work.

                        Meet our experts

                        Loïc Hamon

                        CMO for Silicon Engineering at Capgemini Engineering
                        Loïc Hamon is currently the CMO of Silicon Engineering at Capgemini. He orchestrates initiatives to maximize market impact and drive growth. This includes strategic positioning, offering articulation, ecosystem development, and business expansion.

                          Jonathan Nussbaumer

                          Vice-President and Global Head of Silicon Engineering
                          A silicon enthusiast, passionate about unlocking the power of chips in Intelligent Industry, Jonathan is obsessed with building sovereignty for all industries. He leads Capgemini’s silicon engineering journey.


                            Resilient supply chains
                            Order management

                            Gilles Bacquet
                            14 May 2024
                            capgemini-engineering

                            Resilience is essential to survive the global ‘black swan’ events which are becoming more frequent and severe, impacting both top and bottom lines.

                            This is the first in a short, three-part blog series that explores supply chain resilience – why it matters and how it can be achieved.

                            In this blog, you will learn about the importance of order management to supply chains, and how the order management process can be improved.

                            Taking ‘just in time’ for granted

                            Just in time supply chains appear to be very straightforward. You simply order a product from anywhere in the world and it arrives in a few days (or hours, in some cases).

                            This incredible convenience (and the customer expectations around it) did not exist just a few decades ago – and is something that many of us take for granted today. But such speed and convenience take a lot of planning and continual work behind the scenes – this is called ‘order management’.

                            Meet the supply officer (SO)

                            Indeed, to achieve this efficiency, the role of the SO – who handles much of the administration around the movement of goods (i.e. order management) – is critical. For example, an SO will properly configure all parameters in the company’s logistics system. This includes master data management, which covers all the information required to place an order, e.g. the supplier’s name, price, logistic conditions, etc. SOs also manage the nominal flow of parts and information, which represents up to 90% of transactions but only one third of the workload.

                            In fact, SOs spend about two thirds of their time managing non-nominal activities, like collaborating with suppliers to confirm quantities and delivery dates, or reconciling documents (e.g. purchase orders against delivery notes) – thus minimizing impacts on deliveries. These tasks may not seem particularly important, but they must be done (and done properly) for supply chains to operate smoothly – regardless of their size, and especially if they run on the just-in-time model.
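One of these document-matching tasks, reconciling purchase orders against delivery notes, can be sketched as a simple comparison. The field names and records below are illustrative assumptions, not a real logistics system's schema:

```python
# Toy sketch of a non-nominal SO task: reconcile purchase orders
# against delivery notes and flag discrepancies for follow-up.

def reconcile(purchase_orders: list, delivery_notes: list) -> list:
    """Return (po_id, issue) pairs that need supply-officer attention."""
    delivered = {dn["po_id"]: dn["qty"] for dn in delivery_notes}
    issues = []
    for po in purchase_orders:
        got = delivered.get(po["id"])
        if got is None:
            issues.append((po["id"], "no delivery note"))
        elif got != po["qty"]:
            issues.append((po["id"],
                           f"qty mismatch: ordered {po['qty']}, received {got}"))
    return issues

pos = [{"id": "PO-1", "qty": 100}, {"id": "PO-2", "qty": 50}]
dns = [{"po_id": "PO-1", "qty": 90}]
print(reconcile(pos, dns))
# [('PO-1', 'qty mismatch: ordered 100, received 90'), ('PO-2', 'no delivery note')]
```

Automating checks like this frees the SO to spend time on the genuinely exceptional cases, such as negotiating revised delivery dates with suppliers.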

                            Of course, much of this concerns the movement of goods from warehouses to the end customer. However, another increasingly important aspect of order management is an understanding of ‘reverse logistics’ – namely the work required to bring back products to suppliers and organize repairs. This often requires a different order management process – for example, a much less automated workflow – because it can be difficult to forecast where parts will fail, or be broken. Reverse logistics can also entail additional negotiation with suppliers – but is critical to extend the lifetime of products and maintain customer satisfaction.

                            What we learned about better order management

                            From our experiences delivering supply chain services for multiple clients in multiple sectors over the years, we have learned a lot and created some unique capabilities:

                            • Managing the complete portfolio of suppliers with an end-to-end vision. This allows us not only to manage day-to-day activities, but to set commitments that your suppliers will keep to, and to minimize missing parts
                            • A Supply Officer framework that describes the global order management task in 30 individual activities. We can either manage one supply chain task (for example, goods receipt disputes), or your entire supply chain
                            • Up to 30% demonstrated cost savings by using our service when compared to the more common ‘Time and Material’ approach
                            • A range of dashboards and tools to enhance your ways of working, and offer you a complete view of the performance of your supply chain and order management process

                            In conclusion

                            Supply officers could be thought of as the unsung heroes of the supply chain. Improved technology and procedures for order management can help them to be more effective, consequently increasing the resilience of your supply chains – for example, by better utilizing available resources and streamlining operations.

                            In the second of this three-part blog series, you’ll learn about Supply Chain Quality Management – what it is, how it works and why it matters.

                            Struggling with on time delivery or supplier visibility? We can help. Capgemini has helped a range of companies across industries and countries to improve their order management processes and strengthen their supply chains. Choose a world leader – find out how we can help you – contact our expert.

                            Author

                            Gilles Bacquet

                            Senior Portfolio & Product Manager, Resilient & Sustainable Supply Chain offers owner
                            Gilles is a Production & Supply Chain engineer who joined the Capgemini group in 2001. Starting as a consultant expert in supplier quality management for the automotive and aeronautics sectors, he extended his responsibilities to creating the Supply Chain offer and developing business overseas. Today, he leads the Resilient & Sustainable Supply Chain offers for Capgemini Engineering.


                                  Driving efficiency in public safety: Integrating IoT with MCX broadband solutions to enhance accuracy, safety, and security

                                  Patrice Crutel
                                  May 13, 2024

                                  The frequency and magnitude of environmental, political, and socio-economic crises are increasing, which requires enhanced interoperability among international and national public safety organizations, their users, and first responders.

                                  To face these increasingly complex and widely broadcast events, the enhancement of situational awareness and safety is a strong requirement. Voice-only communication services, currently provided by PMR technologies such as TETRA and P25, are no longer sufficient for public safety users, agents, and dispatchers to exchange the advanced situational information that is crucial in an emergency. The more multimedia information public safety users receive, the better they can coordinate their actions to protect people and assets, expedite rescue efforts, and improve accuracy, safety, and security.

                                  Broadband technologies can help by boosting the capabilities of public safety organizations. 4G/5G Mission Critical Communications (MCX) is now ready to surpass the current PMR capabilities and even improve them with ultra-reliable data and video. Public safety authorities must modernize their communications infrastructure to drive innovation and advanced value-added services. Technologies like drones, robots, wearables, and IoT devices have made a considerable impact in recent years across a variety of use cases and situations. Emerging innovation like the metaverse and generative AI will present even more opportunities through enhanced situational awareness.

                                  Communication is paramount in an emergency. By integrating rich media and data into control rooms, public safety agencies can leverage unprecedented insight and visibility into their daily activities. This new technology offers improved information, enabling the creation of more efficient response plans that can adapt in real-time to evolving events and emergencies.

                                  Improving public safety with IoT devices and applications

                                  Here we offer a series of examples of how IoT devices, when integrated with MCX broadband solutions, can improve public safety:

                                  • A fleet of drones equipped with high-definition cameras and thermal sensors, enabled by AI, can quickly monitor and survey large disaster scenes, public gatherings, and other areas in real time. This can improve planning and response times of public safety users, ultimately leading to better outcomes for the public.
                                  • Wearable sensors can monitor first responders, such as firefighters, transmitting vital health data to dispatchers and alerting them when personnel need assistance. This information can also be shared with neighboring public safety leaders immediately, without the need for manual intervention.
                                  • Autonomous robotic devices, like robots and vehicles, fitted with high-definition video and sensors, are deployable in high-risk environments. They can assess situations and potential dangers to public safety users, facilitating the creation of more effective action plans. Augmented with AR/VR technology, these devices enhance understanding and situational awareness by complementing on-site surveillance, leading to a more robust comprehension of conditions.

                                  Improving situational awareness with wearables and drones

                                  All the aforementioned technologies and devices can assist public safety agencies with fundamental activities, such as navigating emergency events like building fires, wildfires, earthquakes, tsunamis, hurricanes, and storms. They can also consider external conditions, such as weather and gas presence, as well as the 3D plan of the building and its equipment for increased situational awareness. These inputs can facilitate better decision-making among first responders and dispatchers.

                                  Here we explore some additional examples of how advancing technologies can improve public safety efforts:

• In the case of a forest fire, wearable sensors can monitor and protect public safety agents in action and quickly detect when people may be in danger. Fleets of drones can enable better visualization of the fire, accounting for conditions such as wind and terrain mapping. This enables dispatchers to locate public safety users and people in need of assistance, optimizing resources to improve response capabilities and save lives.
• In the case of earthquakes or tsunamis, a drone fleet can quickly assess the situation, detecting people in need of help and their circumstances. This information, combined with external factors such as terrain mapping, helps dispatchers better coordinate actions based on the location of various public safety users and quickly reach a specified location.

                                  Enhancing operations with AR/VR, digital twins and the metaverse

The metaverse, augmented/virtual reality (AR/VR), and digital twins serve as invaluable tools for public safety users during sudden emergencies, enhancing response capabilities and facilitating swift actions. They can provide comprehensive analyses of the environment and offer guidance on how to better handle the situation. These technologies can also assist public safety agencies in training and preparing users to handle emergencies effectively, improving preparedness for situations that are rare or challenging to simulate in the field.

Using digital twins and/or generative AI, agencies can emulate a real-world environment in a virtual replica to optimize both training and the actions of users. These tools can also be used, based on location-specific images and contextual information, to reconstruct how an event unfolded.

                                  4G/5G as an enabler for innovation

                                  Leveraging the MCX feature in 4G/5G enables seamless voice, data, and video sessions among individual users, groups, or dispatchers. Further innovation is possible thanks to the deployment of APIs that allow integration of new software applications that can improve the productivity and safety of first responders.

These applications include features such as automatic voice translation for users speaking different languages, which facilitates communication between groups from different organizations, for example during large wildfires involving public safety agencies from several countries. They can also offer voice transcription and automatic report generation from various data sources for enhanced user efficiency.
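The translation feature described above could be wired into a group call roughly as follows. This is a hedged sketch, not a real MCX API: `translate_text`, the `PHRASEBOOK` lookup table, and `relay_message` are stand-ins invented for illustration; a production system would call an actual machine-translation service behind the same interface.

```python
# Stand-in dictionary for a real translation service; keys are
# (source language, target language) pairs of ISO 639-1 codes.
PHRASEBOOK = {
    ("fr", "en"): {"incendie maîtrisé": "fire under control"},
}

def translate_text(text: str, src: str, dst: str) -> str:
    """Translate text, passing it through unchanged when no translation applies."""
    if src == dst:
        return text
    return PHRASEBOOK.get((src, dst), {}).get(text, text)

def relay_message(text: str, src_lang: str, listeners: dict[str, str]) -> dict[str, str]:
    """Translate one group-call utterance for each listener's preferred language.

    `listeners` maps a user ID to that user's language code; the result maps
    each user ID to the message they should receive.
    """
    return {user_id: translate_text(text, src_lang, lang)
            for user_id, lang in listeners.items()}
```

The key design point is that translation happens per listener at relay time, so one utterance can fan out to a mixed-language talk group without any manual step.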

                                  Taking the next step in public safety communications with Capgemini

                                  As environmental, political, and socio-economic crises escalate, the need for enhanced interoperability among public safety organizations is becoming even more important.

                                  Traditional voice-only communication services are no longer adequate for exchanging advanced situational information in emergencies. Broadband technologies like 4G/5G MCX offer the potential to surpass current capabilities and improve efficiency with ultra-reliable data and video. Integration of drones, robots, wearables, IoT devices, and emerging innovations like the metaverse and generative AI further enhance situational awareness.

By modernizing communications infrastructure and embracing these advancements, public safety agencies can achieve unprecedented insight and visibility, facilitating more efficient response plans that adapt in real time to evolving emergencies.

                                  To learn more about how your organization can evolve and advance your public safety communication capabilities with IoT and MCX, please contact our authors below.

Join us at Critical Communications World 2024 to explore more on the topic.

                                  TelcoInsights is a series of posts about the latest trends and opportunities in the telecommunications industry – powered by a community of global industry experts and thought leaders.

                                  Co-authors – Nazirali Rajvani | Sylvain Allard | Fotis Karonis | Pierre Fortier

                                  Meet the author

                                  Patrice Crutel

                                  Senior Director 5G and Critical Comms services strategy, Invent
Patrice is an expert in evolution strategies toward 4G/5G, IoT, PMR, and multimedia.