
The Peugeot 9X8 AI-powered hybrid hypercar paves the way for the future of mobility

Clément Portier
26 Jul 2022

Le Mans 2023 is the target for the Peugeot 9X8, a next-generation hypercar with a hybrid engine whose performance is powered by AI.

The Peugeot 9X8, the result of a partnership between Peugeot Sport and Capgemini, is set to make its mark on the next edition of the 24 Hours of Le Mans in June 2023. In July 2022, the hypercar took to the track at the 6 Hours of Monza for its first appearance in the FIA (Fédération Internationale de l’Automobile) World Endurance Championship (WEC).

To meet the challenge of Le Mans 2023, in a precision discipline where every detail of driving, strategy, and setting determines who wins and loses, the newly introduced 9X8 will have to perform at the highest level. Working to achieve this, Peugeot Sport knew from the outset that it would not be able to alter the structure of the hypercar, whose main characteristics (engine, transmission, aerodynamics) are homologated by the FIA. As the hardware is fixed for several years, major optimizations could only be made to the software.

Placing digital, data, and AI at the heart of the 9X8 was therefore an obvious choice from the design phase. Our expertise, the power of our R&D, the ability to collaborate with industry experts, and our historical links with the Stellantis group opened the door to this partnership. But it was really the meeting of personalities, driven by a common vision and goal, that made it possible to build this partnership with Peugeot Sport.

Data for a more efficient hypercar

Injecting data and AI into motorsports is not a revolution in itself. But with the Peugeot 9X8 hypercar, the level of integration goes a step further. Here, the software makes the most of the hybrid, all-wheel-drive powertrain and accelerates the engineers’ decision-making in the simulator, on the test bench, and on the track.

Energy management, a major component of any hybrid vehicle design, is optimized by onboard artificial intelligence. In particular, the AI controls the distribution of power between the combustion engine on the rear axle and the electric motor on the front axle. As a result, it stabilizes the car, avoids skidding, and ultimately reduces tire wear and stress on the chassis. It also optimizes the energy consumption of the two motors throughout an endurance race, limiting the number of pit stops, which is decisive for victory. The software/hardware pairing is thus a formidable lever for improving the hypercar’s endurance.

Virtual sensors

Data intelligence provides Peugeot Sport’s engineers with new resources. Thanks to the collection of internal data from the car, which can then be cross-checked with external information (weather, circuit constraints, driver’s assessment, etc.), these experts have access to a higher level of knowledge for greater speed and efficiency in their pre-race adjustments. Capgemini has created virtual sensors that, thanks to AI, are capable of generating new data from that which was physically captured.
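To make the idea of a virtual sensor concrete, here is a minimal, hypothetical sketch (not Peugeot Sport’s actual system): a regression model is trained to estimate a quantity that is not physically measured – here an invented “brake temperature” – from telemetry signals that are.

```python
# Hypothetical virtual sensor: estimate an unmeasured quantity from
# physically captured telemetry. All signals and values are simulated.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Simulated telemetry: speed (km/h), brake pressure (bar), ambient temp (C)
X = np.column_stack([
    rng.uniform(80, 340, 5000),
    rng.uniform(0, 120, 5000),
    rng.uniform(15, 35, 5000),
])
# Target available only from an instrumented test car (synthetic here)
y = 0.8 * X[:, 1] + 0.2 * X[:, 0] + X[:, 2] + rng.normal(0, 5, 5000)

# Train on the instrumented laps, then "sense" the value on new laps
model = GradientBoostingRegressor().fit(X[:4000], y[:4000])
print("Estimated brake temperature for new laps:", model.predict(X[4000:4005]))
```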

This virtualization opens up new perspectives, particularly for strategic decision-making during races. Soon, it will be possible to ask a driver to downshift at a given moment to avoid overheating a part or to predict the ideal moment to overtake. By optimizing the data set, it becomes possible to control uncertainty and refine decision-making. Collaboration between data engineers and mechanics is therefore set to intensify, even in the paddock.

Sustainable and innovative solutions

As a long-standing partner of Stellantis, Capgemini has committed itself to a championship in endurance racing with Peugeot Sport, which has become a veritable laboratory for the automobile of tomorrow.

Tested in real conditions and subjected to the most extreme stresses, the Peugeot 9X8 opens the way to more sustainable mobility. Race after race, it will provide insights that will make it possible to improve the performance and reliability of hybrid engines by controlling the consumption of electricity and biofuel. These lessons learned will enable the transfer of technology to improve production vehicles in the years to come, making them safer, more energy-efficient, and less polluting.

For us, this partnership represents a valuable opportunity to shape the future of mobility through sustainable and innovative solutions, all with controlled development budgets. In every respect, the Peugeot 9X8 is a winner.

How to lead in the sustainable health revolution

Simone Wessling
7 Oct 2022

For forward-thinking members of the healthcare community: it’s time to start the transition to sustainable healthcare.

Mounting pressure

One by one, industries are waking up. Sustainable practices have taken hold in the automotive sector, in agriculture, in fashion; even the energy sector is transforming. Until recently, the healthcare industry had escaped scrutiny. And yet healthcare in its current form is far from sustainable. It contributes about 4.5% of the world’s greenhouse-gas emissions, and everyday procedures create quantities of waste that, if a heart patient saw them, would land him back in the IC ward. (For me, it’s the pitching of unopened packages of surgical equipment that I can never get used to.) I believe that we stand on the brink of sustainable healthcare, making this the moment to lead.

The wheels are already in motion. Last November, the World Health Organization announced the commitment by 42 countries to cut greenhouse-gas emissions across their health systems. The UK and US have defined national strategies and set up, respectively, the NHS Carbon Reduction Strategy and the Office of Climate Change and Health Equity. Germany and Australia have created similar administrative bodies of their own. But change won’t come until forward thinkers within healthcare take action. For healthcare leaders (or those looking to become leaders), here are three steps to cutting waste and emissions, and achieving sustainable healthcare.

Waste reduction


Around the end of 2020, Dutch artist Maria Koijck underwent a mastectomy. “I am more than happy to have had the chance to heal,” she later wrote, “but I am also shocked by six garbage bags full of waste for one operation, my operation…. It begs the question for me: people want to get better, but at what cost?” Koijck went on to channel those impressions into a work that has become iconic in the push for sustainable healthcare. The installation features the artist at the center of all the medical materials that were used – and disposed of – in the course of her surgery.

Waste in hospitals is endemic, with an average of 29 lbs of waste per bed per day, and it’s understandable why. In high-tension, high-cost, high-stakes healthcare situations such as surgeries, sustainable practices naturally fall lower on the priorities list (“we lost the patient, but we saved a medical gown”). The first step, then, is to make space for decision making outside of stressful situations.

Many of the needed fixes are surprisingly simple, but they need a leader in the institution to take the initiative – to organize a team, start planning, and start executing. Small changes such as providing a receptacle for the reuse of unopened surgical instruments are easy; they just need a push. Procedure and habit are powerful tools that, once established, continue to bring value long afterward.

Sustainable operations in healthcare

Reducing waste will go a long way to transforming healthcare; the other leading challenge is cutting CO2 emissions. Much of an organization’s carbon footprint lurks behind closed doors. The first step to optimizing operations for sustainability is to align across the organization, with administration, representatives of each department or ward, and organizational leaders fully engaged. Some specific areas of operations to look at include:

Connected health – work with stakeholders throughout the patient care value chain to reduce indirect emissions. This may be an opportunity to search for new partners with more sustainable practices, such as digital record keeping.

Automated reporting – build transparency and ensure accountability and recognition. Automated reporting provides a trove of data, with healthcare applications that extend far beyond sustainability.

Culture – ingrain sustainable ways of working into operating culture. Demonstrate your organization’s progress visibly and with pride, and reward employees at all levels for innovative solutions.

Invest in tech – the right technology and data-driven solutions ensure sustainability goes hand in hand with profitability. Hospitals around the world are leveraging their physical footprints to reduce their carbon footprints with solar panels and water heaters. Newer equipment is nearly always more efficient, providing another excellent opportunity for upgrades.

Sustainable IT in healthcare

If operational waste is hidden, digital waste is invisible. Yet the effects are very real, resulting in over 50 million tons of e-waste per year. And the solutions are brimming with positive side effects in the form of newer equipment and better data management. If there’s one formula to remember, it’s this:

INEFFICIENCY = EMISSIONS

• Old, inefficient systems? Emissions.
• Doubling work? Emissions.
• Hardware that stalls and crashes? Emissions.
• Printing out hard copies of files that could be managed digitally? Emissions.

If there’s a light bulb forming over your head right now – you’re right. As a rule of thumb, anything that frustrates or impedes your work is probably also needlessly inflating your organization’s carbon footprint. Tackling sustainable IT in healthcare therefore brings added benefits for everyone involved.

Looking ahead, efficient IT also unleashes the potential of smart technologies to drive future environmental innovations and improvements. AI excels at discerning energy-saving trends and opportunities for improvement. IT systems aren’t generating the art-inspiring waste that catches people’s attention, but sustainable IT is absolutely central to a sustainable healthcare strategy.

New buildings are raising the bar

One place where sustainable thinking has taken root in the healthcare industry is in the design of new hospitals, which have seen an explosion of innovation over the last decade. Features include:

• Renewed attention to paints, insulation and other building materials (the Dell Children’s Medical Center of Central Texas)

• Stormwater collection and water-efficient toilets (Kiowa County Memorial Hospital)

• Optimized use of natural lighting (Seijo Kinoshita Hospital, Tokyo)

• Paper-free operations (Children’s Hospital of Pittsburgh)

In the Netherlands, the Máxima Center recently committed itself to four simultaneous goals: reducing CO2 emissions, moving to a circular economy, reducing the amount of medicine residues in water, and promoting a healthy environment in healthcare. But while some institutions are surging ahead, many have yet to take their first steps. If they don’t act quickly, traditional hospitals will begin to look old-fashioned.

Leading the way

The unsustainable practices of the healthcare industry are just that – unsustainable. Inefficient systems will not stand up when measured against a holistic view of health, where the responsibility to reduce pollution and prevent illness is on par with the ability to treat conditions medically. The only question is, do you want to follow that trend, or do you want to lead?

Capgemini has been helping organizations across industries to achieve the carbon reduction targets linked to the international Paris Agreement and other national sustainability agreements. Our sustainability offerings in the healthcare and other industries contribute to our dual ambition: to become carbon neutral by 2025 and net zero by 2030, and to help clients save 10 million tons of CO2 by 2030. To share your experiences and learn more about building sustainable healthcare for the future, contact me below.

Author

Simone Wessling

Lead Consultant BTS Health, NL
Inspired by nature, health and human beings, I bring green health to the next level. Contact me for help getting started, and let’s transform your business for a more sustainable future.

    Truly scalable data labeling starts with experience

    Vijay Bansal
    6 Sep 2022

    Data labeling challenges come in all shapes and sizes. However, with the right experience, and the right service provider, there is nothing stopping your organization from making your data truly scalable.

    Quick, no-hassle data labeling at your service

    Artificial intelligence (AI) systems are effective only if they’re trained on quality data. Before that can happen, we need to gather enough relevant raw data and accurately label it – a long and potentially costly part of any AI project. But it doesn’t have to be.

    It’s understandable that some companies choose to keep data labeling in-house, thinking they’ll save time and money while keeping a close eye on the quality of the work. But, if you consider that data preparation accounts for more than 40% of all efforts in any AI project, is this a wise decision?

    Having high-salaried software developers or machine learning (ML) engineers spend countless hours labeling data means their day-to-day work is neglected, affecting productivity across the company. Data labeling rarely requires data scientists with PhDs – in fact, anyone with good analytical skills can be a promising candidate. So, let your employees focus on the tasks they’ve been specifically hired to do.

    Another option is to hire people and build your own data annotation teams. Although this is usually less expensive, it’s time-consuming and requires project expertise. Who will be in charge of ensuring they’re properly trained to accurately label your data? If the teams are dispersed, can you really guarantee consistency in the quality and speed of service delivered? And more importantly, what will happen if you suddenly require more or less of their services?

    The sensible approach is to enlist the help of a service provider that already has a managed global annotation workforce, from which the right domain experts can be chosen. This eliminates the time normally needed to look for them and train them. It also ensures all data labeling – and the challenges that come with it – are handled by one responsible entity.

    Demand goes up, demand goes down, but does the price stay fixed?

    Cost is intricately tied to scalability, and scalability is all about having instant access to skilled, project-certified people to meet fluctuating data demand. For data labeling, this demand is usually high at the start, but once a certain level of data annotation is reached for ML training purposes, it falls, affecting the number of annotators required.

    However, if results don’t meet expectations or the project’s scope changes or evolves, the algorithms need to be retrained with additional training datasets, so the demand goes back up. The type and amount of training data needed will depend on how diverse it is (to eliminate biases) and how accurate the ML model predictions should be. In any case, the estimated costs at this stage shouldn’t exceed the budget, regardless of how much data and retraining are necessary.

    Experience goes a long way… right to a complete, properly labeled dataset

    Managing internal and external employees and the fluctuation in demand is an inconvenience few companies are willing to endure, especially as it involves juggling multiple contracts, worrying about annotators sitting idle, and not having full transparency into how the data is being used.

    Not only is Capgemini’s data-labeling service convenient, fast, and secure, it’s also substantially cheaper and less complicated than doing everything alone. Our extensive experience across many data-related projects means we know exactly how to estimate the time, effort, and data required.

    This gives you the freedom to plan ahead with certainty. And, since it’s our responsibility to scale the workforce up and down based on project needs, idle annotators affect our bottom line, not yours.

    To learn how Capgemini’s Data Labeling Services leverages frictionless data labeling operations to deliver data at true scale, contact: vijay.bansal@capgemini.com

    About author

    Vijay Bansal

    Director – Global Head – Data Labeling Services, Capgemini Business Services
    Vijay has extensive experience working in map production, geo-spatial data production, management, data labeling and annotation, and validation roles. In these positions, he aids machine learning and technical support initiatives for Sales teams, coordinates between clients, and leads project teams in a back-office capacity.

      Data labeling operations – leveraging an end-to-end service provider

      Vijay Bansal
      6 Sep 2022

      Engaging an end-to-end data labeling service provider can help your organization implement machine learning engineering to build effective AI solutions.

      Building an artificial intelligence (AI) solution from A to Z is not easy. Companies must prepare for a lengthy, multi-stage process. Most choose to outsource each stage of the project.

      But as the partly finished solution exchanges hands multiple times between multiple teams and service providers, each party accepts responsibility for only what they delivered, with no one having an overarching view of how the AI project is progressing.
       
      The result is a ragtag collection of different systems and approaches forcefully stitched together instead of one seamless, coordinated solution. If the project is a failure, who do you hold accountable?

      Aligned processes drive enhanced business outcomes

      We act as an end-to-end service provider that’s able to direct your AI project so that each stage aligns with the next, from concept right through to completion. This may mean bringing in different technologies at any time, such as data estate modernization, synthetic data generation, and robotic process automation (RPA).

      Our services are not limited to just data preparation, data labeling, or machine learning (ML) engineering and solution development. The experienced data and AI community of Capgemini experts and partners looks at each project holistically to propose services that complement one another for each stage using the right set of tools and approaches while always having the bigger picture in mind. It leads to quick and successful outcomes, and it means we’re responsible for getting you there every step of the way.

      Data labeling to the power of three

      One dataset can be leveraged in multiple projects through features and labels prepared to match each project’s requirements. To assess how much effort is needed to collect and label the data and meet the client’s quality expectations, we start off with a pilot.

      Then we present the client with a tailor-made proposal that considers the data complexity and volumes to be processed. Our multi-tiered maturity-based approach helps us adequately address company challenges to determine the best action:

      • Tier 1 – pure manual data labeling using our skilled, project-certified data annotators
      • Tier 2 – introduction of ML automation to speed up data labeling and reduce the cost per annotation for large datasets
      • Tier 3 – using proprietary tools to generate never-before-seen synthetic data that complies with data privacy regulations such as GDPR.

      Data labeling is just one of many stepping stones towards a full-fledged AI solution. Settling for a fragmented approach with many different providers along the way is not only time-consuming and costly, but the chances of creating a solution exactly as originally intended are slim.

      To learn how Capgemini’s Data Labeling Services leverages frictionless data labeling operations to deliver data at true scale, contact: vijay.bansal@capgemini.com

      About author

      Vijay Bansal

      Director – Global Head – Data Labeling Services, Capgemini Business Services
      Vijay has extensive experience working in map production, geo-spatial data production, management, data labeling and annotation, and validation roles. In these positions, he aids machine learning and technical support initiatives for Sales teams, coordinates between clients, and leads project teams in a back-office capacity.

        The four factors behind scalable, high-quality data

        Vijay Bansal
        6 Sep 2022

        Everyone wants high-quality data, but making this data scalable is often where a lot of organizations fall short. If you want to avoid this issue, take a look at the four factors underpinning data quality below.

        Four ways to guarantee high-quality, scalable data (and faster results)

        Research suggests that one of the main reasons why 96% of AI projects fail is a lack of high-quality data. Most companies would agree that for artificial intelligence (AI) models to work as intended, machine learning (ML) algorithms need to be trained with the best possible data available. Otherwise, the solution created could give inaccurate, biased, and inconsistent results.

        There are many hurdles to building an AI solution – data collection, preparation, labeling, validation, and testing, to name a few. It’s like running a long-distance race except here, in addition to consuming energy, we’re also incurring enormous expenses while devoting precious time and resources to the project.

        Knowing when to pace yourself is key. It can spell the difference between a triumphant win and a debilitating loss. In other words, speeding through preliminary data-related processes with a lack of focus may result in a project that’s over budget and behind schedule, with poor data security safeguards in place.

        That’s why aiming for scalable high-quality data should always be a top priority. Cost, scalability, technology, and security, however, play an integral role in reaching that quality milestone – and can directly impact whether a project is destined to succeed or fail.

        The four factors underpinning quality

        Cost: having your own data science or AI engineering employees prepare and annotate data will put a major dent in your budget, as these tasks can be done by a more budget-friendly but still professionally trained workforce. Keeping the work in-house could also slow down the speed at which you hope to complete the tasks, since your own staff could make costly blunders affecting the quality of the training datasets.

        Scalability: trying to meet your business objectives by scaling resources up and down with data demand may leave you scrambling for additional annotators at key moments when more data is required, or needlessly paying for their services during lulls. Outsourcing data tasks to a service provider gives you more agility.

        Technology: for large-scale projects, it’s simply not feasible to manually label all your data. However, an ML-assisted labeling approach can save the day here. Having an ML model pre-label data adds a high level of consistency, and the annotation effort can be reduced by up to 70% for single tasks. Even if the technology only reaches a medium level of accuracy on its own, it means less manual work and faster annotation on the way to the accuracy level we desire (see the sketch after this list).

        Security: if data labeling is not done in a secure business environment (for example, in-house), your sensitive data is exposed and easier to compromise. Capgemini’s robust IT infrastructure and network security keep your data safe. We leverage a trusted delivery platform to track and manage all data changes made by annotators, and the platform provides quality-assurance tooling so that data annotation always meets quality standards.
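As a rough illustration of the ML-assisted labeling idea mentioned under “Technology” above – and not a description of Capgemini’s actual tooling – the sketch below auto-accepts high-confidence model predictions and routes the rest to human annotators. The model, dataset, and confidence threshold are placeholders.

```python
# Sketch of ML-assisted pre-labeling: auto-accept confident predictions,
# send uncertain items to human annotators. Model and threshold are illustrative.
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X[:1000], y[:1000])

CONFIDENCE_THRESHOLD = 0.9
probs = model.predict_proba(X[1000:])
auto_labeled, needs_review = [], []
for i, p in enumerate(probs):
    (auto_labeled if p.max() >= CONFIDENCE_THRESHOLD else needs_review).append(i)

print(f"Auto-labeled: {len(auto_labeled)}, routed to annotators: {len(needs_review)}")
```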

        To learn how our Data Labeling Services leverages frictionless data labeling operations to deliver truly scalable, high-quality data, contact: vijay.bansal@capgemini.com

        About author

        Vijay Bansal

        Director – Global Head – Data Labeling Services, Capgemini Business Services
        Vijay has extensive experience working in map production, geo-spatial data production, management, data labeling and annotation, and validation roles. In these positions, he aids machine learning and technical support initiatives for Sales teams, coordinates between clients, and leads project teams in a back-office capacity.

          Execution does not have to eat strategy for lunch when scaling Agile across your enterprise

          Tanya Anand
          6 Sep 2022
          capgemini-invent

          FS Institutions: the challenges of aligning strategy and execution when scaling Agile

          Large banks and insurers have been experimenting with Agile ways of working for several years. In particular, they have struggled to align business strategy, execution, and value delivery across different organizational layers. This is often due to the scale and complexity of such organizations. Additionally, the pace of change in business strategies, customer needs, and market disruption has hindered progress.

          Often, such organizations continue to manage their portfolio of work using traditional management practices that tend to be administrative and thus unable to keep pace with change. This, inevitably, leads to waste, slower time to market, lower innovation capacity, lower customer value, and more inefficiency.

          At Capgemini Invent, we work with our partners to overcome several common portfolio management challenges. Each case is unique, with clients encountering specific challenges for specific reasons. However, there is a model that is proving to be an effective solution for all our clients. The Lean Portfolio Management (LPM) model (recommended by SAFe) helps clients scale Agile across their enterprise. Its vast toolkit of techniques is versatile and addresses a multitude of common portfolio management issues. These need to be adapted to an organization’s context before they are applied.

          Common challenges for portfolio management

          Funding and Strategy

          • Funding and capacity are agreed upon annually, often with centralized decision making. The process lacks the flexibility to respond to changing priorities. The centralized budget management also adds a governance overhead and slows down organizational agility.
          • At the start of a project or product initiation, detailed business cases or initiation documents are drawn up with a full list of costs and benefits to support the annual funding requests. These commit to benefits too early and are based on unproven assumptions, leading to waste and poor value delivery.

          Demand Management

          • There are inconsistent or inadequate demand intake standards across the organizational layers. Often, such processes are not transparent. Receiving teams deal with a deluge of requests, creating a large overhead for demand and stakeholder management. Additionally, difficulties in identifying impediments, bottlenecks, and duplications prevent timely mitigations.
          • Work is prioritized in siloed teams. Often, ‘the person who shouts the loudest’ gets their work prioritized. This means teams are not prioritizing work that optimizes value or is connected to the strategy.

          Governance

          • Onerous, long-drawn-out governance processes that hinge on centralized decision making, compliance with controls and corporate processes, and manual reporting lead to a large headcount dedicated to governance. This compromises the benefits of agility.

          Key Agile techniques to successfully align strategy to execution

          To gain greater benefits from agility, FS organizations need to step away from their traditional ways of working. The solution is to adopt Lean Portfolio Management (LPM) practices. LPM connects strategy to execution and planning, using a Lean Agile approach for managing and governing work.

          Lean portfolio management (LPM)

          Lean Portfolio Management applies Lean and systems-thinking approaches to connect strategy and investment funding, Agile portfolio operations, and Lean governance.[1] Organizations need to develop competencies in all three areas to gain the benefits of LPM.

          • Strategy and investment – align and fund the portfolio to help meet business targets, ensuring all funded work is traceable to strategic outcomes.
          • Agile portfolio operations – enable operational excellence in portfolio management processes, with decentralized execution of delivery underpinned by coordination across organizational layers. This helps drive an efficient flow of demand and the achievement of value.
          • Lean governance – ensure appropriate guardrails and an engaging, collaborative governance mechanism are in place for spending, audit and compliance, expense forecasting, and portfolio performance.

          LPM is enabled by an LPM function that operates across organizational layers. The function typically consists of portfolio managers and analysts who help facilitate, coordinate, and govern LPM activities, and drive consistent LPM standards across organizational layers. The function also ensures that there is continuous collaboration between business and technology stakeholders across all LPM activities.

          Strategy and funding

          Here, the function must first translate strategic themes into business or customer outcomes. These must be tracked using measurable Objectives and Key Results (OKRs), ensuring that objectives and activities are aligned and tracked across organizational levels.

          Illustrative Example:

          All work should be broken down into a standard work item hierarchy (e.g., initiatives, epics, stories) for delivery. This makes it possible to track benefits and achieve them within short periods of time, such as every Program Increment (PI). These work items should be linked back to OKRs, which establishes traceability from strategy to delivery and decentralizes delivery governance across organizational levels, as illustrated below.
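As a purely illustrative sketch – all initiative names, stories, and OKR identifiers below are invented – a work-item hierarchy with OKR traceability might be represented like this:

```python
# Illustrative only: a minimal representation of a work-item hierarchy
# (initiative -> epics -> stories) with each item traced back to an OKR.
portfolio = {
    "initiative": "Digital onboarding for retail customers",
    "okr": "OKR-1: Reduce account-opening time from 5 days to 1 day",
    "epics": [
        {
            "name": "Automated identity verification",
            "okr": "OKR-1",
            "stories": [
                "Integrate document-scanning service",
                "Add liveness check to mobile app",
            ],
        },
        {
            "name": "Digital signature of contracts",
            "okr": "OKR-1",
            "stories": ["Embed e-signature provider", "Archive signed contracts"],
        },
    ],
}

# Traceability check: every work item should map to an OKR.
assert all(epic["okr"] for epic in portfolio["epics"])
```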

          An annual budget for each value stream should be set up with some funding guardrails, enabling decentralized decision making. However, this annual process should be supplemented with more regular business review cycles (e.g. Quarterly). This results in frequent evaluation of portfolio progress, changes to OKRs, and a strategic reprioritization of work, capacity, and funding. These reviews should be planned in line with the PI planning cycles to ensure that the most valuable work is planned in the PI.

          Additionally, detailed business cases should be replaced with lightweight lean business cases focused on the business hypothesis, scope, plausibility, MVP cost, etc., that are reviewed frequently.

          Agile portfolio operations

          To achieve an efficient flow of value across organizational layers, it is necessary to implement consistent demand management processes managed through agile tooling. A standardized Kanban workflow set up in an agile tool set is one proven approach. A Kanban helps to visualize and manage the flow of work at each organizational layer. Each Kanban stage has a predefined set of activities, templates, and exit criteria that should be understood by the relevant stakeholders. This transparency makes impediments and the status of work items visible and prioritization easier. Demand requesters should also be made aware of the criteria for raising demand to a portfolio Kanban, which makes demand intake and stakeholders easier to manage.

          Illustrative example:

          Source: Portfolio Kanban – Scaled Agile Framework[2]

          The demand is sequenced based on data-driven factors. A lightweight economic framework, Weighted Shortest Job First (WSJF), is used to continuously prioritize the work that delivers the highest economic value in the shortest time. WSJF divides the relative Cost of Delay (CoD) by the relative job size, where CoD accounts for user and business value, time criticality, and risk reduction or opportunity enablement. WSJF calculations require close collaboration between stakeholders to quantify the true cost of delay from both technology and business perspectives.
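To make the arithmetic concrete, here is a minimal sketch of WSJF scoring based on the SAFe formula (Cost of Delay divided by job size); the backlog items and their relative estimates are invented.

```python
# Sketch of Weighted Shortest Job First (WSJF) prioritization.
# Scores are relative estimates (e.g., modified Fibonacci); values are invented.
def wsjf(user_business_value, time_criticality, risk_opportunity, job_size):
    cost_of_delay = user_business_value + time_criticality + risk_opportunity
    return cost_of_delay / job_size

backlog = {
    "Open banking API": wsjf(13, 8, 5, 8),
    "Fraud-alert revamp": wsjf(8, 13, 8, 5),
    "Branch dashboard": wsjf(5, 3, 2, 13),
}

# The highest WSJF score is pulled into the next Program Increment first.
for item, score in sorted(backlog.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{item}: WSJF = {score:.1f}")
```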

          Lean Governance

          In a fast-paced and continuously changing environment, it is imperative that organizations have decentralized decision making and ensure collaborative governance to minimize delays, impediments, and governance overheads. Hence, governance processes, including metrics and reporting, must be simple.

          A regular cadence of governance ceremonies facilitated by the portfolio function can help coordinate and govern work. At the very least, such ceremonies should focus on:

          • Strategic Portfolio Reviews

          These review portfolio progress and set the direction of the portfolio, ensuring it stays aligned to strategy. They are where decisions are made to respond to new and changing portfolio opportunities and context, and where delivery capacity and funding are secured for the prioritized work.

          • Portfolio Roadmap and Impediments Review

          These review delivery progress and address and coordinate risks, blockers, and dependencies. They also help to coordinate and update the portfolio roadmap based on the decisions taken to address impediments.

          • Demand Intake Reviews

          These make it possible to review new demand, progress portfolio items to the backlog, and reprioritize the backlog.

          These events are effective only if there is attendance from the right quorum of empowered stakeholders and there is a focus on making decisions versus providing updates.

          Making delivery and portfolio performance reports and insights readily available empowers people and enables decentralized governance and decision making. Such reports should therefore be made available through self-serve automated dashboards, which also minimize the governance overhead. Most Agile tools these days provide dashboards with the option to build in additional visualizations.

          It is important that a focused set of performance metrics is agreed upon and understood with stakeholders. This helps to focus efforts on optimizing workflow, predictability, and value.

          LPM offers many more techniques to help build a connected agile enterprise. Our experience is that these techniques must be adapted to the organization’s context, tested, and matured incrementally and iteratively. Most importantly, employees must be taken on a journey to becoming agile, empowered with the right skillset, an agile mindset, and a forward-thinking culture.


          [1] SAFe (2021). Lean Portfolio Management.

          About Author

          Tanya Anand

          Management Consultant, Capgemini Invent
          Tanya is a strategy and operating model expert who enables organizations to grow and perform through enterprise agility, customer centricity, and the adoption of digital capabilities. She has over 12 years of experience advising and supporting numerous industry-leading organizations in the banking, consumer, and media sectors.

            Deep dive into software bill of materials standards

            Clemens Reijnen
            29 Aug 2022

            Ever-changing business challenges and customizations can add to software complexity. These modern software applications are developed from a large number of commercial as well as open-source components.

            According to the 2021 Open Source Security and Risk Analysis report, open-source software accounts for 75% of codebases on average. The lack of systemic visibility into the composition and functionality of these software solutions significantly contributes to cybersecurity risk, as well as to development, procurement, and maintenance costs. This is where an SBoM comes into play: the security of complex IT environments and the integrity of the software supply chain are driving the development of a standardized information manual describing the internals and sources of software components, in order to achieve software transparency.

            What is a software bill of materials (SBoM)?

            An SBoM is an inventory of the components – including libraries and modules – that are used in a specific piece of software. It is modeled on a manufacturing bill of materials, which is an inventory of all the elements involved in a product. Manufacturers in the automotive industry, for example, keep a comprehensive bill of materials for each vehicle, listing the parts made by the original equipment manufacturer and those supplied by third-party vendors. When a problematic part is detected, the automaker can determine which vehicles are affected and inform owners of the need for repair or replacement.

            Creating an SBoM is important for ensuring the security of software: developers should maintain a complete list of the components in each release of an application in order to identify vulnerable and obsolete code. Additionally, SBoMs can be used to monitor the security of each application after deployment by identifying the potential impact of newly disclosed vulnerabilities.

            The need for a software bill of materials

            The major advantage of having an SBoM is that it allows enterprises to control risk through early identification and mitigation of vulnerable systems or license-infringing source code. Maintaining proper SBoMs helps organizations measure risk from internal as well as external threats. It also helps cybersecurity vendors introduce solutions that consume the SBoM in a standardized way to protect software solutions and components from known as well as new vulnerabilities, provide recommendations for software patching, and offer auto-remediation to block malicious software from running.

            In addition, SBoMs can improve governance over software licenses. Every software component is distributed under a license that defines its legal use and distribution, and the many components in a supply chain application can carry many different licenses. The company deploying any particular application has a legal obligation to comply with those licenses; an SBoM guides the deployer in understanding which licenses apply and how to comply with them.
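As a hypothetical illustration of that first advantage, an SBoM’s component list can be cross-referenced against a vulnerability feed to flag affected dependencies. The components, versions, and advisory below are all invented.

```python
# Hypothetical sketch: use an SBoM's component list to flag known-vulnerable
# dependencies. Component names, versions, and the advisory are invented.
sbom_components = [
    {"name": "example-http-client", "version": "2.3.1"},
    {"name": "example-xml-parser", "version": "1.0.4"},
]

known_vulnerabilities = {
    ("example-xml-parser", "1.0.4"): "CVE-XXXX-0001: XML external entity injection",
}

for component in sbom_components:
    key = (component["name"], component["version"])
    if key in known_vulnerabilities:
        print(f"Affected: {component['name']} {component['version']} "
              f"-> {known_vulnerabilities[key]}")
```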

            SBoM standards

            An SBoM standard is a schema designed to provide a uniform language for describing software composition in a way that other tools, such as vulnerability scanners, can understand. Several SBoM standards have been created to facilitate the wider use of SBoMs by providing a common structure for both creating SBoMs internally and sharing them upstream with end-users and consumers. The most widely used standards are CycloneDX, Software Package Data Exchange (SPDX), and SWID Tags.

            • CycloneDX:

            CycloneDX is an SBoM specification introduced explicitly for software security requirements and related risk analysis. It is written in XML with JSON in development. It’s designed to be adaptable and flexible, with support for popular build systems. The specification supports SPDX license IDs, a license information indicator from package to source code level, and expressions, as well as provenance and external references, and encourages the use of ecosystem-specific naming conventions. It also supports the Package URL specification and the mapping of components to CPEs out of the box.
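To make the structure concrete, the sketch below hand-assembles a minimal CycloneDX-style BOM as a Python dictionary and serializes it to JSON. The component is fictional, and in practice SBoMs are usually generated by build-tool plugins rather than written by hand.

```python
# Minimal, hand-assembled example of the CycloneDX JSON structure.
# The component is fictional; real SBoMs are generated by build-tool plugins.
import json

bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "example-http-client",
            "version": "2.3.1",
            "purl": "pkg:maven/com.example/example-http-client@2.3.1",
            "licenses": [{"license": {"id": "Apache-2.0"}}],
        }
    ],
}

print(json.dumps(bom, indent=2))
```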

            Figure 1 Integration of CycloneDX with Dependency Track

            • Application of CycloneDX

            The results of Checkov (a static analysis tool for infrastructure as code) can be exported in CycloneDX XML, which helps users create an infrastructure-as-code (IaC) SBoM in a standardized format for use in Security Operations Center (SOC) tools.

            Figure 2 How to generate an IaC SBOM with Checkov

            • SPDX:

            The Software Package Data Exchange (SPDX) is an SBoM specification that defines a standard language for sharing software components, licenses, security information, and copyrights across numerous file formats. An SPDX document can be linked to a single software component, a group of components, a single file, or even a snippet of code. The SPDX project was created primarily to define and extend a “language” for the data exchanged as part of an SBoM, and to express that language in multiple file formats (RDFa, .xlsx, .spdx, and soon .xml, .json, .yaml) so that information about software content and packages can be collected and shared quickly and accurately. The SPDX community is sponsored by the Linux Foundation, and the SPDX specification has been adopted as one of the key elements of the Linux Foundation’s Open Compliance Program.
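For comparison, here is a minimal, illustrative SPDX tag-value document for the same fictional package; the field values are invented, and a real document would normally be produced by SPDX tooling.

```python
# Minimal, illustrative SPDX tag-value document for a single fictional package.
spdx_doc = """SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-http-client-2.3.1
DocumentNamespace: https://example.com/spdxdocs/example-http-client-2.3.1
Creator: Tool: example-sbom-generator
Created: 2022-08-29T00:00:00Z

PackageName: example-http-client
SPDXID: SPDXRef-Package-example-http-client
PackageVersion: 2.3.1
PackageDownloadLocation: https://example.com/releases/example-http-client-2.3.1.tar.gz
FilesAnalyzed: false
PackageLicenseConcluded: Apache-2.0
PackageLicenseDeclared: Apache-2.0
PackageCopyrightText: NOASSERTION
"""
print(spdx_doc)
```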

            Figure 3 Overview of an SPDX document

            • Application of SPDX

            Snyk, a US-based security platform developer, introduced a tool called snyk2spdx to validate the work on the specification. snyk2spdx takes Snyk test data, including the SBoM information, and outputs it as SPDX v3.0, including the vulnerability profile.

            Figure 4 snyk2spdx output

            • Software Identification (SWID) tag:

            SWID tags are transparent software identification tags that allow enterprises to trace the software installed on their managed devices. The International Organization for Standardization (ISO) defined them in 2012, and the standard was updated in 2015 as ISO/IEC 19770-2:2015. SWID tag files include descriptive information about a specific release of a software product. According to the SWID standard, a SWID tag is applied to an endpoint as part of the software product’s installation process and then removed by the product’s uninstall process. Throughout this lifecycle, the presence of a given SWID tag corresponds to the presence of the software product that the tag describes. SWID tags are used by several standards groups, including the Trusted Computing Group (TCG) and the Internet Engineering Task Force (IETF).
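As a rough sketch of what such a tag looks like – the product, entity, and identifiers are invented, and the namespace assumes the 2015 schema – a minimal SWID tag could be assembled as follows:

```python
# Illustrative only: the rough shape of a minimal SWID tag (ISO/IEC 19770-2:2015)
# for a fictional product, built with the Python standard library.
import xml.etree.ElementTree as ET

NS = "http://standards.iso.org/iso/19770/-2/2015/schema.xsd"
tag = ET.Element(f"{{{NS}}}SoftwareIdentity", {
    "name": "Example HTTP Client",
    "tagId": "com.example.http-client-2.3.1",
    "version": "2.3.1",
})
ET.SubElement(tag, f"{{{NS}}}Entity", {
    "name": "Example Corp",
    "regid": "example.com",
    "role": "tagCreator softwareCreator",
})
print(ET.tostring(tag, encoding="unicode"))
```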

            Figure 5: The lifecycle of software on an endpoint documented by SWID tags

            Table 1: SBoM standards, use cases, and features

            • SPDX – organization: SPDX workgroup (around 20 organizations) under the Linux Foundation; initial draft: 2010; formats: RDF, XLS, SPDX, YAML, JSON; use cases: license management; unique features: extensive support for expressing license details.
            • CycloneDX – organization: a “meritocratic, consensus-based community project” with an Industry Working Group; initial draft: 2017; formats: XML, JSON; use cases: for use with OWASP Dependency-Track; unique features: extensible format that integrates SPDX license IDs, Package URLs, and other external identifiers.
            • SWID tag – organization: ISO; initial draft: 2012; format: XML; use cases: descriptive information about a specific release of a software product; unique features: provides stable software identifiers, standardizes software information, and enables the correlation of information related to software.

            Major organizations engaged in SBoM standards developments

            • NTIA (National Telecommunications and Information Administration)

            The US Department of Commerce’s National Telecommunications and Information Administration (NTIA) is developing a multi-stakeholder approach to reach an agreement on an SBoM’s usage and structure by forming multiple working groups that involve stakeholders and collect data. The NTIA’s active working groups include:

            1. Framing Working Group: Establishing introductory SBoM documentation for framing the concept of an SBoM to interested parties
            2. Awareness and Adoption Working Group: Creating an agreed overview of use cases for achieving the benefits of an SBoM
            3. Formats & Tooling Working Group: Performing an ongoing survey of SBoM related tooling, data formats, and standards
            4. Healthcare Proof of Concept Working Group: Facilitating and reporting on an ongoing Proof of Concept (PoC) on SBoM usage in the healthcare sector (US)

            In July 2021, the Department of Commerce and NTIA published a report on the minimum elements for an SBoM: “The minimum elements as defined in the report are the essential pieces that support basic SBoM functionality and will serve as the foundation for an evolving approach to software transparency.”

            • CISQ and OMG

            The CISQ Working Group “Tool-to-Tool Software Bill of Materials Exchange” is a joint working group of CISQ (Consortium for Information & Software Quality) and the OMG (Object Management Group) to define an exchangeable tool-to-tool BoM metamodel for software (SBoMs) and other items needing BoMs. The first purpose of this working group is the exchange of SBoMs. The work builds on the NTIA’s Software Component Transparency initiative, focusing on the exchange of SBoMs between and among software development tools that produce, update, manage, orchestrate, and/or otherwise alter, assess, or audit software.

            • MITRE Corporation and NIST

            MITRE Corporation and NIST introduced an approach for using the SBoM to record and validate software “provenance” (chain of custody, the path of the software between organizations) and “pedigree” (lineage, the record of origin steps in the software supply chain).

            The future of SBoMs

            With rising software complexity and an increasing number of cyber-attacks, SBoMs are expected to grow in popularity and become a critical component of controlling and securing software supply chains. Under President Biden’s Cybersecurity Executive Order 14028, issued in May 2021, any company selling software solutions to the US federal government is required to provide SBoMs. If we see such mandates globally, SBoMs will become a “need to have” rather than a “nice to have” for security as well as development teams.

            About author

            Clemens Reijnen

            Chief Technology and Capability Leader for Microsoft
            Clemens is a creative thinker, solution and service builder. He has 20+ years of success with complex innovative software systems.

              Our holistic approach to the BMW Group’s quantum computing challenge

              Julian van Velzen
              30 Aug 2022

              Quantum computing could well become transformational for the automotive industry, among other sectors – that much is clear despite the current immaturity of the technology. The opportunities are discussed in a recent Capgemini report, Quantum technologies: How to prepare your organization for a quantum advantage now.

              The BMW Group is among the first automotive companies to take a practical interest in the potential of quantum computing. In summer 2021, the BMW Group issued a Quantum Computing Challenge, in collaboration with AWS, to crowdsource innovation around four specific use cases where it believed quantum computing could benefit its business by solving complex computational problems.

              As a long-standing partner of BMW, Capgemini had the opportunity to compete in the BMW Group Quantum Computing Challenge using the expertise of its own established quantum computing community and lab. We welcomed this opportunity to collaborate with the BMW Group in this exciting area, and to enable our quantum community to compete with some of the world’s other best brains in this field.

              The Challenge

              The focus was on four specific challenges where it was believed that quantum computing could deliver an advantage over classical computing methods: optimization of sensor positions for automated driving functions, simulation of material deformation in the production process, optimization of pre-production vehicle configuration, and machine learning (ML) for automated quality assessment.

              Of these use cases, Capgemini focused on machine learning (ML) for automated quality assessment.

              The BMW Group’s statement of the use case

              “Due to the rapid development of hardware and software, the past decades have drastically shifted quality control from manual examination towards automated inspection. In light of the required human expertise to hand-tune algorithms, machine learning (ML) techniques promise a more general and scalable approach to quality control. The remarkable success of convolutional neural networks (CNNs) in image processing has revolutionized automated quality inspection. Of course, any technology has its limitation, and for CNNs, it is computation power. As high-performance CNNs usually assume large datasets, datacenters ultimately end up with large numerical workloads and expensive GPUs. Quantum computing may one day break through classical computational bottlenecks, providing faster and more efficient training with higher accuracy.”

              The challenge’s first round focused on proposing ideas for applying quantum technology to the chosen use case. Capgemini’s submission was well received, and the team was one of a handful chosen (from around 70 participants) to compete in the second, and final, round. Here, the team had the opportunity to work with BMW’s live data under a non-disclosure agreement.

              Capgemini’s multidisciplinary approach

              Capgemini was delighted to make it through to the final, especially given that most of our competitors were quantum pure plays that had been working on these technologies for a long time. Our success was due to strong collaboration across a wide-ranging team comprising quantum and automotive experts.

              Unlike most other competitors who focused on the specific issue of quantum machine learning (QML), Capgemini considered the breadth of the quality assurance process. To support this holistic approach, the team was expanded to bring in different types of expertise from across the business when needed. Our colleagues at Cambridge Consultants, some of whom are quantum specialists, played a pivotal role alongside our experts in automotive, classical ML, and several other areas.

              Benefits of our approach

              This wide-ranging approach enabled the team to develop a pathbreaking QML model in just a few weeks, as we’ll describe in our next article. What’s more, our holistic, multidisciplinary perspective meant that, in the same timescale, we could take a wider look at the applicability of quantum to automotive quality assessment more generally, identifying some new opportunities.

              For example, we considered quantum sensing, and how it could help with problems such as obscuration of images by stray particles and relieving potential bottlenecks around the ML model. We’ll discuss this approach in a future article in this series.

              To help BMW assess scalability and viability, Capgemini also laid the foundations for a roadmap for quantum adoption – a topic that will again be covered in a future article.

              Conclusion

              The project has revealed the potential for applying quantum computing techniques to real problems, now – without waiting for quantum hardware to mature. This is true both in automotive contexts and in other industries, such as aerospace and life sciences.

              More generally, BMW’s Quantum Computing Challenge has provided a great example of how industry can tap into expertise around the world to help solve its biggest problems and leverage complex technology.

              We hope we’ve communicated our excitement about taking part in BMW’s challenge and shown you what we achieved at a general level. Our next article will focus on BMW’s central requirement: applying QML to quality assessment.

              Authors include: Julian van Velzen, Edmund Owen, Christian Metzl, Barry Reese and Joseph Tedds.

              Building a FAIR culture

              Jeroen de Jong
              24 Aug 2022

              Data management and the FAIR data principles may feel very new to some R&D organizations, meaning that a significant amount of change effort might be needed to establish the ideas and make them stick. How can a data management-conscious culture be built?

              Cultural change and change management are often challenging around any new business initiative, but they’re absolutely essential to establishing effective data management and are not something to take shortcuts around.

              The two most important aspects of getting this right are: first, working with the existing culture to get a data management initiative up and running, and second, setting up some key roles to ensure that the initial momentum is built on and data management practices and habits become self-perpetuating.

              Launching data management by capitalizing on existing culture


              There are four key relationships around any data-producing unit in a typical R&D organization – the relationships the unit has with the IT department, with other units in the business, with senior management, and with data itself. Any of them could introduce resistance to getting a data management effort started.

              IT relationship: Mistrust of IT departments is sadly not unusual among research scientists. The IT group often feels distant and impersonal, or like they are always trying to ‘do something to’ R&D rather than facilitate it. Data is owned by the business, of course, not IT, but technological enablement comes from IT and they will also have a lot of data expertise that, if not put to good use, represents a missed opportunity. One solution to this problem is to embed temporary, bridge-building teams directly in and alongside R&D teams. From there they can coach scientists in how to be more responsible data stewards and teach by example by tidying up and curating data sets.

              Other departments: Low trust between peer groups can cause ‘not invented here’ syndrome, leading to undervaluing the potential of the work and data of other departments and missed opportunities. Data management itself is often an idea injected from outside. One way to address this is to find entry points into data management that build on activities that the group already believes in strongly. For example, in a quality- and metric-focused engineering department you could echo existing reporting conventions by showing progress on data management as a traffic-light quality metric. This might then encourage the group to attend to the underlying data management issues so that their quality dashboards remain green.

              Senior management: Sometimes a research group might be persuaded of the value of data management but need help getting support from senior management to make time and resources available to properly address it. The best approach here is to build a strong business case for data management in terms that appeal to managers. For example, this could be by building evidence that well-managed data makes data handling and processing easier, resulting in cost reductions, or alternatively by finding costly negative examples of the consequences of not managing data actively.

              Data relationship: Very few people get into scientific R&D because they want to do data management. Successful R&D organizations are experts at something else entirely, e.g. discovering new medicines, developing new vehicles or optimizing new energy generation methods. Data of course powers these activities, but while the need to become a data-driven organization is often acknowledged, it’s not always acted on by researchers, who usually prefer to focus on their specialism. One way to increase the level of interest in data management is to start small, for example by sharing experimental metadata across global teams (scientists often love to know what other groups are doing). This often reveals data gaps (prompting an interest in data ownership) and lower quality data (prompting an appreciation of the role of a data steward).

              Establishing dedicated data management roles

              Once a data management initiative is up and running, it’s useful to define and establish some specific roles in the organization to accelerate and embed data management and FAIR principles further.

              There are three roles that we typically recommend. They aren’t necessarily full-time roles, and they don’t even have to be held by different people – the important thing is to define the responsibilities and scale the roles in proportion to the overall scope of the data management effort.

              Data policy lead: A policy and governance guru and advocate of data management. They are responsible for developing policy and guidance around data management principles covering industry and regulatory rules (such as GxP), data lifecycles and infrastructure for different data types.

Domain specialization steward: An expert within their scientific domain and, again, a data management advocate. The steward handles the day-to-day management of master data (e.g. vocabularies and ontologies) within their group – see the sketch after this list – and acts as a bridge between their research group and the other two roles.

              Infrastructure steward: A data infrastructure expert who makes sure all technology supporting domain research both adheres to the policies created by the policy lead and meets domain users’ needs.
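To make the domain steward's master-data work concrete, the sketch referenced above shows the simplest possible vocabulary check: recorded terms are compared against a controlled vocabulary and anything outside it is flagged for the steward to review. The vocabulary, fields, and values are invented for illustration and are not taken from any real system.

    # Hypothetical sketch: validate recorded terms against a controlled vocabulary,
    # the kind of master data a domain steward curates. All terms are invented examples.

    CONTROLLED_VOCAB = {
        "assay_type": {"binding", "cytotoxicity", "stability"},
        "species": {"human", "mouse", "rat"},
    }

    def validate_terms(record):
        """Return (field, value) pairs that fall outside the controlled vocabulary."""
        issues = []
        for field, allowed in CONTROLLED_VOCAB.items():
            value = (record.get(field) or "").strip().lower()
            if value and value not in allowed:
                issues.append((field, value))
        return issues

    if __name__ == "__main__":
        record = {"assay_type": "Binding", "species": "canine"}
        for field, value in validate_terms(record):
            print(f"'{value}' is not an approved term for {field} - flag for the domain steward")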

The three roles are designed to create a virtuous circle in which the outputs of each role strengthen the work of the other two. Adoption and evolution of data management practices and technology then become continuous and self-perpetuating.

              Conclusion

              Overcoming cultural resistance to data management can be one of the toughest parts of a digital transformation initiative within an R&D organization. At Capgemini we have years of experience of identifying and removing these barriers with creative solutions. Take a look at the data management services we offer as part of our vision for Data-Driven R&D and get in touch if this is something you’re struggling with.

              Author

              Jeroen de Jong

              Senior consultant, Capgemini Engineering Hybrid Intelligence
              Jeroen is an experienced data consultant with a specialism in AI techniques. He has a proven track record in data management, data science, research, and business intelligence. He helps clients in a hands-on manner, by giving training, and by implementing processes that fit into the culture of an organization.

                How wellness-as-a-service can drive growth for life and health insurers

Samantha Chow
                30 Aug 2022

                After a few years of navigating massive market uncertainty, large-scale claims payouts, an intense war for talent and low interest rates, life and health insurers and annuity providers are looking to spark new growth. In many cases, they are aiming to build on the digital operations they established during the pandemic to stay connected to customers, agents and brokers and their own workers.

                Important initiatives to automate processes, modernize legacy systems and enhance key customer experiences are underway across the industry. And pressure remains to increase operational efficiency and reduce costs.

                But insurers must also consider the opportunities represented by changing customer needs and attitudes. For instance, rising consumer and employer interest in holistic wellness – incorporating both physical and financial health – is one area where insurers are well positioned to develop new offerings and even entirely new value propositions.

                Rising consumer interest in wellness can be attributed to several factors. Certainly, the COVID-19 pandemic played a role. The expanded use of wearable devices that track heart rate, sleep cycles, and fitness activity has motivated many consumers to live healthier lives. And a booming wellness economy demonstrates that these consumers are willing to invest in feeling and looking great.

                Taking advantage of consumer interest in wellness

For insurers, the rise of wellness is an invitation to expand and enhance their value propositions. Standalone programs for financial wellness or physical fitness are not new, of course, and many have been quite effective in repositioning insurers as trusted advisors. But only recently have innovative insurers begun connecting the dots between physical and financial wellness and putting health at the heart of the business.

                Integrating these offerings is critical because of the growing evidence that physical and financial health are closely related. Indeed, the pandemic made clear the intrinsic connection between physical health and financial security. More employers now recognize that healthier employees are more engaged, productive and loyal.

                Of course, seizing the wellness opportunity won’t be easy. Most insurers have yet to establish compelling wellness-centric value propositions or build the necessary capabilities to deliver on them. To take advantage of the opening, insurers must undergo a profound shift in thinking as they move from being product-driven to truly customer-centric.

                Delivering personalized wellness solutions at scale requires the use of powerful technology and more sophisticated data management and analytical capabilities. Both everyday operations and organizational cultures must become more data-driven and insight-led. The foundation will be advanced, integrated modular tech platforms that use artificial intelligence (AI), machine learning and other enabling technologies to deliver personalization at scale.

                Enabling wellness-as-a-service

Wellness-as-a-service offers life and health insurers a flexible model for unlocking a new era of growth and profitability. It empowers employees and policyholders with tools, tips, and guidance so they can take meaningful action in line with their goals and live the life they want to lead.

One advantage of wellness-as-a-service models is that they allow insurers to understand customer behaviors more deeply, engage more frequently, and offer personalized services that are more likely to improve customers' physical fitness, overall health, and financial security. These models will also help insurers boost retention, reduce claims, and increase the accuracy of risk assessment and pricing. In other words, strong wellness strategies and programs are a win-win for consumers and insurers alike.

                So what will it take to make these programs impactful for customers and beneficial for the industry? We believe the keys are:

                Connecting with the right partners: Insurers can’t go it alone in delivering wellness-as-a-service. But they can build on their traditional risk management strengths and engage partners for additional offerings, capabilities and tech. For instance, health insurers might partner with life insurers and banks to develop richer financial education programs. The broadest partner networks might include grocery stores, gyms and yoga studios, fitness equipment companies, drug stores, pharmaceutical firms and hospitals and health systems. InsurTechs, HealthTechs and big tech platforms are other potential partners. In evaluating partners, insurers should consider whether their offerings are complementary and their cultures collaborative.

                Developing ecosystems to expand product offerings and opportunities: The most successful partnerships will be operationalized via tech-driven ecosystems that provide customers with easy access to a range of services and solutions, as well as intuitive sales and service processes. Large employers may look to their insurance providers to build out dedicated ecosystems for their employees.

                Strong APIs will enable ecosystem partners to connect and share data seamlessly. Robust security and data privacy protocols will allay consumer concerns and meet regulatory requirements. While first movers and early adopters are already showing what’s possible, most insurers have a long way to go to harness the power of ecosystems; today, only 40% effectively co-create or innovate with strategic or ecosystem partners.
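As a simplified, hypothetical sketch of what 'connect and share data seamlessly' can look like in practice (the data model, partner identifiers, and consent categories below are invented, not any insurer's actual API), the snippet releases only the wellness data categories a policyholder has explicitly consented to share with a given partner:

    # Hypothetical sketch of a consent-gated data exchange between ecosystem partners.
    # Data model, partner IDs, and consent categories are invented for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class Policyholder:
        policy_id: str
        wellness_data: dict
        # maps partner_id -> set of data categories the customer has consented to share
        consents: dict = field(default_factory=dict)

    def share_with_partner(holder: Policyholder, partner_id: str, categories: list) -> dict:
        """Return only the data categories this partner is allowed to see; withhold the rest."""
        allowed = holder.consents.get(partner_id, set())
        return {c: holder.wellness_data[c] for c in categories
                if c in allowed and c in holder.wellness_data}

    if __name__ == "__main__":
        customer = Policyholder(
            policy_id="P-1001",
            wellness_data={"steps_per_day": 8200, "sleep_hours": 6.8, "claims_history": "..."},
            consents={"gym-partner": {"steps_per_day"}},
        )
        # The gym partner asks for two categories but receives only the one consented to.
        print(share_with_partner(customer, "gym-partner", ["steps_per_day", "claims_history"]))

A production version would sit behind authenticated APIs and audited consent records, but the basic shape – consent first, data second – is what the privacy protocols mentioned above have to guarantee.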

                Matching products and solutions to customer needs: Customers want from insurers what they receive from the other companies they do business with – on-demand services and tailored recommendations and rewards. The priority isn’t necessarily to develop new types of policies, but rather new product features and enhancements to specific touchpoints that enrich the overall experience and extend the core value proposition.

Gamified wellness apps for health tracking and expense management are proving effective at delivering personalized, goal-based nudges and actionable tips and recommendations. These tools can be augmented with regular touchpoints for customized planning (e.g., portfolio-rebalancing advice meetings) for the customers who value them, while hyper-personalized rewards – tangible incentives for customers who follow the tips and meet their goals – are likely to increase engagement and boost loyalty over time.
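As a toy illustration of a goal-based nudge (the goal, thresholds, and reward points are invented; this does not describe any real app), a rule of roughly this shape decides whether to reward a customer or send an actionable prompt:

    # Hypothetical sketch of a goal-based nudge/reward rule for a gamified wellness app.
    # Goals, thresholds, and point values are illustrative assumptions.

    def nudge_or_reward(step_goal: int, steps_this_week: list, reward_points: int = 50) -> str:
        """Reward when the weekly goal is met; otherwise send a small, actionable nudge."""
        days_on_target = sum(1 for s in steps_this_week if s >= step_goal)
        if days_on_target >= 5:
            return f"Goal met on {days_on_target} days - {reward_points} points added to your rewards."
        remaining = 5 - days_on_target
        return f"You're {remaining} active day(s) away from this week's reward - a short walk today counts."

    if __name__ == "__main__":
        print(nudge_or_reward(step_goal=8000, steps_this_week=[9100, 7600, 8200, 8800, 5000, 8100]))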

                Our upcoming inaugural World Life and Health Insurance report offers a roadmap to better understand customer needs, operationalize wellness strategies and unleash data-driven customer engagement.

                Meet our Experts

Samantha Chow

                Global Life and Annuity Sector Leader
Samantha Chow is an expert in the global life, annuity, and benefits markets and has 25 years of experience. She has deep expertise in driving the growth of enterprise-wide capabilities that facilitate transformational and cultural change, focusing on customer experience, operational efficiency, legacy modernization, and innovation to support competitive advancement.