
Embedded software is changing how companies operate

Walter Paranque-Monnet
23 April 2024

Discover why embedded software is increasingly important for industries – creating intelligent ecosystems, enhancing user experiences and reducing costs.

Twenty years ago, we bought mobile phones for their hardware. Since then, a lot has changed, and now, embedded software delivers the primary value – offering entertainment, navigation, augmented reality, productivity apps, and so on.

However, such software does not work alone. It requires the phone’s hardware (connectivity, cameras, accelerometers, etc.), and a cloud ecosystem to download new apps and share data. But it is the software – the operating system and firmware on the phone – that runs the show.

As a result, consumers now have sky-high expectations of technology. And if industrial companies can’t deliver products with a similar software-driven user experience, they will lose these customers. Manufacturers of cars, planes, trains, satellites, solar panels, cameras, home appliances, and so on are all undergoing a similar shift driven by embedded software.

That shift has huge implications – not just for the product itself, but for the company designing it.

Ever more products become software-driven

Let’s start with the product. Take a car or a plane – products that are increasingly software-driven. Manufacturers of both are developing software for automation and route optimization on the one hand, and for user experience and entertainment on the other.

They are not alone. Trains need one type of software for smart signal control and optimal route planning, and another that lets passengers order food from the buffet car on their phones. Satellites must make real-time decisions about trajectory, data capture, and energy management. In-home batteries must control energy in and out, and track what they sell back to the grid.

Embedded software drives a change in organizational thinking

Embedded software is not entirely new in these industries – cars and planes, for example, have long had bits of control software. But its scale and sophistication are now skyrocketing.

A Capgemini Research Institute (CRI) survey – of 1,350 $1bn+ revenue companies with goals to become software-driven – found software accounted for 7% of revenue in 2022, but was expected to rise to 29% by 2030. That same report also found that 63% of Aerospace & Defense organizations believe software is critical to future products and services, with industries from automotive to energy making comparable claims.

But getting there will mean some big changes at these organizations.

Unlike a phone – which was designed to be a single integrated device – cars, planes, satellites, drones and other industrial systems were originally designed with multiple ECUs (electronic control units), each running multiple pieces of software. Each ECU was developed separately by different parts of the organization.

But now there is a need to integrate everything. For example, autopilot won’t work if its underpinning software can’t communicate seamlessly with the separate control units for sensors, steering, and brakes.

The importance of transversal software

Doing this in the current siloed way would create unmanageable complexity. Software needs to be ‘transversal’ – i.e. developed consistently across the organization, rather than in silos. There must be a centralized team defining strategy and managing and developing embedded software as a product across the organization. This must all be done with the same standards to facilitate interoperability, scalability, upgrades and reuse – whether it’s a landing control system, energy management system, in-flight infotainment, or smart cockpit. This transversal operating model makes software teams the backbone of software-defined organizations, continuously developing software solutions across the company.

That doesn’t mean all software must be connected to the final system, or that everything will be developed in the same way. Software can be very different. For example, rear-seat entertainment software can offload some data-heavy functions to the cloud, and developers can launch beta versions to get user feedback. On the other hand, high-integrity software for braking must do everything on board, work every time, and be separate from any hackable entry points into the system.

There are separate development tracks for different software components, so that less safety-critical software can quickly get to market, while more safety-critical parts can be carefully managed through verification and validation (V&V), and certification. But all development tracks should be within a centralized software team, which works together, sharing a consistent system architecture, standards and learnings, and creating products the entire business can access once complete.

A positive example

Consider Stellantis, which owns multiple car brands, including Opel, Peugeot, Dodge and Fiat. It has invested in developing three core software platforms: one that is the backbone of the car (STLA Brain), one for safety-critical assisted driving (STLA AutoDrive), and one for connectivity and cockpit services (STLA SmartCockpit).

It implemented centralized software standards that are systematically used across all brands and models. This reflects a trend we’re seeing across all markets – ‘platforming’. The platforming approach leverages generic components (computer vision, voice command, navigation services, etc.) that are applied to several projects, products and use cases – sometimes customized for different brands and markets – all without needing to build, test and certify everything from scratch.

Innovate or fail

All of this requires a major shift in thinking from organizations. But they must make this shift to survive.

And largely, they are. The auto industry is taking the threat from Tesla (and its advanced on-board computing) seriously. Incumbent carmakers may soon be pushed to move faster by software-driven Chinese competitors, like BYD and Nio, whose car interiors can transform into immersive cinemas at the push of a button. Industries from aviation to energy are no longer complacent – all recognize that embedded software is critical to their future. And all know they must undergo radical organizational change to turn legacy hardware into future-proof, software-driven products.

See how embedded software is helping industries transform their business – and how Capgemini can help along your journey.

Meet our experts

Walter Paranque-Monnet

Solution Director Capgemini Engineering
Walter is passionate about helping organizations build high-value products and services driven by creativity, innovation, and business results. He has helped teams create a culture driven by software and innovation. For more than 12 years, Walter has supported software organizations along their chip-to-cloud transformation journey and designed embedded software roadmaps for acceleration.

    Conversational twins
    The virtual engineering assistants of the (near) future

    David Granger
    May 15, 2025

    What will happen when Gen AI meets VR meets Digital Twins meets high-powered chips? Enter the ‘Conversational Twin’ – a virtual, 3D, generative AI assistant that can visually guide you through complex tasks.

    Imagine your car breaks down and, to save a bit of money, you decide to fix it yourself.

    You head to your garage with your smartphone and start looking up YouTube tutorials. Eventually, you find one that covers your problem and start watching. As you get to the key part, you start fiddling around with the engine. The presenter explains much faster than you can act, so you keep going back to your phone to scroll back 20 seconds and rewatch. After watching the key bit several times, you find the problem. A new part is needed. You spend an hour online trying to find the right one, among 100 identical-looking options with names like ‘CC01-15/06’ and ‘CC01-15/06e’. A few days later, it arrives and it’s back to the garage. Another hour of fixing and scrolling, and your car is finally ready to go.

    For all their flaws, the popularity of online tutorials shows the enormous demand for information on how to fix things, and, perhaps, a deeper need to feel in control. And that’s just private citizens. Mechanics and engineers have an even greater need to access vast amounts of information on a vast range of processes and parts, and how to apply them to different models of cars, aircraft, machine tools, etc.

    YouTube is certainly better than thick, boring instruction manuals. But really, people want to interact in natural human ways. They process information in different ways and have different starting knowledge, making start-to-finish tutorials an inefficient way to deliver information. In an ideal world, you would have someone nearby who understands the problem, can explain what to do as you go, and can answer questions when an instruction isn’t clear.

    Could digitally delivered instructions become more like that human expert? We think so, particularly given advances in generative AI, virtual reality, digital twins, and advanced chips.

    The instruction manual of the future

    We can imagine that, in the not-too-distant future, that same smartphone could contain an app with a digital twin of the car, trained on the car’s instruction manuals (we’ll stick with the car example, but this could apply to any complex engineered product).

    The result would be that, when you arrive at a problem that needs fixing, you open the app and verbally describe the problem to a virtual assistant. The app then generates a visual step-by-step guide to solve the problem, which can be communicated via a mix of AR overlays, demonstrations by avatars, and spoken instructions, through your phone, tablet or VR headset.

    Rather than simply following a fixed series of steps, it would use generative AI to contextualize your challenge and explain exactly what you need to do to fix it, using 3D visualizations and adjusting them in response to the questions you ask, then waiting patiently for you to finish one task and ask for the next instruction (or for clarification of the last one). We call this a Conversational Twin, because you are effectively conversing with a digital twin of the car, which knows everything about it.
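
    To make this concrete, here is a minimal sketch of the kind of dialogue loop such a Conversational Twin might run. Everything in it is an illustrative assumption – the speech, planning, and rendering components are stubbed out with placeholders – rather than a description of any existing implementation.

```python
# Illustrative sketch only: the AI/AR components are placeholders, not real services.

def transcribe(audio: bytes) -> str:
    """Placeholder for a speech-to-text model."""
    return "How do I replace the cabin air filter?"

def plan_steps(request: str) -> list[str]:
    """Placeholder for a generative model grounded in the car's manuals,
    returning an ordered list of repair steps."""
    return [
        "Open the glove box and release the side clips.",
        "Slide out the filter housing cover.",
        "Swap the old filter for the new one, airflow arrow facing down.",
    ]

def render(step: str) -> None:
    """Placeholder for AR overlays / spoken instructions."""
    print(f"[Twin] {step}")

def conversational_twin(audio: bytes) -> None:
    request = transcribe(audio)
    for step in plan_steps(request):
        render(step)
        # The twin waits for the user to finish, or answers a follow-up question.
        reply = input("Type 'done' or ask a question: ").strip().lower()
        if reply and reply != "done":
            render(f"(answering follow-up about: {reply})")

if __name__ == "__main__":
    conversational_twin(b"")
```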

    By harnessing the phone camera, the app could even watch your movements and guide you in real-time (“unscrew the cap, no not that one, the one 10 cm to your left”) by comparing the video feed to its internal model of the vehicle. When you reach the problem, you could hold up the broken part and the Twin would recognize it and order you a new one.

    Such a Conversational Twin will significantly benefit many people who want to fix things themselves. But its real value will be as a huge cost saver to companies with large maintenance and engineering teams, allowing those people to access much more expertise, and thereby enabling smaller teams to perform more tasks, more quickly, even if they’ve never seen the problem before.

    How to do it

    Technically, most of what is described above could be created today. But it would be a lot of work. Each product would need to be carefully mapped and digitized, conversational flows would need to be carefully scripted and programmed, and a library of animations would need to be pre-designed. 

    Generative AI is rapidly changing the game here. Already, dedicated AI models can be trained on information from manuals to YouTube videos to online trade forums, so they can find answers as they are requested, and return them as contextualized text or spoken instructions.
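
    As a rough illustration of how answers can be found ‘as they are requested’, the sketch below indexes a few manual snippets and retrieves the most relevant one for a question using simple word overlap. A production system would use embeddings and a generative model on top, but the grounding idea is the same; the section titles, text, and function names here are invented.

```python
# Minimal retrieval sketch: pick the manual section with the most word overlap.
MANUAL_SECTIONS = {
    "Changing a wheel": "Loosen the wheel nuts, jack up the car, remove the nuts and the wheel.",
    "Topping up coolant": "Wait for the engine to cool, then open the expansion tank cap slowly.",
    "Replacing wiper blades": "Lift the wiper arm, press the release tab and slide the blade off.",
}

def retrieve(question: str) -> tuple[str, str]:
    q_words = set(question.lower().replace("?", "").split())
    def overlap(item: tuple[str, str]) -> int:
        title, body = item
        return len(q_words & set((title + " " + body).lower().split()))
    return max(MANUAL_SECTIONS.items(), key=overlap)

title, body = retrieve("How do I replace the wiper blades?")
print(f"Grounding the answer in section: {title!r}")
```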

    The more challenging part is mapping those text-based instructions onto 3D models of the product. The Conversational Twin would need to interpret a mix of text and visual inputs, turn them all into prompts for itself, find the answers, match those text instructions onto its internal 3D model of the specific car, then overlay its responses as 3D objects onto the physical car it sees via the camera. We are not quite there yet.

    But such technology is coming. Virtual and augmented reality have come on leaps and bounds in the past few years, and it is only a matter of time before virtual objects can be generated in response to generative AI instructions. Equally, today’s large language models (LLMs) deal with text, but they will need to output machine-readable instructions in order to generate virtual overlays. That is not something LLMs do yet, but bright minds – including those at Capgemini – are working on making that connection between LLMs and Real-time 3D engines. Once these two areas advance a little further, it is a matter of carefully connecting everything.
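
    One way to picture that hand-off is a small, machine-readable payload that an LLM could emit instead of free text, and that a real-time 3D engine could render deterministically as an overlay. The format and field names below are purely hypothetical – no such standard exists yet.

```python
import json

# Hypothetical structured output from an LLM, consumable by a real-time 3D engine.
overlay_instruction = {
    "step": 3,
    "action": "highlight_part",
    "part_id": "oil_filter_housing",   # must match a node in the twin's 3D model
    "camera_hint": {"azimuth_deg": 45, "elevation_deg": 20},
    "label": "Unscrew this cap anticlockwise.",
    "spoken_text": "Now unscrew the cap just to the left of the dipstick.",
}

print(json.dumps(overlay_instruction, indent=2))
```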

    Of course, generative AI is not a ‘magic bullet’ that can just be told what to do and automatically produce the result you want. It will need a well-defined architecture and effective rules for how to ‘prompt’ it to generate the right responses, outputted in ways that can be reliably converted into 3D visuals.

    Finally, we still need some microchip advancements to deliver all this on a device. Today, we use edge computing devices and the cloud to process these advanced workloads, and indeed much can be done using these approaches that will lay the foundations for Conversational Twins. But we suspect that in the next few years, chips will be sufficiently more advanced to do all the processing on a smartphone, tablet, or VR headset.

    What to do to get ready for Conversational Twins

    Even if Conversational Twins are a few years away, there is a lot that companies can do now to prepare for them, which will also have immediate value elsewhere.

    The first is investing in Real-time 3D. This is a rapidly growing technology with exciting possibilities, like the ability to showcase products to customers without them leaving their homes, or to create virtual working environments that train employees without risk.

    A related point is to start preparing existing assets for training Gen AI and building 3D assets. Many companies already have 3D product models, rendered marketing materials, and so on. But they are often held in silos and can be of inconsistent quality and formats. Complex projects like Conversational Twins will not be reliable if the underlying 3D model of the product – on which they base their recommendations – does not match the real product.

    Those that have not already done so should create centralized virtual models of their products and businesses, as a single source of truth. That way, anyone in the company producing 3D materials – whether for new product design, marketing, or building Gen AI-powered assistants – is working from the same high-quality version. In time, this ‘virtual twin’ will provide the digital foundation for your Conversational Twin.

    Why you should start now

    Once the above comes to fruition, companies making products like cars or planes could offer a corresponding app that guides users on how to maintain and fix them. That could be sold as a subscription to professional mechanics, maintenance engineers and training organizations, and made available free or for a fee to people who have bought the product, as a differentiator from their competition.

    Many aerospace and industrial companies are already exploring how to simplify the maintenance, training, and configuration of products – rather than relying on complicated documents or fixed training modules. As engineering companies move from selling products, to managing the entire lifecycle, Conversational Twins can provide customers with added value that can save them time and money, extend the life of products, and provide a valuable source of data on how to improve future designs.

    If we start getting our data and models ready now, and embark on proofs of concept, Conversational Twins could be with us this decade.

    Discover the next generation of user experiences powered by real-time 3D. Click to learn more about Capgemini Engineering’s Real-time 3D solutions.


    Meet the author

    David Granger

    Director of Engineering – Experience Engineering
    David and his expert team lead ‘Experience Engineering’ – the development of advanced solutions that integrate real-time 3D (RT3D) visualization with generative AI to drive innovation across industries. His team specializes in crafting intelligent experiences that reshape how businesses engage with digital content.

      From pilots to production
      Overcoming challenges to generative AI adoption across the software engineering lifecycle

      Keith Glendon
      Apr 24, 2025

      Generative AI is rapidly revolutionizing the world of software engineering, driving efficiency, innovation, and business value from the earliest stages of design through to deployment and maintenance. This explosive development in technology enhances and transforms every phase of the software development lifecycle: from analyzing demand and modeling use cases in the design phase, to modernizing legacy code, assisting with documentation, identifying vulnerabilities during testing, and monitoring software post-rollout.

      Given its transformative power, it’s no surprise that the Capgemini Research Institute report, Turbocharging Software with Gen AI, reveals that four out of five software professionals expect to use generative AI tools by 2026.

      However, our experience and research find that to fully realize the benefits, software engineering organizations must overcome several key challenges. These include unauthorized use, upskilling, and governance. This blog explores these challenges and offers recommendations to help navigate them effectively.

      Prevent unauthorized use from becoming a blocker

      Our research indicates that 63% of software professionals currently using generative AI are doing so with unauthorized tools, or in a non-governed manner. This highlights both the eagerness of developers to leverage the benefits of AI and the frustration caused by slow or incomplete official adoption processes. This research is validated in our field experience across hundreds of client projects and interactions. Often, such issues arise from an overly ‘experimental’ versus programmatic approach to adoption and scale.

      Unauthorized use exposes organizations to various risks, including hallucinated code (AI-generated code that appears correct but is flawed), code leakage, and intellectual property (IP) issues. Such risks can lead to functional failures, security breaches, and legal complications.

      Our Capgemini Research Institute report emphasizes that using unauthorized tools without proper governance exposes organizations to significant risks, potentially undermining their efforts to harness the transformative business value of generative AI effectively.

      To mitigate unauthorized use, organizations should channel the curiosity of their development teams constructively and in the context of managed transformation roadmaps. This approach should include consistently explaining the pitfalls of unauthorized use, researching available options, learning about best practices, and adopting necessary generative AI tools in a controlled manner that maintains security and integrity throughout the software development process.

      Upskilling your workforce

      Upskilling is another critical challenge. According to our Capgemini Research Institute findings, only 40% of software professionals receive adequate training from their organizations to use generative AI effectively. The remaining 60% are either self-training (32%) or not training at all (28%). Self-training can lead to inconsistent quality and potential risks, as nearly a third of professionals may lack the necessary skills, resulting in functional and legal vulnerabilities.

      A consistent observation from our field experience is that, alongside training itself, a correlated barrier is making sufficient time available for teams to apply that training in practical ways and to turn it into pragmatic, lasting culture change. Because generative AI is such a seismic shift in the way we build software products and platforms, the upskilling curve is about far more than incremental training.

      Managing skill development in this new frontier of software engineering will require an ongoing commitment to evolving skills, practices, culture, ways of working and even the ways teams are composed and organized.   As a result, software engineering organizations should embrace a long-term view of upskilling for success.

      Those that are most successful in adopting generative AI have invested in comprehensive training programs, which cover essential skills such as prompt engineering, AI model interpretation, and supervision of AI-driven tasks. They have begun to build organizational change management programs and transformation roadmaps that look at the human element, upskilling and culture shift as a vital foundation of success.

      Additionally, fostering cross-functional collaboration between data scientists, domain experts, and software engineers is crucial to bridge knowledge gaps, as generative AI brings new levels of data dependency into the software engineering domain. Capgemini’s research shows that successful organizations realizing productivity gains from AI are channeling these gains toward innovative work (50%) and upskilling (47%), rather than reducing headcount.

      Establishing strong governance

      Despite massive and accelerating interest in generative AI, 61% of organizations lack a governance framework to guide its use, as highlighted in the Capgemini Research Institute report. Governance should go beyond technical oversight to include ethical considerations, such as responsible AI practices and privacy concerns.

      A strong governance framework aligns generative AI initiatives with organizational priorities and objectives, addressing issues like bias, explainability, IP and copyright concerns, dependency on external platforms, data leakage, and vulnerability to malicious actors.

      Without proper governance, the risks associated with generative AI in software engineering – hallucinated code, biased outputs, unauthorized data and IP usage, and other issues ranging from security to compliance – can outweigh its benefits. Establishing clear policies, driven in practice through strategic transformation planning, will help mitigate these risks and ensure that AI adoption aligns with business goals.

      Best practices for leveraging generative AI in the software engineering domain

      Generative AI in software engineering is still in its early stages, but a phased, well-managed approach toward a bold, transformative vision will help organizations maximize its benefits across the development lifecycle. In following this path, here are some important actions to consider:

      Prioritize high-benefit use cases as building blocks

      • Focus on use cases that offer quick wins to generate buy-in across the organization. These use cases might include generating documentation, assisting with coding, debugging, testing, identifying security vulnerabilities, and modernizing code through migration or translation.
      • Capgemini’s research shows that 39% of organizations currently use generative AI for coding, 29% for debugging, and 29% for code review and quality assurance. The critical point here, however, is that organizations take a ‘use cases as building blocks’ approach. Many currently struggle with what could be called ‘the ideation trap’. This trap comes about when the focus is too much on experiments, proofs of concept and use cases that aren’t a planned, stepwise part of a broader transformation vision.
      • When high-benefit use cases are purposely defined to create building blocks toward a north star transformation vision, the impact is far greater. An example of this concept is our own software product engineering approach within Capgemini Engineering Research & Development. In late 2023 we set out on an ambitious vision of an agentive, autonomous software engineering transformation and a future in which Gen AI-driven agents autonomously handle the complex engineering tasks of building software products and platforms from inception to deployment. Since that time, our use cases and experiments all align toward the realization of that goal, with each new building block adding capability and breadth to our agentive framework for software engineering.

      Mitigate risks

      • All productivity gains must be balanced within a risk management framework. Generative AI introduces new risks that must be assessed in line with the organization’s existing risk analysis protocols. This includes considerations around cybersecurity, data protection, compliance and IP management. Developing usage frameworks, checks and quality stopgaps to mitigate these risks is essential.

      Support your teams

      • Providing comprehensive training for all team members who will interact with generative AI is crucial. This training should cover the analysis of AI outputs, iterative refinement of AI-generated content, and supervision of AI-driven tasks. As our Capgemini Research Institute report suggests, organizations with robust upskilling programs are better positioned to improve workforce productivity, expand innovation and creative possibilities, and mitigate potential risks.

      Implement the right platforms and tools

      • Effective use of generative AI requires a range of platforms and tools, such as AI-enhanced integrated development environments (IDEs), automation and testing tools, and collaboration tools.
      • However, only 27% of organizations report having above-average availability of these tools, highlighting a critical area for improvement.  Beyond the current view of Gen AI as a high-productivity assistant or enabler, we strongly encourage every organization in the business of software engineering to look beyond the ‘copilot mentality’ and over the horizon to what Forrester recently deemed “The Age Of Agents”.  The first wave of Gen AI and the popularity of these technologies as assistive tools will be a great benefit to routine application development tasks.
      • For enterprises building industrialized, commercial software products and platforms – and for the next generation of experience engineering – we believe that value, and even competitive survival, depends on adopting and building a vision of far more sophisticated AI software engineering capability than basic ‘off-the-shelf’ code-assist tools deliver.

      Develop appropriate metrics

      • Without the right systems to monitor the effectiveness of generative AI, organizations cannot learn from their experiences or build on successes. Despite this, nearly half of organizations (48%) lack standard metrics to evaluate the success of generative AI use in software engineering. Establishing clear metrics, such as time saved in coding, reduction in bugs, or improvements in customer satisfaction, is vital.
      • We believe that organization-specific KPIs and qualitative metrics around things like DevEx (Developer Experience), creativity, innovation and flow are vital to consider, as the power of the generative era lies far more in the impact these intangibles have on the potential of business models, products and platforms than on the cost savings many leaders erroneously focus on. This is absolutely an inflection point, in which the value of the abundance mindset applies.

      In conclusion

      Generative AI is already well underway in demonstrating its potential to transform the software engineering lifecycle, improve quality, creativity, innovation and the impact of software products and platforms – as well as streamline essential processes like testing, quality assurance, support and maintenance. We expect its use to grow rapidly in the coming years, with continued growth in both investment and business impact.

      Organizations that succeed in adopting generative AI as a transformative force in their software engineering ethos will be those that fully integrate it into their processes rather than treating it as a piecemeal solution. Achieving this requires a bold, cohesive vision, changes in governance, the adoption of new tools, the establishment of meaningful metrics, and, most importantly, robust support for teams across the software development lifecycle. 

      At Capgemini Engineering Software, we are ambitiously transforming our own world of capability, vision, approach, tools, skills, practices and culture in the way we view and build software products and platforms.  We’re here for you, to help you and your teams strike out on your journey of transformation in the generative software engineering era.

      Download our Capgemini Research Institute report: Turbocharging software with Gen AI to learn more.



      Meet the author

      Keith Glendon

      Senior Director, Generative AI and Software Product Innovation
      Keith is an experienced technologist, entrepreneur, and strategist, with a proven track record of driving and supporting innovation and software-led transformation in various industries over the past 25+ years. He’s demonstrated results in multinational enterprises, as well as high-tech startups, through creative disruption and expert application of the entrepreneurial mindset.

        Boosting productivity in software engineering with generative AI
        Real-world insights and benefits

        Jiani Zhang
        Apr 16, 2025

        Software engineers may have once stated that software doesn’t write itself. That’s not true anymore. Generative AI is perfectly capable of taking on at least some of the simple tasks involved in coding, as well as other aspects of the software development life cycle. In fact, research published in our new Capgemini Research Institute report, Turbocharging software with Gen AI, shows that organizations using generative AI have seen a 7–18% productivity improvement in software engineering.

        So, what does this mean for those working in the software industry? It would be reasonable to expect some fear of change – after all, status quo bias is a well-documented human behavior. But our research data – which involved both developers and senior executives – shows that software engineers and their employers expect generative AI to enhance the profession, improving software quality and easing the daily workload of software engineers, as companies demand ever more complex software across all parts of their business and product lines.

        Let’s look in more detail at some of these key benefits.

        Accelerate faster with greater accuracy

        The old idea that moving too fast opens the door to mistakes can be turned on its head with the careful use of generative AI during software development. Because generative AI can automate simple tasks and complete them more quickly, it can help speed up a whole host of non-safety-critical processes, leaving more time to spend on complex software development. This can include paying extra attention to safety-critical systems, where humans will still play a crucial role in rigorous oversight to maintain the highest safety standards.

        Of course, generative AI is not a ‘magic bullet’ that can just be told what to do and automatically produce the result you want. It will need a well-defined architecture and effective rules for how to ‘prompt’ it to generate code that is repeatable and maintainable, and which meets company needs and compliance rules.
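
        To make the idea of ‘effective rules for how to prompt it’ more tangible, here is a minimal sketch of a fixed prompt template that pins down language, style, and compliance constraints so that generated code stays repeatable and reviewable. The template wording, team name, and standard reference are illustrative assumptions, not a prescribed Capgemini format.

```python
# Illustrative prompting rules: constraints are written once and reused for every request.
PROMPT_TEMPLATE = """You are a code assistant for {team}.
Rules:
- Target language: {language} ({version}).
- Follow the coding standard: {standard}.
- Every public function needs a docstring and an accompanying unit test.
- Flag any new third-party dependency instead of silently adding it.

Task:
{task}
"""

def build_prompt(task: str) -> str:
    return PROMPT_TEMPLATE.format(
        team="Powertrain Software",             # hypothetical team
        language="C++",
        version="C++17",
        standard="internal-cpp-guidelines-v4",  # hypothetical standard reference
        task=task,
    )

print(build_prompt("Write a bounded ring buffer for CAN frames."))
```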

        But with the right processes in place, Gen AI clearly holds great promise, and these fundamental benefits are widely acknowledged among software developers. Our research indicates that its use is projected to grow significantly, with over a quarter of all work in software design, development, testing, and quality expected to be augmented by generative AI in two years. By 2026, we anticipate that more than four of every five software professionals will utilize generative AI tools.

        Make room for talent to shine

        Improved speed and accuracy are only part of the picture. They are very much enablers for other key advances, most notably allowing software engineers to spend the time required to develop the complex code they were hired to create.

        Software engineers possess a wealth of talents that extend beyond writing quality, complex code. However, these talents can be stifled if they spend the vast majority of their time on the more mundane – even repetitive – aspects of coding. By freeing them of these tasks, tools like generative AI can unlock engineers’ creativity, enabling them to think of new ways of addressing problems, or to imagine entirely new aspects of a software solution.

        The challenge of balancing mundane tasks with creative thinking is not unique to software engineers. People in many professions often find that their most profound or innovative thoughts emerge when they are not immersed in the more day-to-day aspects of their work.

        However, software engineers still need to spend time writing code, and time must be allocated for it. By automating those everyday tasks, generative AI can free up more time for innovative thinking and creative problem-solving – like allowing software engineers to spend more time thinking through the user experience. Software professionals are aware of this, and we found they see multiple pathways for creativity to emerge. We found that 61% of software leaders have already seen the benefits of generative AI in enabling innovative work, and 36% have seen benefits in collaborative work.

        Advantages like this can be experienced across many different job grades. One technical leader told us, “While senior professionals are leveraging generative AI combined with their domain expertise for product innovation, junior professionals see value in AI process and tool innovation, and in automation and productivity optimization.”

        Increase job satisfaction and retention

        Despite initial fears, firms are not seeing that generative AI is reducing the software engineering workforce. Instead of considering generative AI as a standalone team member, the prevailing view is to use it as a tool to empower team members and enhance their effectiveness.

        When we examined how firms plan to utilize the productivity gains they reap from generative AI, we discovered that only a mere 4% intend to reduce the workforce. The overwhelming majority are committed to enhancing more meaningful work opportunities for their software professionals, such as innovation and new feature development (50%), upskilling (47%), and focusing on complex, high-value tasks (46%).

        This is not really surprising. The reality is that most engineering companies cannot hire anywhere near the number of software engineers they need. So, far from reducing headcount, generative AI is more about allowing the existing software workforce to get closer to delivering what the company hopes to build.

        Our research found that 69% of senior software professionals believe generative AI will positively impact job satisfaction. When we asked software professionals how they see generative AI, 24% felt excited or happy to use it in their work, and an additional 35% felt it left them assisted and augmented. These factors can also benefit staff retention: people who are happy in their work are less likely to look at moving on.

        In conclusion

        It is still very early days for generative AI in the software development life cycle. Still, we have already found that it is being leveraged to speed up development time, enhance products, free up software engineers to move from the mundane to more innovative work, and in doing all this, boost both productivity and job satisfaction. With uptake predicted to grow significantly over the coming few years, we expect exciting things for developers, their products, and their customers.

        Download our Capgemini Research Institute report Turbocharging software with Gen AI to learn more.


        Meet the author

        Jiani Zhang

        EVP and Chief Software Officer, Capgemini Engineering
        As the Capgemini Software Engineering leader, Jiani has a proven track record of supporting organizations of all sizes to drive business growth through software. With over 15 years of experience in the IT and software industry, including strategy and consulting, she has helped businesses transform to compete in today’s digital landscape.

          Should we use generative AI for embedded and safety software development?

          Vivien Leger
          May 6, 2025

          The idea of deploying generative AI (Gen AI) in software for safety critical systems may sound like a non-starter. With AI coding implicated in declines in code quality, it’s hard to imagine it playing a role in the safety-critical or embedded software used in applications like automatic braking, energy distribution management, or heart rate monitoring.

          Engineering teams are right to be cautious about Gen AI. But they should also keep an open mind. Software development is about much more than coding. Design, specification, and validation can collectively consume more time than actual coding, and here, Gen AI can significantly reduce overall development time and cost. It could even improve quality.

          Incorporating Gen AI in safety-critical environments

          Before we come onto these areas, let’s quickly address the elephant in the room: Gen AI coding. AI code generation for safety-critical software is not impossible, but it would need extensive training of the AI models, rigorous testing processes, and would bring a lot of complexity. Right now, Gen AI should never directly touch a safety-critical line of code. But we should certainly keep an eye on Gen AI code writing as it advances in other sectors.

          However, other areas – from specification to validation – are ripe for Gen AI innovation. Our recent Capgemini Research Institute report, Turbocharging software with Gen AI, found that software professionals felt Gen AI could assist with 28% of software design, 26% of development, and 25% of testing in the next two years. In the report, one Senior Director of Software Product Engineering at a major global pharmaceutical company was quoted as saying: “use cases like bug fixing and documentation are fast emerging, with others like UX design, requirement writing, etc. just around the corner.”

          Software design

          Let’s consider how the software development journey may look, just a few years from now. Let’s say you are designing a control system for car steering, plane landing gear, or a medical device (pick a product in your industry).

          Right at the start, you probably have a project brief. Your company or customer has given you a high-level description of the software’s purpose. Gen AI can analyze this, alongside regulatory standards, to propose functional and non-functional requirements. It will still need work to get it perfect, but it has saved you a lot of time.
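
          As an illustration, the sketch below shows the kind of structured, reviewable form such AI-proposed requirements could take. The fields, identifiers, and example wording are invented for this sketch rather than drawn from a real project.

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    kind: str      # "functional" or "non-functional"
    text: str
    source: str    # brief section or regulation the proposal is traced to
    status: str = "draft"   # an engineer reviews and promotes it

# The kind of draft list a Gen AI assistant might propose from a project brief.
draft_requirements = [
    Requirement("REQ-001", "functional",
                "The steering controller shall detect loss of sensor signal within 50 ms.",
                "Brief, section 2.1"),
    Requirement("REQ-002", "non-functional",
                "The software shall be developed according to the applicable functional safety standard.",
                "Regulatory mapping"),
]

for r in draft_requirements:
    print(f"{r.req_id} [{r.kind}] {r.text} ({r.source}, {r.status})")
```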

          However, you want to go beyond technical requirements and ensure this works for the user. So you ask Gen AI to develop a wide range of user stories, so you can design solutions that pre-empt problems. That includes the obvious ones you would have come up with yourself – Gen AI just writes them more quickly. But it also includes all the weird and wonderful ways that future customers will use and abuse your product, ways that would never have occurred to a sensible software engineer like you.

          In most cases, this is about improving the user experience, but it could also prevent disasters. For example, many of Boeing’s recent troubles stem from its MCAS software, which led to two crashes. While the software was a technically well-designed safety feature, its implementation overlooked pilot training requirements and risks from sensor failures. This is the sort of real-world possibility that Gen AI can help identify, getting engineers who are laser-focused on a specific problem to see the bigger picture.

          Armed with this insight, you start writing the code. While the AI doesn’t have any direct influence on the code, you may let it take a hands-off look at your code at each milestone, and make recommendations for improvements against the initial brief, which you can decide whether to act upon.

          Test and validation

          Once you have a software product you are happy with, Gen AI is back in the game for testing. This is perhaps one of its most valuable roles in safety-critical systems. In our CRI report, 54% of professionals cited improved testing speed as one of the top sources of Gen AI productivity improvements.

          Gen AI can start the verification process by conducting a first code review, comparing code against industry standards (e.g. MISRA for automotive, DO-178 for aerospace), to check for errors, bugs, and security risks. You still need to review it, but a lot of the basic issues you would have spent time looking for have been caught in the first pass, saving you time and giving you more headspace to ensure everything is perfect.
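
          As a flavour of what that first pass can automate, the sketch below applies two deterministic, MISRA-inspired checks to a C snippet; a Gen AI reviewer would sit on top of mechanical checks like these and add contextual findings. The rules are simplified paraphrases written for this example, not official rule texts.

```python
import re

# Two simplified, MISRA-inspired checks (paraphrased for illustration).
CHECKS = [
    (r"\bgoto\b", "Avoid 'goto' in safety-critical code."),
    (r"\b(malloc|calloc|realloc|free)\s*\(", "Avoid dynamic memory allocation."),
]

def first_pass_review(c_source: str) -> list[str]:
    findings = []
    for lineno, line in enumerate(c_source.splitlines(), start=1):
        for pattern, message in CHECKS:
            if re.search(pattern, line):
                findings.append(f"line {lineno}: {message}")
    return findings

snippet = """
int read_sensor(void) {
    char *buf = malloc(64);
    goto done;
done:
    return 0;
}
"""

for finding in first_pass_review(snippet):
    print(finding)
```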

          Once you are satisfied with the product, you want to test it. Your Gen AI assistant can quickly generate test cases – sets of inputs to determine whether a software application behaves as expected – faster and more accurately than you could manually. This is already a reality in critical industries: Fabio Veronese, Head of ICT Industrial Delivery at Enel Grids, noted in our report that his company uses generative AI for user acceptance tests.
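
          To illustrate what such generated test cases can look like, here is the kind of boundary-value table an assistant might draft for a simple, hypothetical range-check function. Every generated case would still need human review before being trusted.

```python
# Hypothetical function under test: validates a wheel-speed reading in km/h.
def is_valid_speed(speed_kmh: float) -> bool:
    return 0.0 <= speed_kmh <= 400.0

# The kind of test-case table a Gen AI assistant might draft: boundaries plus invalid inputs.
GENERATED_CASES = [
    (0.0, True),     # lower boundary
    (400.0, True),   # upper boundary
    (-0.1, False),   # just below range
    (400.1, False),  # just above range
    (123.4, True),   # nominal value
]

for value, expected in GENERATED_CASES:
    assert is_valid_speed(value) == expected, f"unexpected result for {value}"
print(f"{len(GENERATED_CASES)} generated test cases passed")
```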

          And, when you are confident your software product is robust, Gen AI can help generate the ‘proofs’ to show it works and will function under all specified conditions. For example, in the rail industry, trains rely on automated systems to process signals, ensuring trains stop, go, or slow down at the right times. Gen AI can look at data readouts and create ‘proofs’ that show each step of the signal processing is done correctly and on time under various conditions – and generate the associated documents.

          In fact, as you progress through these processes, Gen AI can expedite the creation and completion of required documentation, by populating predefined templates and compliance matrices with test logs. This ensures consistency and accuracy in reporting and saves engineering time.
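
          A simple way to picture ‘populating predefined templates with test logs’ is a report template whose fields are filled from structured test results, as in this small sketch; the template layout and field names are assumptions made for illustration.

```python
from string import Template

# Hypothetical report template; fields are filled from a structured test log.
REPORT_TEMPLATE = Template(
    "Verification report - $component\n"
    "Test run: $run_id\n"
    "Cases executed: $executed, passed: $passed, failed: $failed\n"
    "Result: $verdict\n"
)

test_log = {"component": "Brake torque arbitration", "run_id": "2024-04-18-07",
            "executed": 212, "passed": 212, "failed": 0}

report = REPORT_TEMPLATE.substitute(
    **test_log, verdict="PASS" if test_log["failed"] == 0 else "FAIL")
print(report)
```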

          Automating processes

          Gen AI can also help you automate many laborious processes that can be so mundane that human brains struggle to stay focused, thus creating the risk of error.

          Take the example of the process used in the space industry for addressing software defects. When a defect is discovered, developers must create a report documenting this defect, develop a test to reproduce the defect, correct the defect in a sandbox, put the updated software through a verification process, reimplement the corrected code back into the main project, and finally test it within the product.

          A five-minute code fix may take hours of meetings and dozens of emails. This is exactly the sort of task Gen AI is well suited to support. Any organization writing safety-critical software will have hundreds of such tedious documentation and procedural compliance processes. We believe that, in some cases, as much as 80% of the time spent on such processes could be saved by deploying Gen AI for routine work.

          Don’t just take our word for it. Speaking to us for our report, Akram Sheriff, Senior Software Engineering Leader at Cisco Systems notes that, “One of the biggest drivers of generative AI adoption is innovation. Not just on the product side but also on the process side. While senior professionals leverage generative AI combined with their domain expertise for product innovation, junior professionals see value in AI process and tool innovation, and in automation and productivity optimization.”

          Managing the risks to get the rewards

          Despite all these opportunities, we must acknowledge that this is a new and fast-moving field. There are risks, including the correctness of outputs (Gen AI can hallucinate plausible but wrong answers), inherited risk from underlying models, and bias in training data. But there are also risks of not acting out of fear, and missing out on huge rewards while your competitors speed ahead.

          Gen AI needs safeguards, but also a flexible architecture that allows companies to quickly adopt, test, and use new Gen AI technologies, and evolve their uses as needs demand.

          In our report, we propose a risk model (see image 1). It states that any use of Gen AI requires (a) a proper assessment of the risks and (b) that – where mistakes could have serious consequences – you have the expertise to assess whether the outputs are correct.

          Image 1: A risk assessment framework to kickstart generative AI implementation in software engineering

          For now, safety-critical code creation falls into ‘Not safe to use’, because the consequence of error is high, and the expertise needed to assess the code would probably be more of a burden than starting from scratch. However, testing would fall into ‘Use with caution’, because it provides valuable insights about software behavior that experts can assess.
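
          Read as a simple decision rule, the two axes of that model give something like the sketch below. The ‘Not safe to use’ and ‘Use with caution’ labels come from the report; the label for the remaining low-risk category is our own assumption.

```python
def gen_ai_risk_category(consequence_of_error: str, can_assess_outputs: bool) -> str:
    """Toy reading of the two-axis risk model: consequence_of_error is "low" or "high";
    can_assess_outputs asks whether experts can realistically check the Gen AI output."""
    if consequence_of_error == "high" and not can_assess_outputs:
        return "Not safe to use"
    if consequence_of_error == "high":
        return "Use with caution"
    return "Safe to use"  # assumed label for the low-risk quadrant

# e.g. safety-critical code generation vs. test-case generation reviewed by experts
print(gen_ai_risk_category("high", can_assess_outputs=False))
print(gen_ai_risk_category("high", can_assess_outputs=True))
```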

          Finally, a key part of managing risks is comprehensive user training to understand how Gen AI works and its strengths and weaknesses. In our research, 51% of senior executives said that leveraging Gen AI in software engineering will require significant investment to upskill the software workforce. Yet only 39% of organizations have a generative AI upskilling program for software engineering.

          There is a real risk of becoming overly reliant on, or trusting of, Gen AI. We must ensure that humans retain their ability to think critically about the fundamental nature of software and safety. Software engineers must be well-informed and remain actively engaged in verification and decision-making processes, so they can spot problems and be ready to step in if Gen AI reaches its limits.

          In conclusion

          While Gen AI won’t be building safety-critical software on its own anytime soon, it has the potential to enhance development, documentation, and quality assurance right across the software development lifecycle. In doing so, it can not only save time and money, and speed time to market, but it can even improve safety.

          Companies like Capgemini can help shape achievable, phased roadmaps for Gen AI adoption. We guide organizations to integrate AI carefully, following sensible adoption and risk management frameworks and deploying appropriate training, ensuring both its potential and limitations are carefully navigated.

          Download our Capgemini Research Institute report Turbocharging software with Gen AI to learn more.


          Meet the author

          Vivien Leger

          Head of Embedded Software Engineering
          With over 14 years of experience, Vivien has led teams in building a culture focused on technical excellence and customer satisfaction. He has successfully guided software organizations through their transformation journeys, aligning technology with business goals and designing strategic roadmaps that accelerate growth and profitability.

            The Power of Zero from Capgemini’s ADMnext

            Capgemini
            10 April 2020

            Driving a lean, efficient, and optimized core is your pathway to enabling infinite possibilities.

            Reap the full benefits of your transformation efforts and become a truly digital enterprise by bolstering your core IT foundation first

            With large-scale market disruptions and the advent of newer-age digital technologies, constant evolution is essential. And, with digital services giving more power to consumers, applications have become the default source of business value – to the extent that application loyalty is now synonymous with brand loyalty. So, understandably, leading CIOs are increasingly becoming more “apps-focused” by default.

            But, this emphasis on applications and future digital transformation can blur your focus on your core – your here-and-now operations – and this can hinder your ability to lay a foundation for a digitally empowered enterprise.

            The key to achieving this digitally empowered enterprise and your digital transformation visions lies in getting the basics right first. This means ensuring your legacy estate is made rock solid so that it acts as your digital transformation launchpad. And in order to do this, a clear and solid operational framework is crucial.

            Introducing the Power of Zero: An actionable framework for achieving business excellence through hyper-efficient core IT

            The Power of Zero is an actionable framework for solidifying your legacy IT estate as a launchpad for your digital transformation, so that you attain all the speed and agility needed for a truly digital enterprise.

            In putting your current state of IT and applications in order, the Power of Zero enables you to achieve maximum impact from your core applications. This means a future state with zero defects, zero touch, zero applications debt, and zero business interruption – all leading to zero innovation latency. The Power of Zero is driven by speed and agility and delivers business value throughout your entire applications realm by helping you to:

            1. Drive down to zero defects and tickets through preventive, predictive, and perfective maintenance
            2. Foster zero touch through an AI-infused intelligent platform
            3. Get down to zero applications debt through effective portfolio management
            4. Enable zero business interruption through insights, competitiveness, and efficiency
            5. Create a state of zero innovation latency through disruptive services

            ADMnext and the Power of Zero: Business-focused ADM Services for accelerated growth

            In applying the Power of Zero, Capgemini’s ADMnext moves applications development and maintenance (ADM) from an insurance-based function to an investment-focused business value driver. Essentially, ADMnext equips you with the ability to rapidly respond to change – or rather to embody the change, the innovation, and the outcomes you want for your business.

            In building a lean, efficient, and resilient core with zero human touch, ADMnext enables clients to drive operational agility and helps them restore services quickly in times of crisis.

            At Capgemini, we fully believe in this simple, yet powerful vision, and we are committed to bringing everything ADMnext – and the Power of Zero – can offer your applications.

            Download the whitepaper to learn more about what the Power of Zero and ADMnext can do for your business.

            Reinventing Life Sciences & Healthcare is about digital meeting physical
            Bridging digital innovation and physical engineering in a regulated world

            Nirlipta Panda
            July 21, 2025

            Life Sciences & Healthcare organizations are under unprecedented pressure. Against a backdrop of a growing and ageing population, and with care therapies, drugs and diagnostics becoming more complex and expensive, these industries must deliver personalized, cost-effective, and compliant solutions faster than ever.

            At the same time, they face global disruption from geopolitical shifts, sustainability mandates, and increasing competition – both from digital-native new entrants and generic alternatives purchased by increasingly savvy consumers.

            The sector has a dual challenge: increasing patient value, and bringing down costs – all within a heavily regulated and safety-conscious environment.

            At Capgemini, we believe there are three critical ways for life sciences and healthcare organizations to meet these challenges: infusing operations with digital technology, upgrading legacy engineering systems, and building globally agile and resilient operations.

            All three revolve around a central idea: transforming the digital and physical worlds together.

            1. Infuse digital into the physical world

            Life sciences companies have long dealt with sophisticated physical systems – complex manufacturing equipment, labs, and regulated medical devices. But much of this estate is legacy-driven: siloed, often paper-based systems create costs and hurdles, slowing the go-to-market of critical drugs, devices and therapies.

            Using digital technologies (AI, software, IoT, digital twins, etc.) at scale can deliver improved ways of operating, faster time to market, and lower costs, whilst also improving sustainability through reduced waste and energy use.

            Take connected factories and digital twins. These allow for real-time monitoring, simulation, and optimization of physical processes. Pharmaceutical companies can test and refine manufacturing changes virtually before deploying them in the real world, ensuring compliance and accelerating time to market.

            Projects abound across life sciences which illustrate such transformation. An example is CALIPSO, a €20 million+ bioproduction initiative involving Sanofi, Capgemini, and others, which used micro-sensors, AI, and digital twins to enhance predictive control of bioproduction processes.

            Digital also enables people to work together more efficiently. Data platforms and cloud – increasingly with built-in supportive AI agents – provide spaces for scientists to collaborate across drug development silos, creating a digital feedback loop that can significantly reduce R&D cycles.

            2. Upgrade core engineering

            Many life sciences companies are constrained by aging infrastructure and fragmented legacy systems. These block innovation and cost-efficient operations.

            While individual upgrades are fine, the key to unlocking transformation at scale is to streamline and standardize these systems, allowing new technologies to be easily integrated, whether digital or physical.

            Predictive maintenance on the shop floor is one such example. Capgemini delivered a predictive maintenance solution for a large global biopharma organization which reduced risk of human error by 80%, improving delivery and yield, whilst boosting asset and capacity utilization by 20%. But it was only possible because the correct data foundation had first been put in place to modernize legacy systems.

            Standardization can also underpin more radical changes. Some legacy systems are so outdated, and local skills so hard to find, that companies decide to lift the entire function into a more optimized and smarter ecosystem.

            This approach is often seen with activities like product sustenance. For global medical OEM leaders, it is a challenge to maintain and update their large portfolios of Class I, II or III regulated medical products, including lines that have been discontinued but still need maintaining. Our experience shows that a standardized engineering platform across the portfolio of products yields large-scale optimization efficiencies in managing and maintaining these regulated products. This standardized approach also allows such activities to be delivered from anywhere in the world – enabling easy outsourcing of cost centers.

            Here again, the magic happens when digital solutions are applied to physical engineering – but this time in a whole new context, on the other side of the world.

            3. Build agile, resilient operations around the world

Life sciences companies are grappling with the fact that legacy, monolithic operations are less and less viable in this globalized yet geopolitically fragile world.

Modern organizations need agility – the ability to rapidly deploy products and services across regions, adapt to changing local regulations, and scale engineering operations quickly. This is vital for quickly getting products to the widest possible market, especially in uncertain environments.

One way to achieve agility is through smaller, distributed manufacturing sites and engineering hubs, strategically placed around the globe where they can be close to customers, suppliers, or talent pools, or where transport emissions are minimized. Such operations require an ecosystem of partners, and digital solutions to manage them – similar to those developed for global supply chains.

But agility isn’t just geographic; it’s operational. This is not just about physical manufacturing facilities, but centers of excellence which remove barriers to innovation and production efficiency. Regulatory compliance, commissioning, qualification, verification, and other process-heavy activities are classic candidates. Employing dedicated specialist teams to deliver these functions not only saves money but also allows organizations to be faster and more agile.

Capgemini has pioneered a concept called Engineering Factories. These ‘Factories’ redefine traditional outsourcing. Each is designed around a specific engineering domain and business goal, such as delivering products at a specific cost or weight; managing specific operations such as the supply chain, MES, or sustainable product design; or delivering capabilities like compliance, validation, or quality assurance. Each factory combines a team of specialists and engineers with operational and digital expertise. They are transversal, working across industries to bring together the best of each.

Consider a large-scale Manufacturing Execution System (MES) deployment. Normally, rolling out MES solutions across plants is time-consuming, resource-heavy, and associated with high costs and high stakes. In our experience, a focused MES Factory helps clients standardize processes and achieve faster results. For example, one global pharmaceutical company implementing a global MES solution saw a 75% reduction in quality review time and an 80% reduction in deviations.

A similar example is our Commissioning, Qualification and Validation Factory, based out of centers in Portugal and Morocco, which serves highly regulated manufacturing sites and addresses compliance and complex global regulations with a standardized, efficient approach. Another is the Intelligent Testing Factory, based out of India, which provides full lifecycle product management and intelligent testing for global medical device clients, including a human sample testing lab, ensuring global readiness and regulatory alignment.

            Such a factory approach creates a centralized hub that can deploy new capabilities in a standardized, agile way, which is often accessed via a front office on the client site. The result is a more resilient, adaptive, cost-effective engineering organization.

            Built for both worlds

Of course, these areas all overlap. A successful life sciences and healthcare organization could digitize its entire value chain to optimize digital and physical processes. This could provide a foundation to quickly upgrade operations or move business functions to centers of excellence that take advantage of high-tech setups, cost reduction and global talent pools.

As the world changes around us, what sets successful organizations apart is their ability to operate fluently in both the digital and physical worlds. They will embed intelligence into every part of the product and production lifecycle, whilst shifting from isolated physical systems to joined-up digital-physical ecosystems. They will quickly take advantage of cost savings and innovation opportunities, whether by optimizing operations at home, or delivering them elsewhere to take advantage of the benefits of smarter factory setups or favorable business and talent environments around the world.

            All of this requires agile physical operations with a digital underpinning.

            Meet the author

            Nirlipta Panda

            Global Life Sciences, Capgemini Engineering

              Elevate your cloud strategy – Cloud economics & optimization

Capgemini
              28 Jan 2022

Leveraging Public Cloud platforms is no longer hype – adopting Public Cloud is a must to compete in today’s digital market. With its flexibility and speed of delivery, Public Cloud helps companies lower time-to-market and explore innovative new technologies. This explains why many companies have already adopted some form of Public Cloud.

However, a lack of cloud governance can drive up your IT spend drastically – do you still have control over your Public Cloud spend? Read the interview with Rijk van den Bosch, Cloud CoE Lead, Capgemini, about the real cost challenges of Cloud.

              Taming the spiraling costs of Cloud

              In the market, Capgemini sees that one of the biggest challenges is Public Cloud Governance and how to keep control over Security, Costs, Risks and Architecture compliance. Cost is one of the biggest aspects of this challenge and includes:

• Controlling (over-)spending: By the time growing Public Cloud usage reaches the office of the CFO, teams are generally already overspending.
• Transparency in costs: With different teams and departments using Public Cloud features, consumption increases as the number of services grows, leading to a lack of transparency in costs.
• Utilization of resources: Using the flexibility and scalability of Public Cloud is fundamental to keeping costs low. Resources that are unused or underutilized can be shut down or scaled down to reduce consumption costs (a minimal sketch of this idea follows after this list).
• IaaS implementations that can be modernized: Modernizing your application landscape to microservices, containers or even serverless applications gives you the opportunity to use the real flexibility and scalability of the Public Cloud.
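As a small illustration of the utilization point above, the sketch below (assuming an AWS estate, the boto3 library, and valid credentials – none of which are part of the assessment itself) lists running EC2 instances whose average CPU stayed below an arbitrary 5% threshold for 14 days, making them candidates for shutdown or downsizing.

```python
# Hedged sketch: find EC2 instances whose average CPU never exceeded 5% over
# the last 14 days. Thresholds and the review window are illustrative only.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        datapoints = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=end,
            Period=86400,            # one datapoint per day
            Statistics=["Average"],
        )["Datapoints"]
        if datapoints and max(dp["Average"] for dp in datapoints) < 5.0:
            # Candidate for shutdown or downsizing; review before acting.
            print(f"{instance_id}: average CPU below 5% for the last 14 days")
```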

              See-Decide-Act: Capgemini’s Cloud Economics and Optimization

Our Cloud Economics and Optimisation assessment is a consultation that aims to detect, identify and implement the cost optimisations needed to minimize cloud operating costs, through a structured approach. This service also places strong emphasis on creating a culture of cloud cost accountability and transparency. The assessment combines our expertise, best practices and a toolset to:

              • Visualize Cloud Consumption
              • Create an inventory of your Public Cloud estate
• Improve Cloud cost efficiency

              Identify quick wins in cost reduction for your IT landscape

Capgemini offers this as a 7-8-week assessment that involves a high-level assessment of your Public Cloud environment, analysis of your Public Cloud consumption, and the delivery of an advice report to reduce and control your Public Cloud spend. The offer is cloud-agnostic, meaning it can be based on any cloud service provider, such as Microsoft Azure, Amazon Web Services or Google Cloud Platform.

With minimal effort and cost, the assessment will give you the following valuable insights:

              • Quick Wins to reduce your Public Cloud spend in just a few steps.
              • Visibility and transparency in your actual Public Cloud consumption and the forecasted consumption.
              • Advice on Application Modernization and the usage of Platform as a Service features like containers, webservices and/or serverless functions.
              • Help create a culture of cloud cost accountability and transparency.

              Data as the new sunshine

              Dr. Rainer Mehl
              22 Jun 2022

              Unlimited potential for new data-driven business models in the automotive industry.

Most of us know the phrase “data is the new oil”. Coined by British entrepreneur Clive Humby back in 2006, the phrase reflected data’s value in the digital economy and the realization that, like oil, data increases significantly in value once it has been processed and used for a specific purpose.

              As the automotive industry progresses toward autonomous driving, softwarization, and new mobility-focused business models, the importance of data – what is captured and how it is used – is growing. But, in the context of the industry’s collective shift toward electrification and sustainable mobility, the idea of data as a “new oil” simply doesn’t fit.

Instead, I like to think of data as the new sunshine for automotive companies – not only because of its connotations with renewable energy, but also because sunshine is virtually unlimited, should be available to everybody, and has the potential to cast light and imbue positive energy across organizations and the industry at large.

              The data opportunity for automotive companies

              Data is a multi-billion-dollar opportunity. Recent Capgemini research shows that:

              • In Europe alone, there are currently 57 million connected cars. By 2030, there will be more than 230 million.
• Today, the revenue generated from data and data-related services in the automotive industry is around €0.35 per vehicle, per month. This will rise roughly ten-fold to €3-4 by 2030.
• An autonomous vehicle could be generating close to 100 TB of data a day by 2025 (for reference, a Tesla Model S using Tesla’s self-declared semi-autonomous functions generates about 4 GB per day). That’s a lot of data, and it raises the question of how much of it is valuable and relevant.

              Source: Capgemini Invent | The Vehicle Data Big Bang

              Where is all the data coming from?

There are five key sources of data for automotive companies to consider: telematics (GPS navigation, automatic emergency call, advanced driving assistance systems, and car sensor data in general), R&D and production, environmental data (like VW’s Car2X or Michelin’s DDI), customer data, and supply chain.

Every moving part and every action taken by a car or driver can generate data, from driver behavior as an indicator of mood and stress, through to consumption, navigation, and wear and tear of components. If something is happening in your vehicle, there is a way for it to be captured as data. Then there is the connected smartphone and all the information it carries and communicates, and individual infotainment interactions – they all provide data signals that can be used to understand what drivers and passengers expect from their mobility experience.

              If it seems like a lot already, it’s going to get a lot bigger with 5G, autonomous driving, and enhanced in-passenger experiences. The same is happening in production, across supply chains, and with customers … every action and interaction is a data signal that can be captured and used to optimize or transform processes or to create new sources of revenue.

              How can all this data be used?

              These five sources of data – telematics, R&D & production, environment, customer, and supply chain – can be used for a variety of different use cases, functions, features, and new business models.

              In my view, there are currently nine main types of business use cases for the growing volumes of data. They are around: Products and Services, Sales, Fleet and Maintenance, Research and Development, On-demand Functions, In-car Experience, Infrastructure Optimization, Safety and Security, and New Insurance Models.

              Here are just a few examples of how automotive companies can transform data into enhanced business performance:

              • Selling data to third parties to generate revenue.
              • Using data to inform new products, services, and business models (like in-car entertainment as a subscription service, new engine modes ‘on demand’ or insurance policies based on driver habits).
              • Combining data from across all connections on the supply chain to create a holistic view, and then plan, simulate, and anticipate multiple scenarios to prevent shortfalls as early as possible.
• Improving reporting capabilities around environmental impact (e.g. CO2), and then using the insights to identify opportunities for reduction and ongoing performance measurement. As the importance of ESG credentials and performance against publicly stated sustainability goals (e.g. net zero) grows, this area has significant reputational and financial implications for businesses. Data is fundamental to measuring and improving performance.

              To sell or not to sell?

Automotive companies have a choice between selling raw data to third parties or retaining it for internal purposes and proprietary monetization opportunities. For example, Honda sells anonymized camera and sensor data to third parties so they can derive insight about vehicle usage and entertainment preferences. Many automotive companies are selling telematics data for insurance and fleet & maintenance purposes (e.g. predictive maintenance) to companies like Otonomo, Wejo or Caruso. Selling data can provide a quick and lucrative source of revenue, but OEMs need to first understand whether they could generate more differentiating value themselves from the same data, and whether they have the skills, capacity, and inclination to do so. Decisions about whether and when to sell data to third parties should be taken within the context of a holistic data strategy.

              Data is a different ballgame for automotive companies

              In a way, automotive OEMs are like farmers who have just discovered that their businesses are sitting on top of huge reserves of a valuable natural resource. The farmer knows he is sitting on something valuable but lacks the expertise to process it and extract maximum value. In this situation, there are three options:

              • Do nothing.
              • Sell the data to third parties.
              • Seek to transform data into valuable insight that is used to develop new services, business models, and revenue streams.

              The first option is not really an option at all. The second option, to sell raw data to third parties, reduces automotive OEMs to the role of pure manufacturer and means waving ‘goodbye’ to the customer relationship. It also means being left behind while data-focused new entrants like Nio and Tesla increase their market shares, and traditional tech giants like Google and Amazon – equipped with their industry-leading data and software capabilities – grow their presence in the rapidly evolving mobility competitive landscape. This leaves us with the third option – to try and transform data into valuable insight that can be used to inform business and product strategy.

              Automotive OEMs have a unique advantage – customer trust

              Despite being relatively inexperienced when it comes to dealing with such huge amounts of data, automotive OEMs have a key advantage over newer competition in this space – trust.

              According to Capgemini research, customers are happier to share their data with automotive OEMs than they are with insurance companies, public authorities, and platform providers. This represents a real opportunity to win customer loyalty and maximize the value of relationships by designing and building mobility experiences informed by data and by promoting services known to be relevant.

              As we move towards a future of autonomous mobility, there is potential for vehicles to become third living spaces, where we can work, rest or be entertained while getting from A to B (as articulated by Audi in the promotion of its urbansphere concept). This vision is based on connectivity and a suite of data-based services being provided … all of which represent intriguing revenue-generating opportunities. If indeed vehicles do become third living spaces for us, then building trust between provider and customer and maximizing business value from that trust will be key to increasing market share and success.

              4 tips on how to maximize the data monetization opportunity

              • Find out which data is worth collecting and contextualizing. It’s easy to get bogged down trying to capture, store, and process every byte of data in the belief that it’s all useful. It’s not about how much data you have – it’s how you use it. Identify the opportunities you want to pursue and shape your data management strategy accordingly.
              • Establish a holistic data strategy and a cross-function data office that identifies and reviews monetization opportunities, ensures data is appropriately accessible across the organization, and evaluates when it makes sense to sell data to third parties or share with partners in order to achieve differentiating new value propositions.
              • Build on customer trust to gain competitive advantage. Today, OEMs are in the driving seat, with a relatively high level of trust from customers. This represents a fantastic opportunity to build loyalty and deeper, broader relationships with customers.  
• Enable smart and quick decisions and collaboration. Data, for all its value, can also be burdensome. Collecting and storing data can eat up valuable resources – human, IT capacity, and financial. Through data democratization, partnerships, and dedicated cross-function data capabilities, better results can be achieved faster.

              No ‘one size fits all’. Consider regulatory and cultural differences toward data use

What might be a “no go” culturally and legally in the EU can be a great opportunity for data monetization in the Japanese, American or South Korean markets. Monetizing data requires the ability to harness global scale and expertise, while being nimble enough to adapt to individual markets and their specific regulations and preferences. Customer trust is key to success in the mobility industry of the future. Responsible and ethical use of customer data is vital to building and maintaining that trust.

              The race is on. Let the sunshine in.

              The volume of data being generated by vehicles is growing every day and organizations from across (and beyond) the automotive landscape are identifying and acting on data monetization opportunities. What’s your stance? The time to act is now. Are you sitting, selling, or leading the pack by harnessing data sunshine as part of your enterprise-wide strategy?

              Learn more. Check out the point of view from Capgemini Invent – The Vehicle Data Big Bang.

              The key ingredients of the car user experience

              Mike Welch
              14 Dec 2023
              capgemini-engineering

              The humble car – where we are reported to spend over four years of our life – is being reinvented around the user

              The car of the future will be very different to the one of today. As vehicles become digitized, high-end cars may change from a luxury driving experience to a luxury chauffeur experience, with music, mood lighting, or wellbeing features. A car interior could adjust its layout and color at the touch of a digital dashboard, to become a conference room, a meditation temple, or a bedroom. Other uses will arise that we can’t yet imagine.

              But today’s market is increasingly competitive – with a long-term downward trend in vehicle ownership. Premium carmakers must stay competitive, and the in-vehicle user experience will be a key element of this brand differentiation. Successfully delivering this new vehicle user experience requires significant changes, both to the vehicle and the ecosystems around it, as well as the skills, culture and business models of carmakers.

In the first part of this two-part blog series, we will discuss the key ingredients of this new Mobility Experience.

              What will success take?

              It will ultimately come down to three areas:

              1. The in-vehicle user experience
              2. The vehicle communications system
              3. The mobility services ecosystem

              These are outlined below.

              Requirement 1: In-Vehicle User Experience

The in-vehicle experience depends on elements of the vehicle that enable new and richer experiences for the driver and passengers. Such elements include head-up displays and windscreen AR overlays, which can present driver insights.

It includes infotainment – naturally screens and speakers, but potentially also smart lighting, connected temperature and movement sensors, and seat adjustments (e.g. actuators and heating pads). The future mobility experience will also include functionality we haven’t even thought of yet – this is really just the beginning.

              Getting this right requires a range of hardware and software that must be designed, deployed and regularly updated, in order for new digital experiences to be easily (and securely) uploaded to the car’s computer.

              Requirement 2: Vehicle communications

              The key driver of the digital user experience is connectivity. This allows vehicles to add new digital services, make updates, and share data to generate real-time insights, as well as talk to smart homes, and smart city infrastructure.

Good connectivity means implementing Wi-Fi, cellular, Bluetooth and other wireless communication technologies, which in the future will likely be extended to include Low Earth Orbit satellite communications. It will also mean integrating the communications protocols and software that allow these to operate reliably, and to communicate via the car’s telematics control unit (TCU). Cellular is probably the most pressing challenge. Most vehicles can already connect over Wi-Fi, but that relies on a local signal. Cellular – especially using new 5G networks – would provide ‘always on’ connectivity, enabling real-time data processing, over-the-air updates and features on demand.
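To illustrate one thing that ‘always on’ cellular connectivity enables, here is a hedged sketch of a vehicle polling an over-the-air update manifest. The endpoint, JSON fields and version scheme are hypothetical; a production OTA stack would add signing, delta updates, rollback and safety interlocks.

```python
# Hypothetical OTA update check: poll a manifest over the cellular link and
# compare against the installed software version. Endpoint and fields are
# assumptions for illustration only.
import requests

MANIFEST_URL = "https://updates.example-oem.com/model-x/manifest.json"  # hypothetical
INSTALLED_VERSION = "2.4.1"

def parse(version: str) -> tuple:
    """Turn '2.4.1' into (2, 4, 1) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def check_for_update():
    manifest = requests.get(MANIFEST_URL, timeout=10).json()
    if parse(manifest["version"]) > parse(INSTALLED_VERSION):
        return manifest      # e.g. {"version": "2.5.0", "url": "...", "sha256": "..."}
    return None

update = check_for_update()
if update:
    print("New software available:", update["version"])
```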

              Requirement 3: Mobility services

              This last bit is about launching software services, or apps, which interact with the in-vehicle technologies to offer new user experiences, such as restaurant recommendations, or information about points of interest the vehicle is passing.

              Work must be done to design the in-vehicle systems to accommodate such apps, and set up processes to safely download and integrate them. Vehicle makers may have teams working on their own apps, but even these will need processes for certification, and dedicated channels in app stores. This will be doubly important if OEMs wish to welcome third-party applications.

              Want to find out more?

              In the second part of this blog, you’ll learn what it will take to redesign vehicles for smart mobility: Part 2: How to set yourself up to redesign the car around the user.

              Author

              Mike Welch

              Australia Head of Telecom and Entertainment, Capgemini Invent
              Michael is responsible for defining and executing the technology strategy and roadmap across the ER&D Automotive Portfolio. He is also currently the Offer owner for Mobility Experience which includes Intelligent Cockpit, Vehicle Communication and Mobility Services.