
Quantum computing: The hype is real—how to get going?

Christian Knopf
27 Apr 2023

We are witnessing remarkable advancements in quantum computing, in hardware as well as in theory and applications.

Now is the age of exploration: How, for example, will quantum machine learning differ from its classical counterpart, and will it be beneficial or harmful for cyber security? Together with Fraunhofer and the German Federal Office for Information Security (BSI), we explored that unsettled question and found something sensible to do today. There are two effective ways in which organizations can start preparing for the quantum revolution.

The progress in quantum computing is accelerating

The first quantum computers (2 and 3 qubits) were introduced 25 years ago, and the first commercially available annealing systems are now 10 years old. During the last 5 years, we have seen bigger steps forward, for example systems with more than twenty qubits. Recent developments include IBM's 433-qubit Osprey chip, Google's first results in quantum error correction, and important results in interconnecting quantum chips announced by MIT.

From hype to realistic expectations

Where some see steady progress and concrete steps forward, others remain skeptical and point to missing results or unkept promises. The most prominent of these lies in the factorization of large numbers into primes: there is still a complete lack of tangible results in breaking the RSA cryptosystem.

However, development in quantum computing has already passed various important milestones. Dismissing it as mere hype that will eventually pass is becoming increasingly difficult. In all likelihood, this discussion can soon be laid to rest, or at least refocused on very specific quantum computing frontiers.

The domain of machine learning has a natural symbiosis with quantum computing. Especially from a theoretical perspective, research in this field is considered fairly advanced. Various research directions and study routes have been taken, and a multitude of results are available. While much research is done through the simulation of quantum computers, there are also various results of experiments run on actual, non-simulated quantum devices.

As both the interest in and the potential of quantum machine learning are remarkably high, Capgemini and the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS have delved deeply into this topic. At the request of the German Federal Office for Information Security (BSI), we went as far as analyzing the potential use for, as well as against, cyber security. One of the major results of this collaboration is the report “Quantum Machine Learning in the Context of IT Security”, published by the BSI. Current developments indicate that there is trust in quantum machine learning as a research direction and in its (perceived) future potential.

Laggards increasingly lack opportunities

Better and more efficient IT technologies and products keep becoming available, yet adopting them is not always reasonable and is often difficult to mirror in an organization. Nevertheless, innovation means that a certain “technology inflation” constantly devalues existing solutions. An important responsibility of every IT department is therefore to keep up with this inflation by implementing upgrades and deploying new technologies.

Let us consider a company that still delays the adoption of cloud computing. While this may have been reasonable for some in the early days, the technology has matured. Over time, companies that shied away from adoption missed out on various cloud computing benefits while others took the chance to gain a competitive advantage. Moreover, the longer the adoption was delayed, or the slower it was conducted, the further the company allowed itself to fall behind.

Time to jump on the quantum computing bandwagon?

Certainly, quantum technology is still too new, too unstable, and too limited today to adopt in a productive environment right away. In that sense, there is no pressure today to design and implement plans for incorporating quantum computing into day-to-day business.

However, is that the whole story? Let us consider two important pre-implementation aspects. The first is to ensure everyone's attention to the topic: for an eventual adoption, a widespread appreciation of what might be gained is crucial to get people on board. Without it, there is a high risk of failing—after all, every new technology comes with various challenges and demands some dedication. But developing the motivation to adopt something new and tackle the challenges takes time. So, it's best to start early with building awareness and a basic understanding of the benefits throughout all levels and (IT) departments.

The second aspect is even more difficult to achieve: experience. This translates to know-how, participation, and practice within the organization, so that it is prepared to adopt the technology once it is ready for productive deployment. In the case of quantum computing, gaining experience is harder than with other recent innovations: in contrast, for example, to cloud computing—which constitutes a different way of doing the same thing, and thus allows companies to get used to it gradually—quantum technologies represent a fundamentally new way of computation and a completely new approach to solving problems and answering questions.

The key to the coming quantum revolution is a quantum of agility

Bearing in mind the scale of both pre-implementation aspects, and the uncertainty of when exactly quantum computing will deliver real-world advantage, organizations need to start getting ready now. On a technical level, in the realm of security, the solution to the threat of quantum cryptanalysis is the deployment of post-quantum cryptography. On an organizational level, however, the solution is crypto agility: having done the necessary homework to be able to adapt quickly to changes, whenever they come. Applying the same concept, quantum agility means having the means to adapt quickly to the fundamental transformations that will come with quantum computing.
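
As a concrete illustration, crypto agility is often realized as an abstraction layer: application code depends on a generic signing or encryption interface, and the concrete algorithm is a configuration value. The following is a minimal sketch in Python; the names (Signer, SIGNERS) are illustrative, not a standard API.

    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class Signer:
        # Each algorithm plugs in its own key generation, signing,
        # and verification routines behind one interface.
        keygen: Callable[[], object]
        sign: Callable[[object, bytes], bytes]
        verify: Callable[[object, bytes, bytes], bool]

    SIGNERS: Dict[str, Signer] = {}  # e.g. "rsa-pss", "ml-dsa-65"

    def register(name: str, signer: Signer) -> None:
        SIGNERS[name] = signer

    def get_signer(name: str) -> Signer:
        # The algorithm name lives in configuration, so migrating to a
        # post-quantum scheme becomes a config change plus a key
        # rollover, not an application rewrite.
        return SIGNERS[name]

The indirection itself is trivial; the homework is knowing every place where cryptography is used, so that the switch can actually be flipped everywhere.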

Thus, building awareness and changing minds now will have a considerable pay-off in the future. But how can organizations best initiate this shift in mindset towards quantum? Building awareness is a gradual process that can be promoted by a working group even with small investments. This core group might for example look out for possible use cases specific to the respective sector. Through various paths of internal communication, they can spread the information in the proper form and depth to all functions across the organization.

To build up knowledge and experience, the focus should not be on viable products that aim to replace existing solutions within the company. Instead, it is about playing with new possibilities and venturing down paths that may never yield tangible results, with the aim of discovering guardrails specific to each organization and examining fields where quantum computing might eventually lead to substantial competitive advantages.

Frontrunners are gaining experience in every sector

For example, some financial institutions are already exploring the use of quantum computing for portfolio optimization and risk analysis, which will enable them to make better financial predictions in the future. Within the pharma sector, similar efforts are made, gauging the potential of new ways of drug discovery.

In the space of quantum cyber security, together with the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, Capgemini has built a quantum demonstration: spam filtering performed on a quantum computer. While this might be the most overpriced—and under-engineered—spam filter ever, it is a functioning proof of concept.

Justifying investment in quantum computing requires long-term thinking

The gap between companies in raising organizational awareness and gaining experience with the new technology is gradually growing. Laggards run a considerable risk of experiencing the coming quantum computing revolution as a steamroller, flattening everyone who is unprepared.

The risks and challenges associated with quantum technology certainly include the cost of adoption, the availability of expertise and knowledgeable talent, as well as the high potential of unsuccessful research approaches. However, the cost of doing nothing would be the highest. So, it’s best to start now.

We don’t know when exactly the quantum revolution will take place, but it’s obvious that IBM, Google, and many more are betting on it—and in Capgemini’s Quantum Lab, we are exploring the future as well.

Christian Knopf

Senior Manager Cyber Security
Christian Knopf is a cyber defence advisor and security architect at Capgemini with a particular consulting focus on security strategy. Future innovations such as quantum algorithms are in his field of interest, as are the recent successes of deep neural networks and their implications for the security of the clients he works with.

    New Quantum-Safe Cryptographic Standards: Future-Proofing Financial Security in the Quantum Age

    Adrian Neal
    Sep 4, 2024

    The National Institute of Standards and Technology (NIST) has released three new cryptographic standards: ML-KEM, ML-DSA, and SLH-DSA, designed to protect sensitive data against the emerging threat of quantum computing. These standards are crucial for the financial services sector, which is particularly vulnerable to the risks posed by quantum technology.

    The quantum computing threat: A looming crisis

    Quantum computers are on the horizon, poised to revolutionize computing with their ability to solve complex problems exponentially faster than classical computers. This technological leap, however, comes with significant security implications. Current cryptographic methods, which safeguard everything from financial transactions to customer data, could be rendered obsolete by quantum algorithms capable of breaking these traditional encryption methods.

    One of the most pressing concerns is the “harvest-now, decrypt-later” threat. Malicious actors may already be intercepting and storing encrypted data today, with the intent to decrypt it later once quantum computers become sufficiently powerful. This means that sensitive financial data, thought to be secure now, could be exposed in the future when quantum technology matures.

    Future-proofing financial security

    NIST’s newly released standards are the result of extensive research and development aimed at countering the quantum threat. These standards are designed to withstand the capabilities of quantum computers, providing a robust defense against future decryption attempts.

    • ML-KEM (Module-Lattice-Based Key-Encapsulation Mechanism): A general-purpose key-establishment standard suitable for securing data in transit across various applications (see the sketch after this list).
    • ML-DSA (Module-Lattice-Based Digital Signature Algorithm): A general-purpose lattice-based digital signature standard.
    • SLH-DSA (Stateless Hash-Based Digital Signature Algorithm): A stateless hash-based digital signature scheme, intended primarily as a backup to ML-DSA.
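
    To make the key-encapsulation idea concrete, here is a minimal sketch of an ML-KEM exchange. It assumes the open-source liboqs-python bindings; the library choice and exact algorithm identifier are our assumptions, not part of the NIST standards themselves.

        import oqs  # liboqs-python bindings, assumed installed

        # The receiver generates a key pair and publishes the public key.
        with oqs.KeyEncapsulation("ML-KEM-768") as receiver:
            public_key = receiver.generate_keypair()

            # The sender encapsulates: it derives a shared secret plus a
            # ciphertext that only the receiver's secret key can open.
            with oqs.KeyEncapsulation("ML-KEM-768") as sender:
                ciphertext, secret_sender = sender.encap_secret(public_key)

            # The receiver decapsulates the same shared secret, which can
            # then key a symmetric cipher such as AES-256-GCM.
            secret_receiver = receiver.decap_secret(ciphertext)
            assert secret_sender == secret_receiver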

    The complexity of implementing quantum-safe standards

    While the importance of adopting these new standards cannot be overstated, the complexity of their implementation must also be acknowledged. Transitioning to quantum-safe algorithms involves more than simply updating software; it requires a deep understanding of cryptographic principles and of the potential pitfalls of implementation, and it almost certainly requires new crypto-agile architectures.

    One critical component in this transition is the use of cryptographically secure pseudorandom number generators (CSPRNGs). CSPRNGs are essential for generating keys that are unpredictable and, therefore, secure. With the introduction of these new algorithms, inadequate random number generation can very quickly lead to vulnerabilities, undermining the strength of even the most advanced cryptographic algorithms. Ensuring that CSPRNGs are correctly implemented is a foundational step in securing cryptographic systems against both current and future threats.
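
    In Python, for example, the distinction looks like this; a minimal sketch, with key sizes chosen only for illustration:

        import secrets
        import random  # Mersenne Twister: fine for statistics, unsafe for keys

        # Correct: key material drawn from the operating system's CSPRNG.
        aes_key = secrets.token_bytes(32)   # 256-bit symmetric key
        nonce = secrets.token_bytes(12)

        # Anti-pattern: a non-cryptographic PRNG's output can be predicted
        # after observing a modest amount of it, so keys derived this way
        # are recoverable no matter how strong the cipher is.
        weak_key = bytes(random.getrandbits(8) for _ in range(32))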

    Moreover, knowing how to implement these quantum-safe algorithms is crucial to avoid side-channel attacks. Side-channel attacks exploit physical or logical data leaks during the encryption process, such as timing information or power consumption, to gain unauthorized access to the encrypted data or cryptographic key material. Proper implementation of the new standards must account for these risks by employing best practices in algorithm deployment, hardware security, and system architecture.
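
    One widely applicable countermeasure is constant-time comparison of secret values. A minimal Python sketch of the principle, not a complete defense:

        import hmac

        def check_tag(expected: bytes, received: bytes) -> bool:
            # A plain == short-circuits at the first differing byte, so the
            # response time leaks how much of the value an attacker has
            # guessed correctly. hmac.compare_digest takes time independent
            # of where the inputs differ.
            return hmac.compare_digest(expected, received)

    Hardware-level leaks such as power consumption require further measures (masking, shielded hardware), which is why implementation review matters as much as algorithm choice.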

    Performance considerations and challenges in high-performance environments

    While the newly introduced quantum-safe cryptographic standards offer robust security against quantum threats, they come with certain trade-offs, particularly in terms of performance. Compared to traditional algorithms like RSA, these new standards generally require more computational resources, leading to a reduction in performance. This is especially true in high-performance environments where encryption, decryption and signing processes need to be conducted rapidly, such as in real-time financial transactions or high-frequency trading systems.

    To mitigate the performance impact, hardware acceleration may be necessary. Specialized hardware, such as field-programmable gate arrays (FPGAs) or dedicated cryptographic processors, can be employed to offload the computational burden and maintain the required performance levels. However, this introduces additional complexity, especially in virtualized environments where such hardware is not typically available or easily integrated.

    In virtualized or cloud-based infrastructures, where scalability and flexibility are paramount, the introduction of quantum-safe algorithms may necessitate significant re-architecting of systems. The reliance on hardware acceleration in such environments could undermine the inherent benefits of virtualization, such as resource pooling and dynamic provisioning. Consequently, organizations may need to rethink their infrastructure design to balance security with performance, potentially leading to increased costs and complexity.

    Additionally, the increased computational demands of ML-KEM, ML-DSA, and SLH-DSA could also impact latency-sensitive applications, necessitating further optimization and tuning of systems to ensure that service-level agreements (SLAs) are met without compromising security.

    The urgency of early implementation

    The timeline for the practical deployment of quantum computers remains uncertain, but the potential risks they pose are immediate. The financial services sector must act now to integrate these quantum-safe standards to protect against future threats. Failure to do so could lead to catastrophic breaches, resulting in severe financial and reputational damage.

    The “harvest-now, decrypt-later” threat underscores the urgency: data compromised today can be exploited in the future, making it imperative that financial institutions, particularly, transition to quantum-safe encryption as quickly as possible. The adoption of these standards is not merely a strategic advantage but a necessity to ensure long-term data security.

    Preparing for these challenges

    To address these challenges, organizations must not only adopt the new cryptographic standards but also invest in the necessary infrastructure upgrades and optimizations. This might include deploying hardware acceleration in data centers, re-architecting virtualized environments to better accommodate these new algorithms, and conducting thorough performance testing to identify and mitigate potential bottlenecks.

    Challenges of implementing quantum-safe standards in mainframe and legacy systems

    One of the most significant challenges facing financial institutions is the implementation of the new quantum-safe cryptographic standards in mainframes and other legacy systems. These systems, which are often critical to the core operations of financial services, were not designed with quantum-safe cryptography in mind and may lack the capacity for straightforward upgrades.

    Mainframes, in particular, are known for their reliability, scalability, and ability to process large volumes of transactions. However, they often run on proprietary or outdated software and hardware architectures that may not be easily compatible with the computational demands of the new cryptographic algorithms. Implementing ML-KEM, ML-DSA, and SLH-DSA in such environments could require extensive modifications to existing systems, which can be both costly and time-consuming, even when scarce mainframe resources are available.

    Furthermore, some legacy systems might not support the necessary hardware acceleration required to maintain performance when using these more resource-intensive algorithms. This could lead to a significant degradation in system performance, which is particularly problematic in environments where transaction speed and efficiency are paramount.

    In cases where direct upgrades are not feasible, financial institutions may need to consider alternative approaches, such as:

    • Middleware solutions: Deploying middleware that can interface between legacy systems and newer cryptographic standards, ensuring secure communication without requiring a complete overhaul of existing infrastructure.
    • System segmentation: Isolating and segmenting critical legacy systems that cannot be upgraded, while introducing quantum-safe encryption in other parts of the infrastructure to mitigate overall risk.
    • Gradual migration: Planning a phased migration to newer systems or platforms that are designed to support quantum-safe algorithms, thereby reducing reliance on legacy infrastructure over time.

    Preparing for legacy system challenges

    Addressing the challenge of implementing quantum-safe standards in mainframe and legacy environments requires a strategic approach. Financial institutions must carefully assess the capabilities of their existing infrastructure and explore viable paths for integration. This might involve working closely with vendors to develop custom solutions or investing in the modernization of critical systems to ensure they are future-proof.

    Adrian Neal

    Senior Director | Capgemini | Global Lead – Post-Quantum Cryptography
    Adrian Neal, a two-time NATO Defence Innovation Challenge winner, is a globally recognized cybersecurity expert and Senior Director at Capgemini. With an Oxford Master’s degree in Software Engineering, he has 40 years of experience across multiple sectors worldwide.

      Generative AI: A powerful tool, with security risks

      Matthew O’Connor
      23rd August 2023

      Generative AI is a powerful technology that can be used to create new content, improve customer service, automate tasks, and generate new ideas. However, generative AI also poses security risks across data security, model security, bias and fairness, explainability, monitoring and auditing, and privacy. Organizations can mitigate these risks by following best practices to ensure that generative AI is used in a safe and responsible manner.

      Generative AI is a rapidly emerging technology that has the potential to revolutionize many aspects of our lives. Generative AI can create new data, such as text, images, or audio, from scratch. This is in contrast to discriminative AI, which can only identify patterns in existing data.

      Generative AI is made possible by deep learning, a type of machine learning that allows computers to learn from large amounts of data. Deep learning has been used to train generative AI systems to create realistic-looking images, generate human-quality text, and even compose music.

      There are many potential benefits to using generative AI.

      • Create new content: Generative AI can create new content, such as articles, blog posts, or even books. This can be a valuable tool for businesses that need to produce a lot of content regularly. The technology can also support the reduction in time it takes to generate work, enabling a steady stream of fresh content for marketing purposes.
      • Improve customer service: Generative AI can improve customer service by providing personalized assistance. Generative AI can create chatbots that can answer customer questions or resolve issues. These types of uses can support both an enterprise’s employees and customers.
      • Automate tasks: The technology can be used to automate tasks that are currently done by humans. This can free up human workers to focus on more creative or strategic work. The technology has the potential to eliminate a lot of toil in many standard business practices, such as data entry and workflow.
      • Generate new ideas: Generative AI can be used to generate new ideas for products, services, or marketing campaigns. This can help businesses stay ahead of the competition.

      “Generative AI is a powerful technology that can be used for good or evil. It is important to be aware of the potential risks and to take steps to mitigate them.”

      Generative AI provides a lot of potential to change the way businesses operate. Organizations are just beginning to leverage this power to improve their businesses. This is a very new area, and the market potential is just starting to reveal itself. Most of the current market is focused on startups introducing novel applications of generative AI technology.

      Enterprises are thus starting to dip their toes into this space, but the growing use of generative AI also presents security risks. Some of these risks are new for AI, some risks are common to IT security. Here are some considerations for securing AI systems.

      • Data security: AI systems rely on large amounts of data to learn and make decisions. The privacy and security of this data is essential. Protect against unauthorized access to the data and ensure it is not used for malicious purposes.
      • Model security: AI models are vulnerable to attacks, for example adversarial attacks, in which an attacker manipulates the inputs to the model to produce incorrect outputs (see the sketch after this list). This can lead to incorrect decisions, which can have significant consequences. It is important to design and develop secure models that can resist such manipulation.
      • Bias and fairness: If the training data in the models contains biased information, the resulting AI systems may have bias in their decision-making. This can produce discriminatory decisions, which can have serious legal and ethical implications. It is important to consider fairness to ensure that AI and ML system designs reduce bias.
      • Explainability: AI systems are sometimes opaque in their decision-making processes. This makes it difficult to understand how and why decisions are being made. Lack of transparency leads to mistrust and challenges the credibility of the technology. It is important to develop explainable AI systems that provide clear and transparent explanations for their decision-making processes.
      • Monitoring and auditing: Track and audit AI performance to detect and prevent malicious activities. Include logging and auditing of data inputs and outputs of the systems. Watch the behavior of the algorithms themselves.
      • Privacy: Private data in model building and/or usage should be avoided as much as possible with artificial intelligence models. This avoids unintended consequences. Google’s Secure AI Framework provides a guide to securing AI for the enterprise.
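
      To make the adversarial-attack item concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM), assuming PyTorch. It shows how a small, crafted perturbation of the input can flip a classifier's output:

          import torch
          import torch.nn.functional as F

          def fgsm(model, x, label, eps=0.03):
              # Compute the loss gradient with respect to the input itself.
              x = x.clone().requires_grad_(True)
              loss = F.cross_entropy(model(x), label)
              loss.backward()
              # Nudge every input feature slightly in the direction that
              # most increases the loss; eps bounds how visible it is.
              return (x + eps * x.grad.sign()).detach().clamp(0, 1)

      Defenses such as adversarial training add examples like these to the training data, which is part of what it means to design models that resist such manipulation.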

      Securing AI systems is critical to effective deployment in various applications. Considering these issues, organizations can develop secure and trustworthy AI and ML systems. These deliver the desired outcomes and avoid unintended consequences.

      In addition to security risks, there are also ethical concerns related to the use of generative AI. For example, some people worry that generative AI could be used to create fake news or propaganda, or to generate deep fakes that could damage someone’s reputation. It is important to be aware of these ethical concerns and to take steps to mitigate them when using generative AI. Organizations will want to enact policies on acceptable use of generative AI which appropriately support their business objectives.

      Overall, generative AI is a powerful technology with the potential to revolutionize many aspects of our lives. However, it is important to be aware of the security risks and ethical concerns associated with this technology and to use this technology responsibly. By taking steps to mitigate these risks, we can help to ensure that generative AI is used in a safe and responsible manner and supports your future business goals.

      INNOVATION TAKEAWAYS

      GENERATIVE AI IS INNOVATIVE

      It is a powerful technology that can be used to create new content, improve customer service, automate tasks, and generate new ideas.

      THERE ARE RISKS WITH THE USE OF GENERATIVE AI

      Generative AI also poses some security risks, such as data security, model security, bias and fairness, explainability, monitoring and auditing, and privacy.

      COMMON SENSE CAN HELP COMPANIES LEVERAGE GENERATIVE AI

      Organizations can mitigate these risks by following best practices, such as protecting data privacy and security, developing secure models, reducing bias in decision-making, making AI systems more explainable, monitoring, and auditing AI systems, and considering privacy implications.

      Interesting read?

      Capgemini’s Innovation publication, Data-powered Innovation Review | Wave 6, features 19 such fascinating articles, crafted by leading experts from Capgemini and key technology partners like Google, Starburst, Microsoft, Snowflake, and Databricks. Learn about generative AI, collaborative data ecosystems, and an exploration of how data and AI can enable the biodiversity of urban forests. Find all previous waves here.

      Matthew O’Connor

      Technical Director, Office of the CTO, Google Cloud
      Matthew specializes in Security, Compliance, Privacy, Policy, Regulatory Issues, and large-scale software services. He is also involved in emerging technologies in Web3 and Artificial Intelligence. Before Google, Matthew held product management and engineering roles building scaled services at Postini, Tellme Networks, AOL, Netscape, Inflow, and Hewlett-Packard. His career started as a US Air Force officer on the MILSTAR joint service satellite program. He has an executive MBA from the University of California and earned a bachelor’s degree in computer science engineering from Santa Clara University.

        Ethical generative AI
        At the crossroads of innovation and responsibility

        Tijana Nikolic
        Mar 12, 2024

        Generative AI is reshaping business operations and customer engagement with its autonomous capabilities. However, to quote Uncle Ben from Spiderman: “With great power comes great responsibility.”

        Managing generative AI has been challenging, as generative AI models are outperforming humans in some areas, such as profiling for national security purposes. Sometimes, anti-principles explain most clearly why ethics must be enforced, so it is important to understand the following challenges:

        • Generative AI can assist in managing information overload by helping extract and generate meaningful insights from large volumes of data but, at the same time, information overload can dilute precise messaging.
        • A lack of domain-specific knowledge or context leads to inaccurate information and contextual errors in addition to bias and subjectivity.
        • There may be limited human resources to oversee training and regulate output, due to a lack of experienced personnel.
        • Stale data may be used in training.
        • Elite and/or not always ethically sourced data may be used for training.
        • There may be a lack of resilience in execution.
        • Scalability and cost tradeoffs may cause organizations to consider a shortcut.

        Although complex, these challenges can be alleviated on a technical level. Monitoring is a good example of ensuring robustness and observability of the behavior of these models. Additionally, since generative AI capability is exposing businesses to new risks, there is a need for well-thought-through governance, guardrails, and the following methods:

        • Model benchmarking
        • Model hallucination
        • Self-debugging
        • Guardrails.ai and RAIL specs
        • Auditing LLMs with LLMs
        • Detecting LLM-generated content
        • Differential privacy and homomorphic encryption (see the sketch after this list)
        • EBM (Explainable Boosting Machine)
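
        Of the methods above, differential privacy is the easiest to show in miniature. Below is a sketch of the Laplace mechanism, the textbook differential-privacy primitive; the query and epsilon parameter are illustrative:

            import math
            import secrets

            _rng = secrets.SystemRandom()  # uniform draws from the OS CSPRNG

            def laplace_noise(scale: float) -> float:
                # Inverse-CDF sampling of Laplace(0, scale).
                u = _rng.uniform(-0.5, 0.5)
                return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

            def dp_count(records, predicate, epsilon: float) -> float:
                # A counting query has sensitivity 1: adding or removing one
                # person changes the answer by at most 1, so Laplace(1/epsilon)
                # noise makes the released count epsilon-differentially private.
                true_count = sum(1 for r in records if predicate(r))
                return true_count + laplace_noise(1.0 / epsilon)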

        It is crucial that generative AI design takes care of the following aspects of ethical AI:

        • Ensuring ethical and legal compliance – Generative AI models can produce outputs that may be biased, discriminatory, or infringe on privacy rights.
        • Mitigating risk – Generative AI models can produce unexpected and unintended outputs that can cause harm or damage to individuals or organizations.
        • Improving model accuracy and explainability – Generative AI models can be complex and difficult to interpret, leading to inaccuracies in their outputs. Governance and guardrails can improve the accuracy of the model by ensuring it is trained on appropriate data and its outputs are validated by human experts.
        • Ethical generative AI approaches need to be different based on the purpose and impact of the solution: diagnosing and treating life-threatening diseases should have a much more rigorous governance model than using generative AI to give marketing content suggestions based on products. Even the upcoming EU AI Act prescribes risk-based approaches, classifying AI systems into low-risk, limited or minimal risk, high-risk, and systems with unacceptable risk.
        • AIs must be designed to say “no,” a principle called “Humble AI.”
        • Ethical data sourcing is particularly important with generative AI, where the created model can supplant human efforts if the human has not granted explicit rights.
        • Inclusion of AI: most AIs today are English-language only or, at best, use English as a first language.

        USING SYNTHETIC DATA FOR REGULATORY COMPLIANCE

        Försäkringskassan, the Swedish authority responsible for social insurance benefits, faced a challenge in handling vast amounts of data containing personally identifiable information (PII), including medical records and symptoms, while adhering to GDPR regulations. It needed a way to test applications and systems with relevant data without compromising client privacy. Collaborating with Försäkringskassan, Sogeti delivered a scalable generative AI microservice, using generative adversarial network (GAN) models to alleviate this risk.

        This solution involved feeding real data samples into the GAN model, which learned the data’s characteristics. The output was synthetic data closely mirroring the original dataset in statistical similarity and distribution, while not containing any PII. This allowed the data to be used for training AI models, text classification, chatbot Q&A, and document generation.
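
        As a rough illustration of the mechanism described above, here is a minimal GAN training step for numeric tabular data, assuming PyTorch. The actual microservice is not public, so the layer sizes and names are illustrative only:

            import torch
            import torch.nn as nn

            N_FEATURES, LATENT_DIM = 8, 16  # width of normalized data rows

            generator = nn.Sequential(
                nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, N_FEATURES))
            discriminator = nn.Sequential(
                nn.Linear(N_FEATURES, 64), nn.ReLU(), nn.Linear(64, 1))

            opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
            opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
            bce = nn.BCEWithLogitsLoss()

            def train_step(real_rows):
                n = real_rows.size(0)
                # 1) Teach the discriminator to separate real rows from fakes.
                fake = generator(torch.randn(n, LATENT_DIM)).detach()
                d_loss = (bce(discriminator(real_rows), torch.ones(n, 1))
                          + bce(discriminator(fake), torch.zeros(n, 1)))
                opt_d.zero_grad(); d_loss.backward(); opt_d.step()
                # 2) Teach the generator to fool the discriminator.
                g_loss = bce(discriminator(generator(torch.randn(n, LATENT_DIM))),
                             torch.ones(n, 1))
                opt_g.zero_grad(); g_loss.backward(); opt_g.step()

            # After training, sample synthetic rows that mimic the statistics
            # of the real data without copying any individual record:
            with torch.no_grad():
                synthetic = generator(torch.randn(100, LATENT_DIM))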

        The implementation of this synthetic data solution marked a significant achievement. It provided Försäkringskassan with realistic and useful data for software testing and AI model improvement, ensuring compliance with legal requirements. Moreover, this innovation allowed for efficient scaling of data, benefiting model development and testing.

        Försäkringskassan’s commitment to protecting personal data and embracing innovative technologies not only ensured regulatory compliance but also propelled it to the forefront of digital solutions in Sweden. Through this initiative, Försäkringskassan contributed significantly to the realization of the Social Insurance Agency’s vision of a society where individuals can feel secure even when life takes unexpected turns.

        MARKET TRENDS

        The market for trustworthy generative AI is flourishing, driven by these key trends.

        1. Regulatory compliance: Increasing government regulations demand rigorous testing and transparency.
        2. User awareness: Growing awareness among users regarding the importance of trustworthy and ethical AI systems.
        3. Operationalization of ethical principles: Specialized consulting to guide AI developers in creating ethical risk mitigations on a technical level.

        RESPONSIBLE USE OF GENERATIVE AI

        Ethical considerations are at the heart of these groundbreaking achievements. The responsible use of generative AI ensures that while we delve into the boundless possibilities of artificial intelligence, we do so with respect for privacy and security. Ethical generative AI, exemplified by Försäkringskassan’s initiative, paves the way for a future where innovation and integrity coexist in harmony.

        “ETHICAL GENERATIVE AI IS THE ART OF NURTURING MACHINES TO MIRROR NOT ONLY OUR INTELLECT BUT THE VERY ESSENCE OF OUR NOBLEST INTENTIONS AND TIMELESS VALUES.”

        INNOVATION TAKEAWAYS

        TRANSPARENCY AND ACCOUNTABILITY

        Generative AI systems should be designed with transparency in mind. Developers and organizations should be open about the technology’s capabilities, limitations, and potential biases. Clear documentation and disclosure of the data sources, training methods, and algorithms used are essential.

        BIAS MITIGATION

        Generative AI models often inherit biases present in their training data. It’s crucial to actively work on identifying and mitigating these biases to ensure that AI-generated content does not perpetuate or amplify harmful stereotypes or discrimination.

        USER CONSENT AND CONTROL

        Users should have the ability to control and consent to the use of generative AI in their interactions. This includes clear opt-in/opt-out mechanisms. Respect for user preferences and privacy and data protection principles should also be upheld.

        Interesting read?

        Capgemini’s Innovation publication, Data-powered Innovation Review | Wave 7, features 16 such fascinating articles, crafted by leading experts from Capgemini and partners like Aible, the Green Software Foundation, and Fivetran. Discover groundbreaking advancements in data-powered innovation, explore the broader applications of AI beyond language models, and learn how data and AI can contribute to creating a more sustainable planet and society. Find all previous Waves here.

        Authors

        Tijana Nikolic

        AI specialist
        Tijana is an AI Specialist in Sogeti Netherlands AI CoE team with a diverse background in biology, marketing, and IT. Her vision is to bring innovative solutions to the market with a strong emphasis on privacy, quality, ethics, and sustainability.

        Yashowardhan Sowale

        CTIO I&D India, I&D Architecture head, India Domain Leader for AI, Insights & Data, Capgemini
        Yash is a seasoned senior business and technology/architecture leader with 30 years of work experience. As VP – I&D Architecture head, Enterprise Architecture CoE lead, India Domain Leader for AI, CTIO – I&D India, and L4 master architect, he brings extensive expertise in a wide range of domains, including big data, cloud, and artificial intelligence.

          Embedded software is changing how companies operate

          Walter Paranque-Monnet
          23 April 2024

          Discover why embedded software is increasingly important for industries – creating intelligent ecosystems, enhancing user experiences and reducing costs.

          Twenty years ago, we bought mobile phones for their hardware. Since then, a lot has changed, and now, embedded software delivers the primary value – offering entertainment, navigation, augmented reality, productivity apps, and so on.

          However, such software does not work alone. It requires the phone’s hardware (connectivity, cameras, accelerometers, etc.) and a cloud ecosystem to download new apps and share data. But it is the software – the operating system and firmware on the phone – that runs the show.

          As a result, consumers now have sky-high expectations of technology. And if industrial companies can’t deliver products with a similar software-driven user experience, they will lose these customers. Manufacturers of cars, planes, trains, satellites, solar panels, cameras, home appliances, and so on are all undergoing a similar shift driven by embedded software.

          That shift has huge implications – not just for the product itself, but for the company designing it.

          Ever more products become software-driven

          Let’s start with the product. Take a car or a plane – products that are increasingly software-driven. Both now depend on software for automation and route optimization on the one hand, and for user experience and entertainment on the other.

          They are not alone. Trains need one type of software with smart signal controls for optimal route planning, and another type that allows users to order food from the buffet car on their phone. Satellites must make real-time decisions about trajectory, data capture, and energy management. In-home batteries must control energy in and out, and track what they sell back to the grid.

          Embedded software drives a change in organizational thinking

          Embedded software is not entirely new in these industries – cars and planes, for example, have long had bits of control software. But its scale and sophistication are now skyrocketing.

          A Capgemini Research Institute (CRI) survey – of 1,350 $1bn+ revenue companies with goals to become software-driven – found software accounted for 7% of revenue in 2022, but was expected to rise to 29% by 2030. That same report also found that 63% of Aerospace & Defense organizations believe software is critical to future products and services, with industries from automotive to energy making comparable claims.

          But getting there will mean some big changes at these organizations.

          Unlike a phone – which was designed to be a single integrated device – cars, planes, satellites, drones and other industrial systems were originally designed with multiple ECUs (electronic control units), each running multiple pieces of software. Each ECU was developed separately by different parts of the organization.

          But now there is a need to integrate everything. For example, autopilot won’t work if its underpinning software can’t communicate seamlessly with the separate control units for sensors, steering, and brakes.

          The importance of transversal software

          Doing this in the current siloed way would create unmanageable complexity. Software needs to be ‘transversal’, i.e., developed consistently across the organization rather than in silos. There must be a centralized team defining strategy, and managing and developing embedded software as a product across the organization. This must all be done with the same standards to facilitate interoperability, scalability, upgrades, and reuse – whether it’s a landing control system, an energy management system, in-flight infotainment, or a smart cockpit. This transversal operating model makes software teams the backbone of software-defined organizations, continuously developing software solutions across the company.

          That doesn’t mean all software must be connected to the final system, or that everything will be developed in the same way. Software can be very different. For example, rear-seat entertainment software can offload some data-heavy functions to the cloud, and developers can launch beta versions to get user feedback. On the other hand, high-integrity software for braking must do everything on board, work every time, and be separate from any hackable entry points into the system.

          There are separate development tracks for different software components, so that less safety-critical software can quickly get to market, while more safety-critical parts can be carefully managed through verification and validation (V&V), and certification. But all development tracks should be within a centralized software team, which works together, sharing a consistent system architecture, standards and learnings, and creating products the entire business can access once complete.

          A positive example

          Consider Stellantis, which owns multiple car brands, including Opel, Peugeot, Dodge and Fiat, among others. It has invested in developing three core software platforms: one which is the backbone of the car (STLA brain), one for safety-critical assisted driving (STLA AutoDrive), and one for the connectivity and cockpit services (STLA SmartCockpit).

          It implemented centralized software standards that are systematically used across all brands and models. This is similar to a trend we’re seeing across all markets – ‘platforming’. The platforming approach leverages generic components (computer vision, voice command, navigation services, etc.) that are applied to several projects, products, and use cases – sometimes with customizations for different brands and markets – all without needing to build, test, and certify everything from scratch.

          Innovate or fail

          All of this requires a major shift in thinking from organizations. But they must make this shift to survive.

          And largely, they are. The auto industry is taking the threat from Tesla (and its advanced on-board computing) seriously. They may soon be pushed to move faster by software-driven Chinese competitors, like BYD and Nio, whose car interiors can transform into immersive cinemas at the push of a button. Industries from aviation to energy are no longer complacent – all recognize that embedded software is critical to their future. And all know they must undergo radical organizational change to turn legacy hardware into future-proof, software-driven products.

          See how embedded software is helping industries transform their business – and how Capgemini can help along your journey.

          Meet our experts

          Walter Paranque-Monnet

          Solution Director Capgemini Engineering
          Walter is passionate about helping organizations build high-value products and services driven by creativity, innovation, and business results. He has helped teams create a culture driven by software and innovation. For more than 12 years, Walter has supported software organizations along their chip-to-cloud transformation journey and designed embedded software roadmaps for acceleration.

            Conversational twins
            The virtual engineering assistants of the (near) future

            David Granger
            May 15, 2025

            What will happen when Gen AI meets VR meets Digital Twins meets high-powered chips? Enter the ‘Conversational Twin’ – a virtual, 3D, generative AI assistant that can visually guide you through complex tasks.

            Imagine your car breaks down and, to save a bit of money, you decide to fix it yourself.

            You head to your garage with your smartphone and start looking up YouTube tutorials. Eventually, you find one that covers your problem and start watching. As you get to the key part, you start fiddling around with the engine. The presenter is explaining much faster than you can act, so you keep going back to your phone to scroll back 20 seconds and rewatch. After watching the key bit several times, you find the problem. A new part is needed. You spend an hour online trying to find the right one, amongst 100 identical looking options with names like ‘CC01-15/06’ and ‘CC01-15/06e’. A few days later, it arrives and it’s back to the garage. Another hour fixing and scrolling, and your car is finally ready to go.

            For all their flaws, the popularity of online tutorials shows the enormous demand for information on how to fix things, and, perhaps, a deeper need to feel in control. And that’s just private citizens. Mechanics and engineers have an even greater need to access vast amounts of information on a vast range of processes and parts, and how to apply them to different models of cars, aircraft, machine tools, etc.

            YouTube is certainly better than thick, boring instruction manuals. But really, people want to interact in natural human ways. They process information in different ways, and have different starting knowledge, making start-to-finish tutorials an inefficient way to deliver information. In an ideal world, you would have someone nearby who understands the problem, and can explain what you should do as you go, and who can answer questions if you didn’t understand the instructions.

            Could digitally delivered instructions become more like that human expert? We think so. Particularly due to advances in generative AI, virtual reality, digital twins, and advanced chips.

            The instruction manual of the future

            We can imagine, in the not-too-distant future, that same smartphone could contain an app with a digital twin of the car, that has been trained on the car’s instruction manuals (we’ll stick with the car analogy, but this could be applied to any complex engineered product).

            The result would be that, when you arrive at a problem that needs fixing, you open the app and verbally describe the problem to a virtual assistant. The app then generates a visual step-by-step guide to solve the problem, which can be communicated via a mix of AR overlays, demonstrations by avatars, and spoken instructions, through your phone, tablet or VR headset.

            Rather than simply following a series of steps, it would leverage generative AI to contextualize your challenge and explain exactly what you need to do to fix it, using 3D visualizations, and adjusting them according to the verbal questions you ask it, then waiting patiently for you to finish one task, and ask for the next instruction (or to clarify the last one). We call this a Conversational Twin, because you are effectively conversing with a digital twin of the car, which knows everything about it.

            By harnessing the phone camera, the app could even watch your movements and guide you in real-time (“unscrew the cap, no not that one, the one 10 cm to your left”) by comparing the video feed to its internal model of the vehicle. When you reach the problem, you could hold up the broken part and the Twin would recognize it and order you a new one.

            Such a Conversational Twin will significantly benefit many people who want to fix things themselves. But its real value will be as a huge cost saver to companies with large maintenance and engineering teams, allowing those people to access much more expertise, and thereby enabling smaller teams to perform more tasks, more quickly, even if they’ve never seen the problem before.

            How to do it

            Technically, most of what is described above could be created today. But it would be a lot of work. Each product would need to be carefully mapped and digitized, conversational flows would need to be carefully scripted and programmed, and a library of animations would need to be pre-designed. 

            Generative AI is rapidly changing the game here. Already, dedicated AI models can be trained on information from manuals to YouTube videos to online trade forums, so they can find answers as they are requested, and return them as contextualized text or spoken instructions.

            The more challenging part is mapping those text-based instructions onto 3D models of the product. The Conversational Twin would need to interpret a mix of text and visual inputs, turn them all into prompts for itself, find the answers, match those text instructions onto its internal 3D model of the specific car, then overlay its responses as 3D objects onto the physical car it sees via the camera. We are not quite there yet.

            But such technology is coming. Virtual and augmented reality have come on leaps and bounds in the past few years, and it is only a matter of time before virtual objects can be generated in response to generative AI instructions. Equally, today’s large language models (LLMs) deal with text, but they will need to output machine-readable instructions in order to generate virtual overlays. That is not something LLMs do yet, but bright minds – including those at Capgemini – are working on making that connection between LLMs and Real-time 3D engines. Once these two areas advance a little further, it is a matter of carefully connecting everything.

            Of course, generative AI is not a ‘magic bullet’ that can just be told what to do and automatically produce the result you want. It will need a well-defined architecture and effective rules for how to ‘prompt’ it to generate the right responses, outputted in ways that can be reliably converted into 3D visuals.
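
            To make the prompting-rules point concrete, one plausible pattern is to constrain the model to a small, machine-readable schema and validate every reply before it reaches the 3D engine. A hedged sketch; the schema, field names, and flow are hypothetical, not an existing product API:

                import json

                SYSTEM_PROMPT = (
                    "Answer ONLY with JSON of the form "
                    '{"step": str, "highlight_part": str, '
                    '"animation": "point" | "rotate" | "none"}'
                )

                def parse_overlay_instruction(llm_reply: str) -> dict:
                    # Validate before rendering: a malformed or hallucinated
                    # reply must fail here, never inside the 3D engine.
                    instr = json.loads(llm_reply)
                    assert instr["animation"] in ("point", "rotate", "none")
                    return instr

                # e.g. {"step": "Unscrew the coolant cap",
                #       "highlight_part": "coolant_cap", "animation": "point"}
                # could drive an AR arrow anchored to the mapped part.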

            Finally, we still need some microchip advancements to deliver all this on a device. Today, we use edge computing devices and the cloud to process these advanced workloads, and indeed much can be done using these approaches that will lay the foundations for Conversational Twins. But we suspect that in the next few years, chips will be sufficiently more advanced to do all the processing on a smartphone, tablet, or VR headset.

            What to do to get ready for Conversational Twins

            Even if Conversational Twins are a few years away, there is a lot that companies can do now to prepare for them, which will also have immediate value elsewhere.

            The first is investing in Real-time 3D. This is a rapidly growing technology with exciting possibilities, like the ability to showcase products to customers without them leaving their homes or create virtual working environments that can train employees without risk.

            A related point is to start preparing existing assets for training Gen AI and building 3D assets. Many companies already have 3D product models, rendered marketing materials, and so on. But they are often held in silos and can be of inconsistent quality and formats. Complex projects like Conversational Twins will not be reliable if the underlying 3D model of the product – on which they base their recommendations – does not match the real product.

            Those that have not already done so, should create centralized virtual models of their products and businesses, as a single source of truth. That way, anyone in the company producing 3D materials – whether for new product design, marketing, or building Gen AI-powered assistants – is working from the same high-quality version. In time, this ‘virtual twin’ will provide the digital foundation for your Conversational Twin.

            Why you should start now

            Once the above comes to fruition, companies making products like cars or planes could offer a corresponding app that guides users on how to maintain and fix them. That could be sold as a subscription to professional mechanics, maintenance engineers and training organizations, and made available free or for a fee to people who have bought the product, as a differentiator from their competition.

            Many aerospace and industrial companies are already exploring how to simplify the maintenance, training, and configuration of products – rather than relying on complicated documents or fixed training modules. As engineering companies move from selling products, to managing the entire lifecycle, Conversational Twins can provide customers with added value that can save them time and money, extend the life of products, and provide a valuable source of data on how to improve future designs.

            If we start getting our data and models ready now, and embark on proof-of-concepts, Conversational Twins could be with us this decade.

            Discover the next generation of user experiences powered by real-time 3D. Click to learn more about Capgemini Engineering’s Real-time 3D solutions.


            Meet the author

            David Granger

            Director of Engineering – Experience Engineering
            David and his expert team lead the development of advanced solutions that integrate real-time 3D (RT3D) visualization with generative AI to drive innovation across industries, known as ‘Experience Engineering’. His team specializes in crafting intelligent experiences that reshape how businesses engage with digital content.

              From pilots to production
              Overcoming challenges to generative AI adoption across the software engineering lifecycle

              Keith Glendon
              Apr 24, 2025

              Generative AI is rapidly revolutionizing the world of software engineering, driving efficiency, innovation, and business value from the earliest stages of design through to deployment and maintenance. This explosive development in technology enhances and transforms every phase of the software development lifecycle: from analyzing demand and modeling use cases in the design phase, to modernizing legacy code, assisting with documentation, identifying vulnerabilities during testing, and monitoring software post-rollout.

              Given its transformative power, it’s no surprise that the Capgemini Research Institute report, Turbocharging Software with Gen AI, reveals that four out of five software professionals expect to use generative AI tools by 2026.

              However, our experience and research find that to fully realize the benefits, software engineering organizations must overcome several key challenges. These include unauthorized use, upskilling, and governance. This blog explores these challenges and offers recommendations to help navigate them effectively.

              Prevent unauthorized use from becoming a blocker

              Our research indicates that 63% of software professionals currently using generative AI are doing so with unauthorized tools, or in a non-governed manner. This highlights both the eagerness of developers to leverage the benefits of AI and the frustration caused by slow or incomplete official adoption processes. This research is validated in our field experience across hundreds of client projects and interactions. Often, such issues arise from an overly ‘experimental’ versus programmatic approach to adoption and scale.

              Unauthorized use exposes organizations to various risks, including hallucinated code (AI-generated code that appears correct but is flawed), code leakage, and intellectual property (IP) issues. Such risks can lead to functional failures, security breaches, and legal complications.

              Our Capgemini Research Institute report emphasizes that using unauthorized tools without proper governance exposes organizations to significant risks, potentially undermining their efforts to harness the transformative business value of generative AI effectively.

              To mitigate unauthorized use, organizations should channel the curiosity of their development teams constructively and in the context of managed transformation roadmaps. This approach should include consistently explaining the pitfalls of unauthorized use, researching available options, learning about best practices, and adopting necessary generative AI tools in a controlled manner that maintains security and integrity throughout the software development process.

              Upskilling your workforce

              Upskilling is another critical challenge. According to our Capgemini Research Institute findings, only 40% of software professionals receive adequate training from their organizations to use generative AI effectively. The remaining 60% are either self-training (32%) or not training at all (28%). Self-training can lead to inconsistent quality and potential risks, as nearly a third of professionals may lack the necessary skills, resulting in functional and legal vulnerabilities.

              A consistent observation from our field experience is that, alongside the issue of training, there is a correlated barrier: making sufficient time available for teams to apply training in practical ways, and to evolve the training outcomes into pragmatic, lasting culture change. Because generative AI is such a seismic shift in the way we build software products and platforms, the upskilling curve is about far more than incremental training.

              Managing skill development in this new frontier of software engineering will require an ongoing commitment to evolving skills, practices, culture, ways of working, and even the ways teams are composed and organized. As a result, software engineering organizations should embrace a long-term view of upskilling for success.

              Those that are most successful in adopting generative AI have invested in comprehensive training programs, which cover essential skills such as prompt engineering, AI model interpretation, and supervision of AI-driven tasks. They have begun to build organizational change management programs and transformation roadmaps that look at the human element, upskilling and culture shift as a vital foundation of success.

              Additionally, fostering cross-functional collaboration between data scientists, domain experts, and software engineers is crucial to bridge knowledge gaps, as generative AI brings new levels of data dependency into the software engineering domain. Capgemini’s research shows that successful organizations realizing productivity gains from AI are channeling these gains toward innovative work (50%) and upskilling (47%), rather than reducing headcount.

              Establishing strong governance

              Despite massive and accelerating interest in generative AI, 61% of organizations lack a governance framework to guide its use, as highlighted in the Capgemini Research Institute report. Governance should go beyond technical oversight to include ethical considerations, such as responsible AI practices and privacy concerns.

              A strong governance framework aligns generative AI initiatives with organizational priorities and objectives, addressing issues like bias, explainability, IP and copyright concerns, dependency on external platforms, data leakage, and vulnerability to malicious actors.

              Without proper governance, the risks associated with generative AI in software engineering (hallucinated code, biased outputs, unauthorized data and IP usage, and other issues ranging from security to compliance) can outweigh its benefits. Establishing clear policies, driven in practice through strategic transformation planning, will help mitigate these potential risks and ensure that AI adoption aligns with business goals.

              Best practices for leveraging generative AI in the software engineering domain

              Generative AI in software engineering is still in its early stages, but a phased, well-managed approach toward a bold, transformative vision will help organizations maximize its benefits across the development lifecycle. In following this path, here are some important actions to consider:

              Prioritize high-benefit use cases as building blocks

              • Focus on use cases that offer quick wins to generate buy-in across the organization. These use cases might include generating documentation, assisting with coding, debugging, testing, identifying security vulnerabilities, and modernizing code through migration or translation.
              • Capgemini’s research shows that 39% of organizations currently use generative AI for coding, 29% for debugging, and 29% for code review and quality assurance. The critical point here, however, is that organizations should take a ‘use cases as building blocks’ approach. Many currently struggle with what could be called ‘the ideation trap’: a focus on experiments, proofs of concept, and use cases that aren’t a planned, stepwise part of a broader transformation vision.
              • When high-benefit use cases are purposely defined to create building blocks toward a north star transformation vision, the impact is far greater. An example of this concept is our own software product engineering approach within Capgemini Engineering Research & Development. In late 2023 we set out on an ambitious vision of an agentive, autonomous software engineering transformation and a future in which Gen AI-driven agents autonomously handle the complex engineering tasks of building software products and platforms from inception to deployment. Since that time, our use cases and experiments all align toward the realization of that goal, with each new building block adding capability and breadth to our agentive framework for software engineering.

              Mitigate risks

              • All productivity gains must be balanced within a risk management framework. Generative AI introduces new risks that must be assessed in line with the organization’s existing risk analysis protocols. This includes considerations around cybersecurity, data protection, compliance and IP management. Developing usage frameworks, checks and quality stopgaps to mitigate these risks is essential.

              Support your teams

              • Providing comprehensive training for all team members who will interact with generative AI is crucial. This training should cover the analysis of AI outputs, iterative refinement of AI-generated content, and supervision of AI-driven tasks. As our Capgemini Research Institute report suggests, organizations with robust upskilling programs are better positioned to improve workforce productivity, expand innovation and creative possibilities, and mitigate potential risks.

              Implement the right platforms and tools

              • Effective use of generative AI requires a range of platforms and tools, such as AI-enhanced integrated development environments (IDEs), automation and testing tools, and collaboration tools.
              • However, only 27% of organizations report having above-average availability of these tools, highlighting a critical area for improvement. Beyond the current view of Gen AI as a high-productivity assistant or enabler, we strongly encourage every organization in the business of software engineering to look beyond the ‘copilot mentality’ and over the horizon to what Forrester recently deemed “The Age Of Agents”. The first wave of Gen AI and the popularity of these technologies as assistive tools will be a great benefit to routine application development tasks.
              • For the enterprises that are building industrialized, commercial software products and platforms, and for the experience engineering of the next generation, we believe that the value, and even the essentials of competitive survival, depend on adopting and building a vision of far more sophisticated AI software engineering capability than basic ‘off the shelf’ code-assist tools deliver.

              Develop appropriate metrics

              • Without the right systems to monitor the effectiveness of generative AI, organizations cannot learn from their experiences or build on successes. Despite this, nearly half of organizations (48%) lack standard metrics to evaluate the success of generative AI use in software engineering. Establishing clear metrics, such as time saved in coding, reduction in bugs, or improvements in customer satisfaction, is vital.
              • We believe that organization-specific KPIs and qualitative metrics around things like DevEx (Developer Experience), creativity, innovation, and flow are vital to consider, as the power of the generative era lies far more in the impact these intangibles have on the potential of business models, products, and platforms than in the cost savings many leaders erroneously focus on. This is an inflection point at which the value of the abundance mindset applies; a minimal sketch of baseline quantitative metrics follows after this list.
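
              To make the quantitative side concrete, here is a minimal sketch in Python of the baseline metrics named above (time saved in coding, reduction in bugs, throughput); the field names and sample values are illustrative assumptions, not figures from our research.

              from dataclasses import dataclass

              @dataclass
              class SprintStats:
                  coding_hours: float      # hours spent on coding tasks
                  bugs_found: int          # defects logged against delivered work
                  stories_delivered: int   # completed backlog items

              def gen_ai_impact(baseline: SprintStats, current: SprintStats) -> dict:
                  """Compare a pre-adoption baseline sprint with a current sprint."""
                  return {
                      "coding_time_saved_pct": round(
                          100 * (baseline.coding_hours - current.coding_hours) / baseline.coding_hours, 1),
                      "bug_reduction_pct": round(
                          100 * (baseline.bugs_found - current.bugs_found) / max(baseline.bugs_found, 1), 1),
                      "throughput_change_pct": round(
                          100 * (current.stories_delivered - baseline.stories_delivered) / baseline.stories_delivered, 1),
                  }

              # Illustrative values only:
              print(gen_ai_impact(SprintStats(120, 14, 20), SprintStats(98, 10, 24)))
              # {'coding_time_saved_pct': 18.3, 'bug_reduction_pct': 28.6, 'throughput_change_pct': 20.0}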

              In conclusion

              Generative AI is already well underway in demonstrating its potential to transform the software engineering lifecycle, improve quality, creativity, innovation and the impact of software products and platforms – as well as streamline essential processes like testing, quality assurance, support and maintenance. We expect its use to grow rapidly in the coming years, with continued growth in both investment and business impact.

              Organizations that succeed in adopting generative AI as a transformative force in their software engineering ethos will be those that fully integrate it into their processes rather than treating it as a piecemeal solution. Achieving this requires a bold, cohesive vision, changes in governance, the adoption of new tools, the establishment of meaningful metrics, and, most importantly, robust support for teams across the software development lifecycle. 

              At Capgemini Engineering Software, we are ambitiously transforming our own world of capability, vision, approach, tools, skills, practices, and culture in the way we view and build software products and platforms. We’re here to help you and your teams strike out on your journey of transformation in the generative software engineering era.

              Download our Capgemini Research Institute report: Turbocharging software with Gen AI to learn more.



              Meet the author

              Keith Glendon

              Senior Director, Generative AI and Software Product Innovation
              Keith is an experienced technologist, entrepreneur, and strategist, with a proven track record of driving and supporting innovation and software-led transformation in various industries over the past 25+ years. He’s demonstrated results in multinational enterprises, as well as high-tech startups, through creative disruption and expert application of the entrepreneurial mindset.

                Boosting productivity in software engineering with generative AI
                Real-world insights and benefits

                Jiani Zhang
                Apr 16, 2025
                capgemini-engineering

                Software engineers may have once stated that software doesn’t write itself. That’s not true anymore. Generative AI is perfectly capable of taking on at least some of the simple tasks involved in coding, as well as other aspects of the software development life cycle. In fact, research published in our new Capgemini Research Institute report, Turbocharging software with Gen AI, shows that organizations using generative AI have seen a 7–18% productivity improvement in software engineering.

                So, what does this mean for those working in the software industry? It would be reasonable to expect some fear of change; after all, status quo bias is a well-documented human behavior. But our research data – which involved both developers and senior executives – shows that software engineers and their employers expect generative AI to enhance the profession, improving software quality and easing the daily workload of software engineers as companies demand ever more complex software across all parts of their business and product lines.

                Let’s look in more detail at some of these key benefits.

                Accelerate faster with greater accuracy

                The old idea that moving too fast opens the door to mistakes can be turned on its head with the careful use of generative AI during software development. Because generative AI can automate some simple tasks, and complete them more quickly, it can help speed up a whole host of non-safety-critical processes, leaving more time to spend on complex software development. This can include paying extra attention to safety-critical systems, where humans will still play a crucial role in rigorous oversight to maintain the highest safety standards.

                Of course, generative AI is not a ‘magic bullet’ that can just be told what to do and automatically produce the result you want. It will need a well-defined architecture and effective rules for how to ‘prompt’ it to generate code that is repeatable and maintainable, and which meets company needs and compliance rules.
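
                As a minimal sketch of what such prompting rules could look like, the Python fragment below pairs a fixed rule preamble with one bounded task; the complete() placeholder and the rule list are assumptions, to be replaced by whichever sanctioned model and house rules apply.

                CODING_RULES = (
                    "You are generating code for a regulated product. Follow these rules:\n"
                    "1. Target the language and version in the house style guide.\n"
                    "2. Use no libraries beyond the approved list: {approved_libs}.\n"
                    "3. Every public function needs a docstring and type hints.\n"
                    "4. Flag any logic you are uncertain about with a review comment.\n"
                )

                def build_prompt(task: str, approved_libs: list[str]) -> str:
                    """Combine the fixed rule preamble with one specific, bounded task."""
                    return CODING_RULES.format(approved_libs=", ".join(approved_libs)) + "\nTask: " + task

                def complete(prompt: str) -> str:
                    """Placeholder: route the prompt to your organization's sanctioned model."""
                    raise NotImplementedError("wire up your approved Gen AI provider here")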

                But with the right processes in place, Gen AI clearly holds great promise, and these fundamental benefits are widely acknowledged among software developers. Our research indicates that its use is projected to grow significantly, with over a quarter of all work in software design, development, testing, and quality expected to be augmented by generative AI in two years. By 2026, we anticipate that more than four of every five software professionals will utilize generative AI tools.

                Make room for talent to shine

                Improved speed and accuracy are only part of the picture. They are very much enablers for other key advances, most notably allowing software engineers to spend the time required to develop the complex code they were hired to create.

                Software engineers possess a wealth of talents that extend beyond writing quality, complex code. However, these talents can be stifled if they spend the vast majority of their time on the more mundane – even repetitive – aspects of coding. By freeing them of these tasks, tools like generative AI can unlock engineers’ creativity, enabling them to think of new ways of addressing problems or to imagine entirely new aspects of a software solution.

                The challenge of balancing mundane tasks with creative thinking is not unique to software engineers. People in many professions often find that their most profound or innovative thoughts emerge when they are not immersed in the more day-to-day aspects of their work.

                However, software engineers still need to write code, and time must be allocated for it. By automating those everyday tasks, generative AI can free up more time for innovative thinking and creative problem-solving – like allowing software engineers to spend more time thinking through the user experience. Software professionals are aware of this, and we found they see multiple pathways for creativity to emerge: 61% of software leaders have already seen the benefits of generative AI in enabling innovative work, and 36% have seen benefits in collaborative work.

                Advantages like this can be experienced across many different job grades. One technical leader told us, “While senior professionals are leveraging generative AI combined with their domain expertise for product innovation, junior professionals see value in AI process and tool innovation, and in automation and productivity optimization.”

                Increase job satisfaction and retention

                Despite initial fears, firms are not seeing generative AI reduce the software engineering workforce. Instead of considering generative AI a standalone team member, the prevailing view is to use it as a tool to empower team members and enhance their effectiveness.

                When we examined how firms plan to utilize the productivity gains they reap from generative AI, we discovered that a mere 4% intend to reduce the workforce. The overwhelming majority are committed to creating more meaningful work opportunities for their software professionals, such as innovation and new feature development (50%), upskilling (47%), and focusing on complex, high-value tasks (46%).

                This is not really surprising. The reality is that most engineering companies cannot hire anywhere near the number of software engineers they need. So, far from reducing headcount, generative AI is more about allowing the existing software workforce to get closer to delivering what the company envisions.

                Our research found that 69% of senior software professionals believe generative AI will positively impact job satisfaction. When we asked software professionals how they see generative AI, 24% felt excited or happy to use it in their work, and an additional 35% said they felt assisted and augmented by it. These factors can also benefit staff retention: people who are happy in their work are less likely to consider moving on.

                In conclusion

                It is still very early days for generative AI in the software development life cycle. Still, we have already found that it is being leveraged to speed up development time, enhance products, free up software engineers to move from the mundane to more innovative work, and in doing all this, boost both productivity and job satisfaction. With uptake predicted to grow significantly over the coming few years, we expect exciting things for developers, their products, and their customers.

                Download our Capgemini Research Institute report Turbocharging software with Gen AI to learn more.


                Meet the author

                Jiani Zhang

                EVP and Chief Software Officer, Capgemini Engineering
                As the Capgemini Software Engineering leader, Jiani has a proven track record of supporting organizations of all sizes to drive business growth through software. With over 15 years of experience in the IT and software industry, including strategy and consulting, she has helped businesses transform to compete in today’s digital landscape.

                  Should we use generative AI for embedded and safety software development?

                  Vivien Leger
                  May 6, 2025
                  capgemini-engineering

                  The idea of deploying generative AI (Gen AI) in software for safety critical systems may sound like a non-starter. With AI coding implicated in declines in code quality, it’s hard to imagine it playing a role in the safety-critical or embedded software used in applications like automatic braking, energy distribution management, or heart rate monitoring.

                  Engineering teams are right to be cautious about Gen AI. But they should also keep an open mind. Software development is about much more than coding. Design, specification, and validation can collectively consume more time than actual coding, and here, Gen AI can significantly reduce overall development time and cost. It could even improve quality.

                  Incorporating Gen AI in safety-critical environments

                  Before we come onto these areas, let’s quickly address the elephant in the room: Gen AI coding. AI code generation for safety-critical software is not impossible, but it would need extensive training of the AI algorithms and rigorous testing processes, and it would bring a lot of complexity. Right now, Gen AI should never directly touch a safety-critical line of code. But we should certainly keep an eye on Gen AI code writing as it advances in other sectors.

                  However, other areas – from specification to validation – are ripe for Gen AI innovation. Our recent Capgemini Research Institute report, Turbocharging software with Gen AI, found that software professionals felt Gen AI could assist with 28% of software design, 26% of development, and 25% of testing in the next two years. In the report, one Senior Director of Software Product Engineering at a major global pharmaceutical company was quoted as saying: “use cases like bug fixing and documentation are fast emerging, with others like UX design, requirement writing, etc. just around the corner.”

                  Software design

                  Let’s consider how the software development journey may look, just a few years from now. Let’s say you are designing a control system for car steering, plane landing gear, or a medical device (pick a product in your industry).

                  Right at the start, you probably have a project brief. Your company or customer has given you a high-level description of the software’s purpose. Gen AI can analyze this, alongside regulatory standards, to propose functional and non-functional requirements. It will still need work to get it perfect, but it has saved you a lot of time.
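
                  A minimal sketch of this step, assuming the kind of hypothetical complete() wrapper an organization might maintain around its sanctioned model; the prompt wording is illustrative, not a prescribed format.

                  REQUIREMENTS_PROMPT = (
                      "Project brief:\n{brief}\n\n"
                      "Applicable standards: {standards}\n\n"
                      "Propose numbered functional and non-functional requirements. For each, "
                      "cite the part of the brief or standard that motivates it, and mark open "
                      "questions explicitly. A qualified engineer reviews every proposal."
                  )

                  def draft_requirements(brief: str, standards: list[str], complete) -> str:
                      """Return a first-pass requirements draft for human review, never for direct use."""
                      prompt = REQUIREMENTS_PROMPT.format(brief=brief, standards=", ".join(standards))
                      return complete(prompt)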

                  However, you want to go beyond technical requirements and ensure this works for the user. Thus, you ask Gen AI to develop a wide range of user stories, so you can design solutions that pre-empt problems. That includes the obvious ones you would have come up with yourself; Gen AI just writes them more quickly. But it also includes all the weird and wonderful ways that future customers will use and abuse your product – ways that never would have occurred to a sensible software engineer like you.

                  In most cases, this is about improving the user experience, but it could also prevent disasters. For example, many of Boeing’s recent troubles stem from its MCAS software, which led to two crashes. While the software was a technically well-designed safety feature, its implementation overlooked pilot training requirements and risks from sensor failures. This is the sort of real-world possibility that Gen AI can help identify, getting engineers who are laser-focused on a specific problem to see the bigger picture.

                  Armed with this insight, you start writing the code. While the AI doesn’t have any direct influence on the code, you may let it take a hands-off look at your code at each milestone, and make recommendations for improvements against the initial brief, which you can decide whether to act upon.

                  Test and validation

                  Once you have a software product you are happy with, Gen AI is back in the game for testing. This is perhaps one of its most valuable roles in safety-critical systems. In our CRI report, 54% of professionals cited improved testing speed as one of the top sources of Gen AI productivity improvements.

                  Gen AI can start the verification process by conducting a first code review, comparing code against industry standards (e.g., MISRA for automotive, DO-178 for aerospace) to check for errors, bugs, and security risks. You still need to review it, but a lot of the basic issues you would have spent time looking for have been sorted in the first pass, saving you time and giving you more headspace to ensure everything is perfect.
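
                  Such a first pass might be wired up as sketched below, again assuming a hypothetical complete() wrapper; the checklist items merely stand in for whichever standard’s rules apply and are not quoted from MISRA or DO-178.

                  REVIEW_CHECKLIST = [
                      "no recursion in safety-critical paths",
                      "all return codes checked",
                      "no dynamic memory allocation after initialization",
                  ]

                  def first_pass_review(source: str, complete) -> str:
                      """Ask the model for checklist findings; a human reviews every finding."""
                      prompt = (
                          "Review the following code against this checklist:\n- "
                          + "\n- ".join(REVIEW_CHECKLIST)
                          + "\nReport each suspected violation with a line reference and rationale.\n\n"
                          + source
                      )
                      return complete(prompt)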

                  Once you are satisfied with the product, you want to test it. Your Gen AI assistant can generate test cases – sets of inputs to determine whether a software application behaves as expected – faster and more accurately than manual authoring. This is already a reality in critical industries: as Fabio Veronese, Head of ICT Industrial Delivery at Enel Grids, noted in our report, his company uses generative AI for user acceptance tests.
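
                  For a flavor of the output, here is what reviewed, AI-proposed test cases might look like once accepted into a test suite; the clamp() function and its boundary values are invented purely for illustration.

                  import pytest

                  def clamp(value: float, low: float, high: float) -> float:
                      """Example unit under test: restrict value to the range [low, high]."""
                      return max(low, min(value, high))

                  # A Gen AI assistant typically proposes nominal, boundary, and out-of-range inputs.
                  @pytest.mark.parametrize("value,low,high,expected", [
                      (5.0, 0.0, 10.0, 5.0),    # nominal
                      (-1.0, 0.0, 10.0, 0.0),   # below range
                      (11.0, 0.0, 10.0, 10.0),  # above range
                      (0.0, 0.0, 10.0, 0.0),    # lower boundary
                      (10.0, 0.0, 10.0, 10.0),  # upper boundary
                  ])
                  def test_clamp(value, low, high, expected):
                      assert clamp(value, low, high) == expected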

                  And, when you are confident your software product is robust, Gen AI can help generate the ‘proofs’ to show it works and will function under all specified conditions. For example, in the rail industry, trains rely on automated systems to process signals, ensuring trains stop, go, or slow down at the right times. Gen AI can look at data readouts and create ‘proofs’ that show each step of the signal processing is done correctly and on time under various conditions – and generate the associated documents.

                  In fact, as you progress through these processes, Gen AI can expedite the creation and completion of required documentation, by populating predefined templates and compliance matrices with test logs. This ensures consistency and accuracy in reporting and saves engineering time.
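
                  As a sketch of that template population under assumed inputs, the fragment below maps requirement IDs to test evidence and writes a simple compliance matrix; the log format and CSV layout are illustrative, not an industry-mandated schema.

                  import csv

                  def build_compliance_matrix(test_logs: list[dict], out_path: str = "matrix.csv") -> None:
                      """Map each requirement ID to the tests that exercised it and their results."""
                      rows: dict[str, list[str]] = {}
                      for log in test_logs:
                          for req in log["requirements"]:
                              rows.setdefault(req, []).append(log["test_id"] + "=" + log["result"])
                      with open(out_path, "w", newline="") as f:
                          writer = csv.writer(f)
                          writer.writerow(["requirement", "evidence"])
                          for req, evidence in sorted(rows.items()):
                              writer.writerow([req, "; ".join(evidence)])

                  # Illustrative call with two fabricated test logs:
                  build_compliance_matrix([
                      {"test_id": "TC-001", "result": "pass", "requirements": ["REQ-012"]},
                      {"test_id": "TC-002", "result": "pass", "requirements": ["REQ-012", "REQ-017"]},
                  ])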

                  Automating processes

                  Gen AI can also help you automate many laborious processes that can be so mundane that human brains struggle to stay focused, thus creating the risk of error.

                  Take the example of the process used in the space industry for addressing software defects. When a defect is discovered, developers must create a report documenting the defect, develop a test to reproduce it, correct it in a sandbox, put the updated software through a verification process, reimplement the corrected code back into the main project, and finally test it within the product.

                  A five-minute code fix may take hours of meetings and dozens of emails. This is exactly the sort of task Gen AI is well suited to support. Any organization writing safety-critical software will have hundreds of such tedious documentation and procedural compliance processes. We believe that, in some cases, as much as 80% of the time spent on them could be saved by deploying Gen AI for routine work.
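
                  To illustrate, a first draft of the routine paperwork might be produced as sketched below, once more assuming the hypothetical complete() wrapper; the template fields are invented, not a real space-industry form.

                  DEFECT_TEMPLATE = (
                      "Defect ID: {defect_id}\n"
                      "Summary: {summary}\n"
                      "Draft the following sections: reproduction steps, suspected root cause, "
                      "proposed regression test, and the verification checklist required before "
                      "reintegration. Mark every inference needing engineer confirmation with [CONFIRM]."
                  )

                  def draft_defect_report(defect_id: str, summary: str, complete) -> str:
                      """Produce a first draft of the paperwork; engineers confirm every [CONFIRM] item."""
                      return complete(DEFECT_TEMPLATE.format(defect_id=defect_id, summary=summary))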

                  Don’t just take our word for it. Speaking to us for our report, Akram Sheriff, Senior Software Engineering Leader at Cisco Systems, notes: “One of the biggest drivers of generative AI adoption is innovation. Not just on the product side but also on the process side. While senior professionals leverage generative AI combined with their domain expertise for product innovation, junior professionals see value in AI process and tool innovation, and in automation and productivity optimization.”

                  Managing the risks to get the rewards

                  Despite all these opportunities, we must acknowledge that this is a new and fast-moving field. There are risks, including the correctness of outputs (Gen AI can hallucinate plausible but wrong answers), inherited risk from underlying models, and bias in training data. But there are also risks of not acting out of fear, and missing out on huge rewards while your competitors speed ahead.

                  Gen AI needs safeguards, but also a flexible architecture that allows companies to quickly adopt, test, and use new Gen AI technologies, and evolve their uses as needs demand.

                  In our report, we propose a risk model (see image 1). It states that any use of Gen AI requires (a) a proper assessment of the risks and (b) that – where mistakes could have serious consequences – you have the expertise to assess whether the outputs are correct.

                  Image 1: A risk assessment framework to kickstart generative AI implementation in software engineering

                  For now, safety-critical code creation will fall into ‘Not safe to use’, because the consequence of error is high and the expertise needed to assess the code would probably be more of a burden than starting from scratch. However, testing would fall into ‘Use with caution’, because it provides valuable insights about software behavior that experts can assess.
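
                  The two-axis idea behind that model (consequence of error versus available expertise to check outputs) can be sketched as below; the categories echo the report’s labels, but the decision logic is our illustrative reading, not the report’s exact framework.

                  def classify_use_case(consequence: str, expertise_available: bool) -> str:
                      """consequence is 'low', 'medium', or 'high'; returns a usage category."""
                      if consequence == "low":
                          return "Safe to use"
                      # For medium- and high-consequence uses, expert review of outputs is the gate.
                      return "Use with caution" if expertise_available else "Not safe to use"

                  print(classify_use_case("high", expertise_available=False))  # safety-critical code generation
                  print(classify_use_case("high", expertise_available=True))   # expert-reviewed testing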

                  Finally, a key part of managing risks is comprehensive user training to understand how Gen AI works and its strengths and weaknesses. In our research, 51% of senior executives said that leveraging Gen AI in software engineering will require significant investment to upskill the software workforce. Yet only 39% of organizations have a generative AI upskilling program for software engineering.

                  There is a real risk of becoming overly reliant on, or trusting of, Gen AI. We must ensure that humans retain their ability to think critically about the fundamental nature of software and safety. Software engineers must be well-informed and remain actively engaged in verification and decision-making processes, so they can spot problems and be ready to step in if Gen AI reaches its limits.

                  In conclusion

                  While Gen AI won’t be building safety-critical software on its own anytime soon, it has the potential to enhance development, documentation, and quality assurance right across the software development lifecycle. In doing so, it can not only save time and money and speed time to market, but even improve safety.

                  Companies like Capgemini can help shape achievable, phased roadmaps for Gen AI adoption. We guide organizations to integrate AI carefully, following sensible adoption and risk management frameworks and deploying appropriate training, ensuring both its potential and limitations are carefully navigated.

                  Download our Capgemini Research Institute report Turbocharging software with Gen AI to learn more.


                  Meet the author

                  Vivien Leger

                  Head of Embedded Software Engineering
                  With over 14 years of experience, Vivien has led teams in building a culture focused on technical excellence and customer satisfaction. He has successfully guided software organizations through their transformation journeys, aligning technology with business goals and designing strategic roadmaps that accelerate growth and profitability.

                    The Power of Zero from Capgemini’s ADMnext

                    Capgemini
                    10 April 2020
                    capgemini-engineering

                    Driving a lean, efficient, and optimized core is your pathway to enabling infinite possibilities.

                    Reap the full benefits of your transformation efforts and become a truly digital enterprise by bolstering your core IT foundation first

                    With large-scale market disruptions and the advent of newer-age digital technologies, constant evolution is essential. And, with digital services giving more power to consumers, applications have become the default source of business value – to the extent that application loyalty is now synonymous with brand loyalty. So, understandably, leading CIOs are increasingly becoming more “apps-focused” by default.

                    But this emphasis on applications and future digital transformation can blur your focus on your core – your here-and-now operations – and this can hinder your ability to lay a foundation for a digitally empowered enterprise.

                    The key to achieving this digitally empowered enterprise and your digital transformation visions lies in getting the basics right first. This means ensuring your legacy estate is made rock solid so that it acts as your digital transformation launchpad. And in order to do this, a clear and solid operational framework is crucial.

                    Introducing the Power of Zero: An actionable framework for achieving business excellence through hyper-efficient core IT

                    The Power of Zero is an actionable framework for solidifying your legacy IT estate as a launchpad for your digital transformation, so that you attain all the speed and agility needed for a truly digital enterprise.

                    In putting your current state of IT and applications in order, the Power of Zero enables you to achieve maximum impact from your core applications. This means a future state with zero defects, zero touch, zero applications debt, and zero business interruption – all leading to zero innovation latency. The Power of Zero is driven by speed and agility and delivers business value throughout your entire applications realm by helping you to:

                    1. Drive down to zero defects and tickets through preventive, predictive, and perfective maintenance
                    2. Foster zero touch through an AI-infused intelligent platform
                    3. Get down to zero applications debt through effective portfolio management
                    4. Enable zero business interruption through insights, competitiveness, and efficiency
                    5. Create a state of zero innovation latency through disruptive services

                    ADMnext and the Power of Zero: Business-focused ADM Services for accelerated growth

                    In applying the Power of Zero, Capgemini’s ADMnext moves applications development and maintenance (ADM) from an insurance-based function to an investment-focused business value driver. Essentially, ADMnext equips you with the ability to rapidly respond to change – or rather, to embody the change, the innovation, and the outcomes you want for your business.

                    In building a lean, efficient, and resilient core with zero human touch, ADMnext enables clients to drive operational agility and helps them restore services quickly in times of crisis.

                    At Capgemini, we fully believe in this simple, yet powerful vision, and we are committed to bringing everything ADMnext – and the Power of Zero – can offer your applications.

                    Download the whitepaper to learn more about what the Power of Zero and ADMnext can do for your business.