
Unleashing engineering potential with generative AI

Capgemini
Sarah Richter, Hugo Brue, Udo Lange
Sep 18, 2025

Over recent months, companies have intensified their adoption of Gen AI. This, along with Gen AI's rapid evolution, has led to new practices and roles for engineers.

Although generative AI (Gen AI) initially gained recognition in engineering through applications in software development, its scope has broadened to help tackle today’s major engineering challenges. According to Gartner, Gen AI will require 80% of the engineering workforce to upskill through 2027.   

In today’s market context, Gen AI for engineering enables companies to optimize processes, reducing time-to-market by speeding up the production of engineering deliverables and improving product quality and compliance by automating certain quality control tasks. These contributions are especially critical since products and ecosystems are increasingly complex and regulated, with more stakeholders and highly personalized requirements.

In this blog, we explore effective strategies for integrating Gen AI technologies, offer practical recommendations to maximize their impact, and share key insights on how to unlock the value they bring to engineering. 

How to unlock the potential value of Gen AI in engineering  

Companies are struggling to unlock the potential value of Gen AI in engineering. This is not caused by a lack of use case ideas, but rather the lack of an efficient end-to-end assessment supporting the implementation of suitable use cases into productive systems. In addition, companies face difficulties in upscaling their implemented use cases effectively across the entire engineering department. 

Along the engineering value stream, we have been supporting our clients to integrate Gen AI successfully and, more importantly, to maintain profitability sustainably. In this blog, we share our top three lessons to help you reach your own goals with Gen AI. 

[Infographic: Unleashing engineering potential with generative AI]

Evaluation process 

Choose the right use cases to implement by using measurable assessment gateways 

To get the most value from Gen AI over time, it’s important to choose the right use cases and develop a strategic order of pursuit. Many companies try to connect Gen AI’s impact to their KPIs, but they often find it hard because of the overwhelming variety of application options. To act effectively, companies should use goal-oriented evaluation criteria as gateways within the use case decision process. 

To avoid being overwhelmed by too many options, it's important to use clear and specific criteria that go beyond simple effort-benefit considerations.

In addition to applying specific evaluation criteria, companies must break down task and process silos in order to optimize complex and interdependent engineering processes. We therefore recommend mapping each potential use case to the value stream throughout the entire V-cycle. This helps you evaluate ideas more clearly and see where different parts of the core engineering processes can support each other and create extra value. 

To compare the company readiness with the individual engineering use cases, we have developed an exhaustive assessment method that considers eight dimensions: strategy, governance and compliance, processes, data, IT infrastructure and security, employees, cost and investment readiness, and ethical and ecological impact.  

However, the minimal criteria to be considered within the Gen AI use cases selection process covers four focal points:

  • Functional criteria: The use case delivers measurable business impact within engineering workflows. 
  • Technical criteria: The necessary data and foundational technical requirements are available to implement the use case. 
  • Regulatory criteria: The use case complies with legal and regulatory standards, such as the EU AI Act and internal company policies. 
  • Strategic criteria: The use case aligns with and enhances the engineering value stream. 

Additionally, it is highly effective to define a main KPI to ensure the comparability of use cases across the overall engineering Gen AI portfolio. This is an important part of establishing a strategic fit.
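
To illustrate how such gateways and a main KPI can be operationalized, here is a minimal Python sketch. The criteria scores, the threshold value, and the example use cases are illustrative assumptions, not Capgemini's assessment method; the sketch only demonstrates the gating-then-ranking idea.

# Minimal sketch of a gateway-based use case assessment.
# Criteria scores, threshold, and example data are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    functional: float   # measurable business impact, scored 0-1
    technical: float    # data and technical readiness, scored 0-1
    regulatory: float   # compliance with EU AI Act / internal policies, scored 0-1
    strategic: float    # fit with the engineering value stream, scored 0-1
    main_kpi: float     # e.g., estimated engineering hours saved per year

GATE_THRESHOLD = 0.6  # assumed value; each criterion acts as a hard gate

def passes_gateways(uc: UseCase) -> bool:
    """All four criteria must pass individually; no weighted averaging."""
    return min(uc.functional, uc.technical, uc.regulatory, uc.strategic) >= GATE_THRESHOLD

def rank_portfolio(use_cases: list[UseCase]) -> list[UseCase]:
    """Keep only use cases that pass every gateway, then rank by the main KPI."""
    qualified = [uc for uc in use_cases if passes_gateways(uc)]
    return sorted(qualified, key=lambda uc: uc.main_kpi, reverse=True)

if __name__ == "__main__":
    portfolio = [
        UseCase("Requirements QA assistant", 0.8, 0.7, 0.9, 0.8, main_kpi=4000),
        UseCase("Sensor-data anomaly chat", 0.7, 0.4, 0.8, 0.9, main_kpi=6000),  # fails technical gate
    ]
    for uc in rank_portfolio(portfolio):
        print(uc.name, uc.main_kpi)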

Implementation specifications 

Use the full range of relevant data by shifting the focus from engineering text to engineering data 

In the market, we see successfully implemented Gen AI engineering use cases in two key areas: the beginning and end phases of the V-cycle. Notable examples can be found in requirements engineering and compliance demonstration, which are both still highly document-centric and text-based. The most common applications here are conversational agents based on retrieval-augmented generation (RAG) technology. RAG solutions represent one of the most repeatable and cross-cutting applications across the entire value chain, which is why they have been at the core of Gen AI strategies for the last few years. 

Both application areas are ideal for starting your Gen AI implementation journey because the solutions are mature, and the results are significant. Our client engagements suggest that by using Gen AI capabilities on technical documentation (e.g., retrieval and summarization), it is possible to generate high efficiency gains and reduce the time engineers spend accessing knowledge and information in the right technical context by up to 50%.
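
For readers who want to see the mechanics, below is a minimal, library-agnostic sketch of the RAG pattern over technical documents. The toy bag-of-words embedding, the sample requirements, and the generate() stub are illustrative stand-ins for a production embedding model, document store, and LLM, not references to any specific product.

# Minimal retrieval-augmented generation (RAG) sketch over engineering documents.
# The toy embedding and generate() stub are assumptions standing in for real models.
import math
from collections import Counter

DOCUMENTS = [
    "REQ-101: The braking system shall achieve a full stop from 100 km/h within 40 m.",
    "REQ-205: The ECU firmware shall be updatable over the air within 10 minutes.",
    "Compliance note: ISO 26262 ASIL-D applies to the braking control software.",
]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' used only to keep the sketch self-contained."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; a real system would call a hosted or local model here."""
    return f"[LLM answer grounded in]:\n{prompt}"

def answer(question: str) -> str:
    """Retrieve relevant context, then ask the (stubbed) model to answer from it."""
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

print(answer("What stopping distance must the braking system meet?"))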

Even though the use of text-based LLMs works very well, the full potential of Gen AI in engineering has not yet been unleashed. Most engineering data is not available in pure text format. Therefore, achieving a higher level of value generation requires overcoming the limitations of a text-based knowledge base. Within engineering, this means including the vast range of information formats from various data sources across the product lifecycle (e.g., visualizations, diagrams, sensor data, GPS, or even sounds). For engineering applications, we want to highlight the extension of large language models (LLMs) with large multimodal model (LMM) capabilities. Especially for complex problem definitions, LMMs show high potential for significant improvements in Gen AI usage and operational efficiency across the product development process. We are rapidly discovering the potential transformation of everyday tasks with generative AI for data engineering.
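
As a rough illustration of the shift from text to multimodal inputs, the sketch below pairs an engineering diagram with a textual question in a single request. The message structure and the send_to_lmm() stub are hypothetical assumptions, not a specific vendor's API; a real deployment would forward the request to an actual multimodal model.

# Sketch of a multimodal (text + image) query to a large multimodal model (LMM).
# send_to_lmm() is a hypothetical stub; real systems would call a multimodal model API.
import base64
from pathlib import Path

def load_image_b64(path: str) -> str:
    """Read an engineering diagram (e.g., a P&ID) and encode it for transport."""
    return base64.b64encode(Path(path).read_bytes()).decode("ascii")

def build_request(question: str, image_path: str) -> dict:
    """Pair the textual question with the diagram so the model can reason over both."""
    return {
        "inputs": [
            {"type": "text", "text": question},
            {"type": "image_base64", "data": load_image_b64(image_path)},
        ]
    }

def send_to_lmm(request: dict) -> str:
    """Placeholder: a real system would send the request to a multimodal model here."""
    return f"[LMM would analyze {len(request['inputs'])} inputs here]"

if __name__ == "__main__":
    diagram = "diagrams/cooling_loop_pid.png"  # illustrative path
    if Path(diagram).exists():
        req = build_request("Which valves isolate pump P-204?", diagram)
        print(send_to_lmm(req))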

Applying Gen AI 

How to scale up to unlock the full value potential  

Implementation activities for generative AI in engineering are steadily gaining focus. Today, we see companies building full use case funnels, realized as many small Gen AI implementations that address specific engineering tasks and deliver small, often local value gains. 

Following the rule of “start small, think big,” we believe in first building conviction in the added value, and acceptance of it, by implementing such quick wins. Start with simple and cost-sensitive use cases, such as RAG, and progressively extend to more complex ones. However, we recommend always keeping the bigger picture of your scaling targets in mind. 

An overall AI strategy helps guide the starting process, connect existing Gen AI applications, and identify synergy potential from the beginning. 

The topic of scaling becomes crucial when using levers to strengthen and expand value generation. A successful upscaling of Gen AI implementations can be executed vertically by expanding the application area or horizontally by linking different Gen AI use cases. As connecting prior local solutions throughout the development process is highly difficult, we want to share the scalability factors we integrate into Gen AI implementation planning and execution. 

Future developments and fields of action

As Gen AI rapidly transforms the engineering sector, hybrid AI emerges as a key solution to meet its specific demands. Simultaneously, advances in multi-agent systems and the multimodal capabilities of language models open up new perspectives for process automation and optimization. 

The hybridization of AI capabilities (hybrid AI) to address the specificities of the engineering field 

LLMs are intrinsically statistical, which means investments in Gen AI solutions still carry the risk of failing or proving ineffective. One approach to mitigating these risks is to combine the capabilities of Gen AI with the more traditional methods of deterministic AI. This combination leverages the strengths of both approaches while addressing their respective limitations, enabling the development of more robust and tailored AI systems. In the field of engineering, where some activities inherently require reliability, predictability, and repeatability, this synergy proves particularly relevant for addressing critical challenges, such as system and process safety.   
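
A minimal sketch of this hybridization idea follows, assuming a placeholder generative proposal step and a simplified, illustrative deterministic constraint (not a real design standard): the generative suggestion is accepted only if it passes the deterministic check, otherwise the system falls back to the deterministic result.

# Hybrid AI sketch: a generative proposal combined with a deterministic check.
# propose_thickness_mm() stands in for a Gen AI suggestion; the constraint is illustrative.

def propose_thickness_mm(load_kn: float) -> float:
    """Placeholder for a Gen AI proposal (statistical, not guaranteed to be safe)."""
    return 0.4 * load_kn  # illustrative heuristic

def deterministic_check(thickness_mm: float, load_kn: float, safety_factor: float = 1.5) -> bool:
    """Deterministic rule: the proposal must satisfy a known engineering constraint."""
    required_mm = 0.3 * load_kn * safety_factor  # simplified closed-form requirement
    return thickness_mm >= required_mm

def hybrid_design(load_kn: float) -> float:
    """Accept the generative proposal only if the deterministic check passes."""
    proposal = propose_thickness_mm(load_kn)
    if deterministic_check(proposal, load_kn):
        return proposal
    # Fall back to the deterministic requirement when the generative proposal fails.
    return 0.3 * load_kn * 1.5

print(hybrid_design(20.0))  # the proposal (8.0 mm) fails the check, so 9.0 mm is returned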

Recent advances in LLMs and LMMs have marked a significant milestone in the improvement of AI agents. These agents are now capable of planning, learning, and making decisions based on a deep understanding of their environment and user needs. As new architectures and use cases continue to emerge, the transition toward multi-agent systems that collaborate in increasingly complex contexts is progressing further. 

We will witness the increasing integration of specialized agents to handle specific tasks, such as requirement extraction, requirement quality control, or requirement traceability reconstruction. Each agent will be able to perform a particular task, and these agents can be orchestrated by a “super-agent” through complex workflows. This agent-based approach will enable greater automation of processes, making them more streamlined and efficient while reducing the need for human oversight. 
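
To make the orchestration idea concrete, here is a deliberately simplified sketch in which a “super-agent” routes a requirements document through specialized agents. Each agent is a plain Python function standing in for an LLM-backed component, and the fixed workflow is an illustrative assumption rather than a description of any specific product.

# Simplified multi-agent orchestration sketch for requirements work.
# Each "agent" is a plain function standing in for an LLM-backed component;
# the super-agent sequences them and aggregates their outputs.

def extraction_agent(document: str) -> list[str]:
    """Extract candidate requirements (toy rule: lines starting with 'REQ')."""
    return [line.strip() for line in document.splitlines() if line.strip().startswith("REQ")]

def quality_agent(requirements: list[str]) -> list[str]:
    """Flag requirements that look ambiguous (toy rule: missing the word 'shall')."""
    return [r for r in requirements if "shall" not in r.lower()]

def traceability_agent(requirements: list[str]) -> dict[str, str]:
    """Reconstruct a trivial trace link from each requirement to a test placeholder."""
    return {r.split(":")[0]: f"TEST-{i + 1}" for i, r in enumerate(requirements)}

def super_agent(document: str) -> dict:
    """Orchestrate the specialized agents through a fixed workflow."""
    reqs = extraction_agent(document)
    return {
        "requirements": reqs,
        "quality_findings": quality_agent(reqs),
        "trace_links": traceability_agent(reqs),
    }

doc = """REQ-1: The pump shall start within 2 seconds.
REQ-2: The display is nice.
Note: informal remark."""
print(super_agent(doc))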

However, this reduction in supervision could increase the risk of accidents. Therefore, special attention must be given to assessing the implications of AI agents and multi-agent systems in terms of safety, reliability, and societal impact. Moreover, there should be a focus on technical solutions and appropriate governance frameworks to ensure positive and lasting transformations in engineering.

LLMs are no longer limited to analyzing text. It is now possible to process other types of content, such as images, sounds, and diagrams. Much of the critical information in engineering reports is presented in visual form, and multimodal capabilities will allow this data to be retrieved and exploited more effectively. This will enhance the performance of conversational agents and improve the relevance of their analyses.

Software vendors are actively working to integrate Gen AI modules directly into their tools, especially for generative design. The goal is for these features to become an integral part of the engineer’s daily work, rather than external add-ons. For example, we can expect Gen AI modules integrated into product lifecycle management (PLM) solutions, further facilitating digital continuity.  

With generative AI for software engineering, new capabilities are helping to revolutionize the design process by improving efficiency: some companies have achieved up to a 90% reduction in product design times. This increased efficiency, together with the reduction in material usage observed across various projects, leads to significant cost savings.

Innovation through Gen AI 

Generative AI in engineering is bringing human skills and intelligent automation together to solve complex challenges and shorten the development cycles drastically. The organizations that want to lead in the field of engineering must act decisively, scaling Gen AI strategically to unlock lasting innovation, resilience, and competitive edge.

Meet our experts

Udo Lange

Global Head of Digital Engineering and R&D Transformation at Capgemini Invent
As Head of Digital Engineering & R&D Transformation at Capgemini Invent, Udo Lange brings over 25 years in consulting, innovation, and PLM. He helps global industrial firms embrace digitalization to deliver high-performance, sustainable products while optimizing lifecycle costs. With a passionate team, he blends engineering, IT, and transformation expertise to solve complex challenges across sectors like automotive and machinery, shaping the future of engineering and product development.
Jérôme Richard

Vice President, Head of Gen AI for Engineering Offer, Capgemini Invent
Jérôme Richard combines expertise in operational excellence with knowledge of digital levers to accelerate change and drive organizational transformation for clients. By blending strategy, technology, data science and engineering with an inventive mindset, he helps clients innovate and navigate the future. As Vice President of Intelligent Industry, he guides teams helping clients envision, design, build, and operate smart products and plants.
Hugo Cascarigny

Vice President & Global Head of Data & AI for Intelligent Industry, Capgemini Invent
Hugo Cascarigny has been passionate about AI, data, and analytics since he joined Invent 12 years ago. As a long-time member of the industries and operations teams, he is dedicated to transforming AI into practical efficiency levers within Engineering, Supply Chain, and Manufacturing. In his role as Global Data & AI Leader, he spearheads the development of AI and generative AI offerings across Invent.

FAQs

What are the benefits of using generative AI for software engineering?
The benefits include accelerated development, automation of repetitive coding tasks, higher code quality, optimization suggestions, quicker prototyping, mitigation of human error, and productivity gains. Human engineers gain more time for value-adding tasks, such as complex problem solving. 

What role does generative AI play in data engineering?
Generative AI plays many roles in data engineering. It automates the creation of data pipelines, generates realistic test data, detects inconsistencies, enhances data quality, documents workflows, and streamlines design. The net result is faster, more scalable, and more consistent data engineering processes. 

How can companies scale generative AI in engineering?
Companies can scale generative AI in engineering by adopting robust governance frameworks. This is the foundation on which to integrate AI into existing workflows. Next, they can establish model security, train teams, leverage cloud infrastructure, and continuously monitor performance to maintain reliability and alignment with business goals and operations. 

What are some real-world applications of generative AI in software engineering?
Real-world applications include code generation, test automation, documentation writing, anomaly detection, system design suggestions, and faster knowledge retrieval. These make it possible for teams to innovate faster while reducing time-to-market and operational costs. 

What challenges do companies face when implementing generative AI in engineering?
Challenges include model bias, security risks, intellectual property concerns, explainability issues, and skill gaps. Ensuring that AI-generated code meets compliance requirements is particularly critical. These challenges can be overcome with sound implementation strategies built on robust frameworks.