
The CTO Playbook for Innovation Strategy in Engineering – 2: The importance of thematic focus

Capgemini
Jan 10, 2025

As CTOs, the challenge of steering our companies through a rapidly evolving technological landscape requires a strategic approach grounded in clear thematic priorities. The thematic focus is crucial in ensuring that we remain agile, responsive, and forward-thinking amidst the complexities of today’s innovation-driven market. 

Thematic Focus: A Strategic Imperative 

In our previous discussion, we identified the three main factors influencing CTO decision-making: the pace of change, the convergence of digital and physical innovations, and unpredictable external forces. These factors underscore the importance of having a thematic focus to guide our engineering and technological efforts, and the way we prioritise. 

At Capgemini Engineering, we have developed five key themes that help anchor our strategic direction. These themes emerged organically, reflecting the bottom-up recognition of crucial areas that influence our customers every day. They are interconnected, highlighting the multifaceted nature of modern engineering challenges. Below I have outlined the rationale behind each one:

Five Themes Driving Engineering Innovation 

1. Organic Engineering: 

Organic engineering is inspired by the resilience and efficiency observed in natural systems. This theme focuses on designing products and systems that are both autonomous and efficient, much like nature itself. The aim is to create engineering solutions that can adapt and evolve, ensuring long-term sustainability and resilience in the face of disruptions such as those seen during the COVID-19 pandemic. This approach is evident in innovations like additive manufacturing, where nature-inspired lattice structures enhance product strength and flexibility. 

2. Resource Revolution: 

The resource revolution theme addresses the critical need to engineer materials at a molecular level to meet specific requirements that natural materials cannot fulfil. This theme is about creating materials with tailored properties, such as high thermal conductivity and low expansion rates, which are essential for advancing various technological fields. This revolutionary approach marks a significant departure from traditional engineering practices and is key to driving future innovations. 

3. Hybrid AI: 

Hybrid AI combines classical machine learning with advanced artificial intelligence, grounding AI applications in solid, defensible knowledge bases. This theme focuses on leveraging AI to enhance human capabilities rather than replace them, ensuring that both human intuition and machine efficiency are optimally utilised. Augmented engineering tools, where AI assists engineers in making more informed decisions, and autonomous systems in defence, where human-machine teaming reduces the need to put people in harm’s way to achieve military outcomes, are both examples in practice.

4. Digital Fabric: 

The digital fabric theme encompasses the foundational technologies that enable advanced computing and connectivity, such as 5G, edge computing, and advanced data architectures. These technologies are crucial for supporting the other themes by providing the necessary infrastructure for real-time data processing, communication, and system integration. This interconnected digital ecosystem allows for seamless integration and scalability of engineering solutions.

5. Velocity of Impact:

This theme addresses the societal and ethical implications of technological advancements. It emphasises the need for responsible innovation that considers the broader impact on society and the environment. This theme is particularly apparent in the context of autonomous vehicles and AI, where the social acceptance and safety of these technologies are paramount. Engineers must ensure that their innovations contribute positively to society, fostering trust and acceptance. 

The power of themes for the CTO 

Having a solid thematic focus provides a strategic framework that helps CTOs balance innovation, practicality, and societal responsibility to overcome the dilemma of what to progress and what to hold off on. By aligning engineering efforts with themes that are uniquely relevant to their organisation’s strategy, CTOs can ensure that they anchor their decisions around what really matters in the areas that will enable them to achieve their corporate goals – be they commercial, environmental, or societal. Engineering, with its focus on practical application and real-world problem-solving, remains the cornerstone of this journey, guiding us through the ever-changing landscape of technology and innovation.

Our related podcast episodes

Ever since the Industrial Revolution, there have been moments of seismic change; innovations that have triggered momentous and hugely impactful transformations in manufacturing. Join Capgemini Engineering’s Ramon Antelo and Leonardo’s Antonio Girardi to determine if we are on the cusp of another step change and, if so, how manufacturers can fully realise the value from advanced technologies and make the best of current and future research and innovation.

Dr Dorothea Pohlmann and Siemens’ Pina Schlombs discuss how technology can help to ensure sustainability is the most important element in contemporary product design.

Authors

Keith Williams

Executive Vice President, Chief Technology Officer, Capgemini Engineering
Keith Williams has 34 years’ experience in the engineering & technology industry. As Chief Technology Officer, Keith drives Research & Innovation, Strategic Investment and Technical Authority across all industrial and technical domains. He played a pivotal role in the development of the innovative Capgemini WindSight IQ™ solution that brought real-time wind visualization to the Louis Vuitton 37th America’s Cup.
David Jackson

CTO Product and Systems Engineering, Capgemini Engineering
Ramon Antelo

CTO Manufacturing and Industrial Operations, Capgemini Engineering


    The CTO Playbook for Innovation Strategy in Engineering – 1: Introduction

    Capgemini
    Jan 10, 2025

    Science and research may shape the future, but engineering transforms our world today.

    Engineering drives global growth by advancing technology, boosting productivity, and improving efficiency across industries. Strong infrastructure reduces costs, enhances connectivity, and opens up markets, while engineering projects promote international collaboration. Engineering also addresses emerging threats, supporting sustainability, defence, and global biosecurity efforts. The IMF’s 2024 World Economic Outlook stresses that engineering innovations are key to sustaining global economic momentum. Chief Technology Officers (CTOs) must prioritise the development of effective engineering innovation strategies to foster growth, but this task is increasingly complex and critical. The challenge is shaped by three key factors. 

    Three influencing factors

    The first factor is the pace of change. The speed at which innovation occurs makes effective decision-making more difficult. Digital transformation, driven by advances in artificial intelligence, machine learning, and the Internet of Things (IoT), is accelerating rapidly. This evolution also extends to the physical world through innovations in materials science, nanotechnology, and biotechnology. CTOs must stay ahead of a constantly shifting technological landscape in both realms. The volume of information they receive today is overwhelming compared to previous generations, and they must discern what is substantive versus speculative, adding complexity to their decision-making. 

    The second factor is the convergence of digital and physical innovation. This convergence reshapes industries, creating a multiplier effect where the combined impact of these innovations is greater than the sum of their parts. For instance, semiconductor advancements are opening up new possibilities in biology and medicine. CTOs face the dual challenge of staying ahead of rapid change and managing a vast array of potential advancements. 

    The third factor is unpredictable external forces. CTOs must navigate an environment marked by rapid, unpredictable changes driven by societal demands, environmental concerns, and global economic shifts. With the convergence of digital and physical innovation, new opportunities and threats can arise quickly, requiring technology leaders to remain agile and responsive. 

    The power of engineering for CTOs

    The impact of these three factors makes it difficult for CTOs to plan long-term strategies. But they are held to account on their ability to do so, regardless.

    This is why engineering is so pertinent: it is an essential discipline for navigating the shifting sands of technological change effectively. Its grounding in real-world application, and its requirement to consider immediate consequences for people, processes and organisations, make it highly relevant to the pace of change. While fundamental science provides the theoretical foundation for technological advancements, it is engineering that converts these into practical applications, because it is inherently focused on solving real-world problems and achieving desired changes in the physical world. It is this relentless focus on effective application that makes engineering indispensable for CTOs dealing with the convergence of digital and physical environments, and the use of technologies that work effectively in both.

    The CTO’s Dilemma

    To accommodate the three influencing factors above, CTOs must strike a difficult balance. They need an engineering innovation strategy that fosters a culture of innovation within their organisations, where engineering teams are empowered to experiment, iterate, and develop cutting-edge solutions with the freedom to explore. Yet they must simultaneously ensure that any engineering efforts driven by this strategy are aligned with the company’s overall business objectives and societal responsibilities, whilst also taking account of unpredictable external factors affecting the wider environment in which the company operates. Constantly navigating between these shifting sands is the essence of the CTO’s dilemma, and they are understandably wary of making these tricky choices. Effective decisions are seldom reported, but bad ones can become the stuff of corporate folklore. Nokia’s decision to pass on the Google Android operating system and Kodak’s dogmatic adherence to physical film over digital imaging are strategic failures, driven by CTOs, that came to define their corporate stories. It’s a challenging time for this cadre of executive leaders. How can they tackle the task in front of them?

    In the coming blogs we will cover the tools we use to set our innovation strategy for engineering.

    Authors

    Keith Williams

    Executive Vice President, Chief Technology Officer, Capgemini Engineering
    Keith Williams has 34 years’ experience in the engineering & technology industry. As Chief Technology Officer, Keith drives Research & Innovation, Strategic Investment and Technical Authority across all industrial and technical domains. He played a pivotal role in the development of the innovative Capgemini WindSight IQ™ solution that brought real-time wind visualization to the Louis Vuitton 37th America’s Cup.
    David Jackson

    CTO Product and Systems Engineering, Capgemini Engineering
    Ramon Antelo

    CTO Manufacturing and Industrial Operations, Capgemini Engineering


      The answer is blowing in the wind
      How we use LiDAR at the America’s Cup to make the invisible, visible

      Dr Mark Roberts
      Jan 8, 2024

      The America’s Cup is the Formula 1 of the sailing world, constantly pushing the technological boundaries, and our work using LiDAR to see the wind in real time just added a whole new dimension.

      The wind is important for sailing, and never more so than in the America’s Cup, a competition that attracts top sailors, deep-pocketed hosts and sponsors, and an estimated 1 billion viewers. Understanding the capricious nature of the wind is key to understanding the race, both for sailors and their audience.

      For the first time in its history, broadcasters of the 2024 America’s Cup in Barcelona could show viewers a graphical overlay of the wind superimposed over real time video images of the racecourse. Viewers could ‘see’ the wind and how it affected the race. It allowed the broadcasters to compare the paths taken against optimal routes, and predict what boats on the water should do next.

      This was all thanks to WindSight IQ™, a wind sensing and visualization system developed by Capgemini with AC Media. So, what did it take to achieve all this?

      Taking the measure of the wind

      It all starts with LiDAR, or Light Detection and Ranging, the lesser-known cousin of radar. Whereas radar bounces radio waves off objects to detect them, LiDAR bounces light, normally in the form of focused pulsed laser beams, off aerosols in the air. Applications of LiDAR and radar are everywhere around us – from obvious uses in the automotive and aerospace world, to less obvious ones in mapping and 3D scanning.

      A Doppler LiDAR adds an extra dimension to the signal that bounces back, using the Doppler effect (how squashed or elongated the returning waves are) to measure how fast that object is moving towards or away from us. It is this effect that powers WindSight IQ™. Each Doppler LiDAR shoots laser beams out into the air, around 10,000 pulses per second, some of which bounce off aerosols and impurities in the air and return an indication of where they are and how fast that air is moving towards or away from us.

      By combining multiple LiDARs’ perceptions of this towards-vs-away movement at a particular point, we can calculate both the wind speed and direction at that point. A huge amount of mathematical modeling is then required to turn those raw measurements into a usable wind-field, but it all starts with these Doppler measurements of the speed of the air.
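
      To make that combination step concrete, here is a minimal sketch, in Python, of how a 2D wind vector can be recovered by least squares from several LiDARs’ radial (towards-vs-away) speeds at a shared point. This is our illustration of the geometry, not the WindSight IQ™ code; the site positions and wind values are invented for the example.

```python
# Sketch: recover a 2D wind vector from several Doppler LiDARs' radial speeds.
# Each LiDAR i observes r_i = u_i . w, where u_i is the unit line-of-sight
# vector from LiDAR i towards the measurement point and w is the wind vector.
import numpy as np

def wind_from_radial_speeds(lidar_positions, point, radial_speeds):
    """Least-squares estimate of the wind vector at `point`.

    lidar_positions: (n, 2) LiDAR sites in metres.
    point:           (2,) measurement location in metres.
    radial_speeds:   (n,) towards/away air speeds in m/s, positive = away.
    """
    lines_of_sight = point - lidar_positions
    u = lines_of_sight / np.linalg.norm(lines_of_sight, axis=1, keepdims=True)
    # Solve u @ w = radial_speeds for w; with 3+ LiDARs this also averages noise.
    w, *_ = np.linalg.lstsq(u, radial_speeds, rcond=None)
    return w

# Three sites around a point, true wind 8 m/s blowing along +x (invented data).
sites = np.array([[0.0, 5000.0], [0.0, -5000.0], [2500.0, 0.0]])
point = np.array([3000.0, 0.0])
true_wind = np.array([8.0, 0.0])
u = (point - sites) / np.linalg.norm(point - sites, axis=1, keepdims=True)
print(wind_from_radial_speeds(sites, point, u @ true_wind))  # ~[8. 0.]
```

      With a single LiDAR the system is underdetermined, which is why the same bit of air must be seen from at least two well-separated angles – a fact that drives the site-selection problem described next.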

      Choosing locations to measure the wind

      To correctly calculate the wind speed and direction, we must “see” the same bit of wind from multiple different angles. The America’s Cup racecourse also moves daily (and even within races), so we need to ensure we can cover all possible scenarios.

      In an ideal world, we would just place LiDAR units uniformly around the racecourse, but this is where the real world gets in the way of our plan – we can’t just put the LiDARs wherever we would like, because there are obstructions, reflections, legal issues and – in some cases – a distinct lack of land where we would like to place a unit.

      Early on in the project, we recognized that the site selection of our LiDAR units would be key to the success of the whole project. This is a classic constrained optimization problem – we have many different parameters, some of which we can control and some of which we can’t, and we need to find the optimal configuration that maximizes the performance and robustness of the system while minimizing cost and complexity.
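
      As a toy version of that optimization (our assumed objective, not the team’s actual tool), the sketch below scores each candidate subset of sites by the worst crossing angle it offers anywhere on the course, since near-parallel lines of sight leave the wind vector poorly constrained, and rejects layouts where any point is visible to fewer than two units. The candidate coordinates, range limit, and grid are all invented for the example.

```python
# Illustrative sketch of LiDAR site selection as constrained optimization.
import itertools
import numpy as np

def geometry_score(sites, grid, max_range=6000.0):
    """Worst-case viewing geometry over the course; higher is better."""
    worst = np.inf
    for p in grid:
        vecs = p - sites
        vis = vecs[np.linalg.norm(vecs, axis=1) <= max_range]
        if len(vis) < 2:
            return -np.inf                 # point not solvable at all
        ang = np.arctan2(vis[:, 1], vis[:, 0])
        # Best available crossing angle at p, folded into [0, pi/2].
        cross = max(
            min(abs(a - b) % np.pi, np.pi - (abs(a - b) % np.pi))
            for a, b in itertools.combinations(ang, 2)
        )
        worst = min(worst, cross)
    return worst

def choose_sites(candidates, grid, k=3):
    """Brute-force the best k-site subset from the allowed candidates."""
    return max(
        itertools.combinations(candidates, k),
        key=lambda subset: geometry_score(np.array(subset), grid),
    )

# Candidate rooftops (metres) and a coarse grid over a 6 x 5 km race area.
candidates = [(0, 0), (6000, 0), (0, 5000), (6000, 5000), (3000, -500)]
grid = np.array([(x, y) for x in range(500, 6000, 1500)
                        for y in range(500, 5000, 1500)])
print(choose_sites(candidates, grid))
```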

      We even created a dedicated analysis tool specifically to assist our team with this site selection problem. See the video below.

      Eventually, we settled on a site to the north of the race area, near Parc del Fòrum, one in the south at the Baleria Terminal, and later one in the middle, near San Sebastian Beach. Installing the LiDARs, each of which is 100kg of carefully calibrated scientific equipment, required a diverse mix of engineering and logistical support from our local teams, as well as the cooperation of multiple building owners – who allowed access to their roofs as sites for our LiDARs.

      Finally came the most exciting part of any geeky innovation project – deciding on names for our tools. This is often a bit of fun in any tech project, but also serves an important practical purpose – providing instantly recognizable names for the hardware that won’t be confused in a busy operational setting. We settled on names derived from Greco-Roman gods – the northern LiDAR was named Borea, after the god of the North Wind, the southern LiDAR was dubbed Notus, after the god of the South Wind, and in the middle is Zephyr, after the god of the West Wind. Each of the LiDARs has its own personality: our operators were very fond of Zephyr due to its very good range and consistent performance, and I’m sad to say that Notus is often the “black sheep” of our LiDAR family.

      A gale-force stream of wind data

      When you’re scanning a 6x5km area at high speed and high resolution, you produce a lot of data. Each LiDAR generates approximately 10 megabytes per second, and must feed that data back to our operations center in as close to real time as possible. However, the LiDARs are several kilometers from the operations center – across beaches, water, and streets – so running a cable between them is not an option.

      Luckily, as a major world center of telecoms, Barcelona is blessed with an excellent and reliable 5G cellular network, so each LiDAR is paired with a ruggedized mobile router to push the data over a custom UDP protocol and VPN back to base. However, mobile networks are notorious for congestion around major events, and with the America’s Cup estimated to bring an extra 2.5 million visitors to Barcelona over the race period, we needed to ensure that we had a viable backup to fail-over to, if necessary. This was achieved using Starlink units, giving us the option of a satellite uplink if required.
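
      Purely to illustrate that failover pattern, here is a toy sender in Python. The addresses, the PING health check, and the probe interval are all assumptions for the sketch; they bear no relation to the production protocol.

```python
# Toy sketch: stream UDP datagrams to the operations centre, failing over
# from the primary (5G) route to the backup (satellite) route when the
# active endpoint stops answering a simple heartbeat. All details assumed.
import socket
import time

LINKS = [("198.51.100.10", 5000),   # primary: VPN over the 5G router (assumed)
         ("198.51.100.20", 5000)]   # backup: VPN over the satellite uplink

class FailoverSender:
    def __init__(self, probe_every=5.0, timeout=1.0):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.settimeout(timeout)
        self.active = 0            # index of the link currently in use
        self.probe_every = probe_every
        self.next_probe = 0.0

    def _healthy(self, link):
        try:
            self.sock.sendto(b"PING", link)
            self.sock.recvfrom(16)             # expect a heartbeat echo
            return True
        except socket.timeout:
            return False

    def send(self, chunk: bytes):
        now = time.monotonic()
        if now >= self.next_probe:             # re-check the link periodically
            self.next_probe = now + self.probe_every
            if not self._healthy(LINKS[self.active]):
                self.active = 1 - self.active  # fail over to the other route
        self.sock.sendto(chunk, LINKS[self.active])
```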

      How to keep a LiDAR happy

      Many things affect the quality of the data we get from our LiDARs. As stated earlier, the LiDAR’s laser beams bounce off impurities in the air – which could be aerosols, dust, or even pollution. We need there to be “stuff” in the air in order to see the air.

      Anyone who’s ever used a laser pointer will know that the beam itself is not visible – it’s only when that beam hits something that we can see it. This is why laser light shows always use smoke machines to make the lasers visible, and why movie thieves always blow smoke into a laser security grid to see where the beams are.

      Essentially, we need the air to be a bit dirty to see it, but not too dirty. Rain is our worst enemy – during rainfall, the LiDAR beams are stopped dead and we can’t see the wind at all. However, even when the rain stops we can still have a problem. Immediately after rainfall, we sometimes perceive that the air feels fresh and clean, and that’s exactly true – all the particulate matter has been washed from the air and our laser beams just keep going into the atmosphere and don’t bounce back enough to detect. We must trade off other parameters in this situation, potentially sacrificing range and/or accuracy to get a strong enough signal to detect the wind.

      In a fast-changing and unpredictable environment like a sporting competition, where real-time insight matters, the tools need to respond to changes in input data in real time. For many intensive months, our team worked with these LiDAR signals, building algorithms on top of them. That has given them a near sixth sense: they interpret this raw data effortlessly and rapidly tweak and reconfigure the settings to maximize range and accuracy in response to changing weather patterns.

      When disaster strikes


      A few weeks into the America’s Cup events, disaster struck. Borea got sick. A never-before-seen hardware fault developed and it would not start up. We were down to two LiDARs with races just hours away from starting. Luckily, the WindSight IQ™ algorithm took this situation in its stride. The algorithm was always designed to use as much or as little information as was available in its calculations. When new data sources became available, the algorithm swallowed them up and used the new data to increase the accuracy of, and confidence in, the windfield. Likewise, when there was less data than expected, the algorithm created the best estimate possible with the data it did have.
      For example, when we got access to live wind telemetry from the boats and marker buoys, WindSight IQ™ ingested that too and used it to increase its confidence in those areas of the windfield. It always builds a model of the wind that uses any information at its disposal, without relying completely on any one source. Nobody watching the TV coverage knew that WindSight IQ™ was running with 33% less data than normal during the two days before Borea could be replaced. The inherent scalability of this algorithm is what now allows us to consider creating windfields over much larger areas for future applications.
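
      A minimal sketch of that graceful degradation, assuming a simple inverse-variance weighting (our illustration of the principle; the real windfield model is far richer):

```python
# Sketch: fuse however many wind estimates are currently online, weighting
# each by its confidence, so losing a source degrades accuracy gracefully.
import numpy as np

def fuse(estimates):
    """estimates: list of (wind_vector, variance) pairs from whichever
    sources are online (LiDARs, boat telemetry, marker buoys, ...)."""
    if not estimates:
        raise ValueError("no data sources available")
    weights = np.array([1.0 / var for _, var in estimates])  # inverse variance
    vectors = np.array([vec for vec, _ in estimates])
    fused = (weights[:, None] * vectors).sum(axis=0) / weights.sum()
    fused_variance = 1.0 / weights.sum()  # every extra source adds confidence
    return fused, fused_variance

# All three sources online (invented numbers):
print(fuse([(np.array([7.9, 0.2]), 0.50),    # LiDAR A
            (np.array([8.2, -0.1]), 1.00),   # LiDAR B
            (np.array([8.0, 0.0]), 0.25)]))  # boat telemetry
# One LiDAR down: same call, just a shorter list - no special casing needed.
```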

      Setting the sails and a new precedent

      The introduction of WindSight IQ™ is not just a first in the sailing world, it’s a first in the world, full stop. Our ability to combine multiple Doppler LiDARs into a single, coherent, accurate picture of the wind at high resolution and in real-time changes how we think about measuring the wind.

      Previously, we were limited to sampling a handful of point measurements to give us a vague understanding of the wind – like fumbling around in a darkened room trying to figure out the layout. Now, we have flicked on the light switch in that room, and need no longer guess – we can just see it directly. That has already shown its value on the seas, but detailed wind sensing and visualization could soon have huge benefits in many other areas, including airport safety and operations optimization, optimizing windfarms, precision farming, high rise construction, disaster response, and the design of vehicles, boats and aircraft. Having proven its worth in the challenging environment of a live global broadcast of a big sporting competition, we are excited to work with partners in all these industries to explore what else it can do next.

      The 37th America's Cup

      The America’s Cup is one of the most iconic events in the sporting calendar. Global and technology-driven by nature, the prestigious sailing tournament embodies many of the Group’s values. As a Global Partner of the 37th America’s Cup we brought a new dimension to the competition with WindSight IQ™.

      Author

      Dr Mark Roberts

      CTO Applied Sciences, Capgemini Engineering and Deputy Director, Capgemini AI Futures Lab
      Mark Roberts is a visionary thought leader in emerging technologies and has worked with some of the world’s most forward-thinking R&D companies to help them embrace the opportunities of new technologies. With a PhD in AI followed by nearly two decades on the frontline of technical innovation, Mark has a unique perspective on unlocking business value from AI in real-world usage. He also has strong expertise in the transformative power of AI in engineering, science and R&D.

        Realizing the potential of GovTech — if not now, when?

        Marc Reinhardt and Manuel Kilian
        Jan 8, 2025

        It is time for decisive action. GovTech has enormous potential to create a more responsive, inclusive, transparent, and efficiently performing public sector: one that better meets societal needs. And now, perhaps more than ever, the design and application of tech products and services is a strategic topic for governments globally.

        Why now? Because as well as facing profound changes in the political and geostrategic landscape, governments today have an extraordinary opportunity in GovTech to reimagine the public sector as the operating system of society, where technology can unlock unprecedented value for all. Thus, GovTech is not just about digitizing processes to make public service delivery more efficient and to improve the citizen experience — it is a central catalyst in redefining government for the future.

        This is one of the strands in a new report from the World Economic Forum and the Global Government Technology Centre (GGTC), in collaboration with Capgemini. The Global Public Impact of GovTech: A $9.8 Trillion Opportunity quantifies the public impact of GovTech, rather than focusing only on the size of the GovTech market, which previous reports have done. This value-impact thinking positions GovTech as a topic for investigation not just by government CTOs and IT departments, but by the broader leadership community, from agency and departmental heads to secretaries of state and government ministers. As we see it, we’re not there yet in this respect — but we should be!

        The changing nature of GovTech

        You might argue that governments have been using technology to improve how they operate for decades. And you’d be right. So, what’s different about the current state of GovTech?

        To answer this, let’s start with the technology itself. There’s a plethora of new and emerging digital tools that can now be harnessed to modernize public sector operations. These include the real-life examples of AI-powered platforms being used to detect financial fraud and to speed up tax processing, and data-driven early-warning systems improving disaster response.

        Many of the newly available technologies offer the ability to create comprehensive systems that streamline government functions and interactions with both citizens and businesses. We can use these technologies in an interconnected way, enabled by foundational digital capabilities, such as digital identity systems and payment platforms, to deliver seamless, interoperable government services.

        The need for action on challenges

        Of course, beyond the availability of game-changing technology, governments must address challenges that would be difficult to tackle without technology. First, there are barriers to the continued delivery of public services, including legacy systems that are no longer up to the task. Second, there are global challenges, ranging from climate change to resource scarcity and global health crises.

        GovTech offers a means for governments to co-ordinate responses to all of these within and across national borders. But only with a collaborative mindset.

        Adopting an ecosystem approach

        This need for collaboration brings us to our next point: Value today is rarely created by just one solution from just one company, but rather by curating the best partners to achieve the mission at hand. As a result, GovTech extends to a broad ecosystem of providers, integrators, manufacturers, cloud providers, software vendors, startups, etc., as well as to the government bodies they’re supporting.

        Collaboration will be essential to advance global GovTech developments, as exemplified by the World Economic Forum’s 2024 launch of the GGTC in Berlin in September and in Kyiv in December — with a network of further Centres planned globally. These will connect innovation ecosystems to a global community of experts and practitioners to inform and inspire the GovTech agenda worldwide.

        This agenda moves us beyond “IT” as it’s understood today. As the World Economic Forum sees it, GovTech enables a “whole-of-government approach by applying emerging technologies and digital innovations to enhance the efficiency, effectiveness, and accessibility of public administration and services”.

        How GovTech creates value

        Delving into these enhancements, the new report states that the GovTech market will create an opportunity to generate $9.8 trillion in public value by 2034. This value will be realized through three key value drivers:

        • Efficiency gains: Streamlining processes, reducing costs, and improving service quality.
          The application of technology to automate processes and optimize resource allocation can drive improvements that research suggests could lead to a 30% increase in efficiency, with global savings projected to reach $5.8 trillion by 2034.
        • Transparency: Enhancing accountability in process, reducing corruption, and building public trust.
          The adoption of digital tools, such as e-procurement systems, e-invoices, and digital IDs, increases transparency within government operations. It is estimated that GovTech solutions could reduce the financial toll of corruption by as much as 10%, potentially saving $1.1 trillion by 2034.
        • Sustainability: Optimizing resources, cutting waste, and supporting environmental sustainability.
          Digitizing public services reduces reliance on resources, from paper to fuel, directly minimizing the environmental footprint of the public sector. Factors such as remote work, efficient building management and reduced vehicle emissions are crucial to making the public sector less resource intensive.

        That’s not all. Value creation will extend beyond merely digitizing processes and automating tasks to embrace the building of digital public infrastructure (DPI), for example digital identity systems and payment platforms, and digital public goods (DPGs), such as electronic health record management systems. This will drive greater public impact, helping to create safe and inclusive participation in markets and society. To support governments on this journey, the GGTC Berlin aims to promote best practices and re-use of existing GovTech solutions in the public sector.

        Use cases evidencing the impact of GovTech

        A well-functioning administration, supported by efficiency, transparency, and sustainability, strengthens citizens’ trust in the state and thus stabilizes the government and its structure. Strategized and implemented well, GovTech is a key pillar for achieving this.

        The impact of successful GovTech implementations is already being realized in some trailblazing administrations. This includes:

        • Rio de Janeiro’s GRAS, which detects risks of corruption and bias in government contracting via advanced data analytics.
        • Ukraine’s Unified State Electronic System in the Construction Sector (USESCS) that is bringing transparency to the construction process, covering the entire project life cycle and reducing corruption risks.
        • Malaysia’s National Digital Identity (NDI), which allows users to authenticate themselves online for various services without relying on physical identification documents.
        • Germany’s City of Hamburg, simplifying the procurement process for startups with its GovTecHH initiative. This enables startups to bypass traditional bureaucratic delays, accelerates the adoption of innovative solutions, and makes it easier for those young companies to engage with public administration.

        Taking action — now!

        Clearly, some administrations are already embedding new technologies, like AI, virtual reality and the internet of things (IoT), within government operations. Emerging economies in particular are using them to leapfrog developments and modernize their administrations at breathtaking speed. In the mature OECD countries, full-scale GovTech transformation doesn’t happen overnight. Here, barriers exist, such as outdated, fragmented legacy systems that are not easily compatible with modern digital solutions. Or there is a lack of leadership buy-in to the strategic value (societal, environmental, public trust, etc.) of GovTech.

        We believe that strategic and decisive action is needed to realize the potential of GovTech. Sharing lessons and transformation experiences within the GovTech ecosystem can help. As Markus Richter, State Secretary at the German Federal Ministry of the Interior and Community, said at the launch of the GGTC in Berlin: “We need to learn from each other, explore existing use cases, and fast-track high-priority digital transformation activities. It is crucial to provide a platform like GGTC for this kind of collaboration and exchange between governments, technology companies, or research organizations.”

        Together with our partners we’re helping public administrations tap into the value of GovTech. Our own extensive experience of creating and leveraging digital ecosystems will help government organizations access new and emerging GovTech solutions with huge potential to drive greater value, foster innovation and, importantly, instill public trust.

        Read the report

        Read the report on the World Economic Forum’s website — The Global Public Impact of GovTech: A $9.8 Trillion Opportunity

        The blog is co-authored by Marc Reinhardt: Executive Vice President, Public Sector Global Industry Leader, Capgemini and Manuel Kilian: Managing Director, Global Government Technology Centre Berlin.

        GovTech: Social impact through technology

        Government technology (GovTech) is about more than the technology itself.

        Can AI help draft witness statements? A pioneering collaboration in the Netherlands says yes

        Frederik Peters
        Jan 6, 2025

        With interest already being shown by several Dutch judicial organizations, an exciting new collaboration between Capgemini in the Netherlands, Rijksuniversiteit Groningen (RUG), and Scotty AI looks set to transform a critical area of criminal investigations. Capturing and using eyewitness statements is often a stumbling block in achieving a successful and timely prosecution. That’s all set to change as a Living Lab project gets underway to test and ultimately realize the new automated witness-taking solution, AIWitness.

        In a career that’s embraced scaling start-ups, life as a politician, and the government technology (GovTech) ecosystem, it’s no surprise that Frederik Peters, engagement director at Capgemini, is hugely excited about an ongoing GovTech project with significant public sector ramifications. Here he tells us more about the substantial potential of AIWitness.

        Can you tell us briefly what AIWitness is?

        AIWitness is a highly advanced GovTech solution that will revolutionize the way in which eyewitness evidence is captured and used across the judicial system. As its name suggests, the core of the solution is artificial intelligence (AI). But it’s much more than just the technology: it’s also about “how” the AI is used, with a careful balancing act of ethical, societal, legal, and scale-up considerations, alongside the already proven technology.

        In practical technology terms, AIWitness is an AI voice-to-text conversational solution. Beyond capturing witness statements in real time, it can also carry out first-line tasks, such as follow-up communication via phone, WhatsApp or email, all within a fully automated process on the Scotty AI platform. So, it can become a vital contact point between citizens and the police, offering communication in 140 languages.

        What problem does AIWitness solve?

        The team behind AIWitness began with a bold vision from the outset. With an interest in policing and the judiciary, we wanted to find a way in which the judicial process could be started on a Monday and finalized by the weekend, rather than weeks later. We quickly realized that the first obstacle preventing us from reaching this goal was the witness-taking procedure.

        Witness taking is typically labor intensive and lengthy. Here in the Netherlands (and elsewhere, no doubt), police capacity shortages mean that it can be several weeks before statements are taken, creating a bottleneck in the end-to-end criminal justice process. In that time, memories of the incident can change as people forget details that are often crucial. This has a detrimental impact on the quality of the statements used in prosecutions.

        To give this more meaningful context, in 2021 in the Netherlands alone, 30,000 cases failed to lead to a criminal prosecution due to lack of evidence. In many instances, witness statements weren’t taken in time, so detailed information and evidence were lost, or statements weren’t even taken due to capacity shortages. Thousands of citizens who took the time to file a report with the police were left feeling that they couldn’t rely on the government when it came to safety and justice.

        So, yes, we had a bold vision to fix this situation—and we felt we could achieve it with a solution built on generative AI (Gen AI). It would be a solution that minimized frustrating delays for the justice system and ensured more criminal cases got to court.

        That’s the premise of AIWitness.

        What expertise has come together to develop AIWitness?

        It’s an exciting collaboration between Capgemini in the Netherlands, Rijksuniversiteit Groningen (RUG), and Scotty AI. This blend of consulting, academia, and AI expertise is a great example of how a collaborative approach can accelerate innovation.

        I have been involved with it since the outset. All three founders (Laura Peters at RUG, Reiner Bruns at Scotty AI, and myself at Capgemini) are driven by a passion for creating a safe and secure society. Laura’s background as an associate professor of criminal law and criminal procedure gives her invaluable insight into aspects of the judicial system. In turn, Reiner is a tech-entrepreneur, with in-depth technological knowledge about conversational (generative) AI.

        The challenge of how to fix witness taking lit a spark amongst all three of us. This set in motion the AIWitness initiative.

        What key questions are being asked in the development of AIWitness?

        While Scotty AI’s technology is our starting point—we are already 85% there with this—the witness-taking process itself needs a rethink in terms of the boundaries regarding ethical, judicial and societal nuances. After all, while something might be technologically sound, would it be suitable in a judicial setting? For example:

        • What ethical perspectives need to be taken into consideration—and how can we prevent bias in the AI system?
        • Will a judge accept an AI-derived witness statement as an accurate reflection of an incident?
        • Does an AI have the emotional intelligence needed to deal with the human sensitivities of highly-charged situations?
        • Can automated witness taking technology be trusted to safeguard the privacy of all those involved?

        Questions like these make it all the more important that this is a co-created solution not only involving tech innovators, but also taking on board the more nuanced societal, cultural, and judicial aspects. That’s why we’ve now taken the next step on the AIWitness journey with the launch of a Living Lab.

        What does the Living Lab aim to achieve?

        This is where it gets really exciting because it’s how we will push the scope of what AIWitness can achieve. The Living Lab will explore the different aspects of creating a solution that is technically, ethically, judicially, and societally accepted. Further, we will be looking at how to ensure our solution is scalable and of such high quality that it is fully accepted in criminal court cases.

        It is early days, but our Living Lab will see us engaging with public sector organizations to conduct 360-degree experiments around automated witness taking. For example, we will carry out research into what types of crime are suitable for this solution, and which are not. And we will explore and validate the use and value of emotional recognition within legal, ethical, social and technological parameters.

        How is it going so far?

        We have already made exciting progress in terms of how to make automated witness taking a better user experience. For example, AIWitness can take eyewitness statements in almost any language, which reduces the need to wait for interpreters to arrive on scene or at a police station. Information can also be cross-checked in real time, something that can’t be done manually. From a citizen perspective, the solution can read written text aloud, so that any witness who might have literacy problems can confirm whether his/her statement has been captured verbatim.

        What next for AIWitness?

        Interest in AIWitness is growing. Representatives of the police and the Dutch Council for the Judiciary have already indicated they would like to join us.

        And while we want to release AIWitness as quickly as possible, we know we first need to build an ecosystem of organizations on which our solution will have an impact. That’s what our Living Lab will achieve—because we know automated witness taking will require new processes, so we need to understand and test what those processes might be, engaging with partners in the police and judiciary throughout.

        We will also be looking at how to scale the solution drawing on Capgemini’s global reach. This is something that a start-up company is often unable to achieve on its own, despite having developed a great technological solution.

        Are there implications for the public sector beyond the justice system?

        Yes, of course, the adoption of artificial intelligence to bolster human resources and deliver a better citizen experience in the face of staffing shortages and budgetary constraints has ramifications for the wider public sector. For example, could a similar AI to that used in AIWitness have use cases in booking GP and hospital appointments? Certainly, any number of regular processes that involve a conversation with a person and contain repetitive tasks could benefit from this technology. In fact, the options are endless.

        Clearly, all that is for the future. For the immediate term, we are excited to continue exploring and developing AIWitness in our Living Lab and to work with a growing ecosystem of partners from across the criminal justice system. At every step of the way is the aim of contributing directly to a safer and more just society, globally.

        How can we find out more about AIWitness?

        You can contact me for a chat (details below), or visit the AIWitness website: www.aitwitness.org

        Author

        Frederik Peters

        Engagement director | Principal consultant | GovTech expert
        “Govtech, which involves the use of innovative technologies by public organizations to address societal challenges, is crucial for modern governments to effectively manage disruptive innovations. By integrating advanced technologies responsibly, public organizations can enhance efficiency, transparency, and responsiveness. It’s essential to ensure that human needs, rights and values remain central to these developments.”

          Beyond the hype: Why Google’s Willow alone does not bring you closer to practical applications

          Camille de Valk
          Dec 24, 2024

          Google’s recent announcement has gained a lot of attention [1]. Google Quantum AI announced their new quantum chip Willow, which demonstrates notable improvements in reliability and speed. In the next few paragraphs, we (Capgemini’s Quantum Lab) will try to put the announcement into perspective, both for the quantum field and for the industries that will feel the impact of quantum computers in the coming years.

          What does the announcement say?

          It is important to understand that Google’s announcement of the Willow chip contains two results. First, they showed that they can correct errors in quantum hardware faster than they occur. The second result, the one mainly picked up by the media, showed their quantum chip performing a calculation faster than a classical computer could. Together, these results show that Willow is a world-leading superconducting chip with the best error-correction demonstration to date.

          Correcting errors

          Building good quantum bits (qubits) is an immense engineering challenge, as the systems are susceptible to all sorts of environmental noise. This means that calculations fail once they exceed a certain complexity. And because all useful quantum algorithms lie beyond this complexity, today’s quantum computers are extremely limited. Because of this limitation, researchers have theorised ways of combining multiple (noisy) physical qubits to create one logical qubit.

          The goal is that this logical qubit is less noisy than the physical qubits, and with their recent announcement, Google Quantum AI has shown that this goal is achievable. They have shown that using more physical qubits indeed lowers the error rate of the logical qubit, and they have pushed this below the threshold: their error-corrected (logical) qubit has a lower error rate than the individual qubits. Even more, adding further qubits to a logical qubit makes the logical qubit better still.

          Part of this demonstration was showing that the classical computing support system could meet the performance required for implementing quantum error correction; this is an extremely hard classical problem requiring massive data communication and processing. As part of the announcement, Google highlighted recent work completed with DeepMind to improve performance through the application of machine learning.

          Benchmarking performance

          To benchmark the performance of their superconducting quantum chip, Google Quantum AI used random circuit sampling. This is a benchmark specifically well-suited to quantum computers and designed to be extremely hard for classical computers. The benchmark Willow performed would have taken today’s fastest supercomputer 10^25 – that is, 10 septillion – years. To a layman that sounds truly impressive, and by all means, the quantum chip is impressive. But it is highly non-trivial to interpret the meaning of these results and numbers. It raises the question:

          What do the results mean?

          10 septillion years does not mean that much

          The media are mostly talking about the “10 septillion years” number. However, other than to showcase quantum computing capabilities, random circuit sampling is of no use. In the original announcement, Google themselves even state that this is about the least commercially relevant activity you could do with a quantum computer. This is a typical example of comparing oranges with apples and should therefore not be taken too literally.

          Error correction is on track

          A more meaningful implication from Willow and its performance is the fact that the roadmap for error correction is realistic and that we are on track. When Peter Shor theorised quantum error correction in 1995 [2], the goal of error correction was to create logical qubits with errors (much) lower than possible with physical implementations of qubits. Willow has shown two different components of error correction.

          • Adding more qubits decreases the logical error rate

          When building upon physical (noisy) qubits to create logical qubits, the underlying qubits need to meet a certain quality baseline, and you need to be able to correct errors faster than they occur on average. When the physical qubits underlying the logical qubit are good enough, adding more of them actually increases the performance of the logical qubit.

          As can be seen in the figure in Google’s announcement, Willow’s physical qubits have been shown to be good enough for this: when adding more qubits (a higher surface-code distance), the logical error rate decreases. That is an important point, because when (physical) qubits are too noisy, adding more of them only increases the noise of the logical qubit.

          • Logical error rate is lower than physical error rate

          On the path to large-scale quantum computers using error correction, logical qubits have been created before [3]. What is new with Willow is that the logical error rate is below the error rate of the individual physical (superconducting) qubits, thereby improving performance instead of worsening it. Moving beyond the error-correction threshold with logical qubits is something that, to the best of our knowledge, has not been done before on superconducting qubits. The sketch below illustrates the scaling law these two results support.
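
          For a feel of what “below threshold” buys, here is a back-of-envelope sketch of the expected scaling: the logical error rate per cycle falls roughly as 1/Λ^((d+1)/2) with surface-code distance d, where Google reported an error-suppression factor Λ of about 2.14 for Willow. The prefactor in the sketch is an arbitrary value we assume purely for illustration.

```python
# Sketch of surface-code error suppression: each +2 in code distance d
# divides the logical error rate by roughly Lambda (~2.14 reported for
# Willow). The prefactor A is illustrative only, not a measured value.
LAMBDA = 2.14
A = 0.03

def logical_error_rate(d: int) -> float:
    return A / LAMBDA ** ((d + 1) / 2)

for d in (3, 5, 7, 11, 15):
    print(f"d={d:2d}: logical error per cycle ~ {logical_error_rate(d):.1e}")
# Below threshold (Lambda > 1), scaling up helps; above it, more qubits hurt.
```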

          What do the results change?

          Most of the discussed results were already published in August 2024, and they created some buzz in the technical communities. Certainly, Google Quantum AI has shown high-quality engineering and research; however, the press release and associated publicity made a lot more of it than was justified. The achievement from Willow is just another step along the path: it does not mean we have arrived at large-scale fault-tolerant quantum computers that can do anything useful, let alone break cryptographic standards. The main value of the work is to derisk quantum computing by showing that using more qubits leads to better logical qubits; i.e. with enough qubits, we can achieve the error rates required. Quantum error correction is already a part of most roadmaps – not just Google Quantum AI’s.

          The error rates achieved are still too high for useful calculations. Nor are the expected timelines to commercial value or cryptographic threats affected by Google Quantum AI’s result: this was an expected next step, not a step change. There is still a long way to go before quantum computers demonstrate commercial value, and more effort is required to understand how this technology will be used when it comes.

          What still needs to be done?

          This result has further demonstrated that it is “when”, not “if”, for a future with quantum computers, but the quantum computing industry has a long way to go. We still need to understand practically how to use these future devices and where they will give us transformative value. Some potential applications and use cases are known in theory, but to realise them in practice, we need more than just better quantum hardware. Without clear and practical insights, quantum computers risk becoming impressively engineered paperweights with no commercial value. We need to start tackling the specifics and practicalities of quantum computing. There is a lot of investment and work in hardware (like Google Quantum AI’s Willow chip), but a lack of investment in algorithms and in doing something that is actually worthwhile.

          Capgemini is working with clients and hardware companies to address these challenges, focusing on developing robust algorithms and practical applications. Our white paper, “Seizing the Commercial Value of Quantum Technology”, provides a comprehensive analysis of quantum technology’s current state and potential applications. By focusing on practical utility, we can ensure quantum computing delivers significant commercial value.


          References:

          [1] R. Acharya et al., “Quantum error correction below the surface code threshold,” arXiv:2408.13687, Aug. 2024. [Online]. Available: http://arxiv.org/abs/2408.13687
          [2] P. W. Shor, “Scheme for reducing decoherence in quantum computer memory,” Phys. Rev. A, vol. 52, no. 4, p. R2493, 1995.
          [3] Y. Hong, E. Durso-Sabina, D. Hayes, and A. Lucas, “Entangling four logical qubits beyond break-even in a nonlocal code,” Phys. Rev. Lett., vol. 133, no. 18, p. 180601, 2024.

          James Cruise

          Head of Quantum Algorithms, Cambridge Consultants 
          James is the technical lead for quantum computing at Cambridge Consultants, developing capability to deliver early commercial value for clients. He brings together deep technical expertise in quantum computing with an understanding of clients’ challenges to identify and develop key value propositions. James has a particular interest in understanding how to deliver value through practical hybrid quantum-classical computing and supporting the most ambitious clients in bringing about the quantum revolution sooner. He has an MMath and a PhD in Mathematics from the University of Cambridge.
          Camille de Valk

          Quantum optimisation expert
          As a physicist leading research at Capgemini’s Quantum Lab, Camille specializes in applying physics to real-world problems, particularly in the realm of quantum computing. His work focuses on finding applications in optimization with neutral atoms quantum computers, aiming to accelerate the use of near-term quantum computers. Camille’s background in econophysics research at a Dutch bank has taught him the value of applying physics in various contexts. He uses metaphors and interactive demonstrations to help non-physicists understand complex scientific concepts. Camille’s ultimate goal is to make quantum computing accessible to the general public.
          Iftikhar Ahmed

          Quantum Lead, Capgemini Invent
          Iftikhar is the Quantum Lead at Capgemini Invent, where he spearheads Invent’s global quantum initiatives and is part of the Next Frontier Portfolio Team. As a Core Team Member of Capgemini’s Quantum Lab, Iftikhar has been at the forefront of the lab’s go-to-market strategy since its establishment, overseeing the funding and management of some of the lab’s most significant projects. In addition, Iftikhar is responsible for the Quantum Lab’s advisory services. He works closely with clients and Capgemini Account Teams to identify how quantum technologies can be used to meet the business needs of our most ambitious clients and help them to best progress in their Quantum Journey.
          Phalgun Lolur

          Scientific Quantum Development Lead
          Phalgun leads the Capgemini team on projects in the intersection of chemistry, physics, materials science, data science, and quantum computing. He is endorsed by the Royal Society for his background in theoretical and computational chemistry, quantum mechanics and quantum computing. He is particularly interested in integrating quantum computing solutions with existing methodologies and developing workflows to solve some of the biggest challenges faced by the life sciences sector. He has led and delivered several projects with partners across government, academia, and industries in the domains of quantum simulations, optimization, and machine learning over the past 15 years.

            Telecom predictions: Trends to watch in 2025 

            Praveen Shankar
            Dec 23, 2024

            As we head into 2025, the ever-dynamic telecom industry finds itself at yet another crossroads. It is set to explore new ways of working, unlock new revenue streams, optimize operations, and enhance capabilities.

            Here are my top five predictions for 2025:

            Despite significant investments in networks, growth has remained elusive. In 2025, telcos will leave no stone unturned to explore every possibility for growth, ranging from advanced connectivity and network solutions…to digital services, security, Sovereign AI Cloud and edge….and industry specific solutions.
            Successful Telcos will be the ones who instead of going alone…will put collaboration at the heart of their growth strategy…. recognising their role within the ecosystem and co-creating solutions for real customer needs by leveraging the collective expertise of partners.

            Once on the periphery, they will become a significant part of the telecom landscape. Their subscriber base will rapidly increase owing to their global coverage, coupled with technological advancements enabling higher speeds, greater capacity, and now… lower costs. Their viability will move beyond just for remote areas to many mainstream underserved areas.
            Telcos must not overlook satellite companies or treat them as disrupters…they are valuable allies. Creating hybrid solutions combing terrestrial, and satellite to deliver seamless global services…will unlock mutual growth.

            3. Cybersecurity will ascend to a top priority

            Telcos run critical national infrastructure and form the backbone of the digital economy. With networks becoming more interconnected, and devices and AI-driven solutions growing exponentially, the attack surface for cyber threats will expand rapidly. This will in turn bring increased demand for robust security measures from regulators and customers.
            Telcos can either treat cybersecurity as a compliance checkbox to stay out of trouble, or turn this challenge into a strategic advantage by investing in innovative and reliable cybersecurity solutions that act as a differentiator.

            4. Focus will increase on home markets and on simplification

            Telcos will pursue further in-market consolidation, reevaluate their international operations, and rapidly divest non-core assets to reduce the drag on resources.
            Additionally, they will accelerate efforts to delayer and simplify their offer portfolios, systems, and processes to become agile and match fit.
            The key for telcos will be to avoid short-term “band-aid” fixes and reimagine themselves. Caution will be necessary in defining core and non-core assets to ensure they retain their “family silver.”

            5. Data and AI will return to the center stage

            Telcos have access to huge volumes of data. Yet, when it comes to extracting value from that data, they lag behind. This year, they fell behind in data mastery, surpassed by eight of the eleven industries we surveyed – a far cry from 2020, when they led the pack.
            In 2025, telcos will refocus on data and AI, making them the bedrock for optimising operations, reducing costs, personalising customer experience, and unlocking new revenue streams.
            To achieve this, telcos will have to establish a robust data foundation, starting with a comprehensive data estate. They will need to relentlessly focus on execution, scaling AI, and adopting a fail-fast, learn-fast mindset.

            Praveen Shankar, Global Head of Telecommunications at Capgemini, dives deeper into each of these trends and their impact on telcos. Watch the full video here.

            Telecom Predictions for 2025

            00:21 The quest for revenue growth will intensify

            01:04 Satellite companies will move into the mainstream

            01:48 Cybersecurity will ascend to a top priority

            02:34 Focus will increase on home markets and on simplification

            03:16 Data and AI will return to the center stage

            04:08 Summary

            Meet the author

            Praveen Shankar

            Global Head of Telecommunications
            With more than 20 years of experience in the Telecommunications industry, Praveen has been at the forefront of navigating the journey to unlock the next generation of digital solutions and accelerating transformation in Telecoms. Over the course of his career, he has developed a proven track record of driving transformation, delivering innovative business solutions, increasing revenues, and creating value for clients and partners.

              NextGen net revenue management (NRM) is the key to winning in connected commerce

              Nishant Pandya & Owen McCabe
              Dec 11, 2024

              The game is changing, and NRM needs to change too.

              Net Revenue Management (NRM) or Revenue Growth Management (RGM) is vitally important – even more so in the current climate, with cost-of-living pressures and increased input costs squeezing margins. Consumer Packaged Goods (CPG) companies worldwide invest up to 20% of their revenue annually in trade promotional activities, making it the second-highest line item in the Profit & Loss (P&L) statement after the Cost of Goods Sold (CoGS).

              The discipline of NRM depends on the interconnectedness of business strategy, planning, and in-market execution. Until now, this interconnectedness has been difficult to navigate, and NRM solutions have been inherently disjointed. They also do not adequately address the new wave of connected commerce platforms, such as direct-to-consumer, social commerce, last-mile partners, 3P marketplaces, 1P pure players, bricks & clicks, and others. These have grown to the point where the game has changed forever.

              However, evidence suggests that many CPG companies are not yet ready to play this new game, let alone win it. Their current NRM models rest on roughly 70% intuition and 30% science, whereas playing and winning the new game requires the opposite ratio.

              Unsurprisingly, with ad-hoc implementations of NRM, 59% of trade promotions fail to generate profit, with performance varying widely—up to a 5x difference between the most efficient best-in-class CPG promotions and the least effective ones.

              How is the game changing?

              What is driving this change now? The answer lies in three generational forces at work that are all set to reach a tipping point in the next four years. These forces relate to dramatic changes to the “who” (the consumers), “where/when” (the shopping environment), and “how” (the methods of purchase) of the typical shopper journey.

              For more information, see our related article: Going for Gold.

              Shoppers’ needs have stayed the same, but thanks to data and digital platforms enabling unprecedented levels of connected shopping, their expectations have changed forever.

              Over 40% of shopping journeys now begin in emerging channels. However, conversion often happens elsewhere. According to research from the Capgemini Research Institute, 32% of consumers have discovered a new product or brand on social media. This presents a significant challenge for CPG companies traditionally focused on winning shelf space in established retail formats.

              This is because the shopping journey is no longer linear. It reflects the fact that every digital touchpoint is now a potential point of engagement, and every physical touchpoint has become a potential point of fulfillment. 

              This directly affects CPG companies because these same digital platforms and new routes to the consumer come with a different investment profile than existing trade expenditure frameworks: costs typically borne by the retailer are now the responsibility of the brand owner (e.g., customer acquisition, retention, and fulfillment costs).

              What are the winning plays in this new game?

              Most CPG companies have an NRM or RGM playbook based on the classic five pillars (brand pack-price architecture, channel/assortment mix, pricing, advertising and promo spend, and trade terms). However, this new game requires a critical update to provide an integrated view across strengthened pillars inclusive of these new digital platforms.

              To regain control in this new landscape, CPG companies need to adopt a comprehensive, unified, NextGen NRM approach that spans the initial customer engagement all the way to repeat purchases.


              Connecting pre-shop engagement to purchase behavior and establishing robust tracking based on Customer Lifetime Value (CLV) metrics (calculated as unique visitors × conversion rate × average order value × repeat rate) are critical challenges in building a coherent, future-ready NRM model.
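
              To make the CLV formula concrete, here is a minimal sketch of the calculation in Python. The function and all input figures are hypothetical illustrations, not Capgemini benchmarks.

              ```python
              # Minimal sketch of the CLV proxy described above:
              # unique visitors x conversion rate x average order value x repeat rate.
              # All figures are hypothetical illustrations.

              def customer_lifetime_value(unique_visitors: int,
                                          conversion_rate: float,
                                          average_order_value: float,
                                          repeat_rate: float) -> float:
                  return unique_visitors * conversion_rate * average_order_value * repeat_rate

              # Example: 100,000 visitors, 2% conversion, a 25-euro basket, 1.8 purchases per buyer
              clv = customer_lifetime_value(100_000, 0.02, 25.0, 1.8)
              print(f"Aggregate CLV estimate: {clv:,.0f} euros")  # 90,000 euros
              ```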

              See the below illustration of how NextGen NRM metrics align to drive revenue growth and value in the new connected commerce world.


              Click here to read more about our collaboration with Databricks for a NextGen NRM analytics suite.

              Getting into the game

              Getting into the game requires the total organization – not just the sales function – to embrace a more systemic and inclusive approach to NRM, one that reflects the more connected, full-funnel commerce world in which we now operate. It will still feel somewhat uncomfortable (the challenges are, well, very challenging), but those who embrace it will be set to be the main beneficiaries in the next 3-4 years.

              The evolving role of data in modern business environments, particularly within the context of NRM, underscores the need for real-time, always-on data connectivity and agile data collaboration built on a solid data foundation. NRM’s success going forward lies in interconnectedness, where underlying levers are intricately and logically linked, driving the need for a holistic, integrated approach that spans commercial markets and operational contexts.

              Collaboration is the key. The rich combination of first-, second-, and third-party data, encompassing behavioral and transactional data, will take the guesswork out of marketing and sales, enabling brands to integrate signals across platforms to optimize CLV by identifying white-space opportunities for new product development, targeting high-propensity consumers, and efficiently driving higher order values and repeat purchases.

              We can help

              The future of NRM in CPG is becoming more complex but that doesn’t mean it needs to be complicated. We can help.

              As a frontrunner in business and tech transformations, Capgemini has been working with leading CPG companies to help them extend and incorporate NRM into their ways of working for the new retail landscape. The results of our previous work have been impressive, with gains in annualized gross margins of up to 4% and productivity/operational-effectiveness increases of up to 15%.

              With the prize for delivering on NextGen NRM promising to be even greater, it’s not surprising that the race to excellence has already begun.

              Capgemini’s Connected Commerce is a strategic framework for helping our consumer-facing clients upgrade their go-to-market capabilities to compete and win in the ecosystem-led generation of retail. This includes a clear vision for NextGen NRM.

              Our dedicated industry team can help you transform your current NRM capabilities and also leverage our extensive partner network to provide access to cutting-edge technologies and solutions, helping you unlock the full power of NextGen NRM at scale.

              Authors

              Nishant Pandya

              Director – Commercial Sales and Marketing Insights, CPR Industry Platform
              Nishant plays a critical role in the success of our Global Connected Commerce offering, focusing on Commercial Sales, Marketing Insights, and Revenue Growth Management. With 18 years of experience, Nishant has built and led high-performing consulting and data science teams, specializing in advanced analytics, data-driven insights, and strategic growth initiatives.
              Owen McCabe

              Vice President, Digital Commerce – Global Consumer Goods & Retail, Capgemini
              Owen is the Global leader for Digital Commerce at Capgemini. He has led several major digital commercial transformations to enable our Consumer Goods clients to win through data and tech in the new retail landscape emerging through 2030. His previous experience includes 9 years as the global digital commerce practice leader at WPP/Kantar and more than a decade in senior brand marketing and sales roles at P&G and Nestle.


                Morocco’s thriving rail in motion
                On track to a sustainable and smart mobility future

                Capgemini
                Dec 10, 2024

                Morocco is on the cusp of a transformative journey to modernize and expand its rail network, a project that underlines the country’s ambition to establish itself as a leader in sustainable transport infrastructure in the EMEA region.

                With several million euros allocated over the next 15 years, Morocco plans to create a cutting-edge rail ecosystem that will redefine its transportation landscape, enhance socio-economic connectivity, and align the country with global sustainability goals (e.g., Net Zero).

                The scope of this initiative is expansive, including new maintenance centers, modernized rolling stock, enhanced signaling systems, and the construction of state-of-the-art rail stations.

                Connecting the nation: expanding high-speed rail coverage

                One of the major axes of this new vision lies in expanding Morocco’s high-speed rail network, building on the success of the Al Boraq, Africa’s first high-speed train. The country plans to add thousands of kilometers of new tracks, targeting high-speed rail access for 87% of the population by 2030 – compared to the current 51%.

                Greater connectivity will deliver several benefits. Firstly, it will improve the quality of life for commuters by reducing traffic congestion in major metropolitan areas. Secondly, by linking key industrial and agricultural regions, the expanded network will open up new markets and nurture economic development, particularly in currently less-connected areas. Finally, by integrating rural and regional areas into the national economy, it will help mitigate disparities and support more balanced economic development.

                Sustainability at the core

                Sustainability is a critical pillar of Morocco’s rail-sector modernization. To reduce environmental impact, the project is investing heavily in energy-efficient trains, electrified tracks, and green technology. Compared with cars and airplanes, high-speed trains generate significantly lower emissions, making rail a cornerstone of Morocco’s climate action strategy.

                The project also aligns with Morocco’s broader commitment to the Paris Agreement and its national goal of reducing carbon emissions by 45.5% by 2030. By shifting passenger and freight transport to rail, the country also aims to decrease its fossil fuel reliance.

                National vision & global partnerships

                Achieving such a major transformation requires a strategic approach involving both public and private stakeholders. With its vision to establish itself as an innovation hub in transport infrastructure, Morocco has actively sought partnerships with international companies, including Alstom, Systra, Egis, and Vossloh Cogifer, to bring expertise and technology to its rail sector. Capgemini Engineering Morocco is also part of this journey and has a proven track record with the country’s rail sector, having delivered successful projects with three major rail OEMs.

                Human capital is also a priority, with significant investment directed toward developing the skills base needed to sustain and operate the modernized rail system. Since establishing its Rail Engineering Center in Morocco in 2018, Capgemini has played an important role in supporting Morocco’s rail transformation. The Capgemini Engineering Rail Academy (CERA), created the same year, is developing a robust talent pipeline to advance this aim: with the capacity to train 200 engineers annually, it supports our goal of creating a network of world-leading professionals to serve leading rail manufacturers and customers in Morocco and beyond. Leveraging many decades of experience in technology transformation and rail, Capgemini is helping Morocco realize its vision of a smarter, greener, and more connected future.

                We hope to meet you at the Casablanca Rail Summit on December 10-11, 2024.

                To explore further


                International summit & meetings
                For the railway industries and infrastructures
                December 10-11, 2024

                Authors

                David Pontal

                Client Manager, Capgemini Engineering
                David Pontal has been an engineer since 2004 and manages large teams for Capgemini. With nearly 15 years of experience in the railway market, he is now focused on expanding Capgemini Engineering Morocco’s railway capabilities, and he currently oversees the Alstom account in France.
                  Jawad Sabbar

                  Sales, Presales & Portfolio Director, Capgemini Engineering Morocco
                  Jawad Sabbar holds a PhD in fluid mechanics and has 15 years of experience in engineering services. In 2020, he spearheaded the strategic transformation of a major railway OEM for Capgemini Engineering Morocco. Renowned for his leadership, Jawad also helped establish a Railway Academy to train and upskill rail talent in design, systems, and RAMS.

                    Pop-Car: Join a groundbreaking urban mobility initiative aiming to kickstart BEV sales and reindustrialize Europe

                    Emmanuelle Bischoffe-Cluzel
                    Dec 10, 2024

                    The idea of the Pop-Car – an affordable, sustainable, appealing battery electric vehicle (BEV) – is generating a lot of excitement in the automotive world right now.

                    In this article, I’d like to recap the story so far and invite you to participate in the initiative. (Spoiler: We’re particularly keen to find partners to co-fund the development of a platform simulation using ecodesign and circular economy tools and then a prototype Pop-Car. But we need all sorts of other participants, too.)

                    Where has the Pop-Car concept come from?

                    Pop-Car is the brainchild of Movin’On, an international, business-led co-innovation ecosystem committed to sustainable mobility. Specifically, Pop-Car has come from a Movin’On community of interest in sustainable mobility. (This is one of two Capgemini-led communities; the other one focuses on the closely related topic of software.)

                    I’m proud to chair this community, which contains representatives from different industries: automotive, of course, but also insurance and banking, and even an NGO. We have a strong focus on tangible actions. To find out what we’ve achieved already, read on.

                    Why is it time for the Pop-Car?

                    We developed the concept of the Pop-Car to overcome the impasse in which the industry finds itself following the tightening of the standards associated with Europe’s Corporate Average Fuel Economy (CAFE) regulation.

                    From 2020, Regulation (EU) 2019/631 set EU fleet-wide targets of 95g CO2/km for the average emissions of new passenger cars and 147g CO2/km for those of vans, based on the New European Driving Cycle (NEDC) emissions test procedure. But soon there will be new targets based on the Worldwide Harmonized Light Vehicles Test Procedure (WLTP). For cars, the revised target will be 93.6g CO2/km for 2025-2029 and 49.5g CO2/km for 2030-2034. These new targets represent reductions of 15% and 55% respectively compared with the 2021 emissions levels. Hence the regulation will, in effect, require BEVs to make up a growing proportion of new-car sales.
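
                    As a quick sanity check of those figures, both targets imply the same 2021 WLTP baseline of roughly 110 g CO2/km. Here is a minimal sketch in Python; the baseline is back-calculated from the article’s numbers, not quoted from the regulation.

                    ```python
                    # Back out the implied 2021 WLTP baseline from each target.
                    # The ~110 g CO2/km figure is inferred, not quoted from the regulation.

                    baseline_from_2025 = 93.6 / (1 - 0.15)   # 15% reduction target for 2025-2029
                    baseline_from_2030 = 49.5 / (1 - 0.55)   # 55% reduction target for 2030-2034

                    print(round(baseline_from_2025, 1))  # 110.1 g CO2/km
                    print(round(baseline_from_2030, 1))  # 110.0 g CO2/km - consistent
                    ```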

                    Unfortunately, demand for BEVs is already stagnating, confronting OEMs with various unappealing options. For example, they could reduce sales of internal combustion engine (ICE) vehicles to achieve the desired result – a suicidal move at the moment, in my view. They could buy carbon credits from greener competitors – which would be counterproductive except in the very short term. Or they could sell imported products in the EU under a local banner.

                    We’ve devised a better alternative: one that will enable the industry to sell more BEVs in the EU – hence enabling reindustrialization in Europe. The key, we realized, is to create a category of simple, affordable BEVs that (unlike existing electric microcars) comply with safety standards for cars, and qualify for eco incentives such as the French ecological bonus.

                    Thus the Pop-Car concept was born.

                    Our two-step roadmap

                    • By 2028, we want to establish a category of EVs that weigh less than 850 kg with a carbon footprint of 6 tonnes CO2 equivalent. Capgemini Engineering has already demonstrated that this is feasible.
                    • By 2035 (perhaps as early as 2033), we want to progress to a category under 750 kg with a carbon footprint of 4 tonnes CO2 equivalent. This will require the amendment of the GSR2 safety standard, as discussed below.
                    [Illustration: proposed roadmap for the Pop-Car concept]

                    Our goals for the Pop-Car: affordable, sustainable, appealing

                    We envisage a four-seater BEV that will be affordable, sustainable, and appealing to consumers. It will be primarily pitched at urban and peri-urban use, but still have value for rural locations.

                    Regarding emissions, we’ve set ourselves a threshold of 6 tonnes of CO2 equivalent for the lifecycle assessment of the value chain, assessed using the ADEME methodology. This target is less than 30% of the current threshold for the French ecological bonus. We’re aiming, too, to halve energy consumption. In terms of cost, we’ve set a target of €10,000 for a purchase or, for a lease, €100 per month including insurance.

                    To meet our sustainability and affordability objectives, the vehicle must have the following characteristics:

                    • Lightweight: Currently, there’s no BEV weighing between 400 kg and 975 kg, because of safety regulations and battery weight. We can reduce that weight by using smaller batteries – this type of vehicle can be charged every day and will typically be used over short distances, so range is less of a concern. This approach is a major step toward lightweighting the car, along with ecodesign, including the use of innovative and green materials.
                    • Small: Reducing the car’s overall dimensions further reduces vehicle weight and the amount of raw materials needed, with affordability and eco benefits. In addition, a smaller vehicle is more appealing to end-users as it will be easier to drive and park, especially in a city.
                    • Safe: Safety is at the heart of our concept. As well as being an important objective in its own right, it has affordability and sustainability implications. Today’s microcars are cheap to buy but expensive to insure (maybe €2,000 per annum for a vehicle that only costs €7,000). By contrast, the Pop-Car will be as safe as cars currently on the road and therefore cheaper to insure. The two steps of our roadmap (shown above) will use slightly different approaches:
                      • In the first step, the Pop-Car will fully comply with the GSR2 safety regulation, as reviewed in July 2024.
                      • For the second step, we’ll need to get GSR2 amended so that we can further reduce the weight via an active safety approach based on Advanced Driver Assistance Systems (ADAS). Today’s passive safety requires extra materials, pushing weight upward.
                    • Repairable: Making repairs easier and cheaper is an important way of extending lifespan, which will help with both sustainability and affordability. For example, if the car’s price can be amortized over nine years instead of today’s four, that should lower insurance premiums, as the quick sketch after this list illustrates. We need to rework the industry’s business model to reflect ideas like this.
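
                    Here is a minimal sketch of that amortization arithmetic, using the €10,000 target price from this article; the linear-depreciation assumption is a simplification for illustration only.

                    ```python
                    # Simple linear amortization of the Pop-Car's 10,000-euro target price.
                    # Linear depreciation is a deliberate simplification for illustration.

                    price_eur = 10_000

                    for years in (4, 9):
                        monthly = price_eur / (years * 12)
                        print(f"{years}-year lifespan: ~{monthly:.0f} EUR/month of depreciation")

                    # 4-year lifespan: ~208 EUR/month
                    # 9-year lifespan: ~93 EUR/month - a smaller monthly cost base for
                    # insurers and lessors to recover.
                    ```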

                    In addition to the individual levers discussed above, the primary way to achieve all these characteristics is to design the properties we want in the car from day one, as explained in my earlier ecodesign blog article.

                    On the affordability front, we must ensure that the car’s sustainability is rewarded by financial incentives such as the French ecological bonus. And it needs to be manufactured and sold in high volumes (we’re hoping for a million sales per year) to keep input costs down.

                    What we’ve already done:

                    We’ve already created a platform design that shows it’s viable to make a car that complies with our objectives, for example, reducing weight by using smaller batteries. We’ll take this platform forward so that manufacturers can use it as a basis for rapidly developing their own Pop-Car models. Individual OEMs can choose how to use the platform – for example, there are recommendations about limiting speed but each OEM can decide whether or not to do so. Similarly, they can choose which ADAS features to include. We recognize that these decisions will be affected by the evolving regulatory environment.

                    Alongside this work, we’ve carried out business model simulation, supporting it with marketing studies and analyses of existing products and concepts such as Japan’s Kei car.

                    In addition, we’ve created an advocacy position to persuade the European Commission (EC) to support our new category. We are working to establish a dialogue with decision-makers within the EC.

                    We’ve publicized our concept in several forums, including the recent Paris Motor Show, the Movin’On LinkedIn channel, and a special summit held in November. Our audience’s reception has been extremely positive.

                    What we want to do next:

                    Our aim is now to influence the EC to introduce this new category of vehicle and help the industry build the actual vehicles.

                    What we need from the EC is:

                    • Recognition of the new vehicle category
                    • An incentive mechanism similar to the French ecological bonus, but with more challenging requirements and extended to the whole of the EU
                    • The integration of this category of vehicles into the CAFE calculation with an incentive mechanism, such as a multiplier coefficient whereby the sale of a Pop-Car is triple-counted (see the sketch after this list)
                    • Measures such as research tax credits to promote innovations in terms of weight reduction, sustainable development, use of recycled materials, and other circular economy innovations.
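
                    To show how such a multiplier coefficient would move the needle, here is a minimal sketch of a CAFE-style fleet average with triple counting of Pop-Car sales. The fleet mix and emissions figures are hypothetical illustrations, not regulatory values.

                    ```python
                    # Sketch of a CAFE-style fleet average with a multiplier coefficient.
                    # All volumes and emissions figures are hypothetical illustrations.

                    def fleet_average(vehicles, multiplier=1.0):
                        """Weighted g CO2/km average; Pop-Car sales can be counted
                        multiple times, diluting the fleet-wide figure."""
                        total_weight = 0.0
                        total_emissions = 0.0
                        for units, g_per_km, is_pop_car in vehicles:
                            weight = units * (multiplier if is_pop_car else 1.0)
                            total_weight += weight
                            total_emissions += weight * g_per_km
                        return total_emissions / total_weight

                    # Hypothetical fleet: 800,000 conventional cars at 120 g/km,
                    # plus 200,000 zero-emission Pop-Cars.
                    fleet = [(800_000, 120.0, False), (200_000, 0.0, True)]

                    print(round(fleet_average(fleet), 1))                # 96.0 g/km, no incentive
                    print(round(fleet_average(fleet, multiplier=3), 1))  # 68.6 g/km, triple-counted
                    ```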

                    What our community needs to do to bring the vehicles to market is:

                    • Lobby the EC to make the changes above
                    • Fine-tune our proposals
                    • Engage further with the automotive and mobility industries to increase the profile of our work
                    • Build a platform simulation using ecodesign and circular economy tools, and then a prototype Pop-Car

                    Your opportunity to get in on the ground floor

                    Why not join us? For example, we need more financial services companies and product ecodesign specialists to work alongside Capgemini Engineering and the other experts in our group on fine-tuning the technical details of our proposals. To prepare for and build the prototype, we urgently need the involvement of, and investment from, OEMs or mobility providers.

                    Our partners can build a competitive advantage in several ways. They’ll have direct access to our platform and the results of all our ADAS work. They’ll be able to begin mass production as soon as the regulatory framework allows. And they’ll benefit from working with Capgemini Engineering, which has extensive relevant experience on successful projects such as the Citroën Ami and Mobilize Duo.

                    By joining the Pop-Car initiative, you could help to define the future of affordable, sustainable mobility. Contact me today to find out more.

                    Author

                    Emmanuelle Bischoffe-Cluzel

                    VP – Sustainability Lead, Global Automotive Industry, Capgemini
                    Emmanuelle Bischoffe-Cluzel offers practical IT and engineering solutions to support automotive sustainability. She has 30 years’ automotive industry experience, gained with a global automaker and a tier 1 supplier, in roles ranging from manufacturing engineering to business development. She holds four patents relating to engine assembly.