
Navigating the energy transition in 2024

James Forrest
Jan 30, 2024

The global energy landscape is at an inflection point, with significant changes needed in power generation, consumption, and sustainability if we want to reach net zero. Technology has a crucial role to play in accelerating this transition.

This year, volatile market forces, governmental policies, and societal demands will converge to shape the pace and direction of the energy transition. We expect global energy prices to remain volatile, influenced by factors such as geopolitical tensions, upcoming elections, market speculation, and fragile supply chains that cannot reliably deliver on our energy needs.

Looking ahead to the key forces set to shape energy demand, we expect to see the following advances, and challenges, in 2024.

Sovereignty considerations will be at the forefront of the energy transition

The energy transition’s pace hinges on this year’s crucial global elections, involving nearly half the world’s population. Their outcomes will determine how governments worldwide implement policies and incentives to speed the move to cleaner energy alternatives. While we will have to watch and wait for the political results, we do expect the US to continue making progress on clean electricity through its Inflation Reduction Act (IRA), and China’s carbon emissions to decline this year, driven by a substantial increase in clean energy investments.

We also anticipate a stronger link between renewables and energy sovereignty. Our World Energy Markets Observatory (WEMO) 2023-24 found that more countries are transitioning to in-country renewable sources as a way to protect energy supply against geopolitical uncertainties. The energy transition will continue to be a way to safeguard supply against geopolitical threats this year. Whatever the outcome of elections, we expect governments and regulators to lead market interventions, such as demand response and flexibility schemes, which are crucial for ensuring the integration of intermittent renewable energy sources and maintaining grid stability. We also expect to see more governments reassessing investments in larger, long-term assets like nuclear.
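The demand-response and flexibility schemes mentioned above can be illustrated with a toy sketch. This is a minimal, hypothetical example – the load names, sizes, and price threshold are all assumptions, not a real utility API:

```python
# Toy demand-response controller: shed flexible loads (largest first)
# when the price signal spikes. All names and numbers are illustrative.

FLEXIBLE_LOADS_KW = {"ev_charger": 7.2, "water_heater": 4.5, "hvac_precool": 3.0}
PRICE_THRESHOLD = 0.30  # $/kWh above which flexible load is shed (assumed)

def loads_to_shed(price_per_kwh: float, target_reduction_kw: float) -> list[str]:
    """Pick flexible loads, largest first, until the reduction target is met."""
    if price_per_kwh <= PRICE_THRESHOLD:
        return []  # normal prices: no intervention needed
    shed, total_kw = [], 0.0
    for name, kw in sorted(FLEXIBLE_LOADS_KW.items(), key=lambda item: -item[1]):
        if total_kw >= target_reduction_kw:
            break
        shed.append(name)
        total_kw += kw
    return shed

print(loads_to_shed(0.45, 10.0))  # ['ev_charger', 'water_heater']
```

In a real scheme the trigger would come from a grid operator’s signal rather than a hard-coded price, but the shape of the logic – shed the most flexible load first until the reduction target is met – is the same.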

We also expect small modular reactors (SMRs) to attract growing attention. SMRs have a simplified design based on existing and proven light-water reactor technology (Gen III+ and predecessors), along with a mature and robust fuel supply chain, which reduces overall licensing and build risk. While SMRs enjoy a well-established technological basis, other technologies, such as advanced (Generation IV) reactors, will also see increased interest in 2024 as demonstration projects and fuel supply chain development progress.

Nevertheless, many nuclear projects will face financial setbacks and encounter regulatory challenges, as we have seen in the US. This is not unexpected for an emerging industry, which will require innovative approaches to minimizing risk and ensuring project feasibility. To be sure, there will be winners and losers, but fortune will favor the well-prepared.

The year of consolidation

As governments around the world gear up for elections, the private sector will also be looking closely at its priorities. The oil and gas industry will continue a period of consolidation as companies streamline their operations and focus on assets that align with their transition plans.

The role of gas in a net-zero emissions future is also subject to debate, and the private and public sector alike will need to carefully consider the environmental and economic implications of its continued use. We expect to see this consolidation accelerate in 2024 as companies seek to optimize their portfolios and adapt to the changing energy landscape.

The potential of generative AI will become reality

Last but not least, 2023 was indisputably the year of AI, with the technology having a profound impact on the world’s businesses. AI has more than 50 different uses in the energy system, and the market for the technology in the sector could be worth up to USD 13 billion.

AI is already having a huge impact on the growth of smart grids and smart meters, so we anticipate an AI boom in the operation of energy-system components over the coming year, especially in customer service.

Overall, as we continue to navigate the energy transition in 2024, the global landscape is undergoing profound shifts, presenting both challenges and opportunities. We are expecting to see further synergies between digital and sustainable innovation, taking advantage of the potential of technologies to accelerate the energy transition for the better. The continued integration of AI, alongside advancements in renewable energy and nuclear technologies, positions 2024 as a pivotal year in our collective journey towards a sustainable and resilient future within the energy sector.

Authors

James Forrest

Group Industry Leader for Energy Transition and Utilities at Capgemini
I help global clients deliver major business transformations involving smart grid, IoT, the reform of gas and electricity markets, major software and infrastructure changes, and the use of machine learning and artificial intelligence to drive significant business performance improvement.

Peter King

Global Energy and Utilities Lead, Capgemini Invent
I focus on driving transformation by working with my clients to define new ways of working, new operating models and the transformation programs that will deliver change.

    How to set yourself up to redesign the car around the user
    What steps should you take when designing a better Mobility Experience?

    Mike Welch
    18 Jan 2024

    The old way of designing cars works for old car designs. But the car of the future will require some changes to existing methods, along with some entirely new approaches. Cars used to be designed around engineering possibilities, but thanks to digital technology, they can now be designed around the user experience. So, what will that take?

    In the previous blog, we covered the key ingredients of the vehicle user experience. Here, we’ll discuss how to redesign the vehicle for that user experience.

    Step 1: Plan new use cases and business models

    Change how you think about the car: it’s no longer a machine to get people from A to B, but an environment where people spend time and benefit from experiences.

    The vehicle that rolls off the production line can now be thought of as a Minimal Value Product – a product that can increase its value through software additions across its life cycle. For automotive companies, correctly implementing this business model is partly a financial engineering challenge – one that will also require a change in mindset for both vendors and consumers.

    Step 2: Build the in-car tech to enable these new use cases

    The in-car software and hardware architecture to support these new services will fall into three categories:

    • The software and processing power to run information and entertainment displays, and to turn a glut of ‘big data’ into meaningful information. This will likely include processing data from onboard sensors and from external devices.
    • The communication technology to allow the vehicle to exchange information with external devices, along with the right protocols (Wi-Fi, cellular modems, etc.) to deliver this.
    • An ecosystem that allows the car to safely access third-party apps, including processes for app certification.
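    The three categories above can be sketched as a single, purely hypothetical platform class. Every class, method, and app name here is illustrative, not a real automotive SDK:

```python
# Hypothetical sketch of the three architecture layers above; class,
# method, and app names are illustrative, not a real automotive SDK.

CERTIFIED_APPS = {"nav-pro", "podcast-hub"}  # assumed certification registry

class VehiclePlatform:
    def __init__(self) -> None:
        self.sensor_buffer: list[dict] = []

    # 1. Software and processing: turn raw sensor data into meaningful info.
    def ingest(self, reading: dict) -> None:
        self.sensor_buffer.append(reading)

    def summarize(self) -> dict:
        speeds = [r["speed_kph"] for r in self.sensor_buffer]
        return {"avg_speed_kph": sum(speeds) / len(speeds)} if speeds else {}

    # 2. Communications: pick a protocol for a payload (toy size-based rule).
    def pick_protocol(self, payload_bytes: int) -> str:
        return "wifi" if payload_bytes > 1_000_000 else "cellular"

    # 3. App ecosystem: only certified third-party apps may launch.
    def launch_app(self, app_id: str) -> bool:
        return app_id in CERTIFIED_APPS
```

    In a production architecture each layer would be a separate subsystem with its own safety and certification constraints; the point of the sketch is simply that all three must be designed together.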

    Step 3: Verify

    All of the above needs rigorous testing and verification. Mostly, this will involve running new services in a simulated vehicle environment to check they function as intended.

    Some services may need road testing. But for those that neither touch vehicle controls nor risk causing distractions, it is usually fine to conduct ‘real-world testing’: launching beta versions, gathering user feedback, and using this data to continually improve products.

    Step 4: Create a software-driven culture to outpace the competition

    The move to digital services is forcing traditional automotive companies to rethink how they build and launch services. The digital culture of rapidly iterating digital products must reconcile itself with the more traditional, measured and safety-conscious automotive approach.

    Step 5: Evolve your supplier ecosystem

    Carmakers will need to expand their list of suppliers. Gone are the days when a Tier 1 knew everyone and could handle everything. This brave new world will encompass specialist providers in telco, silicon, software development and XR, as well as emerging technologies, like the metaverse.

    Author

    Mike Welch

    Australia Head of Telecom and Entertainment, Capgemini Invent
    Michael is responsible for defining and executing the technology strategy and roadmap across the ER&D Automotive Portfolio. He is also currently the Offer owner for Mobility Experience, which includes Intelligent Cockpit, Vehicle Communication, and Mobility Services.

      The key to speedy innovation and satisfying, safe, secure mobility?
      software

      Alexandre Audoin
      Jan 5, 2024

      The race to provide autonomous mobility and compelling customer experiences is hotting up, but automakers need to balance their need for speed and innovation with a ‘no compromise’ approach to safety and cybersecurity.

      Competition in the automotive industry is intensifying and brands are competing on more fronts than at any time in history. Of course, price, performance, brand, and residual values continue to be important. But as the industry gravitates toward electrification and software-defined vehicles, customers are looking at what else their vehicles can do for them. How well do they integrate with their lives and their digital ecosystems? Can and will the car evolve over time to add more value to daily life? And, for manufacturers, how do you build supply chain resilience and competitiveness to address these evolving demands, while ensuring availability and affordability? 

      Automakers – especially at the luxury and premium end of the market – are also intensifying their focus on providing assisted and autonomous driving capabilities and new ways to add value with digital experiences, inside and outside the vehicle. In the face of increased competition, the speed with which automakers are able to innovate and the extent to which they can engage and satisfy their customers in new ways will be crucial to future success or failure.

      Autonomous mobility at the crossroads

      For years, tech and innovation events like CES have been dominated by autonomous vehicles of all shapes and sizes. The technology is always impressive … at the shows. But, in the real world, progress has been slower than expected. For every success, it seems like there’s been at least one story of a scaled-back or canceled investment, an unfulfilled promise, or a serious safety scare.

      The pursuit of autonomous mobility is a double-edged sword. The cost of adding sensors for 20+ detection zones around the car is significant. And the volumes of data, the sophistication of algorithms, and the amount of computing power required to develop, test, and validate systems are eye-watering. And yet, the ability to offer customers safe and stress-free ways to travel; to give back quality time while getting from A to B, is a once-in-a-lifetime opportunity to build trust and open the door to a whole new world of services and revenue streams. It’s no wonder the pursuit of the various certification levels is so intense and why so many companies are taking different routes – from in-house development with tech partners to major alliances with tier-1 suppliers, and even acquisitions. Some companies are making more progress than others, but the race is still wide open.

      The in-car experience is evolving

      The transition to electric and the pursuit of autonomous-driving capabilities have major implications for the automotive customer experience, especially the in-car digital experience. With electric vehicles, we know that recharging away from home will involve idle time. And – though it may still be a way off – autonomous mobility will allow us to focus less on driving the car and leave us more time to do other things. Today, our first thought might be to reach for our smartphones or tablets, but this is a lost opportunity for vehicle manufacturers.

      And so the question becomes: How can your car keep you entertained and engaged while it charges or self-drives?

      The answers are emerging in the form of expansive screens, adaptive interfaces, extra screens for passengers, and an increasing emphasis on in-car gaming, content consumption, and subscription services – almost unlimited ways to pass the time in a vehicle, whether productively, recreationally, or simply relaxing.

      And then there’s the potential to have an AI-powered assistant, or companion, that connects all the different services and is capable of providing pretty much any information you need about your journey, your agenda, upcoming commitments, highlights from your inbox or social media feed, and much more.

      All of these features represent potential points of differentiation, and many of them are revenue-generating opportunities (e.g. subscription-based services). Beyond direct revenue and new levels of customer intimacy, in-car digital interactions also create opportunities to generate new data and insights, which can (with the right levels of consent and anonymity, of course) be used to shape new products and services – inside and outside the vehicle – and new monetization opportunities.

      Speed and satisfaction – why they matter more than ever

      You could argue that the evolutions I’ve explored above are technology trends, much like many others. However, these trends are different in that if you can achieve the combination of safe autonomous or highly assisted mobility and engage customers with compelling in-car experiences, you can gain a level of trust, and access – and even companionship – that is unprecedented in the history of OEM-customer relationships. This brings with it the opportunity to develop deeper, longer, and more lucrative relationships.

      But the race for the hearts and minds of customers is intense, with a raft of new players (many from China) to compete against, new demographics, and rapidly evolving customer expectations. In this climate of increased competition, it is imperative that automotive companies intensify their innovation efforts in a bid to deliver the integrated and connected customer experience that will soon be taken for granted. And if your brand isn’t able to provide it, you can assume that another one will. 

      Balancing the need for speed and satisfaction with a ‘zero compromise’ approach to safety and security

      Against this backdrop of ultra-intense competition and a relentless focus on innovation, OEMs must remain vigilant and understand that speed to market can never take priority over safety and security.

      Assisted and autonomous mobility can offer comfortable, convenient, and stress-free travel. But they also mean taking a significant degree of responsibility for the safety of vehicle occupants. In short, ADAS and autonomous driving systems cannot fail. Failures will result in more than a few lost sales – they could lead to loss of life, high-profile court cases, and a complete loss of confidence in your brand.

      And though it’s less likely to be a life-or-death matter, automotive brands need to be vigilant about ensuring the cybersecurity of their vehicles and data ecosystems. Digital assistance or companionship, subscriptions, services, integrated payment solutions and ecosystem services (e.g. via wearable health devices, smartphones, etc.) will typically require some degree of data sharing. This opens the door for personalization and seamlessly convenient experiences, but it’s not without its risks. No brand wants to be the next one to appear in a high-profile data leak story and risk losing the hard-earned trust of its customers.

      Software is the key to safe, secure, and satisfying experiences

      So what’s the key to accelerating innovation cycles and customer satisfaction without compromising on safety and data security?

      The answer lies in your software strategy. After all, software is at the heart of assisted and autonomous driving systems, it drives immersive and engaging digital experiences through infotainment systems and more, and it can be the key to ensuring the security of personal data and the identification and elimination of sophisticated cybersecurity threats. The right software strategy and architecture (i.e. a simplified one) can also provide you with greater flexibility during times of supply chain instability, meaning you can maintain product availability while your competition potentially suffers. As many of us learned during the pandemic, simply making sure your cars are available to potential buyers can be the biggest advantage of all.

      Capgemini Research Institute: The Art of Software

      But the stakes are too high with software and the task of transforming into a software company is too big to go it alone. Here are three ways automotive companies can get their transformation right.

      1. Partner up to boost software capabilities

      Software-driven transformation is a broad and deep-reaching process, which can encompass upskilling your existing team, building new capabilities, and finding the right balance between maintaining your existing digital products and developing new ones. This is a huge undertaking, and so it makes sense to partner up with automotive software specialists and engineers who can share and instill industry best practices, build dedicated software factories for you, or support you in maintaining existing products or developing new ones.

      2. Use cloud, virtualization, and AI to achieve more

      Cloud and AI can be used to process and analyze the high volumes of data produced during autonomous driving system development and testing, to virtualize ECUs, and to support data spaces and service ecosystems. These technologies, combined with the suite of automotive-specific accelerators being built by hyperscalers today, can supercharge your innovation and product development cycles, enabling you to get to market faster with new products and services, while keeping your – and your customers’ – valuable data secure. 

      3. Look for external inspiration

      Automotive companies can’t be everything to everybody. It’s difficult (impossible?) to develop an infotainment UX that rivals that of smartphone makers like Apple and Google if it’s not your core business. Likewise, you won’t suddenly create ‘killer’ content and entertainment options if you’re just starting out. Instead, partner up with startups and niche players in differentiating domains and focus on the bigger picture.

      The road ahead is filled with complexity and exciting developments. And yet, for all the focus on new technology, there are still large groups of customers who care little for new tech, and who continue to value practicality, build quality, and affordability above all else. How organizations address these oft-divergent customer desires within their product portfolio will be a challenge for many ‘traditional’ OEMs.

      What we can say with confidence is that mobility experiences of the future – whether they’re autonomous or human-driven – must be satisfying, safe, and secure. Automotive companies must be quick to give their customers what they want. Check out our perspective on software in automotive to learn more. 

      Software-driven mobility

      Bringing together the strengths of Capgemini in one offer

      Author

      Alexandre Audoin

      EVP, Head of Global Automotive Industry, Capgemini
      Alexandre Audoin is Capgemini Group’s global leader for the automotive industry and head of automotive within Capgemini Engineering (formerly Altran). Alexandre maintains a special focus on the creation of Intelligent Industry, helping clients master the end-to-end software-driven transformation and do business in a new way through technologies like 5G, Edge computing, Artificial Intelligence (AI), and the Internet of Things (IoT).

        Intelligent products in manufacturing: How new technologies unlock hyper-personalization

        Anubhaw Bhushan
        May 10, 2024

        Manufacturers seeking to maintain market share will need to invest in technologies that reduce lead times and unlock personalization in design and production to meet customer expectations.

        Manufacturers have long relied on standardization of products and services to ensure quality and speed. But this will not suffice unless adaptation is baked into the design, delivery, and support models. Consumers are demanding more, faster.

        Fortunately, manufacturers have new and emerging technologies at their disposal that can personalize products and experiences without sacrificing time. In fact, these tools hold the potential to accelerate entire enterprises – if they can free their minds from outdated philosophies and methods.

        Changing customer expectations

        Manufacturing companies might not immediately realize the need to deliver more personalized experiences because their primary customers are other businesses rather than end consumers. Their products are likely to be packaged and distributed, or incorporated into further manufacturing processes.

        In this business-to-business-to-consumer (B2B2C) world, professionals traditionally understood delivery delays and limited product offerings because they dealt with the same constraints. But outside of work, these same people are enjoying seamless experiences and plentiful options when shopping for consumer goods.

        This means products need to speak to and interact with buyers in a personalized fashion. Customer intimacy has moved from a “nice to have” to a “must have” KPI for a product’s acceptance, usage, and effectiveness throughout its lifecycle.

        Furthermore, people accustomed to the convenience of smartphones or tablets controlling household items (e.g., thermostats, robotic vacuum cleaners) in their personal lives increasingly expect these devices to control other products at work. That might not be a problem if they want to automate office lights and coffee pots but would pose greater risks and challenges with automobiles and heavy machinery.

        The promise of GenAI

        Advancements in generative AI are making it easier to control all products connected over the internet of things (IoT) and even the industrial internet of things (IIoT). These smart machines are already better than humans at capturing and analyzing data – and subsequently optimizing operations. They just need an intrinsic operational intelligence that’s capable of monitoring and repairing itself in case things go haywire.

        Manufacturers can leverage ever-maturing GenAI to automate these smart machines and orchestrate services and resources more efficiently and successfully.
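        The “intrinsic operational intelligence” idea can be sketched in miniature. This toy example uses a simple z-score check rather than GenAI – the readings, threshold, and self-check action are all assumptions – but it shows the shape of a machine flagging its own anomalous sensor data:

```python
# Toy sketch of "intrinsic operational intelligence": flag anomalous
# sensor readings with a simple z-score check (no real GenAI involved)
# and trigger a self-check. Readings and thresholds are assumptions.
import statistics

def detect_anomalies(readings: list[float], z_threshold: float = 3.0) -> list[int]:
    """Return indices of readings more than z_threshold std devs from the mean."""
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        return []  # all readings identical: nothing to flag
    return [i for i, r in enumerate(readings) if abs(r - mean) / stdev > z_threshold]

temps = [70.1, 70.3, 69.8, 70.0, 95.0, 70.2]  # one spurious spike at index 4
for idx in detect_anomalies(temps, z_threshold=2.0):
    print(f"reading {idx} anomalous ({temps[idx]}): scheduling self-check")
```

        A production system would learn what “normal” looks like from historical data rather than a fixed threshold, but the monitor-detect-repair loop is the same.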

        But GenAI is no excuse to go on autopilot. This technology is unlocking doors that humans alone could not have pried open, yet it will yield the greatest results – at least in the short term – with human oversight and guidance.

        The goal of GenAI-enabled solutions in manufacturing is primarily to personalize customer experiences. Any transformation rooted in automation should address the client’s demands while tapping into market trends.

        For instance, business and technology transformation partners can use GenAI to generate alternative designs for a manufacturing enterprise. However, those designs may still need to be tailored to the client’s needs and long-term strategies: GenAI-assisted designs may be strong general designs for a particular industry, but each organization has its own pain points and growth opportunities.

        Having manufacturers review a virtual representation of the design and provide continuous feedback will yield improvements and better personalization.

        Manufacturers face challenges in offering personalization

        Manufacturers are often hampered by traditional processes and aging supply chains, and struggle to create and deliver products in a timely fashion. They are so focused on meeting their deadlines that they don’t have the freedom to experiment with more personalized experiences.

        Processes are deeply ingrained. People are reluctant to accept change. But what if they used GenAI to reduce long lead times? That could be the gateway to improving the customer experience.

        The integration and alignment of information technology (IT), operational technology (OT), and engineering technology (ET) can drive faster lead times and position manufacturers for greater personalization. This confluence enables companies to infuse digital microservices through no-code and low-code app development practices.

        But many professionals still view IT and OT as completely distinct from ET, and view investing in the cosmetic nature of the product – its look and feel – as a marketing gimmick that adds minimal value to the consumer’s satisfaction.

        This couldn’t be further from the truth. IT/OT/ET convergence provides the tools needed to standardize the manufacturing process so that engineer to order (ETO), in which production begins after the customer places the order, requires almost as little manual interference as configure to order (CTO), in which the base products exist before the customer places the order.

        Manufacturers must act

        Manufacturers must adapt to the changing demands of the market and keep pace with technological innovation if they want to maintain (and ideally expand) market share. Capgemini understands the difficulties manufacturers face when implementing changes to meet the rising expectations of consumers.

        Advancements in the field of machine configuration can help manufacturers establish new processes for delivering extraordinary customer experiences. By listening to the voice of the customer – gathering and analyzing information on needs and preferences, capitalizing on the latest data and analytics strategies – manufacturers can create experiences and custom products that were unheard of years ago.

        As a business and technology transformation partner, Capgemini helps manufacturers capitalize on the latest advancements in digital technology to streamline and enhance their operations, from the user interfaces for managing processes to the heavy-duty machinery for producing goods. B2B2C companies may have a longer grace period before hyper-personalization becomes a necessity – but that time is coming. The sooner a manufacturing company gets ready, the better.

        Meet the author

        Anubhaw Bhushan

        Sr. Director, Manufacturing Domain Lead
        Anubhaw Bhushan has held enterprise technology management and implementation roles in the heavy machinery manufacturing, mining, and construction industries for the past 15+ years. In addition to driving digital transformation through a cutting-edge suite of applications, Anubhaw’s forte is refining business processes, helping manufacturers shift gears to subscription-based, lifecycle management of products, and articulating service as a monetization model. He has been instrumental in driving an IoT-based service methodology, a critical ingredient for shaping an intelligent, sustainable, and connected multi-tiered aftermarket support model.

          How to accelerate EV battery manufacturing in gigafactories

          Scott Farr
          May 8, 2024

          Learn how automotive companies can use technology to build a resilient and sustainable EV battery supply chain through gigafactories.

          The key to playing a decisive role in the growing electric vehicle market is producing enough batteries sustainably at a competitive cost, at scale, and at speed.

          Industry analysts anticipate global demand for electric vehicles (EVs) will rise in the next few years, thanks in large part to trends in China. Despite signs of growth cooling a bit, particularly in the US, EV demand growth still far outpaces other segments of the transportation industry. The long-term growth story is alive and well, and getting to market with a lead is as important as ever. A confluence of factors indicates that North America will take on a larger role in producing the batteries needed for the worldwide transition.

          The Capgemini Research Institute’s (CRI) recent report on reindustrialization strategies in North America and Europe found that 63 percent of organizations recognize the importance of establishing a domestic manufacturing infrastructure to ensure national security, and 62 percent acknowledge its significance for strengthening strategic sectors.

          The research also revealed that the US stands out as a top location for gigafactories – large-scale manufacturing facilities for batteries and component parts. Fifty-four percent of executives surveyed from automotive, battery manufacturing, and energy companies said they are currently building or plan to build at least one gigafactory in the US. Meanwhile, 38 percent said this about continental Europe.

          Automotive companies that understand how to unlock the potential of North American gigafactories stand to gain market share and position themselves as linchpins in this emerging ecosystem.

          But winning the gigafactory race will require a holistic enterprise architecture that enables data-driven business agility. Automotive companies can master this transition by accelerating speed to production, optimizing costs sustainably, digitizing end-to-end core business processes, and upskilling their workforce.

          Increasing speed to market and reducing scrap rates

          Battery production is still responsible for much of the EV’s price tag. As new competitors race to the market, even incumbent players understand the need to transform their operations to be competitive.

          It typically takes about five years for an organization with a small-scale pilot factory to complete a gigafactory and stabilize production. To remain competitive and responsive to demand, companies need a streamlined process of getting gigafactories to world-class production.

          An inefficient gigafactory launch could mean that up to 30 percent of early production ends up discarded. Reducing the scrap rate by just 10 percent can save up to $300 million annually for a 30 gigawatt-hour factory.
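          The savings figure can be sanity-checked with back-of-the-envelope arithmetic, assuming a production cost of roughly $100 per kWh of cell capacity (our assumption; the article does not state a unit cost):

```python
# Back-of-the-envelope check of the scrap-savings figure above.
# The ~$100/kWh production cost is our assumption, not from the article.

factory_capacity_kwh = 30_000_000    # 30 GWh expressed in kWh
cost_per_kwh = 100.0                 # USD per kWh, assumed
scrap_rate_reduction = 0.10          # 10 percentage points less scrap

annual_savings = factory_capacity_kwh * cost_per_kwh * scrap_rate_reduction
print(f"${annual_savings / 1e6:.0f}M per year")  # $300M per year
```

          Under that cost assumption, a 10-point scrap reduction on 30 GWh of annual output lines up with the $300 million figure.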

          Unlocking solutions with digital twins and data

          Organizations can use digital twins – virtual models of objects or systems – to recreate the cell, battery pack, manufacturing process, and factory. Digital twins enhance co-creation and simultaneous product and process engineering. By optimizing in a virtual environment, companies can design and commission production lines that minimize extensive prototyping and costly changes on the factory floor.

          Building the factory virtually before physically can save months of work. Today, we estimate that digital twin leaders see 15 to 20 percent savings in operational efficiencies.
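          “Optimizing in a virtual environment” can be illustrated with a toy model: sweep a process parameter against an assumed yield model before committing anything to the physical line. The quadratic scrap model below is purely illustrative:

```python
# Toy "digital twin" of one production step: an assumed yield model,
# swept virtually to pick a line speed before physical commissioning.

def predicted_scrap_rate(line_speed_m_per_min: float) -> float:
    """Illustrative model: scrap is lowest near 40 m/min, rising quadratically."""
    optimum = 40.0
    return 0.02 + 0.0001 * (line_speed_m_per_min - optimum) ** 2

candidate_speeds = range(20, 61, 5)  # virtual sweep: 20-60 m/min in 5 m/min steps
best = min(candidate_speeds, key=predicted_scrap_rate)
print(f"best speed: {best} m/min, predicted scrap: {predicted_scrap_rate(best):.1%}")
```

          Real digital twins replace the one-line model with physics-based or data-driven simulations of the cell, pack, and line, but the principle is the same: cheap virtual iterations in place of costly changes on the factory floor.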

          By integrating virtual and physical models to enable data-driven automation for proactive quality and production, companies can expedite the commissioning of real-world gigafactories and ramp up operations at scale.

          They should aim to establish a closed-loop operation based on a highly scalable and flexible architecture. A solid and standardized data platform will allow interoperability between different sources for a data-driven operations strategy, which enables analysis that could reduce a factory’s scrap rate.

          Digital tools can also accelerate the path to recycling, making it safer, faster, cheaper, and easier. For instance, models can combine physical and chemical disassembly with data analytics and automation to enhance the precision of planning and executing recycling. Recycling and waste management often involve breaking complex materials down into simpler substances for safer disposal.

          Engineering resilient, sustainable supply chains

          Gigafactories need a connected supply chain with visibility throughout transportation and material handling to operate effectively and produce enough batteries.

          Manufacturing electric batteries often relies on procuring raw materials – lithium, nickel, graphite, manganese, etc. – from countries with geopolitical risk, which leaves supply vulnerable to sanctions and other political hurdles.

          Meanwhile, the entire battery supply chain contributes to an EV’s lifetime emissions and could be subject to future climate-conscious legislation. While the battery supply chain is still developing, it’s important to build it right, with sustainability and resiliency in mind.

          To build resilient supply chains for gigafactories, organizations will need a single thread to connect bills of materials, partner with reliable suppliers, and enable transportation networks for valuable cargo. This requires thorough analysis of potential partners across many countries, sourcing in the Americas when possible, signing long-term contracts (for ongoing delivery) if suppliers are in riskier geographies, and designing packaging to protect battery components during shipping.
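          One way to picture the supplier-analysis step is a simple risk screen over the bill of materials. Everything below – suppliers, countries, risk scores, and the mitigation threshold – is a hypothetical illustration:

```python
# Hypothetical supplier-risk screen across a battery bill of materials.
from dataclasses import dataclass

@dataclass
class BomLine:
    material: str
    supplier: str
    country: str
    geopolitical_risk: float  # 0.0 (stable) to 1.0 (high risk), assumed scores

bom = [
    BomLine("lithium", "SupplierA", "Chile", 0.2),
    BomLine("nickel", "SupplierB", "Indonesia", 0.4),
    BomLine("graphite", "SupplierC", "CountryX", 0.8),
]

# Flag materials needing long-term contracts or alternative sourcing.
needs_mitigation = [line.material for line in bom if line.geopolitical_risk > 0.5]
print(needs_mitigation)  # ['graphite']
```

A real screen would pull risk scores from external indices and connect each line back to the single thread of bills of materials described above.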

          Organizations should digitize the supply chain for a comprehensive view on sustainability – one that enables data-informed decisions and battery tracking for responsible end-of-life disposal that recycles materials, and aims toward circularity.

          Empowering the workforce

          Organizations can face challenges recruiting the highly skilled workforce needed for specialized gigafactory responsibilities, which diverge from traditional factories in many ways. For instance, employees may be expected to maintain complex robotic systems, utilize precision automation, interact with digital twins, or use data analytics for energy management in sustainable production. Few candidates in today’s job market have all the necessary skills that align with new gigafactory processes.

          Gigafactories need thousands of employees ready for day one of production, meaning that hiring, training, and expert development must happen while the factory is still under construction.

          A training program like the Capgemini Battery Academy can help organizations define skill requirements for potential employees and upskill these hires through virtual and augmented reality (VR and AR) training modules, building skills that transfer directly into the job on day one.

          Capitalizing on growing interest in EVs

          Annual global demand for passenger plug-in EVs is expected to grow 127 percent (to nearly 22 million vehicles) by 2026, compared to 9.7 million in 2022, according to S&P Global data.
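          The cited growth figure is easy to sanity-check against the two data points:

```python
# Quick check of the S&P Global projection cited above.
ev_2022 = 9.7e6   # passenger plug-in EVs sold in 2022
ev_2026 = 22.0e6  # projected annual demand by 2026

growth = (ev_2026 - ev_2022) / ev_2022
print(f"{growth:.0%}")  # 127%
```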

          Kelley Blue Book, a Cox Automotive company, estimates that US consumers bought a record-setting 1.2 million EVs in 2023, comprising 7.6 percent of all vehicles sold in the country – up from 5.9 percent the year before. That figure is expected to reach 10 percent by the end of 2024. EV sales are still rising, just not as quickly.

          The slowdown in the US stems from the typical concerns when deciding between EVs and internal combustion engine (ICE) vehicles: range anxiety, infrastructure reliability, maintenance costs, resale value, upfront costs, and so forth.

          Despite this mild cooldown, automakers still see the long-term benefit of investing in EVs and batteries. In fact, my research indicates that federal support virtually negates near-term worries and incentivizes more aggressive investment in this sector.

          The Biden administration’s Infrastructure Law and Inflation Reduction Act together mobilized more than $50 billion toward climate resilience, which is encouraging domestic automakers to prioritize EV batteries and foreign manufacturers to open facilities stateside.

          According to the Department of Energy, more than $120 billion of investments in the US battery manufacturing and supply chain have been announced so far – nearly $45 billion pre-IRA and around $85 billion post-IRA launch.

          A Capgemini Research Institute (CRI) report found that nearly half (47 percent) of companies have already started investing in reshoring their manufacturing, which is expected to increase average onshore production capacity from 45 percent to 49 percent in just three years.

          Now is the time to go full throttle.

          Meet our expert

          Scott Farr

          Segment Lead for Automotive Battery and Electric Vehicles at Capgemini Americas
          Scott Farr has over 25 years of experience in the IT consulting industry. He is an expert at helping clients achieve improved business results through enhanced processes and digital transformation efforts.

            Generative AI lab: Clearing obstacles and building bridges to a brighter future

            Robert Engels
            Sep 14, 2023

            Generative artificial intelligence (AI) is a step change.

            While hyped technologies often grab our attention before fading into the background, the potential impact of generative AI continues to increase. What’s more, the revolution is just beginning. Our newly launched Generative AI Lab is here to make sense of the opportunities and challenges that this transformation brings.

            Understanding the impact of the revolution

            Capgemini defines generative AI as a technology with the capability to learn and reapply the properties and patterns of data for a wide range of applications, from creating text, images, and videos in different styles to generating tailored content. It enables machines to perform creative tasks previously thought exclusive to humans.

            The Capgemini Research Institute reports that nearly all (96%) executives say generative AI is a hot topic of discussion in their boardrooms. Across organizations in every sector, digital and business leaders are discussing how generative AI might be applied to use cases ranging from customer engagement to sales processes to operational activities.

            It’s important to recognize that generative AI is more than just chatter. Gartner® published that in a recent Gartner, Inc. poll of more than 2,500 executive leaders, 45% reported that the publicity of ChatGPT has prompted them to increase artificial intelligence (AI) investments. Seventy percent of executives said that their organization is in investigation and exploration mode with generative AI, while 19% are in pilot or production mode.[1]

            The rate of investigation is highest among high-tech companies. Where OpenAI led with its work on ChatGPT, other vendors are now following quickly in its footsteps. The Capgemini Research Institute reports that 86% of organizations are either working on generative AI pilots or have already enabled functionality.

            Attend any technology conference today and you’ll hear a barrage of AI-related product launches. Fearful of falling behind their competitors, technology vendors are fighting to grab a piece of the generative AI action. The technology market is being flooded with standalone tools and AI-enabled additions to existing systems and services.

            Panning for nuggets amid this AI gold rush is a daunting challenge. At work, curious staff are beginning to use generative AI in their everyday activities, and executives are looking at ways to harness that momentum.

            While some of these tools could lead to huge boosts in productivity, it’s crucial to understand how these tools exploit data and how they might operate as part of an integrated technology stack. Today, we believe this deeper awareness is lacking. The result is a series of major hurdles that are associated with publicly available generative AI models:

            • Too disjointed – They don’t address the needs for risk, privacy, and business controls.
            • Too universal – They don’t understand business knowledge and cultural context.
            • Too uncontrollable – They don’t have mechanisms to control the quality of outputs.
            • Too risky – They don’t prevent third parties from reading and learning from data.
            • Too immature – They don’t have a built-in and enterprise-scale technology stack.

            Developing an awareness of what’s next and what’s possible

            At Capgemini, we recognize the potential benefits of generative AI are undeniable, yet so are the potential risks that come from unregulated deployment. Business leaders can’t afford to let the technology be implemented without due care and consideration. In this fast-moving area of innovation, it’s vital to establish what’s coming next and what might be possible.

            Capgemini’s dedicated Generative AI Lab is working to develop this important insight. Our group-wide effort aims to understand developments and advances in AI. With the rapid rise of generative AI, we expect the pace of change to quicken further. We anticipate both impactful breakthroughs in capabilities and unforeseen challenges and applications.

            The Lab orchestrates our efforts to make sense of this revolution. We develop thought leadership, research, and internal readiness in this emerging area, allowing the wider group to develop a strong sense of how generative AI will affect all businesses today, tomorrow, and long into the future. The Lab’s work concentrates on two key horizons:

            1. Internally, we provide a lighthouse effect on what’s coming next in generative AI. We develop an awareness of the key capabilities that are required, providing an early warning to our group of any major changes that are emerging.
            2. Externally, we present industry-leading thought leadership on the opportunities and challenges from advances in generative AI. We undertake research and development alongside partners and academics, establishing practical responses.

            Our Lab is staffed by a dedicated team of Capgemini AI experts from around the world. While the rush to implement generative AI is a recent trend, the technology itself has been a long time in gestation. During this period, Capgemini has worked with clients on AI across multiple sectors, including life sciences, consumer products and retail, and financial services.

            We’ve helped a life science company re-sequence DNA and we’ve supported banks as they’ve used generative AI to translate old software into modern languages. We’ve worked on documentation for highly complex engineering products, and we’ve partnered with an insurance firm as it uses natural language to provide accurate answers to non-technical staff.

            The experts in our Generative AI Lab will draw on these experiences and develop internal knowledge and external responses as professionals continue to explore emerging technology.

            Conclusion: Rewards without the risks

            The rapid rise of generative AI brings excitement and concern in equal measure. However, three-quarters (74%) of executives believe the benefits of generative AI outweigh the risks, according to the Capgemini Research Institute. Our Generative AI Lab has been created to identify pathways to a brighter, AI-enabled future. The Lab will work to clear the obstacles and build the bridges that will help us all reach this destination successfully.


            [1] Gartner Press Release, Gartner Poll Finds 45% of Executives Say ChatGPT Has Prompted an Increase in AI Investment, May 3, 2023. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

            Robert Engels

            Vice President, CTIO Capgemini I&D North and Central Europe | Head of Generative AI Lab
            Robert is an innovation lead and a thought leader in several sectors and regions, and holds the position of Chief Technology Officer for Northern and Central Europe in our Insights & Data Global Business Line. Based in Norway, he is a known lecturer, public speaker, and panel moderator. Robert holds a PhD in artificial intelligence from the Technical University of Karlsruhe (KIT), Germany.

              Driving innovation in financial services: Pega Infinity vs. Java EE solutions

              Dinesh Karanam
              15 February 2024

              In the swiftly evolving landscape of financial services, organizations in banking, capital markets, and insurance are grappling with multifaceted challenges where the efficiency and speed of application development are paramount.

              These industries are under constant pressure to streamline complex processes, ensure rapid deployment of services, maintain strict regulatory compliance, and provide a seamless customer experience. The latest whitepaper from Capgemini, focusing on the productivity comparison between Pega Infinity’23 and traditional custom development in Java, brings to light solutions that address these pressing issues.

              Customer problems in financial services:

              • Efficiency in process design and workflow automation: Financial institutions face the daunting task of designing and deploying complex financial products and services rapidly. The industry demands solutions that streamline the entire lifecycle ensuring that products meet market needs quickly while adhering to stringent compliance standards. The ability to adapt these workflows efficiently in response to regulatory changes or market dynamics is also crucial.
              • Innovative UI/UX for superior customer engagement: In the digital age, customer expectations for online banking and insurance services are higher than ever. Financial institutions must provide innovative, intuitive, and engaging user interfaces and experiences. This involves not just aesthetic appeal but also ensuring that digital platforms are responsive, accessible, and personalized, catering to the diverse needs and preferences of users.
              • Cost optimization and strategic resource allocation: With the pressure to innovate and stay competitive, financial institutions are also focused on optimizing costs and strategically allocating resources. This involves reducing the total cost of ownership (TCO) across development, maintenance, and training, without compromising on service quality or compliance. Achieving this balance requires solutions that are not only efficient and effective but also scalable and easy to manage in the long term.
              • Seamless integration of cutting-edge technologies: The financial sector is increasingly looking towards cutting-edge technologies like AI, machine learning, and predictive analytics to drive decision-making, personalize services, and enhance operational efficiency. Integrating these technologies seamlessly into existing systems, however, poses a significant challenge. Institutions need platforms that can not only accommodate these technologies but also leverage them to the fullest, transforming data into actionable insights and competitive advantages.

              Evolution in process automation

              Process automation has experienced a profound revolution, reshaping the landscape of application development. The emergence of low-code/no-code platforms, coupled with the infusion of generative AI, has fundamentally changed the game. These advancements have made application development more accessible and significantly faster, breaking down barriers and catalyzing innovation. At the forefront of this transformative wave are Pega Infinity’23 and Java Enterprise Edition. Both platforms, supercharged by the capabilities of generative AI, stand as beacons of this evolution, delivering levels of productivity and innovative potential that were once unimaginable.

              Dissecting the paradigms

              The whitepaper provides an in-depth analysis, juxtaposing two distinct technological frameworks that are redefining the realm of application development:

              Low-code platforms with Pega Infinity’23

              Pega Infinity’23 emerges as a paradigm of efficiency and user-centric design. It boasts an array of intuitive visual interfaces that simplify the design process, a comprehensive suite of pre-built components that accelerates development, and the cutting-edge Pega GenAI™ that reshapes the development landscape. The incorporation of generative AI elevates the platform’s capabilities, offering features like automated code generation and dynamic adaptability to process changes, thereby multiplying productivity and offering unparalleled flexibility in application development.

              Advanced cloud development with Java EE (JEE) and Microservices

              On the other end of the spectrum lies the more traditional, yet profoundly robust, route of advanced cloud development using Java EE (JEE) and Microservices. It’s an ideal choice for scenarios demanding highly customized and sophisticated application solutions. The approach, while offering an unparalleled level of flexibility and control, necessitates a more substantial investment in time and resources. It’s particularly suited for projects where the intricacy of the application’s functionalities and the need for tailor-made solutions outweigh the imperative for rapid development.

              Key findings and strategic insights

              The study presents a meticulous analysis, fortified with compelling metrics, that underscores the revolutionary impact of Pega Infinity’23 in the process automation sector. These insights reveal not just incremental improvements but substantial leaps in development efficiency, strategic impact, adaptability, and cost efficiency:

              Development efficiency: Pega’s productivity has surged by 33% compared to previous benchmarks against JEE, with a 75% efficiency increase in workflow automation and an impressive productivity factor of 8.9. Pega’s development pace is six times quicker than traditional Java, with a productivity factor of 9.5. The introduction of Pega Constellation and GenAI™ enhances user engagement by dynamically adapting UI elements to user behaviors, ensuring a seamless and intuitive user experience.

              Strategic impact and cost efficiency: Pega offers a rapid development environment with a productivity factor 7.8 times higher than custom builds and showcases strength in automating complex business processes. Pega GenAI™ simplifies AI feature implementation and streamlines the testing phase with automated data generation.


              Pega GenAI™ – A game changer

              Pega GenAI™, integrated with Pega Infinity’23, signifies a revolutionary shift in application development. It infuses AI-driven efficiencies into every aspect of application creation and management, simplifying complex tasks, automating routines, and providing intelligent insights. It enhances the robustness and reliability of the final product, streamlines integration, and ensures precision and accuracy by making data-driven decisions.

              Navigating the future of process automation

              This whitepaper serves as a visionary guide in the process automation landscape, dissecting the capabilities of Pega Infinity’23 and JEE Microservices. It provides invaluable insights for decision-makers and developers, anticipating future trends and offering knowledge for informed decision-making. The whitepaper invites readers to embrace the transformative potential of these platforms in process automation implementations.

              Embrace the revolution in process automation

              Discover the findings that delineate the capabilities and advantages of Pega Infinity’23 and JEE Microservices. By engaging with the content of this whitepaper, you position yourself at the forefront of the process automation revolution, ready to harness the advanced features, enhanced productivity, and strategic insights offered by Pega Infinity’23 and JEE Microservices.

              Meet our expert

              Dinesh Karanam

              Senior Director, Business Processes and Augmented Services Leader for North America, Financial Services
              Dinesh leads business and technology transformations for global organizations, using his 25 years of expertise in diverse industries to drive strategic innovation and impactful changes. He enhances operational efficiency and spearheads global teams to deliver significant business achievements, including profit growth and digital advancements. ​

                The convergence of spatial computing and enterprise-grade solutions

                Alexandre Embry
                Apr 11, 2024

                The rise of spatial computing requires enterprise-grade spatial processing capability.

                This is because of the required mix of immersive technologies, such as AR, VR, and MR, combined with AI and ML to involve machines, people, objects, and their environments, and because of the level of visualization, interaction, and collaboration demanded by complex digital twins in the industrial metaverse.

                When scaled, this might be very CPU, GPU, and system resource intensive, requiring a large amount of physical CPU cores, GPUs, memory and network bandwidth. To unlock the full power of this transformative concept for industries, many initiatives from the tech ecosystem are emerging.

                A great example comes from Lenovo and NVIDIA. They are collaborating to enable enterprises to materialize the possibilities offered by spatial computing, generative AI, and digital twin technology in a variety of sectors through an end-to-end solution.

                Collaborative XR experiences between multiple users can be easily pixel-streamed from a single workstation to separate spatial computing headsets simultaneously, using 3D software like Autodesk VRED or NVIDIA Omniverse. Great progress in the computing domain.

                Meet the author

                Alexandre Embry

                Vice President, Head of the Capgemini AI Robotics and Experiences Lab
                Alexandre leads a global team of experts who explore emerging tech trends and devise at-scale solutioning across various horizons, sectors and geographies, with a focus on asset creation, IP, patents and go-to market strategies. Alexandre specializes in exploring and advising C-suite executives and their organizations on the transformative impact of emerging digital tech trends. He is passionate about improving the operational efficiency of organizations across all industries, as well as enhancing the customer and employee digital experience. He focuses on how the most advanced technologies, such as embodied AI, physical AI, AI robotics, polyfunctional robots & humanoids, digital twin, real time 3D, spatial computing, XR, IoT can drive business value, empower people, and contribute to sustainability by increasing autonomy and enhancing human-machine interaction.

                  Triple win: We take home three Google Cloud Partner of the Year Awards 

                  Herschel Parikh
                  9 Apr 2024

                  Nearly a decade of collaboration with Google Cloud has unlocked incredible potential. By combining forces, we’ve repeatedly shown the promise of a joint approach in unleashing possibilities and powering business transformation, as we help companies modernize their data and operations in sectors ranging from financial services, to retail, to the public sector. 

                  I’m so proud of our partnership and its evolution. Last year, we were awarded several Google Cloud Partner of the Year awards – and since then the pace of technological innovation has accelerated dramatically, presenting exciting new opportunities to power business transformation. By working with the Google Cloud team, we’ve developed and leveraged unique new solutions to meet customer and industry needs. 

                  As a result, I’m excited to share Capgemini has won Partner of the Year awards in three categories: 

                  • Global Industry Solution Partner of the Year award (Services) for Generative AI 
                  • Global Industry Partner of the Year award (Services) for Financial Services & Insurance 
                  • Global Specialization Partner of the Year award for SAP on Google Cloud 

                  A commitment to accelerating Generative AI 

                  Generative AI innovation is moving fast, and so are we. From the moment Google previewed its plans around generative AI, we jumped in to promote this new technology and its many applications. 

                  When I think about how generative AI has touched nearly every industry in the past year, it’s truly awe-inspiring – especially considering that only 18 months ago, most companies weren’t using it at all. Generative AI has since become part of nearly every conversation we have and, to leverage its full potential, we’re investing heavily in our ability to expand on our expertise and offerings. 

                  Last year, for example, we created the first-of-its-kind global generative AI Google Cloud Center of Excellence (CoE), including 18 dedicated subject matter experts from practices ranging from strategy to data science to software engineering – covering all angles of generative AI applications. And that was just the beginning of what is sure to be a long and fruitful collaboration with Google Cloud as generative AI picks up speed and begins to deliver immense value. 

                  Our ability to learn quickly and leverage solutions comes in part from our broad global reach. We’ve hosted global hackathons with thousands of participants who are developing demos and accelerators in the span of weeks, rather than months or years. We’ve also participated in Google’s trusted tester program, which gave us the opportunity to access generative AI technologies before they are generally available and to design solutions ahead of their market release. 

                  These early efforts mean we’ve already generated hundreds of use cases and mobilized tens of thousands of consultants on Google Cloud generative AI solutions, and we’re starting to see customers move from early experimentation to solid use cases. The coming year will undoubtedly bring more change and more opportunity, and we’re excited about the possibilities.

                  Expanding our successes in financial services 

                  We’ve long been recognized for our expertise in the financial services and insurance sectors, and last year we saw exponential expansion in this domain thanks to new business accounts built on our Google Cloud partnership. 

                  Modernization can be challenging in the financial services sector, where regulatory bodies closely oversee compliance requirements for data handling. But our Google Cloud CoE has enabled us to develop a deep understanding around the laws and regulations, business functions, and compliance requirements of our customers.  

                  Part of the reason we’ve been so successful at growing with Google Cloud is because we’ve created an environment where our clients can use Google services safely and in compliance with industry regulations. That’s how we’ve helped countless organizations modernize their legacy platforms and successfully move to the cloud with Google Cloud data services.

                  We’ve also invested significantly in our generative AI capabilities to deliver new value to our financial service sector clients. Our focus has been on helping customers define a roadmap to scale their implementations of generative AI while ensuring they see immediate value. This of course requires specific considerations around data privacy and communications protocols, with special attention to regulatory requirements.  

                  It’s been so rewarding to work collaboratively with the Google Cloud team on this, to grow the value we bring to clients, and to ensure that we’re providing security and a safe landing zone as financial services and insurance clients begin using cloud services.

                  Deepening our expertise in SAP

                  Our domain expertise in SAP also runs deep, and last year we expanded it even further by obtaining a specialization designation in SAP Cloud (along with designations in cloud migration and infrastructure).  

                  As one of SAP’s largest partners, we knew that boosting our expertise in SAP migrations to Google Cloud would reinforce our mastery of Google’s Cortex Framework. And that was important to our clients. Since then, I’m pleased to say we’ve seen an increase in opportunities to help our clients move their SAP implementations to Google Cloud.  

                  We’ve always been a trusted guide for our clients in leveraging their SAP implementations, but now, with SAP’s shift to S/4HANA, Google Cloud Platform will play an increasingly important role in enabling that transition and, with our expanding breadth of knowledge, we can bring it all together.

                  The journey to true value 

                  As we continue driving transformation with Google Cloud services, we remain committed to delivering genuine business value for our clients with the best industry solutions. In leveraging the strength of partners like Google, we can turn the often chaotic process of modernization into structured and value-driven outcomes that truly pay off over the long term. 

                  Ready to exceed expectations? Put the power of Google Cloud at your service with Capgemini. Let’s connect and chat about your plans to transform. 

                  Author

                  Herschel Parikh

                  Global Google Cloud Partner Executive
                  Herschel is Capgemini’s Global Google Cloud Partner Executive. He has over 12 years’ experience in partner management, sales strategy & operations, and business transformation consulting.

                    Auditing ChatGPT – part I

                    Grégoire Martinon, Aymen Mejri, Hadrien Strichard, Alex Marandon, Hao Li
                    Jan 12, 2024
                    capgemini-invent

                    A Chorus of Disruption: From Cave Paintings to Large Language Models

                    Since its release in November 2022, ChatGPT has revolutionized our society, captivating users with its remarkable capabilities. Its rapid and widespread adoption is a testament to its transformative potential. At the core of this chatbot lies the GPT-4 language model (or GPT-3.5 for the free version), developed by OpenAI. We have since witnessed an explosive proliferation of comparable models, such as Google Bard, Llama, and Claude. But what exactly are these models and what possibilities do they offer? More importantly, are the publicized risks justifiable and what measures can be taken to ensure safe and accountable utilization of these models?

In this first part of our two-part article, we will discuss the following:

- What Large Language Models (LLMs) are, and the technological history behind ChatGPT
- Why everyone is talking about them
- What you can do with an LLM, and how you can use one

What are Large Language Models (LLMs)?

                    Artificial intelligence (AI) is a technological field that aims to give human intelligence capabilities to machines. A generative AI is an artificial intelligence that can generate content, such as text or images. Within generative AIs, foundation models are recent developments often described as the fundamental building blocks behind such applications as DALL-E or Midjourney. In the case of text-generating AI, these are referred to as Large Language Models (LLMs), of which the Generative Pre-trained Transformer (GPT) is one example made popular by ChatGPT. More complete definitions of these concepts are given in Figure 1 below.

Figure 1: Definitions of key concepts around LLMs

                    The technological history of the ChatGPT LLM

In 2017, a team of researchers created a new type of model within Natural Language Processing (NLP) called the Transformer. It achieved spectacular performance on sequential-data tasks, such as text or time series. By using a technique called the ‘attention mechanism’, first published in 2015, the Transformer pushed past the limits of previous models, particularly the length of the texts that could be processed and generated.
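The attention idea at the heart of the Transformer can be illustrated with a minimal pure-Python sketch of scaled dot-product attention (toy vectors, no batching or learned projections; this is an illustration, not a production implementation):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(queries, keys, values):
    """Scaled dot-product attention: each query scores every key,
    and the output is the attention-weighted average of the values."""
    d_k = len(keys[0])
    outputs = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d_k) for k in keys]
        weights = softmax(scores)  # weights sum to 1
        out = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
        outputs.append(out)
    return outputs

# Toy example: one query token attending over two key/value pairs.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(attention(Q, K, V))
```

Because the weights sum to one, each output token is a convex mixture of all value vectors, which is what lets the model relate words that are far apart in a long text.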

In 2018, OpenAI created a model inspired by the Transformer architecture (the decoder stack in particular), chiefly because the decoder’s masked attention makes it excel at text generation. The result was the first Generative Pre-trained Transformer. The same year saw the release of BERT, a Google NLP model also inspired by the Transformer. Together, BERT and GPT launched the era of LLMs.

Competing with BERT and its variants, OpenAI released GPT-2 in 2019 and GPT-3 in 2020. These two models benefited from an important breakthrough: meta-learning. Meta-learning is a paradigm of Machine Learning (ML) in which the model “learns how to learn”: for example, it can respond to tasks other than those for which it was trained.

OpenAI’s aim is for its GPT models to be able to perform any NLP task given only an instruction and, possibly, a few examples, with no need for a task-specific training database. OpenAI has succeeded in making meta-learning a strength, thanks to increasingly large architectures and massive datasets retrieved from the internet.
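This “instruction plus a few examples” usage can be sketched as a few-shot prompt. The `build_few_shot_prompt` helper below is purely illustrative, not part of any OpenAI API:

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: an instruction, a handful of
    solved examples, and the new input the model should complete."""
    lines = [instruction, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate English to French.",
    [("cat", "chat"), ("house", "maison")],
    "tree",
)
print(prompt)
```

The model sees the pattern in the examples and continues it for the new input, with no task-specific fine-tuning involved.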

To take its technology further, OpenAI moved beyond NLP by adapting its models for images. In 2021 and 2022, OpenAI published DALL-E 1 and DALL-E 2, two text-to-image generators. These generators enabled OpenAI to make GPT-4 a multi-modal model, one that can understand several types of data.

Next, OpenAI released InstructGPT (GPT-3.5), which was designed to better meet user demands and mitigate risk; this was the version behind ChatGPT at its launch in late 2022. In March 2023, OpenAI released an even more powerful and secure version: the premium GPT-4. Unlike preceding versions, GPT-3.5 and GPT-4 attracted strong commercial interest. OpenAI has since adopted a closed-source ethos, no longer revealing how its models work, and become a for-profit company (it was originally a non-profit). Looking to the future, we can expect OpenAI to push the idea of a single prompt interface for all tasks and all types of data even further.

Why is everyone talking about large language models?

                    Only those currently living under a rock will not have heard something about ChatGPT in recent months. The fact that it made half the business world ecstatic and the other half anxious should tell you how popular it has become. But let’s take a closer look at the reasons why. 

OpenAI’s two remarkable feats

With the development of meta-learning, OpenAI created an ultra-versatile model capable of providing accurate responses to all kinds of requests, even those it has never encountered before. On some specific tasks, GPT-4 even achieves better results than specialized models.

In addition to these technological leaps, OpenAI has driven democratization. By deploying its technology as an accessible chatbot (ChatGPT) with a simple interface, OpenAI has made this powerful language model’s capabilities available to everyone. Public access also enables OpenAI to collect more data and feedback to improve the model.

                    Rapid adoption  

The rapid adoption of GPT technology via ChatGPT has been unprecedented: never has an internet platform or technology been adopted so rapidly (see Figure 2). ChatGPT now boasts 200 million users and two billion visits per month.

Figure 2: Speed of reaching 100 million users, in months

The number of LLMs is exploding, with competitors coming from Google (Bard), Meta (Llama), and Hugging Face (HuggingChat, a French open-source alternative). There is also a surge in new applications: LLMs have been implemented in search engines, for example, and Auto-GPT turns GPT-4 into an autonomous agent. This remarkable progress is stimulating a new wave of research, with LLM publications growing exponentially (Figure 3).

                    Figure 3: Cumulative number of scientific publications on LLMs.

                    Opportunities, fantasies, and fears

                    The new standard established by GPT-4 has broadened the range of possible use cases. As a result, many institutions are looking to exploit them. For example, some hospitals are using them to improve and automate the extraction of medical conditions from patient records.  

                    On the other hand, these same breakthroughs in performance have given rise to a host of fears: job insecurity, exam cheating, privacy threats, etc. Many recent articles explore this growing anxiety, which now seems justified – Elon Musk and Geoffrey Hinton are just two of the many influential tech figures now raising the alarm, calling it a new ‘code red.’  

However, as is often the case with technological advances, people struggle to distinguish real risks from irrational fears (e.g., a world in which humans hide from robots, as in The Terminator). That fantasy rests on the creation of a model that rivals or surpasses the human brain, which is inextricably linked with the question of machine consciousness. It is worth noting that such a model is the stated ultimate goal of OpenAI, namely AGI (Artificial General Intelligence).

Whether these scenarios remain fantasies or become realities, GPT-4 and other large language models are undoubtedly revolutionizing our society and represent a considerable technological milestone.

                    What can you do with an LLM?

Essentially, an LLM like the one behind ChatGPT can:

1. Generate natural-language content: this is where LLMs excel, having been trained specifically for this purpose. They strive to adhere as closely as possible to the given constraints.
2. Reformulate content: this involves providing the LLM with a base text and an instruction to perform a task such as summarizing, translating, substituting terms, or correcting errors.
3. Retrieve content: an LLM can be asked to search for and retrieve specific information from a corpus of data.
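The three capabilities above differ mainly in how the prompt is framed. A sketch, where `llm` is a hypothetical stand-in for any chat-completion call (no real API is assumed):

```python
# Hypothetical stand-in for an LLM call; any chat-completion API would do here.
def llm(prompt: str) -> str:
    return f"<model response to {len(prompt)} chars of prompt>"

# 1. Generation: the constraints live entirely in the instruction.
generation = llm("Write a four-line poem about the sea.")

# 2. Reformulation: a base text plus an instruction to transform it.
text = "ChatGPT now boasts 200 million users and two billion visits per month."
summary = llm(f"Summarize in five words:\n{text}")

# 3. Retrieval: a corpus is supplied in the prompt and then queried.
corpus = "Policy A targets 2030. Policy B targets 2050."
answer = llm(f"Using only the text below, which policy targets 2050?\n{corpus}")
```

In all three cases the model itself is unchanged; only the structure of the prompt determines which capability is exercised.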

                    How can you use an LLM?      

There are three broad ways to apply an LLM, summarized in Figure 4. The first is the direct application, where the LLM is used only for the tasks it can perform on its own. This is, a priori, the use case of a chatbot like ChatGPT, which directly implements GPT-4. While this is one of the most common applications, it is also one of the riskiest, because the LLM often acts like a black box and is difficult to evaluate.

One emerging use of LLMs is the auxiliary application. To limit risk, the LLM is implemented here as an auxiliary tool within a larger system. In a search engine, for example, an LLM can be used as an interface for presenting the results of a search. This use case has been applied to the corpus of IPCC reports. The disadvantage is that the LLM is far from being fully exploited.

In the near future, the orchestral application of LLMs is likely to consume much of large organizations’ research budgets. In an orchestral application, the LLM is both the interface with the user and the brain of the system in which it is implemented. The LLM understands the task, calls on auxiliary tools in its system (e.g., Wolfram Alpha for mathematical calculations), and then delivers the result. Here, the LLM acts less like a black box, although the risk assessment of such a system also depends on the auxiliary tools. The best example to date is Auto-GPT.

                    Figure 4: The three possible applications of an LLM
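A minimal sketch of the orchestral pattern described above: in a real system the LLM itself would plan the tool call, whereas here a hypothetical `route` stub stands in for that planning step:

```python
# Auxiliary tools available to the orchestrating model.
def calculator(expression: str) -> str:
    # Stand-in for a maths engine such as Wolfram Alpha.
    return str(eval(expression, {"__builtins__": {}}))

def search(query: str) -> str:
    # Stand-in for a search backend.
    return f"Top result for {query!r}"

TOOLS = {"calculator": calculator, "search": search}

def route(request: str) -> tuple:
    """Hypothetical stand-in for the LLM's planning step: decide
    which tool to call and with what argument."""
    if any(ch.isdigit() for ch in request):
        return "calculator", request
    return "search", request

def orchestrate(request: str) -> str:
    tool_name, arg = route(request)
    result = TOOLS[tool_name](arg)
    # In a real system the LLM would now phrase `result` as an answer.
    return f"[{tool_name}] {result}"

print(orchestrate("12 * 7"))                # routed to the calculator
print(orchestrate("IPCC report summary"))   # routed to search
```

Note how the risk profile depends on the tools: here the calculator uses `eval`, which a real deployment would replace with a sandboxed maths engine.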

Focusing on the use case of a chatbot citing its sources

One specific use case emerging among our customers is that of a chatbot citing its sources. It responds to a key weakness of LLMs: the inability to interpret their results (i.e., to understand which sources the model used and why).

                    Figure 5: Technical diagram of a conversational agent quoting its sources

To delve into the technical details of a chatbot citing its sources (the relevant pattern, illustrated in Figure 5, is called Retrieval-Augmented Generation, or RAG): the model takes a user request as input and transforms it into an embedding (i.e., a word or sentence vectorization that captures semantic and syntactic relationships). The corpus of texts has already been transformed into embeddings in the same way. The goal is then to find the embeddings within the corpus that are closest to the query embedding, usually with nearest-neighbour search algorithms. Once the corpus elements that can help with the response have been identified, they are passed to an LLM, which synthesizes the answer; alongside the response, the elements that were used to generate it are provided. The LLM thus serves as an interface for presenting the search engine’s results. This RAG approach decouples the factual information provided by the sources from the semantic analysis provided by the LLM, leading to better auditability of the chatbot’s results.
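The retrieval step of this RAG pattern can be sketched with toy bag-of-words embeddings and cosine similarity (production systems use learned dense embeddings and approximate nearest-neighbour indexes instead):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real RAG uses learned dense vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    num = sum(a[t] * b[t] for t in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def retrieve(query: str, corpus: list, k: int = 1) -> list:
    """Return the k corpus passages closest to the query embedding."""
    q = embed(query)
    return sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)[:k]

corpus = [
    "GPT-4 is a multi-modal large language model released in 2023.",
    "The Transformer architecture was introduced in 2017.",
    "DALL-E 2 is a text-to-image generator.",
]
sources = retrieve("When was the Transformer published?", corpus)
print(sources)  # retrieved passages are then passed to the LLM with the question
```

The retrieved passages are cited back to the user alongside the generated answer, which is what makes the chatbot’s output auditable.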

                    Read more in Auditing ChatGPT – part II

                    Authors


                    Grégoire Martinon

                    Trusted Senior AI Expert at Quantmetry
                    Grégoire leads Quantmetry’s trustworthy AI expertise. He ensures that missions keep close to the state of the art and directs the scientific orientations on the theme of trustworthy AI. He is the R&D referent for all projects aimed at establishing an AI certification methodology.

                    Aymen Mejri

                    Data Scientist at Quantmetry
Holding two master of engineering degrees and a master of science in data science, Aymen has steered his career towards artificial intelligence, specifically natural language processing. He is part of the NLP expertise at Quantmetry.

                    Hadrien Strichard

                    Data Scientist Intern at Capgemini Invent
                    Hadrien joined Capgemini Invent for his gap year internship in the “Data Science for Business” master’s program (X – HEC). His taste for literature and language led him to make LLMs the main focus of his internship. More specifically, he wants to help make these AIs more ethical and secure.

                    Alex Marandon

                    Vice President & Global Head of Generative AI Accelerator, Capgemini Invent
Alex brings over 20 years of experience in the tech and data space. He started his career as a CTO in startups, later leading data science and engineering in the travel sector. Eight years ago, he joined Capgemini Invent, where he has been at the forefront of driving digital innovation and transformation for his clients. He has a strong track record in designing large-scale data ecosystems, especially in the industrial sector. In his current role, Alex crafts Gen AI go-to-market strategies, develops assets, upskills teams, and assists clients in scaling AI and Gen AI solutions from proof of concept to value generation.

                    Hao Li

                    Data Scientist Manager at Capgemini Invent
Hao is a Lead Data Scientist and the referent on NLP topics, covering strategy, acculturation, methodology, business development, R&D, and training on the theme of Generative AI. He leads innovation by combining generative AI, traditional AI, and data.
