
Five transformative trends in the digital workplace for 2024

Alan Connolly, James McMahon, & Lukasz Ulaniuk
22 Jan 2024

Today’s workplace has the potential to be far more than a set of tools and technologies. It can be a philosophy, a lifestyle, a community, and a culture that fosters innovation, empowers employees, and truly puts people first.

Developments in immersive technology, artificial intelligence (AI), predictive analytics, machine learning, and cloud have laid the foundations for this new era of work. Advancements in these technologies allow people to transcend physical barriers to collaboration and engagement and to be more innovative and creative in their daily work, while enabling businesses to make better, more accurate decisions backed by data.

The driving force behind this shift is experience. The personal experience of each employee across the key moments that matter within the workplace has become the yardstick for success. This shift in focus from productivity to performance, from physical boundaries to borderless connectivity, and from mundane routines to dynamic processes revolves around the idea of creating a personalized and enriching journey for every individual within the organization.

If 2023 was the year people realized the power of new tools, then 2024 sets the stage for unprecedented growth: continued technological advancement, augmented human intelligence, and embedded sustainable practices.

It comes at a time when many of today’s employees are, to some degree, unsatisfied or unhappy with their experience at work. They want to be in the driving seat but their managers are often unaware of this: Capgemini research last year found 92% of leaders say their employees are happy at work, while only 30% of individual contributors and 65% of managers agree. The smartest organizations in 2024 will be those that successfully reimagine jobs and work environments around employees, putting well-being, skills, and culture at the core of business design.

1. The workplace becomes more human-centric by design

Over the next 12 months, the focus will continue to shift from productivity to performance. With many people now working remotely at least a few days a week, leaders increasingly realize that getting the most out of employees is not a matter of time spent at a desk but of flexible approaches that let people add value in the way that works around their lives.

Personalized services, customized learning paths, and adaptive environments based on individual needs will become the norm, and the use of digital assistants such as Microsoft Copilot will increase, augmenting human capabilities in new ways. The broader shift from reactive support to proactive enablement is creating a people-centric approach that will change the way workplaces are designed and operate. Importantly, we expect this to extend to the frontline, where automation will increasingly handle repetitive tasks, freeing workers to focus on shaping outcomes rather than just producing outputs.

2. Immersive technology augments employee experience

Immersive technology, and virtual reality in particular, has ridden the hype curve since Facebook rebranded to Meta in 2021. This year, however, we expect the potential of immersive technology to start maturing across industries ranging from medicine to manufacturing.

For example, by using smartphones, tablets, or wearable devices such as the Meta Quest VR headset, factory employees could access a digital twin of the appliance that they are manufacturing, permitting practical training and diagnosis of faults without interrupting or otherwise negatively affecting the production process. Similar simulations can happen in almost any industry to help with onboarding and training with a host of benefits, whether that’s minimizing risks like in surgery or enhancing collaboration in the hybrid workplace.

According to Capgemini research, less than one-third of employees (29%) are happy with the collaboration tools they have access to at work. To engage and attract a new generation of workers, organizations will therefore have to leverage the consumer devices launched last year to start testing and creating immersive environments for their teams. Case studies, such as Airbus’s application of digital twins to guide new hires through complex processes, showcase the potential of this approach.

3. Time to deliver on the ‘social’ in ESG

According to the International Labor Organization (ILO), one in four people do not feel valued at work. Yet research has consistently shown that high levels of diversity, equity, and inclusion (DEI) are closely associated with a variety of business-related benefits, including higher levels of productivity, stronger innovation, improved performance, and better talent retention.

As the ambitious 2030 deadline set by the United Nations Sustainable Development Goals approaches, this year companies must not only foster cultures where employees can bring their whole selves to work and celebrate unique perspectives and experiences, they must also take greater responsibility for driving positive social change. They can do this in many ways, but they’ll find that leveraging immersive and mixed-reality platforms will accelerate their efforts.

4. Sustainability becomes more deeply embedded

The workplace of the future is, and must be, a sustainable workplace, and in 2024, we expect to see greater efforts to embed green practices in the digital workplace. Again, this will require a proactive approach that combines skills programs such as Capgemini’s Virtual Sustainability Campus, renewable and circular energy management, and the embedding of sustainability components into modern endpoint management and Device-as-a-Service offerings. With over 50,000 companies facing new regulatory requirements to disclose the impact of their operations on nature as the EU’s Corporate Sustainability Reporting Directive comes into force, it’s likely that leading brands will need to go beyond reporting their impact to stand out from the crowd.

5. The impact of intelligent automation, AI, and data

The marriage of AI, automation, and data sets the stage for continued advancements and ensures that the digital workplace remains at the forefront of innovation. AI, with its ability to analyze vast datasets and derive meaningful insights, empowers organizations to make informed decisions promptly. Automation streamlines processes, eliminating bottlenecks and expediting workflows, and the integration of data ensures that these processes are not only efficient but also adaptive, continuously learning and evolving to meet the dynamic demands of the digital landscape.

The impact of all three will ultimately prove fundamental to accelerating growth, streamlining governance, and ensuring compliance long into the future.

In 2024, experience is king

As we navigate the complexities of the digital workplace in 2024 and beyond, one thread becomes clear: AI plus immersive experiences are the new power couple, and a focus on employee well-being, coupled with performance, will become the cornerstone of delivering exceptional employee and customer experiences.

The future workplace is driven by automation, powered by intelligence, and focused on the moments that matter to employees. In essence, the digital workplace of 2024 is a fusion of technology and humanity, creating a thriving ecosystem where employees are empowered, ideas flourish, and innovation knows no boundaries.

Driven by intelligent automation, AI, and data, the digital workplace will continue to be a hotbed of innovation and growth. It’s a world where employees are not just contributors, but key players in shaping their organization’s future, empowered by technology to bring their unique skills and ideas to the forefront.

And, as we prioritize inclusivity and diversity in the workplace, we’re setting the stage for a richer, more dynamic environment. By celebrating unique perspectives and experiences, we’re fostering a culture where employees can truly be themselves, driving higher levels of productivity, creativity, and innovation.

Today, the question leaders should be asking is not whether to embrace the digital workplace revolution, but how quickly they can do it.

Are you looking to create a human-centric, experience-driven workplace? Talk to us!

Authors


Alan Connolly

Global Head of Portfolio – ESM, SIAM, and ServiceNow
Alan is a visionary leader with a deep passion for collaborating with customers, partners, and industry experts to address complex challenges within the workplace and enterprise service management portfolio. With over 20 years of experience, he combines creativity and analytical prowess to craft comprehensive strategies that align with organizational goals and enhance productivity.

James McMahon

Global Head of Employee Experience – Cloud Infrastructure Services
Global Head of Employee Experience at Capgemini, James has over 20 years’ experience in the field of employee experience and digital workplace services.

Lukasz Ulaniuk

Lead, Employee Experience – Cloud Infrastructure Services.
Lukasz leads Digital Workplace Offer Development at Capgemini’s Cloud Infrastructure Services. He manages the development and go-to-market strategy of advisory and transformative solutions for the modern workplace that drive employee productivity and empowerment, and that support clients in achieving their sustainability and adoption targets. Lukasz brings 20 years of professional experience and a passion for designing and introducing exceptional experiences to customers across various industries.

    Redefining tech: #RabbitR1 and #AIpin challenge the smartphone norm – A paradigm shift in the making?

    Alex Bulat
    Jan 19, 2024

    Do we (I) want something other than a smartphone?

    This is the question that has been raised with the introduction of new devices like the #RabbitR1 and, before it, the #AIpin. Do I want something that does just a fraction of what my iPhone does? Although these devices currently cover only a small subset of that functionality, they do intrigue me. They offer a new way of interacting, a new experience; maybe I’m considering one precisely because it does less. Smartphones have dominated our lives for over two decades now. Is this the time to get rid of them and take back some control?

    All the above questions and thoughts have been circling my mind since I saw the launch of the R1 at CES. These two devices are just the start, but they are small yet significant trial balloons sent up by the tech industry to test the waters. Are we ready for a change? Are we ready for less? Or do we just want the same thing in a different jacket?

    What is your preference, something new or something old? Have you already preordered one of the devices?

    Stay curious 🧐

    Meet the author

    Alex Bulat

    Group Technology VP
    Alex is Group Technology Director, focused on helping our customers transform and adapt to the new digital age, and integrate new and disruptive innovations into their business. He is focused on driving the expansion and delivery of digital transformation and helping companies get a grasp on future technologies like IoT, AI, big data, and blockchain. He also focuses on how new innovations, disruptive technologies, and new platforms like Uber impact current businesses.

      How to set yourself up to redesign the car around the user
      What steps should you take when designing a better mobility experience?

      Mike Welsh
      18 Jan 2024

      The old way of designing cars works for old car designs. But the car of the future will require some changes to existing methods, along with some entirely new approaches. Cars used to be designed around engineering possibilities, but thanks to digital technology, they can now be designed around the user experience. So, what will that take?

      In the previous blog, we covered the key ingredients of the vehicle user experience. Here, we’ll discuss how to redesign the vehicle for that user experience.

      Step 1: Plan new use cases and business models

      Change how you think about the car. It’s no longer a machine to get people from A to B, it’s now an environment where people spend time and benefit from experiences.

      The vehicle that rolls off the production line can now be thought of as a Minimal Value Product – a product which can increase its value through software additions across its life cycle. For automotive companies, correctly implementing this business model is partly a financial engineering challenge – one that will also require a change in mindset for both vendors and consumers.

      Step 2: Build the in-car tech to enable these new use cases

      The in-car software and hardware architecture to support these new services will fall into three categories:

      • The software and processing power to run information and entertainment displays, as well as repurpose a glut of ‘big data’ into meaningful information. This will likely include processing data from onboard sensors, and from external devices.
      • The communication technology to allow the vehicle to exchange information with external devices, as well as selecting the right protocols (Wi-Fi, cellular modems, etc.) to deliver this.
      • An ecosystem that allows the car to safely access third-party apps, including processes for app certification.

      Step 3: Verify

      All of the above needs rigorous testing and verification. Mostly, this will involve running new services in a simulated vehicle environment to check they function as intended.

      Some services may need road testing. But for ones that don’t touch vehicle controls, or risk causing distractions, it is usually fine to conduct ‘real-world testing’, by launching beta versions, gathering user feedback, and using this data to continually improve products.

      Step 4: Create a software-driven culture to outpace the competition

      The move to digital services is forcing traditional automotive companies to rethink how they build and launch services. The digital culture of rapidly iterating digital products must reconcile itself with the more traditional, measured and safety-conscious automotive approach.

      Step 5: Evolve your supplier ecosystem

      Carmakers will need to expand their list of suppliers. Gone are the days when a Tier 1 knew everyone and could handle everything. This brave new world will encompass specialist providers in telco, silicon, software development, and XR, as well as emerging technologies like the metaverse.

      Author

      Mike Welsh

      Chief Technology Officer – Automotive Software & Electronics – Capgemini Engineering
      Mike is responsible for defining and executing the technology strategy and roadmap across the Capgemini Engineering Automotive Portfolio. With a background in powertrain, cockpit electronics and E/E Architecture, his primary focus over the last decade has been supporting the industry shift to Software Defined Vehicles.

        Ensuring quality and compliance
        a digital approach to food safety standards

        Michael Benko
        18 Jan 2024

        According to data released by the World Health Organization (WHO) in 2022: “…an estimated 600 million – almost 1 in 10 people in the world – fall ill after eating contaminated food and 420,000 die every year, resulting in the loss of 33 million healthy life years (DALYs) …children under 5 years of age carry 40% of the foodborne disease burden, with 125,000 deaths every year.”

        Food safety has always been paramount, and today is no different. In the ever-evolving landscape of the food and beverage industry, ensuring the quality of products remains essential.

        The global nature of the food supply chain, increased consumer awareness, and changing regulatory requirements make it essential for businesses to adopt innovative approaches to food safety standards.

        One such approach that is gaining momentum is the use of digital technology. In this blog post, we’ll explore how a digital approach can revolutionize the way we manage and monitor food safety standards, ultimately benefiting both businesses and consumers.

        The traditional challenges of food safety

        Traditionally, food safety has been managed through a combination of manual record-keeping, periodic inspections, and reactive measures to address issues. While these methods have served the industry for many years, they come with several challenges:

        • Human error: Manual processes are susceptible to human error, which can lead to data inaccuracies and compliance issues.
        • Inefficiency: Paper-based record-keeping and manual inspections are time-consuming and labor-intensive, often leading to delayed responses to safety concerns.
        • Limited traceability: Traditional methods make it difficult to trace the origins of food products, making it harder to identify and isolate contaminated batches.
        • Regulatory compliance: Staying up to date with evolving food safety regulations can be a daunting task, especially when relying on manual processes.

        The digital revolution in food safety

        The adoption of digital technology has transformed how the food industry approaches safety standards. Here are some key aspects of this digital revolution:

        • Real-time monitoring: Digital systems allow for real-time monitoring of critical control points in the production process. Sensors, connected devices, and data analytics enable the immediate detection of any deviations from safety standards (see the monitoring sketch after this list).
        • Data-driven insights: Digital platforms can collect and analyze vast amounts of data, providing businesses with valuable insights into their operations. These insights can help identify trends, predict potential issues, and make informed decisions.
        • Enhanced traceability: Blockchain and other technologies provide end-to-end traceability of food products. Consumers can access information about a product’s journey from farm to table, enhancing transparency and trust.
        • Automation: Automation reduces the risk of human error by streamlining processes. Quality control checks, temperature monitoring, and sanitation routines can be automated, ensuring consistency and compliance.
        • Regulatory compliance management: Digital platforms can be equipped with compliance tracking and reporting features, making it easier for businesses to stay in line with the latest regulations. This proactive approach helps avoid costly penalties and recalls.
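
        To make the real-time monitoring idea concrete, here is a minimal sketch of threshold-based checking for a critical control point such as cold-chain temperature. The sensor names and critical limits are hypothetical illustrations, not a reference to any specific platform.

        from dataclasses import dataclass
        from datetime import datetime, timezone

        @dataclass
        class Reading:
            sensor_id: str
            temperature_c: float
            taken_at: datetime

        # Hypothetical HACCP-style critical limits for chilled storage.
        CRITICAL_LIMITS_C = (0.0, 5.0)

        def check_reading(reading: Reading) -> list[str]:
            """Return alert messages for any deviation from the critical limits."""
            low, high = CRITICAL_LIMITS_C
            alerts = []
            if not (low <= reading.temperature_c <= high):
                alerts.append(
                    f"{reading.taken_at.isoformat()} {reading.sensor_id}: "
                    f"{reading.temperature_c:.1f} C outside [{low}, {high}] C"
                )
            return alerts

        # Toy usage: in production this would run on a live sensor stream,
        # page on-call staff, and log every reading for the audit trail.
        sample = Reading("cooler-01", 7.2, datetime.now(timezone.utc))
        for alert in check_reading(sample):
            print("ALERT:", alert)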

        Benefits of a digital approach

        The shift toward a digital approach to food safety standards offers several significant benefits:

        • Improved product quality: Real-time monitoring and data analysis enable early identification of issues, allowing for immediate corrective actions. This, in turn, leads to improved product quality and reduced waste.
        • Enhanced consumer confidence: With increased transparency and traceability, consumers can make more informed choices about the products they buy, ultimately building trust in brands and the industry.
        • Cost savings: Automation and efficiency improvements can lead to long-term cost savings. Fewer recalls, reduced product losses, and streamlined operations all contribute to a healthier bottom line.
        • Adaptability: Digital solutions are scalable and adaptable to the food industry’s evolving needs. They can accommodate changing regulations, emerging threats, and growing consumer demands.

        Conclusion: towards safer, better food production

        The digital revolution in food safety standards is not merely a trend; it’s a necessity.

        Businesses that embrace digital technology to enhance safety and quality standards will not only thrive in an increasingly competitive market but also contribute to the overall well-being of consumers. This transformation is not about replacing human expertise but augmenting it with tools that make food production safer, more efficient, and more transparent. By doing so, the industry can meet the challenges of today and tomorrow while maintaining consumer trust.

        Author

        Michael Benko

        IT consultant and process engineer
        Michael Benko is an IT consultant and process engineer with broad cross-functional expertise. His industry experience includes food & beverage and pharmaceutical manufacturing, along with software pre-sales, implementation consulting, and numerous enterprise-level software implementations. Michael combines his expertise in Product Lifecycle Management with new product development best practices to solve a range of businesses’ most pressing problems.


              Embracing the ‘chat is the new super app’ trend at CES 2024

              Alex Bulat
              Jan 13, 2024

              L’Oréal unveiled the new “beauty genius chat app” powered by #AI

              and it reminded me of our just-released #technovision2024 trend, “Chat is the New Super App” (container: Applications Unleashed): AI-augmented chatting and talking in plain, natural language becomes the new app to rule them all.

              The interface and interaction tools are changing fast. With this new app, L’Oréal is enabling everyone to become a beauty genius. The app scans your face and gives you advice on what beauty products to apply and how.

              What cool apps have you seen at CES?

              And don’t forget: Check out our #technovision2024 report to understand the other opportunities.

              Meet the author

              Alex Bulat

              Group Technology VP
              Alex is Group Technology Director, focused on helping our customers transform and adapt to the new digital age, and integrate new and disruptive innovations into their business. He is focused on driving the expansion and delivery of digital transformation and helping companies get a grasp on future technologies like IoT, AI, big data, and blockchain. He also focuses on how new innovations, disruptive technologies, and new platforms like Uber impact current businesses.

                Custom GPT: A game-changer for business management

                Robert Engels
                Jan 13, 2024

                Bora Ger and Paolo Cervini transform strategy and innovation with Custom GPTs: A game-changer for business management.

                Exciting news for business management! Bora Ger and Paolo Cervini have shared their thoughts on enhancing strategy and innovation with Custom GPTs. The groundbreaking GPT Builder creates specialized GPTs for diverse niches, revolutionizing managerial tasks and offering tailored advice.

                This article highlights how Custom GPTs bring specialization, efficiency, consistency, and automation integration to the forefront of management practices. It also explores the developing GPT Store’s role in democratizing AI tool access, boosting innovation and strategic decision-making.

                As we enter this new arena, the article raises questions about the future of organizational leadership and the ethical use of AI. Don’t miss this must-read for anyone interested in the intersection of AI and business strategy.

                Meet the author


                Robert Engels

                CTIO, Head of AI Futures Lab
                Robert is an innovation lead and a thought leader in several sectors and regions, and holds the position of Chief Technology Officer for Northern and Central Europe in our Insights & Data Global Business Line. Based in Norway, he is a known lecturer, public speaker, and panel moderator. Robert holds a PhD in artificial intelligence from the Technical University of Karlsruhe (KIT), Germany.

                  CES 2024: Las Vegas shines with cutting-edge innovations in AI and future mobility

                  Pascal Brier
                  Jan 13, 2024

                  As #CES2024 comes to a close, Las Vegas was once again a hotspot for groundbreaking moments.

                  From GenerativeAI embedded and applied everywhere, to electric (and sometimes flying!) mobility, this latest edition of CES has been a rollercoaster of exciting innovations. Here are some of my favorite picks. What were your favorite moments?

                  Each of these innovations represents a step towards a more connected, smarter, and greener future. I cannot wait to see what the rest of 2024 will bring.

                  Meet the author

                  Pascal Brier

                  Group Chief Innovation Officer, Member of the Group Executive Committee
                  Pascal Brier was appointed Group Chief Innovation Officer and member of the Group Executive Committee on January 1st, 2021. In this position, Pascal oversees Technology, Innovation, and Ventures for the Group. Pascal holds a Master’s degree from EDHEC and was voted “EDHEC of the Year” in 2017.

                    Auditing ChatGPT – part II

                    Capgemini
                    Grégoire Martinon, Aymen Mejri, Hadrien Strichard, Alex Marandon, Hao Li
                    Jan 12, 2024

                    A Survival Issue for LLMs in Europe

                    Large Language Models (LLMs) have been one of the most dominant trends of 2023. ChatGPT and DALL-E have been adopted worldwide to improve efficiency and tap into previously unexplored solutions. But as is often the case, technological developments come with an equal share of opportunities and risks.  

                    In the first part of our LLM analysis, we provided a comprehensive definition, examined their technological evolution, discussed their meteoric popularity, and highlighted some of their applications. In this second part, we will answer the following questions:

                    Are LLMs dangerous?

                    The short answer is sometimes. With Large Language Models having such a diverse range of applications, the potential risks are numerous. It is worth pointing out that there is no standard list of the risks but a selection is presented below.

                    Figure 1: A breakdown of risks posed by LLMs

                    Some of these dangers are linked to the model itself (or to the company developing it). The data in the model could contain all sorts of biases, the results might not be traceable, or user data or copyrights could have been used illegally, etc.  

                    Other dangers are linked to the use of these models. Users may seek to bypass the models’ security safeguards and use them for malicious purposes, such as generating hateful or propagandist texts.

                    Additionally, Large Language Models have social, environmental, and cultural consequences that can be harmful. They require enormous amounts of storage and energy. Moreover, their arrival has weakened employee power in many industries; for example, striking writers in Hollywood have complained about the use of LLMs. Finally, LLMs are challenging the boundaries of literary art, just as DALL-E did with graphic art.

                    How can you deal with these risks?

                    It often takes a while before the risks of an emerging technology are fully understood, and the same is true of the strategies for managing them. However, we are already beginning to see early strategies being deployed.

                    LLM developers invest in safeguards

                    OpenAI invested six months of research to establish safeguards and secure the use of its Generative Pre-trained Transformer (GPT) models. As a result, ChatGPT now refuses to respond to most risky requests, and its responses perform better on benchmarks such as veracity and toxicity. Furthermore, unlike previous models, ChatGPT has continued to improve since it was deployed.

                    However, it is possible to circumvent these safeguards, with examples of such prompts freely available on the Internet (Do Anything Now prompts, or DANs). These DANs often capitalize on ChatGPT’s drive to satisfy the user, even if this means overstepping its ethical framework or creating a confirmation bias. Furthermore, the opacity of the model and its data creates copyright problems and uncontrolled bias. As for benchmark successes, suspicions of contamination by the training database undermine their objective value. Finally, despite announced efforts to reduce their size, OpenAI models consume a lot of resources.

                    Some Large Language Models now claim to be more ethical or safer, but this is sometimes to the detriment of performance. None of the models are faultless, and there is currently no clear and reliable evaluation method on the subject.

                    GPT-4 safety in five steps

                    To go into more detail about implementing guardrails, let’s look at the five steps implemented by OpenAI for GPT models, as shown in Figure 2.

                    1. Adversarial testing: Experts from various fields have been hired to test the limits of GPT-4 and find its flaws.
                    2. Supervised policy: After training, annotators show the model examples of the desired responses to fine-tune it.
                    3. Rule-based Reward Model (RBRM) classifiers: The role of these classifiers is to decide whether a prompt and/or its response are “valid” (e.g., a classifier that invalidates toxic requests).
                    4. Reward model: Human annotators train a reward model by ranking four possible model responses from best to least aligned (a minimal sketch of this ranking loss follows Figure 2 below).
                    5. Reinforcement learning: Using reinforcement learning techniques, the model takes user feedback into account.
                    Figure 2: GPT-4 Safety Pipeline
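
                    To make step 4 concrete, here is a minimal sketch of the pairwise ranking loss commonly used to train reward models from human preference rankings. It is a generic illustration of the technique, not OpenAI’s actual implementation.

                    import torch

                    def reward_ranking_loss(rewards_ranked: torch.Tensor) -> torch.Tensor:
                        """rewards_ranked: (batch, k) scalar rewards for k responses per
                        prompt, ordered best-to-worst by annotators. For every pair (i, j)
                        with i ranked above j, we maximize log sigmoid(r_i - r_j)."""
                        _, k = rewards_ranked.shape
                        losses = []
                        for i in range(k):
                            for j in range(i + 1, k):
                                losses.append(-torch.nn.functional.logsigmoid(
                                    rewards_ranked[:, i] - rewards_ranked[:, j]))
                        return torch.stack(losses).mean()

                    # Toy usage: 2 prompts, 4 ranked responses each; random scores stand
                    # in for a reward model's outputs.
                    scores = torch.randn(2, 4, requires_grad=True)
                    loss = reward_ranking_loss(scores)
                    loss.backward()  # gradients would flow back into the reward model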

                    Governments and institutions worry about LLMs

                    Several countries have decided to ban ChatGPT (see Figure 3). Most of them (Russia, North Korea, Iran, etc.) have done so for reasons of data protection, information control, or concerns around their rivalry with the USA. Some Western countries, such as Italy, have banned it and then reauthorized it, while others are now considering a ban. For the latter, the reasons cited are cybersecurity, the protection of minors, and compliance with current laws (e.g., GDPR). 

                    Figure 3: Map of countries that have banned ChatGPT

                    Many tech companies (Apple, Amazon, Samsung, etc.) and financial institutions (J.P. Morgan, Bank of America, Deutsche Bank, etc.) have banned or restricted the use of LLMs such as ChatGPT. They are all concerned about the protection of their data (e.g., a data leak occurred at Samsung).

                    Scientific institutions, such as scientific publishers, forbid it for reasons surrounding trust – given the risk of articles being written surreptitiously by machines. Finally, some institutions are concerned about the possibility of cheating with such tools.  

                    European regulation changes

                    Many articles on Quantmetry’s blog have already mentioned the upcoming EU AI Act, which will regulate artificial intelligence as soon as 2025. We should add here that this legislation has been amended following the rapid adoption of ChatGPT, and the consequences of this amendment are summarized in Figure 4. The European Union now defines the concept of General Purpose AI (GPAI): an AI system that can be used and adapted to a wide range of applications for which it was not specifically designed. The regulations on GPAIs therefore cover LLMs as well as all other types of generative AI.

                    GPAIs are affected by a whole range of restrictions, summarized here in three parts:

                    • Documentary transparency and administrative registration, which should not be complicated to implement.
                    • Risk management and setting up evaluation protocols. These aspects are more complicated to implement but feasible for LLM providers, as outlined by OpenAI with ChatGPT Large Language Models.
                    • Data governance (GDPR and ethics) and respect for copyright. LLM providers are far from being able to guarantee these for now.

                    The European Union will therefore consider LLMs to be high-risk AIs, and LLM providers still have a lot of work to do before they reach the future compliance threshold. Nevertheless, some believe that this future law is, in some respects, too impractical and easy to circumvent. 

                    Figure 4: Impact of the EU AI Act on LLMs

                    Assessing model compliance is one of Quantmetry’s core competencies, particularly in relation to the EU AI Act. Regarding LLMs specifically, Stanford researchers published a blog post evaluating the compliance of 10 LLMs with the future European law; the results are shown in Figure 5. To establish a compliance score, the researchers extracted 12 requirements from the draft legislation and developed a rating framework. Annotators were then tasked with conducting an evaluation based on publicly available information. The article identifies copyright, the data ecosystem, risk management, and the lack of evaluation standards as the main current issues, aligning with our analysis above. The researchers estimate that 90% compliance is a realistic goal for LLM providers (the top performer currently achieves 75%, with an average of 42% across the 10 evaluated LLMs).

                    Figure 5: Results of the compliance evaluation made by Stanford researchers

                    A few tips

                    Faced with all these risks, it would be wise to take a few key precautions. Learning a few prompt-engineering techniques to ensure that prompts yield reliable, high-quality responses is a good place to start. It’s also worth watching out for data leaks via free chatbots (e.g., the free version of ChatGPT); the paid version does not, in principle, store your data. Finally, Figure 6 illustrates how to use tools like ChatGPT with care.

                    Figure 6: Diagram for using ChatGPT with care

                    How do you audit such models?

                    There are three complementary approaches to auditing an LLM, summarized in Figure 9.

                    Organizational audit

                    An organizational audit can be carried out to check whether the company developing the LLM is working responsibly, ensuring, for example, that its processes and management systems are compliant.

                    Such an audit will also be possible for clients who are not LLM suppliers themselves but wish to specialize existing models further, to ensure that the models are well employed.

                    Audit of the foundation model

                    Auditing the foundation model is the current focus of scientific research. For such an audit, it would be necessary to be able to explore the dataset (which is inaccessible in reality), run test benches on recognized benchmarks and datasets (but face the problem of contamination), and implement adversarial strategies to detect the limits of the model. If we go into more detail, there is a multitude of possible tests for evaluating the following aspects of the model: 

                    • Responsibility: Understanding how risks materialize and finding the limits of the model (typically with adversarial strategies).
                    • Performance: This involves using datasets, test benches, or Turing tests to assess the quality of the language, the skills and knowledge of the model, and the veracity of its statements (see Figures 7 and 8).
                    • Robustness: The aim here is to assess the reliability of responses by means of calibration or stability measurements in the face of prompt engineering strategies.
                    • Fairness: Several methods exist to try to identify and quantify bias (even without access to the dataset), but they remain limited. For example, one method is counting biased word associations (man = survival, woman = pretty); see the counting sketch after Figure 8 below.
                    • Frugality: Some inference measurements can be made to estimate the environmental impact of the model, but they are also limited without access to supplier infrastructures.
                    Figure 7: Performance of GPT-4 on TruthfulQA
                    Figure 8: Performance of GPT-4 on human examinations
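
                    As a minimal sketch of the word-association counting mentioned in the fairness bullet above, one can tally how often stereotyped attribute words co-occur with gendered terms in sampled model outputs. The word lists and sample texts are hypothetical; a real audit would use curated lexicons and far larger samples.

                    from collections import Counter
                    import re

                    GENDER_TERMS = {"man": "male", "he": "male",
                                    "woman": "female", "she": "female"}
                    ATTRIBUTES = {"survival", "strong", "pretty", "gentle"}

                    def association_counts(texts: list[str]) -> Counter:
                        """Count (gender, attribute) co-occurrences within each text."""
                        counts = Counter()
                        for text in texts:
                            tokens = re.findall(r"[a-z']+", text.lower())
                            genders = {GENDER_TERMS[t] for t in tokens if t in GENDER_TERMS}
                            for attr in ATTRIBUTES & set(tokens):
                                for gender in genders:
                                    counts[(gender, attr)] += 1
                        return counts

                    outputs = [  # stand-ins for sampled model completions
                        "The man fought for survival in the wild.",
                        "She looked pretty in the photograph.",
                    ]
                    print(association_counts(outputs))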

                    Theoretically, an LLM can be assessed on five of the eight dimensions of Trustworthy AI defined by Quantmetry. On the explainability dimension, the previously mentioned solution of a chatbot citing its sources addresses the problem, to a certain degree.

                    Use case audit

                    Quantmetry and Capgemini Invent are currently working together to define a framework that enables our clients to audit their AI systems based on LLMs. The primary aim of this audit is to check that the impact of the system on the user is controlled. To do this, a series of tests checks compliance with regulations and the customer’s needs. We are currently developing methods for diagnosing the long-term social and environmental impact of their use within a company. Finally, we will create systems that can assess risks and biases, as well as operational, managerial, and feedback processes. The methods used are often inspired by, but adapted from, those used to audit the foundation model.

                    Figure 9: Three approaches to auditing an LLM

                    How can Capgemini Invent and Quantmetry help you capitalize on LLMs?

                    Amidst the media excitement surrounding the widespread adoption of ChatGPT, harnessing the full potential of Generative AI and LLMs while mitigating risks lies at the heart of an increasing number of our clients’ strategic agendas. Our clients must move quickly along a complex and risky path, and the direct connection between the technology and end-users makes any errors immediately visible – with direct impacts on user engagement and brand reputation.  

                    Drawing upon our experience in facilitating major transformations and our specific expertise in artificial intelligence, our ambition is to support our clients at every stage of their journey, from awareness to development and scalable deployment of measured-value use cases. Beyond our role in defining overall strategy and designing and implementing use cases, we also offer our clients the opportunity to benefit from our expertise in Trustworthy AI. We assist them in understanding, measuring, and mitigating the risks associated with this technology – ensuring safety and compliance with European regulations.  

                    In this regard, our teams are currently working on specific auditing methods categorized by use cases, drawing inspiration from the academic community’s model of auditing methods. We are committed to advancing concrete solutions in this field.  

                    Authors


                    Alex Marandon

                    Vice President & Global Head of Generative AI Accelerator, Capgemini Invent
                    Alex brings over 20 years of experience in the tech and data space. He started his career as a CTO in startups, later leading data science and engineering in the travel sector. Eight years ago, he joined Capgemini Invent, where he has been at the forefront of driving digital innovation and transformation for his clients. He has a strong track record in designing large-scale data ecosystems, especially in the industrial sector. In his current role, Alex crafts Gen AI go-to-market strategies, develops assets, upskills teams, and assists clients in scaling AI and Gen AI solutions from proof of concept to value generation.

                    Hao Li

                    Data Scientist Manager at Capgemini Invent
                    Hao is a Lead Data Scientist and a referent on NLP topics, specifically on strategy, acculturation, methodology, business development, R&D, and training around Generative AI. He leads innovation solutions that combine Generative AI, traditional AI, and data.

                    Hadrien Strichard

                    Data Scientist Intern at Capgemini Invent
                    Hadrien joined Capgemini Invent for his gap year internship in the “Data Science for Business” master’s program (X – HEC). His taste for literature and language led him to make LLMs the main focus of his internship. More specifically, he wants to help make these AIs more ethical and secure.

                      Auditing ChatGPT – part I

                      Capgemini
                      Grégoire Martinon, Aymen Mejri, Hadrien Strichard, Alex Marandon, Hao Li
                      Jan 12, 2024

                      A Chorus of Disruption: From Cave Paintings to Large Language Models

                      Since its release in November 2022, ChatGPT has revolutionized our society, captivating users with its remarkable capabilities. Its rapid and widespread adoption is a testament to its transformative potential. At the core of this chatbot lies the GPT-4 language model (or GPT-3.5 for the free version), developed by OpenAI. We have since witnessed an explosive proliferation of comparable models, such as Google Bard, Llama, and Claude. But what exactly are these models and what possibilities do they offer? More importantly, are the publicized risks justifiable and what measures can be taken to ensure safe and accountable utilization of these models?

                      In this first part of our two-part article, we will discuss the following:

                      What are Large Language Models (LLMs)?

                      Artificial intelligence (AI) is a technological field that aims to give human intelligence capabilities to machines. A generative AI is an artificial intelligence that can generate content, such as text or images. Within generative AIs, foundation models are recent developments often described as the fundamental building blocks behind such applications as DALL-E or Midjourney. In the case of text-generating AI, these are referred to as Large Language Models (LLMs), of which the Generative Pre-trained Transformer (GPT) is one example made popular by ChatGPT. More complete definitions of these concepts are given in Figure 1 below.

                      Figure 1: Definitions of key concepts around LLMs

                      The technological history of the ChatGPT LLM

                      In 2017, a team of researchers created a new type of model within Natural Language Processing (NLP) called the Transformer. It achieved spectacular performance on sequential-data tasks, such as text or temporal data. By using a specific technology called the ‘attention mechanism’, published in 2015, the Transformer model pushed past the limits of previous models, particularly the length of texts processed and/or generated.

                      In 2018, OpenAI created a model inspired by Transformer architecture (the decoder stack in particular). The main reason for this was that Transformer, with its properties of masked attention, excels in text generation. The result was the first Generative Pre-trained Transformer. The same year saw the release of BERT, a Google NLP model, which was also inspired by Transformers. Together, BERT and GPT launched the era of LLMs.  

                      Improving on BERT and its variants, OpenAI released GPT-2 in 2019 and GPT-3 in 2020. These two models benefited from an important breakthrough: meta-learning. Meta-learning is a Machine Learning (ML) paradigm in which the model “learns how to learn”; for example, the model can respond to tasks other than those for which it has been trained.

                      OpenAI’s aim is for GPT Large Language Models to be able to perform any NLP task with only an instruction and possibly a few examples, with no need for a task-specific training database. OpenAI has succeeded in making meta-learning a strength thanks to increasingly large architectures and databases massively retrieved from the Internet.

                      To take its technology further, OpenAI moved beyond NLP by adapting its models for images. In 2021 and 2022, OpenAI published DALL-E 1 and DALL-E 2, two text-to-image generators. These generators enabled OpenAI to make GPT-4 a multi-modal model, one that can understand several types of data.

                      Next, OpenAI released InstructGPT (GPT-3.5), which was designed to better meet user demands and mitigate risk. This was the version behind ChatGPT’s launch in late 2022. Then, in March 2023, OpenAI released an even more powerful and secure version: the premium GPT-4. Unlike preceding versions, GPT-3.5 and GPT-4 attracted strong commercial interest. OpenAI has since adopted a closed-source ethos – no longer revealing how the models work – and become a for-profit company (it was originally a non-profit organization). Looking to the future, we can expect OpenAI to push the idea of a single prompt interface for all tasks and all types of data even further.

                      Why is everyone talking about Large Language Models?

                      Only those currently living under a rock will not have heard something about ChatGPT in recent months. The fact that it made half the business world ecstatic and the other half anxious should tell you how popular it has become. But let’s take a closer look at the reasons why. 

                      OpenAI’s two remarkable feats

                      With the development of meta-learning, OpenAI created an ultra-versatile model capable of providing accurate responses to all kinds of requests – even those it has never encountered before. In fact, GPT-4 achieves better results on specific tasks than specialized models. 

                      In addition to the technological leaps, OpenAI has driven democratization. By deploying its technology in the form of an accessible chatbot (ChatGPT) with a simple interface, OpenAI has made it possible for everyone to utilize this powerful language model’s capabilities. This public access also enables OpenAI to collect more data and feedback for the model.

                      Rapid adoption  

                      The rapid adoption of GPT technology via ChatGPT has been unprecedented. Never has an internet platform or technology been adopted so rapidly (see Figure 2). ChatGPT now boasts 200 million users and two billion visits per month.

                      Figure 2: Speed of reaching 100 million users, in months

                      The number of Large Language Models is exploding, with competitors coming from Google (Bard), Meta (Llama), and HuggingFace (HuggingChat, a French open-source version). There is also a surge in new applications. For example, LLMs have been implemented in search engines and in Auto-GPT, the latter of which turns GPT-4 into an autonomous agent. This remarkable progress is stimulating a new wave of research, with LLM publications growing exponentially (Figure 3).

                      Figure 3: Cumulative number of scientific publications on LLMs.

                      Opportunities, fantasies, and fears

                      The new standard established by GPT-4 has broadened the range of possible use cases. As a result, many institutions are looking to exploit them. For example, some hospitals are using them to improve and automate the extraction of medical conditions from patient records.  

                      On the other hand, these same breakthroughs in performance have given rise to a host of fears: job insecurity, exam cheating, privacy threats, etc. Many recent articles explore this growing anxiety, which now seems justified – Elon Musk and Geoffrey Hinton are just two of the many influential tech figures now raising the alarm, calling it a new ‘code red.’  

                      However, as is often the case with technological advances, people have trouble distinguishing between real risk and irrational fear (e.g., a world in which humans hide from robots, as in The Terminator). One such fear concerns the creation of a model that rivals or surpasses the human brain, which is inextricably linked with the formation of consciousness. Here, it is worth noting that this last scenario is the stated ultimate goal of OpenAI, namely AGI (Artificial General Intelligence).

                      Whether these scenarios remain fantasies or become realities, GPT-4 and the other Large Language Models are undoubtedly revolutionizing our society and represent a considerable technological milestone.

                      What can you do with an LLM?

                      Essentially, an LLM such as ChatGPT can do three things (a minimal sketch follows the list below):

                      1. Generate natural language content: Trained specifically for this purpose, this is where they excel. They strive to adhere to the given constraints as accurately as possible.
                      2. Reformulate content: This involves providing the LLM with a base text and instruction to perform tasks, such as summarizing, translating, substituting terms, or correcting errors.
                      3. Retrieve content: It is possible to request an LLM to search for and retrieve specific information based on a corpus of data.
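
                      As a minimal sketch, all three task families can be expressed as prompts to a chat-completion API. The example below uses the openai Python client; the model name and texts are illustrative assumptions, and any comparable LLM API would work the same way.

                      from openai import OpenAI

                      client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

                      def ask(prompt: str) -> str:
                          response = client.chat.completions.create(
                              model="gpt-4",  # illustrative model name
                              messages=[{"role": "user", "content": prompt}],
                          )
                          return response.choices[0].message.content

                      # 1. Generate natural language content.
                      draft = ask("Write a two-line product description for a reusable bottle.")
                      # 2. Reformulate content.
                      summary = ask("Summarize in one sentence:\n" + draft)
                      # 3. Retrieve content from a supplied corpus (corpus text elided here).
                      dates = ask("From the following text, list only the delivery dates:\n...")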

                      How can you use an LLM?      

                      There are three possible applications of LLMs, summarized in Figure 4. The first is direct application, where the LLM is used only for the tasks that it can perform by itself. This is, a priori, the use case of a chatbot like ChatGPT, which directly implements GPT-4 technology. While this is one of the most common applications, it is also one of the riskiest, because the LLM often acts like a black box and is difficult to evaluate.

                      One emerging use of LLMs is the auxiliary application. To limit risks, the LLM is implemented here as an auxiliary tool within a larger system. For example, in a search engine, an LLM can be used as an interface for presenting the results of a search. This use case was applied to the corpus of IPCC reports. The disadvantage here is that the LLM is far from being fully exploited.

                      In the near future, the orchestral application of LLMs will consume much of the research budget of large organizations. In an orchestral application, the LLM is both the interface with the user and the brain of the system in which it is implemented: it understands the task, calls on auxiliary tools in its system (e.g., Wolfram Alpha for mathematical calculations), and then delivers the result. Here, the LLM acts less like a black box, but the risk assessment of such a system also depends on the auxiliary tools. The best example to date is Auto-GPT.

                      Figure 4: The three possible applications of an LLM

                      Focusing on the use case of a Chatbot citing its sources

                      One specific use case that is emerging among our customers is that of a chatbot citing its sources. This is a response to the difficulty of interpreting LLM results (i.e., of understanding which sources the LLM has used and why).

                      Figure 5: Technical diagram of a conversational agent quoting its sources

                      To delve into the technical details of the chatbot citing its sources (the relevant pattern, illustrated in Figure 5, is called Retrieval Augmented Generation, or ‘RAG’): the model takes a user request as input and transforms it into an embedding (i.e., a word or sentence vectorization that captures semantic and syntactic relationships). The model has a corpus of texts already transformed into embeddings, and the goal is to find the embeddings within the corpus that are closest to the query embedding, usually with nearest-neighbor search algorithms. Once we have identified the corpus elements that can help with the response, we can pass them to an LLM to synthesize the answer and provide, alongside the response, the elements that were used to generate it. The LLM then serves as an interface for presenting the search engine’s results. This RAG approach decouples the factual information provided by the sources from the semantic analysis provided by the LLM, leading to better auditability of the results provided by the chatbot.
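
                      A minimal sketch of this RAG pattern follows, with placeholder embed() and generate() functions standing in for any embedding model and LLM; on unit-norm vectors, the cosine-similarity nearest-neighbor search reduces to a dot product.

                      import numpy as np

                      def embed(text: str) -> np.ndarray:
                          """Toy stand-in for a real embedding model: a pseudo-random unit vector."""
                          rng = np.random.default_rng(abs(hash(text)) % 2**32)
                          vector = rng.normal(size=384)
                          return vector / np.linalg.norm(vector)

                      def generate(prompt: str) -> str:
                          """Placeholder for a call to any LLM."""
                          return "stubbed answer citing the supplied sources"

                      corpus = ["Source A: ...", "Source B: ...", "Source C: ..."]
                      corpus_vectors = np.stack([embed(doc) for doc in corpus])

                      def retrieve(query: str, k: int = 2) -> list[str]:
                          scores = corpus_vectors @ embed(query)  # cosine similarity on unit vectors
                          return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

                      def answer_with_sources(query: str) -> str:
                          passages = retrieve(query)
                          prompt = ("Answer using only these sources, and cite them:\n"
                                    + "\n".join(passages) + "\nQuestion: " + query)
                          return generate(prompt)

                      print(answer_with_sources("What does Source A say about emissions?"))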

                      Read more in Auditing ChatGPT – part II

                      Authors


                      Alex Marandon

                      Vice President & Global Head of Generative AI Accelerator, Capgemini Invent
                      Alex brings over 20 years of experience in the tech and data space. He started his career as a CTO in startups, later leading data science and engineering in the travel sector. Eight years ago, he joined Capgemini Invent, where he has been at the forefront of driving digital innovation and transformation for his clients. He has a strong track record in designing large-scale data ecosystems, especially in the industrial sector. In his current role, Alex crafts Gen AI go-to-market strategies, develops assets, upskills teams, and assists clients in scaling AI and Gen AI solutions from proof of concept to value generation.

                      Hao Li

                      Data Scientist Manager at Capgemini Invent
                      Hao is a Lead Data Scientist and a referent on NLP topics, specifically on strategy, acculturation, methodology, business development, R&D, and training around Generative AI. He leads innovation solutions that combine Generative AI, traditional AI, and data.

                      Hadrien Strichard

                      Data Scientist Intern at Capgemini Invent
                      Hadrien joined Capgemini Invent for his gap year internship in the “Data Science for Business” master’s program (X – HEC). His taste for literature and language led him to make LLMs the main focus of his internship. More specifically, he wants to help make these AIs more ethical and secure.


                        Five cybersecurity trends for 2024

                        Geert van der Linden
                        12 Jan 2024

                        2024 marks a paradigm shift in cybersecurity. With the rise of generative AI and the ubiquity of technology in our daily lives (approximately 15 billion connected devices were in circulation last year), cyber professionals now find themselves at the frontiers of security in the modern world, where threats are constantly evolving in sophistication.

                        By 2025, the global cost of cybercrime is expected to reach $10.5 trillion, an annual rise of 15%, and Gartner forecasts that 45% of global organizations will grapple with supply chain attacks within the next two years. Add the ongoing global skills shortage, supply chain vulnerabilities, and geopolitical challenges, and you’d be forgiven for feeling concerned about the scale of the task.

                        To help prepare for this new era, we’ve identified five key cybersecurity trends we believe will take precedence in the year ahead:

                        • Zero trust goes mainstream

                        Zero trust is the gold standard of cybersecurity architecture: it emphasizes a shift from traditional perimeter-based security to a model where trust is never assumed, even within the network.

                        Governments and many companies have already made zero trust strategies mandatory, reflecting the framework’s critical role in combating evolving cyber threats. As attacks increase and grow in sophistication, zero trust must become more than the gold standard; it must become standard practice. It is, quite simply, the most effective strategy we have, and we expect more widespread adoption in 2024.

                        • Generative AI transforms capabilities

                        Generative AI is expanding capabilities for both attackers and defenders with myriad applications. If we look at the glass as half full, stretched security teams will feel more supported and empowered than they have in recent years, and we expect organizations in 2024 to explore their transformative impact for compliance, data analysis, and accelerated means of defending against the evolving nature of cyber threats.

                        At the same time, questions surrounding the ethical use and security of generative AI will be at the forefront of cybersecurity discussions, and the rise of sophisticated AI-driven phishing attacks will be a major concern. There are many unknown unknowns, but there are also many unknown possibilities. Either way, organizations should be exploring generative AI’s security capabilities before threat actors control the playing field.

                        • Compliance builds transparency and spurs investment

                        The growth in compliance standards, spearheaded by regulations like the EU Cyber Resilience Act and the Digital Operational Resilience Act (DORA), emerges as a third significant trend. Compliance makes investment in security necessary, with no excuses. With new rules, such as the SEC disclosure requirements that came into effect in the US last year, companies will have to be far more transparent about breaches when they occur. With the EU Cyber Resilience Act now agreed upon, manufacturers and suppliers will also have to prioritize cybersecurity throughout the life cycle of hardware and software, as well as support businesses and consumers in using technology more securely. All of this sets up 2024 to be a busy year for cybersecurity regulation across the globe.

                        • Convergence of IT, OT, and IoT Security

                        Another important trend is the convergence of IT, operational technology (OT), and Internet of Things (IoT) security. This is expected to standardize IP security and place higher demands on production and product security.

                        As organizations embrace Industry 4.0, there’s a growing emphasis on securing manufacturing processes and IoT devices throughout their lifecycle, and we can expect AI and machine learning to play a crucial role in analyzing the vast amounts of data generated by these interconnected systems.

                        • When cyber meets quantum

                        Quantum technology is now advancing faster than expected. Major players like Google and IBM are investing in quantum security to address the challenges posed by quantum computing. Its rapid progress may soon render obsolete current encryption standards like RSA and ECC, so the development of quantum-resistant algorithms is becoming a pressing necessity for maintaining data privacy and security.

                        While it may not take off in 2024, it certainly will in 2025, and as a result we expect quantum security to demand increased attention from cybersecurity professionals this year.

                        An era of disruption and opportunity

                        Advances in computing power must be matched by strengthened digital defenses. Beyond AI, ML, and zero trust, new threats like quantum promise to upend the very foundations of cybersecurity standards worldwide. All business leaders and technology professionals will be affected by this approaching milestone as more and more organizations begin their quantum transition.

                        The convergence of these trends demands a proactive and adaptive approach from organizations in 2024. Leaders will find a strong defense in zero trust architecture and discover new capabilities in generative AI that will be critical to navigating the evolving cybersecurity landscape. Increasingly stringent compliance standards, driven by global regulations, are not only forcing organizations to invest in cybersecurity, but are also driving transparency, creating a more robust cybersecurity ecosystem at a time when IT, OT, and IoT are converging.

                        In the face of these challenges, 2024 is not just a year of disruption, but a year of unprecedented opportunity. The path forward may be uncertain, but with the right strategies and technologies in place, organizations can move forward into a new era of cybersecurity resilience with confidence.

                        Contact Capgemini to understand how we are uniquely positioned to help you structure cybersecurity strength from the ground up. 

                        Author

                        Geert van der Linden

                        Global CISO, Cloud Infrastructure Services
                        Geert is a globally recognized cybersecurity leader with over three decades of experience in shaping robust security strategies and driving business resilience initiatives. Known for his strategic vision and ability to build diverse and high-performing teams, Geert has consistently driven rapid growth and innovation within the organizations he has led. He has been connecting business and cybersecurity, turning cybersecurity into a competitive advantage for clients. As the Chief Information Security Officer (CISO) of Cloud Infrastructure Services, Geert has been instrumental in establishing and managing comprehensive information security programs. He is leveraging his CISO experience to implement practices based on real-world scenarios in defending an organization. A prolific author and sought-after speaker, Geert’s thought leadership and expertise have established him as a respected voice in the security community. Geert also champions the Cyber4Good initiative at Capgemini, a strategic program dedicated to advancing cybersecurity for social good.