
Embracing the ‘chat is the new super app’ trend at CES 2024

Alex Bulat
Jan 13, 2024

L’Oréal unveiled the new “beauty genius chat app” powered by #AI

and it reminded me of our just-released #technovision2024 trend, “Chat is the New Super App” (container: Applications Unleashed): AI-augmented chatting and talking in plain, natural language becomes the new app to rule them all.

The interface and interaction tools are changing fast. With this new app, L’Oréal is enabling everyone to become a beauty genius. The app scans your face and advises you on which beauty products to apply and how.

What cool apps have you seen at CES?

And don’t forget: Check out our #technovision2024 report to understand the other opportunities.

Meet the author

Alex Bulat

Group Technology VP
Alex is Group Technology Director, focused on helping our customers transform, adapt to the new digital age, and integrate new and disruptive innovations into their business. He drives the expansion and delivery of digital transformation and helps companies get a grasp on future technologies like IoT, AI, big data, and blockchain. He also focuses on how new innovations, disruptive technologies, and new platforms like Uber impact current businesses.

    Custom GPT: A game-changer for business management

    Robert Engels
    Jan 13, 2024

    Bora Ger and Paolo Cervini on transforming strategy and innovation with Custom GPTs: a game-changer for business management.

    Exciting news for business management! Bora Ger and Paolo Cervini have shared their thoughts on enhancing strategy and innovation with Custom GPTs. The groundbreaking GPT Builder creates specialized GPTs for diverse niches, revolutionizing managerial tasks and offering tailored advice.

    This article highlights how Custom GPTs bring specialization, efficiency, consistency, and automation integration to the forefront of management practices. It also explores the developing GPT Store’s role in democratizing AI tool access, boosting innovation and strategic decision-making.

    As we enter this new arena, the article raises questions about the future of organizational leadership and the ethical use of AI. Don’t miss out on this must-read for anyone interested in the intersection of AI and business strategy.

    Meet the author


    Robert Engels

    CTIO, Head of AI Futures Lab
    Robert is an innovation lead and a thought leader in several sectors and regions, and holds the position of Chief Technology Officer for Northern and Central Europe in our Insights & Data Global Business Line. Based in Norway, he is a known lecturer, public speaker, and panel moderator. Robert holds a PhD in artificial intelligence from the Technical University of Karlsruhe (KIT), Germany.

      CES 2024: Las Vegas shines with cutting-edge innovations in AI and future mobility

      Pascal Brier
      Jan 13, 2024

      As #CES2024 comes to a close, Las Vegas has once again been a hotspot for ground-breaking moments.

      From generative AI embedded and applied everywhere, to electric (and sometimes flying!) mobility, this latest edition of CES has been a rollercoaster of exciting innovations. Here are some of my favorite picks. What were your favorite moments?

      Each of these innovations represents a step towards a more connected, smarter, and greener future. I cannot wait to see what the rest of 2024 will bring.

      Meet the author

      Pascal Brier

      Group Chief Innovation Officer, Member of the Group Executive Committee
      Pascal Brier was appointed Group Chief Innovation Officer and member of the Group Executive Committee on January 1st, 2021. In this position, Pascal oversees Technology, Innovation and Ventures for the Group. Pascal holds a master’s degree from EDHEC and was voted “EDHEC of the Year” in 2017.

        Auditing ChatGPT – part II

        Grégoire Martinon, Aymen Mejri, Hadrien Strichard, Alex Marandon, Hao Li
        Jan 12, 2024

        A Survival Issue for LLMs in Europe

        Large Language Models (LLMs) have been one of the most dominant trends of 2023. ChatGPT and DALL-E have been adopted worldwide to improve efficiency and tap into previously unexplored solutions. But as is often the case, technological developments come with an equal share of opportunities and risks.  

        In the first part of our LLM analysis, we provided a comprehensive definition, examined their technological evolution, discussed their meteoric popularity, and highlighted some of their applications. In this second part, we will answer the following questions:

        Are LLMs dangerous?

        The short answer is sometimes. With Large Language Models having such a diverse range of applications, the potential risks are numerous. It is worth pointing out that there is no standard list of these risks, but a selection is presented below.

        Figure 1: A breakdown of risks posed by LLMs

        Some of these dangers are linked to the model itself (or to the company developing it). The data in the model could contain all sorts of biases, the results might not be traceable, or user data or copyrights could have been used illegally, etc.  

        Other dangers are linked to the use of these models. Users seek to bypass the models’ security measures and use them for malicious purposes, such as generating hateful or propagandist texts.

        Additionally, Large Language Models have social, environmental, and cultural consequences that can be harmful. They require enormous amounts of storage and energy. Moreover, their arrival in society has weakened employee power in many industries; for example, striking writers in Hollywood have complained about the use of LLMs. Finally, LLMs are challenging the boundaries of literary art, just as DALL-E did with graphic art.

        How can you deal with these risks?

        It often takes a while before the risks of an emerging technology are fully understood, and the same is true of the strategies for mitigating them. However, we are already beginning to see early strategies being deployed.

        LLM developers invest in safeguards

        OpenAI invested six months of research to establish safeguards and secure the use of its Generative Pre-trained Transformer (GPT) models. As a result, ChatGPT now refuses to respond to most risky requests, and its responses perform better on benchmarks measuring veracity and toxicity. Furthermore, unlike previous models, ChatGPT has continued to improve since it was deployed.

        However, it is possible to circumvent these safeguards, with examples of such prompts freely available on the Internet (known as Do Anything Now prompts, or DANs). These DANs often capitalize on ChatGPT’s eagerness to please: the model seeks to satisfy the user, even if this means overstepping its ethical framework or creating confirmation bias. Furthermore, the opacity of the model and its data creates copyright problems and uncontrolled bias. As for its benchmark successes, suspicions that the training database was contaminated with benchmark data undermine their objective value. Finally, despite announced efforts to reduce their size, OpenAI’s models consume a lot of resources.

        Some Large Language Models now claim to be more ethical or safer, but this is sometimes to the detriment of performance. None of the models are faultless, and there is currently no clear and reliable evaluation method on the subject.

        GPT-4 safety in five steps

        To go into more detail about implementing guardrails, let’s look at the five steps implemented by OpenAI for its GPT models, as shown in Figure 2.

        1. Adversarial testing: Experts from various fields have been hired to test the limits of GPT-4 and find its flaws.
        2. Supervised policy: After training, annotators show the model examples of the desired responses to fine-tune it.
        3. Rule-based Reward Model (RBRM) classifiers: The role of these classifiers is to decide whether a prompt and/or its response are “valid” (e.g., a classifier that invalidates toxic requests).
        4. Reward model: Human annotators train a reward model by ranking four possible model responses from best to least aligned (a minimal sketch of this step follows Figure 2).
        5. Reinforcement learning: Using reinforcement learning techniques, the model takes user feedback into account.
        Figure 2: GPT-4 Safety Pipeline
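
        To make step 4 concrete, here is a minimal sketch of how a reward model can be trained from human rankings using a pairwise preference loss. It is illustrative only, with a toy encoder and random placeholder data; OpenAI has not published its implementation.

```python
# Minimal sketch of reward-model training from ranked responses (step 4).
# Toy encoder and placeholder data; not OpenAI's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)  # toy text encoder
        self.score = nn.Linear(dim, 1)                 # scalar reward head

    def forward(self, token_ids):
        return self.score(self.embed(token_ids)).squeeze(-1)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Each pair holds token ids of a response annotators preferred versus one
# ranked lower; a ranking of four responses yields six such pairs.
chosen = torch.randint(0, 1000, (8, 32))
rejected = torch.randint(0, 1000, (8, 32))

for _ in range(100):
    # Pairwise preference loss: push the preferred response's reward
    # above the rejected one's.
    loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

        The trained reward model then scores candidate responses during the reinforcement-learning stage (step 5).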

        Governments and institutions worry about LLMs

        Several countries have decided to ban ChatGPT (see Figure 3). Most of them (Russia, North Korea, Iran, etc.) have done so for reasons of data protection, information control, or concerns around their rivalry with the USA. Some Western countries, such as Italy, have banned it and then reauthorized it, while others are now considering a ban. For the latter, the reasons cited are cybersecurity, the protection of minors, and compliance with current laws (e.g., GDPR). 

        Figure 3: Map of countries that have banned ChatGPT

        Many tech companies (Apple, Amazon, Samsung, etc.) and financial institutions (J.P. Morgan, Bank of America, Deutsche Bank, etc.) have banned or restricted the use of ChatGPT. They are all concerned about the protection of their data (e.g., a data leak occurred at Samsung).

        Scientific institutions, such as scientific publishers, forbid it for reasons surrounding trust – given the risk of articles being written surreptitiously by machines. Finally, some institutions are concerned about the possibility of cheating with such tools.  

        European regulation changes

        Many articles on Quantmetry’s blog have already mentioned the upcoming EU AI Act, which will regulate artificial intelligence as soon as 2025. However, we should add here that this legislation has been amended following the rapid adoption of ChatGPT, and the consequences of this amendment are summarized in Figure 4. The European Union now defines the concept of General Purpose AI (GPAI): an AI system that can be used and adapted to a wide range of applications for which it was not specifically designed. The regulations on GPAIs therefore concern LLMs as well as all other types of generative AI.

        GPAIs are affected by a whole range of restrictions, summarized here in three parts:

        • Documentary transparency and administrative registration, which should not be complicated to implement.
        • Risk management and the setting up of evaluation protocols. These aspects are more complicated to implement but feasible for LLM providers, as outlined by OpenAI with ChatGPT.
        • Data governance (GDPR and ethics) and respect for copyright. LLM providers are far from being able to guarantee these for now.

        The European Union will therefore consider LLMs to be high-risk AIs, and LLM providers still have a lot of work to do before they reach the future compliance threshold. Nevertheless, some believe that this future law is, in some respects, too impractical and easy to circumvent. 

        Figure 4: Impact of the EU AI Act on LLMs

        Assessing model compliance is one of Quantmetry’s core competencies, particularly in relation to the EU AI Act. Regarding LLMs specifically, Stanford researchers published a blog post evaluating the compliance of 10 LLMs with the future European law. The results are shown in Figure 5. To establish a compliance score, the researchers extracted 12 requirements from the draft legislation and developed a rating framework. Annotators were then tasked with conducting an evaluation based on publicly available information. The article identifies copyright, the data ecosystem, risk management, and the lack of evaluation standards as the main current issues, aligning with our analysis above. The researchers estimate that 90% compliance is a realistic goal for LLM providers (the top performer currently achieves 75%, with an average of 42% across the 10 evaluated LLMs).

        Figure 5: Results of the compliance evaluation by Stanford researchers

        A few tips

        Faced with all these risks, it would be wise to take a few key precautions. Learning a few prompt-engineering techniques to ensure that prompts produce reliable, high-quality responses is a good way forward. It is also worth watching out for data leaks via free chatbots (e.g., the free version of ChatGPT); the paid version is, in principle, not supposed to store your data. Finally, Figure 6 illustrates how to use tools like ChatGPT with care.

        Figure 6: Diagram for using ChatGPT with care

        How do you audit such models?

        There are three complementary approaches to auditing an LLM, summarized in Figure 9.

        Organizational audit

        An organizational audit can be carried out to check whether the company developing the LLM is working responsibly, ensuring, for example, that its processes and management systems are compliant.

        It will also be possible to carry out such audits for clients who are not LLM suppliers but wish to specialize these models further, to ensure that they are well employed.

        Audit of the foundation model

        Auditing the foundation model is the current focus of scientific research. Such an audit would need to explore the dataset (which is inaccessible in practice), run test benches on recognized benchmarks and datasets (while facing the problem of contamination), and implement adversarial strategies to detect the limits of the model. In more detail, there is a multitude of possible tests for evaluating the following aspects of the model:

        • Responsibility: Understanding how risks materialize and finding the limits of the model (typically with adversarial strategies).
        • Performance: This involves using datasets, test benches, or Turing tests to assess the quality of the language, the skills and knowledge of the model, and the veracity of its statements (see Figures 7 and 8).
        • Robustness: The aim here is to assess the reliability of responses by means of calibration or stability measurements in the face of prompt engineering strategies.
        • Fairness: Several methods exist to try to identify and quantify bias (even without access to the dataset), but they remain limited. For example, one method is counting biased word associations (man = survival, woman = pretty); a minimal sketch follows the figure captions below.
        • Frugality: Some inference measurements can be made to estimate the environmental impact of the model, but they are also limited without access to supplier infrastructures.
        Figure 7: Performance of GPT-4 on TruthfulQA
        Figure 8: Performance of GPT-4 on human examinations
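
        To illustrate the bias-counting idea from the fairness bullet above, here is a minimal sketch that compares word-embedding associations between gendered target words and attribute words, loosely in the spirit of WEAT-style association tests. The vectors are random placeholders; in a real audit they would come from the model under evaluation.

```python
# Minimal sketch of quantifying biased word associations in embeddings,
# loosely following WEAT-style association tests. Toy random vectors
# stand in for embeddings taken from the model under audit.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
vocab = ["man", "woman", "survival", "pretty", "career", "family"]
emb = {w: rng.normal(size=50) for w in vocab}  # placeholder embeddings

targets = ["man", "woman"]
attributes = ["survival", "pretty", "career", "family"]

# A systematic gap between the two rows of this table would signal a
# gendered association bias in the embedding space.
for t in targets:
    scores = {a: round(cosine(emb[t], emb[a]), 3) for a in attributes}
    print(t, scores)
```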

        Theoretically, an LLM can be assessed on five of the eight dimensions of Trustworthy AI as defined by Quantmetry. On the explainability dimension, the previously mentioned solution of a chatbot citing its sources addresses the problem to a certain degree.

        Use case audit

        Quantmetry and Capgemini Invent are currently working together to define a framework that enables our clients to audit their AI systems based on LLMs. The primary aim of this audit is to check that the system’s impact on the user is controlled. To do this, a series of tests checks compliance with regulations and the customer’s needs. We are currently developing methods for diagnosing the long-term social and environmental impact of their use within a company. Finally, we will create systems that can assess risks and biases, as well as operational, managerial, and feedback processes. The methods used are often inspired by, but adapted from, those used to audit the foundation model.

        Figure 9: Three approaches to auditing an LLM

        How can Capgemini Invent and Quantmetry help you capitalize on LLMs?

        Amidst the media excitement surrounding the widespread adoption of ChatGPT, harnessing the full potential of Generative AI and LLMs while mitigating risks lies at the heart of an increasing number of our clients’ strategic agendas. Our clients must move quickly along a complex and risky path, and the direct connection between the technology and end-users makes any errors immediately visible – with direct impacts on user engagement and brand reputation.  

        Drawing upon our experience in facilitating major transformations and our specific expertise in artificial intelligence, our ambition is to support our clients at every stage of their journey, from awareness to the development and scalable deployment of use cases with measured value. Beyond our role in defining overall strategy and designing and implementing use cases, we also offer our clients the opportunity to benefit from our expertise in Trustworthy AI. We assist them in understanding, measuring, and mitigating the risks associated with this technology, ensuring safety and compliance with European regulations.

        In this regard, our teams are currently working on specific auditing methods categorized by use cases, drawing inspiration from the academic community’s model of auditing methods. We are committed to advancing concrete solutions in this field.  

        Authors


        Alex Marandon

        Vice President & Global Head of Generative AI Accelerator, Capgemini Invent
        Alex brings over 20 years of experience in the tech and data space. He started his career as a CTO in startups, later leading data science and engineering in the travel sector. Eight years ago, he joined Capgemini Invent, where he has been at the forefront of driving digital innovation and transformation for his clients. He has a strong track record in designing large-scale data ecosystems, especially in the industrial sector. In his current role, Alex crafts Gen AI go-to-market strategies, develops assets, upskills teams, and assists clients in scaling AI and Gen AI solutions from proof of concept to value generation.

        Hao Li

        Data Scientist Manager at Capgemini Invent
        Hao is a Lead Data Scientist and a reference on NLP topics, covering strategy, acculturation, methodology, business development, R&D, and training on the theme of generative AI. He leads innovation solutions that combine generative AI, traditional AI, and data.

        Hadrien Strichard

        Data Scientist Intern at Capgemini Invent
        Hadrien joined Capgemini Invent for his gap year internship in the “Data Science for Business” master’s program (X – HEC). His taste for literature and language led him to make LLMs the main focus of his internship. More specifically, he wants to help make these AIs more ethical and secure.

          Auditing ChatGPT – part I

          Grégoire Martinon, Aymen Mejri, Hadrien Strichard, Alex Marandon, Hao Li
          Jan 12, 2024

          A Chorus of Disruption: From Cave Paintings to Large Language Models

          Since its release in November 2022, ChatGPT has revolutionized our society, captivating users with its remarkable capabilities. Its rapid and widespread adoption is a testament to its transformative potential. At the core of this chatbot lies the GPT-4 language model (or GPT-3.5 for the free version), developed by OpenAI. We have since witnessed an explosive proliferation of comparable models, such as Google Bard, Llama, and Claude. But what exactly are these models, and what possibilities do they offer? More importantly, are the publicized risks justified, and what measures can be taken to ensure safe and accountable utilization of these models?

          In this first part of our two-part article, we will discuss the following:

          What are Large Language Models (LLMs)?

          Artificial intelligence (AI) is a technological field that aims to give human intelligence capabilities to machines. A generative AI is an artificial intelligence that can generate content, such as text or images. Within generative AIs, foundation models are recent developments often described as the fundamental building blocks behind such applications as DALL-E or Midjourney. In the case of text-generating AI, these are referred to as Large Language Models (LLMs), of which the Generative Pre-trained Transformer (GPT) is one example made popular by ChatGPT. More complete definitions of these concepts are given in Figure 1 below.

          Figure 1: Definitions of key concepts around LLMs

          The technological history of the ChatGPT LLM

          In 2017, a team of researchers created a new type of Natural Language Processing (NLP) model called the Transformer. It achieved spectacular performance on sequential-data tasks, such as text or temporal data. By using a specific technique called the attention mechanism, published in 2015, the Transformer pushed past the limits of previous models, particularly in the length of the texts processed and/or generated.

          In 2018, OpenAI created a model inspired by the Transformer architecture (the decoder stack in particular), mainly because the decoder, with its masked attention, excels at text generation. The result was the first Generative Pre-trained Transformer. The same year saw the release of BERT, a Google NLP model also inspired by the Transformer. Together, BERT and GPT launched the era of LLMs.

          Improving on the performance of BERT variants, OpenAI released GPT-2 in 2019 and GPT-3 in 2020. These two models benefited from an important breakthrough: meta-learning. Meta-learning is a Machine Learning (ML) paradigm in which the model “learns how to learn”; for example, the model can respond to tasks other than those for which it has been trained.

          OpenAI’s aim is for its GPT models to be able to perform any NLP task with only an instruction and possibly a few examples, with no need for a task-specific training database. OpenAI has succeeded in making meta-learning a strength, thanks to increasingly large architectures and databases massively retrieved from the Internet.
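
          As an illustration of this instruction-plus-examples behavior (often called few-shot prompting or in-context learning), here is a minimal sketch. The prompt and the `complete` stub are hypothetical; the stub stands in for whichever LLM completion API is being used.

```python
# Minimal sketch of few-shot prompting (in-context learning): the model
# was never fine-tuned for sentiment classification, yet an instruction
# plus two examples are enough to specify the task.
prompt = """Classify the sentiment of each review as positive or negative.

Review: "The battery lasts all day." -> positive
Review: "The screen cracked within a week." -> negative
Review: "Shipping was fast and the fit is perfect." ->"""

def complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion API call."""
    raise NotImplementedError("replace with your provider's completion call")

# print(complete(prompt))  # expected continuation: " positive"
```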

          To take its technology further, OpenAI moved beyond NLP by adapting its models for images. In 2021 and 2022, OpenAI published DALL-E 1 and DALL-E 2, two text-to-image generators. These generators enabled OpenAI to make GPT-4 a multi-modal model, one that can understand several types of data.

          Next, OpenAI released InstructGPT (GPT-3.5), which was designed to better meet user demands and mitigate risk. This was the version behind ChatGPT at its launch in late 2022. Then, in March 2023, OpenAI released an even more powerful and secure version: the premium GPT-4. Unlike preceding versions, GPT-3.5 and GPT-4 attracted strong commercial interest. OpenAI has since adopted a closed-source approach, no longer revealing how its models work, and become a for-profit company (it was originally a non-profit). Looking to the future, we can expect OpenAI to push the idea of a single prompt for all tasks and all types of data even further.

          Why is everyone talking about Large Language Models?

          Only those currently living under a rock will not have heard something about ChatGPT in recent months. The fact that it made half the business world ecstatic and the other half anxious should tell you how popular it has become. But let’s take a closer look at the reasons why. 

          OpenAI’s two remarkable feats

          With the development of meta-learning, OpenAI created an ultra-versatile model capable of providing accurate responses to all kinds of requests – even those it has never encountered before. In fact, GPT-4 achieves better results on specific tasks than specialized models. 

          In addition to these technological leaps, OpenAI has driven democratization. By deploying its technology in the form of an accessible chatbot (ChatGPT) with a simple interface, OpenAI has made it possible for everyone to utilize this powerful language model’s capabilities. This public access also enables OpenAI to collect more data and feedback to improve the model.

          Rapid adoption  

          The rapid adoption of GPT technology via ChatGPT has been unprecedented: never has an internet platform or technology been adopted so rapidly (see Figure 2). ChatGPT now boasts 200 million users and two billion visits per month.

          Figure 2: Speed of reaching 100 million users, in months

          The number of Large Language Models is exploding, with competitors coming from Google (Bard), Meta (Llama), and Hugging Face (HuggingChat, a French open-source version). There is also a surge in new applications. For example, LLMs have been implemented in search engines, and Auto-GPT turns GPT-4 into an autonomous agent. This remarkable progress is stimulating a new wave of research, with LLM publications growing exponentially (Figure 3).

          Figure 3: Cumulative number of scientific publications on LLMs.

          Opportunities, fantasies, and fears

          The new standard established by GPT-4 has broadened the range of possible use cases, and many institutions are looking to exploit them. For example, some hospitals are using LLMs to improve and automate the extraction of medical conditions from patient records.

          On the other hand, these same breakthroughs in performance have given rise to a host of fears: job insecurity, exam cheating, privacy threats, etc. Many recent articles explore this growing anxiety, which now seems justified; Elon Musk and Geoffrey Hinton are just two of the many influential tech figures raising the alarm, calling it a new ‘code red.’

          However, as is often the case with technological advances, people have trouble distinguishing between real risk and irrational fear (e.g., a world in which humans hide from robots, as in The Terminator). Underlying such fears is the idea of a model that rivals or surpasses the human brain, inextricably linked with the formation of consciousness. It is worth noting that such a model is OpenAI’s stated ultimate goal: AGI (Artificial General Intelligence).

          Whether these scenarios remain fantasies or become realities, GPT-4 and the other Large Language Models are undoubtedly revolutionizing our society and represent a considerable technological milestone.

          What can you do with an LLM?

          Essentially, an LLM such as ChatGPT can:

          1. Generate natural language content: Trained specifically for this purpose, this is where LLMs excel. They strive to adhere to the given constraints as accurately as possible.
          2. Reformulate content: This involves providing the LLM with a base text and instruction to perform tasks, such as summarizing, translating, substituting terms, or correcting errors.
          3. Retrieve content: It is possible to request an LLM to search for and retrieve specific information based on a corpus of data.

          How can you use an LLM?      

          There are three possible applications of LLMs, summarized in Figure 4. The first is the direct application, where the LLM is used only for the tasks it can perform itself. This is, a priori, the use case of a chatbot like ChatGPT, which directly implements GPT-4 technology. While this is one of the most common applications, it is also one of the riskiest, because the LLM often acts like a black box and is difficult to evaluate.

          One emerging use of LLMs is the auxiliary application. To limit risks, the LLM is implemented as an auxiliary tool within a larger system. For example, in a search engine, an LLM can be used as an interface for presenting the results of a search. This use case was applied to the corpus of IPCC reports. The disadvantage here is that the LLM is far from being fully exploited.

          In the near future, the orchestral application of LLMs will consume much of the research budget of large organizations. In an orchestral application, the LLM is both the interface with the user and the brain of the system in which it is implemented. The LLM understands the task, calls on auxiliary tools in its system (e.g., Wolfram Alpha for mathematical calculations), and then delivers the result. Here, the LLM acts less like a black box, but the risk assessment of such a system will also depend on the auxiliary tools. The best example to date is Auto-GPT; a minimal sketch of the pattern follows Figure 4.

          Figure 4: The three possible applications of an LLM
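
          To make the orchestral pattern concrete, here is a minimal sketch of such a loop. The tool registry and the `llm` stub are hypothetical placeholders, and real systems such as Auto-GPT are far more elaborate.

```python
# Minimal sketch of an "orchestral" LLM loop: the model decides which
# auxiliary tool to call, the system executes it, and the result is fed
# back until the model produces a final answer. All stand-ins are
# hypothetical; this is not Auto-GPT's implementation.
import json

TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy math tool
    "search": lambda query: f"(top search result for {query!r})",      # stubbed search
}

def llm(conversation: list) -> str:
    """Hypothetical LLM call. Returns either a tool request as JSON,
    e.g. {"tool": "calculator", "input": "2+2"}, or a final answer."""
    raise NotImplementedError("replace with a real LLM API call")

def run_agent(task: str, max_steps: int = 5) -> str:
    conversation = [f"Task: {task}"]
    for _ in range(max_steps):
        reply = llm(conversation)
        try:
            request = json.loads(reply)  # the model asked for a tool
            result = TOOLS[request["tool"]](request["input"])
            conversation.append(f"Tool result: {result}")
        except (json.JSONDecodeError, KeyError):
            return reply                 # plain text means a final answer
    return "Step limit reached."
```

          The risk profile of such a system depends on both the LLM and the tools it is allowed to invoke, which is why auditing must cover the auxiliary tools as well.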

          Focusing on the use case of a Chatbot citing its sources

          One specific use case that is emerging among our customers is that of a chatbot citing its sources. This responds to the difficulty of interpreting LLM results (i.e., the inability to understand which sources the LLM has used and why).

          Figure 5: Technical diagram of a conversational agent quoting its sources

          To delve into the technical details of the chatbot citing its sources (the relevant pattern, illustrated in Figure 5, is called Retrieval Augmented Generation, or ‘RAG’): the model takes a user request as input and transforms it into an embedding (i.e., a word or sentence vectorization that captures semantic and syntactic relationships). The model has a corpus of texts already transformed into embeddings, and the goal is to find the embeddings within the corpus that are closest to the query embedding, usually with nearest-neighbour search techniques. Once the corpus elements that can help with the response have been identified, they are passed to an LLM to synthesize the answer, and they can be provided alongside the response itself. The LLM then serves as an interface for presenting the search engine’s results. This RAG approach therefore decouples the factual information provided by the sources from the semantic analysis provided by the LLM, leading to better auditability of the results provided by the chatbot.
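
          A minimal sketch of this RAG pattern follows, using toy embeddings and cosine similarity for the nearest-neighbour search. The `embed` function is a hypothetical stand-in for a real embedding model, and the final LLM call is left as a comment.

```python
# Minimal RAG sketch: embed the query, retrieve the nearest corpus
# chunks, and build a prompt asking the LLM to answer from them.
# `embed` is a hypothetical stand-in for a real embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy deterministic embedding; returns a unit vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

corpus = ["Doc A: ...", "Doc B: ...", "Doc C: ..."]
corpus_vecs = np.stack([embed(doc) for doc in corpus])  # precomputed offline

def answer_with_sources(query: str, k: int = 2):
    q = embed(query)
    sims = corpus_vecs @ q                # cosine similarity (unit vectors)
    top_k = np.argsort(sims)[::-1][:k]    # k nearest neighbours
    sources = [corpus[i] for i in top_k]
    prompt = ("Answer using only these sources, and cite them:\n"
              + "\n".join(sources) + f"\nQuestion: {query}")
    # A real system would now call the LLM with `prompt` and return its
    # answer together with `sources`, which is what makes it auditable.
    return prompt, sources
```

          Because the retrieved sources are returned verbatim alongside the generated answer, the factual grounding can be checked independently of the LLM’s phrasing.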

          Read more in Auditing ChatGPT – part II

          Authors


          Alex Marandon

          Vice President & Global Head of Generative AI Accelerator, Capgemini Invent
          Alex brings over 20 years of experience in the tech and data space. He started his career as a CTO in startups, later leading data science and engineering in the travel sector. Eight years ago, he joined Capgemini Invent, where he has been at the forefront of driving digital innovation and transformation for his clients. He has a strong track record in designing large-scale data ecosystems, especially in the industrial sector. In his current role, Alex crafts Gen AI go-to-market strategies, develops assets, upskills teams, and assists clients in scaling AI and Gen AI solutions from proof of concept to value generation.

          Hao Li

          Data Scientist Manager at Capgemini Invent
          Hao is a Lead Data Scientist and a reference on NLP topics, covering strategy, acculturation, methodology, business development, R&D, and training on the theme of generative AI. He leads innovation solutions that combine generative AI, traditional AI, and data.

          Hadrien Strichard

          Data Scientist Intern at Capgemini Invent
          Hadrien joined Capgemini Invent for his gap year internship in the “Data Science for Business” master’s program (X – HEC). His taste for literature and language led him to make LLMs the main focus of his internship. More specifically, he wants to help make these AIs more ethical and secure.


            Five cybersecurity trends for 2024

            Geert van der Linden
            12 Jan 2024

            2024 marks a paradigm shift in cybersecurity. Defined by the rise of generative AI, and set against the ubiquity of technology in our daily lives (approximately 15 billion connected devices were in circulation last year), it finds cyber professionals at the frontiers of security in the modern world, where threats are constantly evolving in sophistication.

            By 2025, the global cost of cybercrime is expected to reach $10.5 trillion, an annual rise of 15%, and Gartner forecasts that 45% of global organizations will grapple with supply chain attacks within the next two years. Add the ongoing global skills shortage, supply chain vulnerabilities, and geopolitical challenges, and you’d be forgiven for feeling concerned about the scale of the task.

            To help prepare for this new era, we’ve identified five key cybersecurity trends we believe will take precedence in the year ahead:

            • Zero trust goes mainstream

            Zero trust is the gold standard of cybersecurity architecture, emphasizing a shift from traditional perimeter-based security to a model in which trust is never assumed, even within the network.

            Governments and many companies have already made zero trust strategies mandatory, reflecting the framework’s critical role in combating evolving cyber threats. As attacks increase and grow in sophistication, zero trust must become more than the gold standard: it must become standard practice. It is, quite simply, the most effective strategy we have, and we expect more widespread adoption in 2024. Learn more about zero trust here.

            • Generative AI transforms capabilities

            Generative AI is expanding capabilities for both attackers and defenders with myriad applications. If we look at the glass as half full, stretched security teams will feel more supported and empowered than they have in recent years, and we expect organizations in 2024 to explore their transformative impact for compliance, data analysis, and accelerated means of defending against the evolving nature of cyber threats.

            At the same time, questions surrounding the ethical use and security of generative AI will be at the forefront of cybersecurity discussions, and the rise of sophisticated AI-driven phishing attacks will be a major concern. There are many unknown unknowns, but there are also many unknown possibilities. Either way, organizations should be exploring generative AI’s security capabilities before threat actors control the playing field.

            • Compliance builds transparency and spurs investment

            The growth in compliance standards, spearheaded by regulations like the EU Cyber Resilience Act and the Digital Operational Resilience Act (DORA), emerges as a third significant trend. Compliance makes investment in security necessary, with no excuses. With new rules, such as the SEC disclosure requirements that came into force in the US last year, companies will have to be far more transparent about breaches when they occur. With the EU Cyber Resilience Act now agreed upon, manufacturers and suppliers will also have to prioritize cybersecurity throughout the life cycle of hardware and software, as well as support businesses and consumers in using technology more securely. All of this sets up 2024 to be a busy year for cybersecurity regulation across the globe.

            • Convergence of IT, OT, and IoT Security

            Another important trend is the convergence of IT, operational technology (OT), and Internet of Things (IoT) security. This is expected to standardize IP security and place higher demands on production and product security.

            As organizations embrace Industry 4.0, there’s a growing emphasis on securing manufacturing processes and IoT devices throughout their lifecycle, and we can expect AI and machine learning to play a crucial role in analyzing the vast amounts of data generated by these interconnected systems.

            • When cyber meets quantum

            Quantum technology is now advancing faster than expected. Major players like Google and IBM are investing in quantum security to address the challenges posed by quantum computing. Its rapid progress may soon render current encryption standards like RSA and ECC obsolete, so the development of quantum-resistant algorithms is becoming a pressing necessity for maintaining data privacy and security in the future.

            While it may not take off in 2024, it certainly will in 2025, and as a result we expect quantum security to demand increased attention from cybersecurity professionals this year.

            An era of disruption and opportunity

            Advances in computing power must be matched by strengthened digital defenses. Beyond AI, ML, and zero trust, new threats like quantum promise to upend the very foundation of cybersecurity standards worldwide. All business leaders and technology professionals will be impacted by this approaching milestone as more and more organizations begin their quantum transition.

            The convergence of these trends demands a proactive and adaptive approach from organizations in 2024. Leaders will find a strong defense in zero trust architecture and discover new capabilities in generative AI that will be critical to navigating the evolving cybersecurity landscape. Increasingly stringent compliance standards, driven by global regulations, are not only forcing organizations to invest in cybersecurity, but are also driving transparency, creating a more robust cybersecurity ecosystem at a time when IT, OT, and IoT are converging.

            In the face of these challenges, 2024 is not just a year of disruption, but a year of unprecedented opportunity. The path forward may be uncertain, but with the right strategies and technologies in place, organizations can move forward into a new era of cybersecurity resilience with confidence.

            Contact Capgemini to understand how we are uniquely positioned to help you structure cybersecurity strength from the ground up. 

            Author

            Geert van der Linden

            Global CISO, Cloud Infrastructure Services
            Geert is a globally recognized cybersecurity leader with over three decades of experience in shaping robust security strategies and driving business resilience initiatives. Known for his strategic vision and ability to build diverse and high-performing teams, Geert has consistently driven rapid growth and innovation within the organizations he has led. He has been connecting business and cybersecurity, turning cybersecurity into a competitive advantage for clients. As the Chief Information Security Officer (CISO) of Cloud Infrastructure Services, Geert has been instrumental in establishing and managing comprehensive information security programs. He is leveraging his CISO experience to implement practices based on real-world scenarios in defending an organization. A prolific author and sought-after speaker, Geert’s thought leadership and expertise have established him as a respected voice in the security community. Geert also champions the Cyber4Good initiative at Capgemini, a strategic program dedicated to advancing cybersecurity for social good.

              The key to speedy innovation and satisfying, safe, secure mobility?
              Software

              Alexandre Audoin
              Jan 5, 2024

              The race to provide autonomous mobility and compelling customer experiences is hotting up, but automakers need to balance their need for speed and innovation with a ‘no compromise’ approach to safety and cybersecurity.

              Competition in the automotive industry is intensifying and brands are competing on more fronts than at any time in history. Of course, price, performance, brand, and residual values continue to be important. But as the industry gravitates toward electrification and software-defined vehicles, customers are looking at what else their vehicles can do for them. How well do they integrate with their lives and their digital ecosystems? Can and will the car evolve over time to add more value to daily life? And, for manufacturers, how do you build supply chain resilience and competitiveness to address these evolving demands, while ensuring availability and affordability? 

              Automakers – especially at the luxury and premium end of the market – are also intensifying their focus on providing assisted and autonomous driving capabilities and new ways to add value with digital experiences, inside and outside the vehicle. In the face of increased competition, the speed with which automakers are able to innovate and the extent to which they can engage and satisfy their customers in new ways will be crucial to future success or failure.

              Autonomous mobility at the crossroads

              For years, tech and innovation events like CES have been dominated by autonomous vehicles of all shapes and sizes. The technology is always impressive … at the shows. But, in the real world, progress has been slower than expected. For every success, it seems like there’s been at least one story of a scaled-back or canceled investment, an unfulfilled promise, or a serious safety scare.

              The pursuit of autonomous mobility is a double-edged sword. The cost of adding sensors for 20+ detection zones around the car is significant. And the volumes of data, the sophistication of algorithms, and the amount of computing power required to develop, test, and validate systems are eye-watering. And yet, the ability to offer customers safe and stress-free ways to travel; to give back quality time while getting from A to B, is a once-in-a-lifetime opportunity to build trust and open the door to a whole new world of services and revenue streams. It’s no wonder the pursuit of the various certification levels is so intense and why so many companies are taking different routes – from in-house development with tech partners to major alliances with tier-1 suppliers, and even acquisitions. Some companies are making more progress than others, but the race is still wide open.

              The in-car experience is evolving

              The transition to electric and the pursuit of autonomous-driving capabilities have major implications for the automotive customer experience, especially the in-car digital experience. With electric vehicles, we know that recharging away from home will involve idle time. And – though it may still be a way off – autonomous mobility will allow us to focus less on driving the car and leave us more time to do other things. Today, our first thought might be to reach for our smartphones or tablets, but this is a lost opportunity for vehicle manufacturers.

              And so the question becomes: How can your car keep you entertained and engaged while it charges or self-drives?

              The answers are emerging in the form of expansive screens, adaptive interfaces, extra screens for passengers, an increasing emphasis on in-car gaming, content consumption, subscription services, and almost unlimited ways to pass the time productively, recreationally, or restfully in a vehicle.

              And then there’s the potential to have an AI-powered assistant, or companion, that connects all the different services and is capable of providing pretty much any information you need about your journey, your agenda, upcoming commitments, highlights from your inbox or social media feed, and much more.

              All of these features represent potential points of differentiation, and many of them are revenue-generating opportunities (e.g. subscription-based services). Beyond direct revenue and new levels of customer intimacy, in-car digital interactions also create opportunities to generate new data and insights, which can (with the right levels of consent and anonymity, of course) be used to shape new products and services – inside and outside the vehicle – and new monetization opportunities.

              Speed and satisfaction – why they matter more than ever

              You could argue that the evolutions I’ve explored above are technology trends, much like many others. However, these trends are different in that if you can achieve the combination of safe autonomous or highly assisted mobility and engage customers with compelling in-car experiences, you can gain a level of trust, and access – and even companionship – that is unprecedented in the history of OEM-customer relationships. This brings with it the opportunity to develop deeper, longer, and more lucrative relationships.

              But the race for the hearts and minds of customers is intense, with a raft of new players (many from China) to compete against, new demographics, and rapidly evolving customer expectations. In this climate of increased competition, it is imperative that automotive companies intensify their innovation efforts in a bid to deliver the integrated and connected customer experience that will soon be taken for granted. And if your brand isn’t able to provide it, you can assume that another one will. 

              Balancing the need for speed and satisfaction with a ‘zero compromise’ approach to safety and security

              Against this backdrop of ultra-intense competition and a relentless focus on innovation, OEMs must remain vigilant and understand that speed to market can never take priority over safety and security.

              Assisted and autonomous mobility can offer comfortable, convenient, and stress-free travel. But they also mean taking a significant degree of responsibility for the safety of vehicle occupants. In short, ADAS and autonomous driving systems cannot fail. Failures will result in more than a few lost sales – they could lead to loss of life, high-profile court cases, and a complete loss of confidence in your brand.

              And though it’s less likely to be a life-or-death matter, automotive brands need to be vigilant about ensuring the cybersecurity of their vehicles and data ecosystems. Digital assistance or companionship, subscriptions, services, integrated payment solutions and ecosystem services (e.g. via wearable health devices, smartphones, etc.) will typically require some degree of data sharing. This opens the door for personalization and seamlessly convenient experiences, but it’s not without its risks. No brand wants to be the next one to appear in a high-profile data leak story and risk losing the hard-earned trust of its customers.

              Software is the key to safe, secure, and satisfying experiences

              So what’s the key to accelerating innovation cycles and customer satisfaction without compromising on safety and data security?

              The answer lies in your software strategy. After all, software is at the heart of assisted and autonomous driving systems, it drives immersive and engaging digital experiences through infotainment systems and more, and it can be the key to ensuring the security of personal data and the identification and elimination of sophisticated cybersecurity threats. The right software strategy and architecture (i.e. a simplified one) can also provide you with greater flexibility during times of supply chain instability, meaning you can maintain product availability while your competition potentially suffers. As many of us learned during the pandemic, simply making sure your cars are available to potential buyers can be the biggest advantage of all.

              Capgemini Research Institute: The Art of Software

              But the stakes are too high with software and the task of transforming into a software company is too big to go it alone. Here are three ways automotive companies can get their transformation right.

              1. Partner up to boost software capabilities

              Software-driven transformation is a broad and deep-reaching process, which can encompass upskilling your existing team, building new capabilities, and finding the right balance between maintaining your existing digital products and developing new ones. This is a huge undertaking, and so it makes sense to partner up with automotive software specialists and engineers who can share and instill industry best practices, build dedicated software factories for you, or support you in maintaining existing products or developing new ones.

              2. Use cloud, virtualization, and AI to achieve more

              Cloud and AI can be used to process and analyze the high volumes of data produced during autonomous driving system development and testing, to virtualize ECUs, and to support data spaces and service ecosystems. These technologies, combined with the suite of automotive-specific accelerators being built by hyperscalers today, can supercharge your innovation and product development cycles, enabling you to get to market faster with new products and services, while keeping your – and your customers’ – valuable data secure. 

              3. Look for external inspiration

              Automotive companies can’t be everything to everybody. It’s difficult (impossible?) to develop an infotainment UX that rivals that of smartphone makers like Apple and Google if it’s not your core business. Likewise, you won’t suddenly create ‘killer’ content and entertainment options if you’re just starting out. Instead, partner up with startups and niche players in differentiating domains and focus on the bigger picture.

              The road ahead is filled with complexity and exciting developments. And yet, for all the focus on new technology, there are still large groups of customers who care little for new tech, and who continue to value practicality, build quality, and affordability above all else. How organizations address these oft-divergent customer desires within their product portfolio will be a challenge for many ‘traditional’ OEMs.

              What we can say with confidence is that mobility experiences of the future – whether they’re autonomous or human-driven – must be satisfying, safe, and secure. Automotive companies must be quick to give their customers what they want. Check out our perspective on software in automotive to learn more. 

              Software-driven mobility

              Bringing together the strengths of Capgemini in one offer

              Author

              Alexandre Audoin

              Executive Vice President, Global Head of Industries, Sales & Portfolio, Capgemini Engineering
              Alexandre Audoin led Capgemini and Capgemini Engineering’s global automotive industry practice for three years. Since July 2024, Alexandre has been Capgemini Engineering’s Global Head of Industries, Sales & Portfolio, with a special focus on the creation of Intelligent Industry, helping clients master the end-to-end software-driven transformation and do business in a new way through technologies like 5G, edge computing, artificial intelligence (AI), and the internet of things (IoT).

                5G Hybrid: promising seamless coverage

                Cédric Bourrely
                Jan 5, 2024

                As the rollout of public and private 5G networks gains momentum in the consumer and industrial telecom markets, the convergence of terrestrial and space-based resources continues to become increasingly important for many stakeholders. With the emergence of “New Space”, the democratization of satellite access, the implementation of common standards, and widespread participation by leading tech developers, network hybridization is one of the key trends in today’s connectivity market.

                Until now, satellite network performance has been restricted to applications that do not require high data rates or low latency. Today, with the deployment of satellite constellations in Low Earth Orbit (LEO), we can now consider usage and data flows that are compatible with terrestrial 5G. 

                By combining the power of terrestrial networks with low-orbit constellation flexibility, hybrid 5G unlocks new opportunities for businesses, paving the way for 5G NTN (non-terrestrial networks). 

                Hybrid networks can be used for two key objectives: 

                • Extend operator network coverage for private and business users.   
                • Create connectivity bubbles in the industrial or security sectors, for example for tracking mobile assets.   

                Social challenges of hybridization

                Digital access has become a fundamental necessity for citizens as well as companies; white zones, with zero connectivity, still represented 2% of the French population in 2023, mostly in remote countryside or mountain areas. At the same time, mobile and Internet coverage remains very limited in “gray zones”, which today cover 38% of the French population.

                Today, satellite is the only possible solution for maintaining regional balance. Deploying fiber or installing 4G or 5G antennas in less densely populated regions is not an economical or sustainable alternative. The obvious solution? Develop hybrid coverage, using complementary terrestrial and space networks, for “seamless” broadband connectivity. 

                A solution for public safety issues 

                Combining terrestrial and non-terrestrial networks also offers a viable solution to problems related to the safety of both people and assets. First and foremost, there are connectivity bubbles (or “tactical bubbles”) in defense and public security.  

                In case of fire, flood or earthquake, satellites can be deployed when terrestrial networks are cut off or saturated. These connectivity bubbles deployed on land support in-situ operations. Using satellites, they restore links with the outside world, beyond the affected area. 

                A technological response to contemporary industrial issues 

                The same logic is applicable to industrial activities operating in remote areas (offshore wind farms, photovoltaics in the countryside, dams in the mountains…), where terrestrial means of communication are either prohibited or technologically complicated.   

                There are also cases where supply chain players are required to monitor mobile assets. Satellites offer a global, continuous and cost-competitive means of tracking assets across a multimodal supply chain (sea, rail, air).  

                Critical infrastructure and operations are key areas for using hybrid network technologies. This is especially true with the ramp-up of 5G deployments as a replacement for obsolete TETRA technologies.

                Complex assembly in a divided ecosystem  

                The implementation of hybrid networks has led to transformations throughout the telecom value chain. 

                Operators and companies will require a thorough understanding of new technical concepts, from a wide range of stakeholders, to ensure end-to-end implementation.  

                First of all, they need to understand how networks will be interconnected: what kind of architectures? What are the physical links between network cores? What types of antennas?   

                How will network load shift between devices and radio equipment (from ground relay antennas to space? through direct links from equipment to space?), and how can 5G’s flagship features (network slicing, MEC, etc.) be used to fully leverage these hybrid setups?

                Next, we must consider dependencies on the chip and terminal industry: what kind of connectivity roadmap? What functionalities are required for what performance? What degree of sovereignty in networks and equipment supplies?   

                Network extension is a major issue for telecom operators, currently unaccustomed to interconnecting their networks with the space industry. There are numerous aspects to be considered (roaming agreements, new network architectures, equipment certification), which need to be managed meticulously to ensure quality services for both private and business clients. Finally, we need to find viable economic and ecological models to ensure virtuous, profitable and beneficial innovation.  

Just the start of the story: the importance of experimentation for a clear understanding

Against this challenging backdrop, Capgemini and the European Space Agency (ESA) are collaborating on hybrid networks. Capgemini’s 5G Lab in Paris and ESA’s 5G/6G Hub in Oxford (UK) have been interconnected via low-Earth-orbit satellite networks.

This 5G satellite initiative aims to test, from 2024, the technical building blocks and the value chain linking terrestrial and satellite 5G. The objective is to highlight the possible uses and the operational feasibility of this hybridization.

The possibilities offered by hybrid networks are enormous, and will be fully unlocked as technological solutions, standards, and business models mature. The accelerating pace of the market and its key trends, combined with the maturity of 5G networks, make this a major technological challenge that needs to be addressed now.

                TelcoInsights is a series of posts about the latest trends and opportunities in the telecommunications industry – powered by a community of global industry experts and thought leaders.

                Meet the authors

                Cédric Bourrely

                Expert in Digital Transformation and Innovation

                Patrice Duboé

                CTO Global Aerospace and Defense, CTIO South and Central Europe
                Patrice Duboé has been working in innovation and technology for more than 20 years. He leads innovation and technology teams to deploy innovation at scale for global corporations and clients, with key partners and emerging startups.

                  The chiplet revolution

                  François Babin
                  4 Jan 2024
                  capgemini-engineering

                  Transforming the semiconductor landscape and creating unprecedented opportunities

                  The semiconductor industry is standing at the edge of a profound transformation, thanks to the advent of a game-changing technology: chiplets.

                  Throughout its history, the semiconductor industry has pursued relentless integration and miniaturization. However, the escalating costs and complexities associated with cutting-edge Integrated Circuits (ICs) on advanced semiconductor technology have led to a revolutionary alternative approach: chiplets.

Most contemporary chips are designed as a monolithic SoC (system-on-chip), integrating all essential functions (processor cores, domain-specific hardware accelerators, memory, and interfaces) into a single die; that is, everything is built into one integrated circuit on a single piece of semiconductor.

Chiplets are a game-changer. A chiplet is a self-contained semiconductor die that, when combined with other dies through advanced packaging techniques, forms a complex integrated circuit comparable to a monolithic one. This modular approach enhances scalability, cost-efficiency, and performance. It also enables the integration of diverse functions, such as general-purpose processing, domain-specific processing, and memory, into a single system, overcoming some limitations of traditional monolithic designs.

                  The chiplet approach not only addresses the challenges of rising costs and complexities but also unlocks unparalleled flexibility. Heterogeneous chiplet designs enable tailored solutions for specific applications or market segments. Solution providers can modify or add relevant chiplets without disrupting the overall system, resulting in reduced development costs and faster time-to-market, as redesign efforts only affect the package or additional domain-specific dies, not the entire chip.
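One concrete reason smaller dies cut manufacturing cost is yield: the probability that a die is defect-free falls steeply with its area. Here is a back-of-the-envelope sketch using the classic Poisson yield model; the defect density and die sizes are illustrative assumptions, not figures from the article.

```python
import math

def poisson_yield(area_mm2: float, d0_per_cm2: float) -> float:
    """Classic Poisson yield model: Y = exp(-A * D0), with A converted to cm^2."""
    return math.exp(-(area_mm2 / 100.0) * d0_per_cm2)

D0 = 0.1  # assumed defect density (defects/cm^2) -- illustrative only

# One large monolithic die vs. four small chiplets of the same total area.
mono_area, chiplet_area, n_chiplets = 800.0, 200.0, 4

mono_yield = poisson_yield(mono_area, D0)        # ~44.9%
chiplet_yield = poisson_yield(chiplet_area, D0)  # ~81.9%

# Silicon consumed per good product (a rough proxy for cost, ignoring
# packaging overhead). Chiplets are tested *before* packaging, so a
# defect only wastes one small die ("known good die").
mono_cost = mono_area / mono_yield                        # ~1781 mm^2
chiplet_cost = n_chiplets * chiplet_area / chiplet_yield  # ~977 mm^2

print(f"monolithic: {mono_cost:.0f} mm^2 of silicon per good chip")
print(f"chiplets:   {chiplet_cost:.0f} mm^2 of silicon per good assembly")
```

In this toy example, known-good-die testing roughly halves the silicon consumed per good product, which is the economic intuition behind the chiplet approach.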

Crucial challenges remain in the chiplet domain, such as power and thermal management, which require effective multi-vendor support across all integrated chiplets. Standardization of interfaces and testing will also be vital to ensure seamless integration; notably, organizations such as the Open Compute Project and UCIe (Universal Chiplet Interconnect Express) have already released open specifications for chiplet interconnects.

Semiconductor giants such as Intel, Nvidia, and AMD have been quick to adopt chiplet technology, successfully demonstrating its viability in manufacturing, testing, and packaging. As chiplet adoption gains momentum, an ecosystem of suppliers is developing to serve its needs in areas such as packaging and thermal management. This will facilitate more widespread implementation across the industry, broadening adoption and reducing over-reliance on a few major players.

                  The growing popularity of chiplet designs has sparked interest across the entire semiconductor value chain, including Intellectual Property (IP) and Electronic Design Automation (EDA) vendors.

Beyond the leading semiconductor companies, the chiplet approach presents opportunities for design houses and semiconductor service providers like Capgemini. Collaboratively developed general-purpose chiplet dies can cater to a range of vertical applications, for example serving a consortium of automotive companies pursuing in-car digital services. Additionally, domain-specific chiplets or custom dies can be tailored to meet specific requirements.

                  In conclusion, chiplets represent a flexible, adaptable, and cost-effective alternative to traditional monolithic designs. With its potential to revolutionize chip design, packaging, and integration, the chiplet paradigm is poised to redefine the semiconductor landscape, driving innovation and efficiency across the industry.

                  Author

                  François Babin

                  Engineering Unit Director
                  François Babin currently leads the Silicon Engineering Center of Excellence at Capgemini, supervising a global team of SMEs, as well as the VLSI France team. He is passionate about silicon technology and the many achievements made possible by this magical world. A graduate of the Institut des Mines-Telecom Atlantique and holder of an Executive MBA from Toulouse Business School, François has spent his entire career in the world of semiconductors and embedded electronics, in both hardware and software domains, where he has acquired extensive experience and a global vision in these fields.

                    Edge AI that packs a punch

                    David Hughes
                    4 Jan 2024
                    capgemini-engineering

                    How we used boxing techniques to demo a promising new deep learning technology

                    “I fight for perfection”

                    – Mike Tyson

                    “Do you achieve it?”

                    – Charlie Rose

                    “Nah! No one does, but we aim for it…”

                    – Mike Tyson

                    Put simply, deep learning is a type of machine learning that aims to mimic the way the human brain works to recognize patterns and make decisions. Deep learning on mobile and edge devices, like smart wearables with limited computational resources, can be challenging. However, running machine learning models on edge devices and keeping the data local has advantages for data privacy, sustainability and latency.

With one of our staff being a keen boxer, we took on the challenge of working out what types of punches were being thrown, under the following constraints:

• Real-time feedback
• Minimal hardware
• Low-powered devices

                    How does it work?

Capgemini developed an end-to-end demonstrator, where data was collected using a custom iPhone and Apple Watch application, allowing sensor readings to be automatically labeled. A hybrid deep learning model was built to accurately classify each punch type, then converted to an optimized form that runs efficiently on low-powered edge devices.

The demo streams raw sensor data from the watch to a display, along with the punch classifications, as close to real time as the network connectivity allows. In this case, we’re using a single Apple Watch on the left wrist to generate sensor data (specifically accelerometer, gyroscope, and orientation data). This data is sent to an AI model that determines which types of punches are being thrown by either hand.
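The post doesn’t publish the pipeline, but a minimal sketch of the kind of windowing such a classifier typically needs might look like the following; the sample rate, window length, channel layout, and punch classes are assumptions for illustration, not details from the demo.

```python
import numpy as np

# Assumed: 6 channels (accel x/y/z, gyro x/y/z) sampled at ~100 Hz.
SAMPLE_RATE_HZ = 100
WINDOW_S = 1.0   # one-second sliding window (illustrative)
STRIDE_S = 0.25  # 75% overlap between consecutive windows

def sliding_windows(stream: np.ndarray) -> np.ndarray:
    """Cut a (n_samples, 6) sensor stream into overlapping windows."""
    win = int(WINDOW_S * SAMPLE_RATE_HZ)
    stride = int(STRIDE_S * SAMPLE_RATE_HZ)
    starts = range(0, len(stream) - win + 1, stride)
    return np.stack([stream[s:s + win] for s in starts])

# Each window is normalized and handed to the on-device classifier,
# which returns one of the punch classes (e.g. jab, cross, hook, uppercut).
```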

When we demonstrate this to attendees at various events, the reaction is often “Wait – how is that even possible?!” Indeed, how do you know what the right hand is doing with a watch strapped to the left wrist? While it’s certainly easier to track the lefts, boxing is a whole-body sport – there are small but characteristic movements that happen in response to throwing a punch. These movements are what the ML model detects and uses to classify the type of punch.

In a world where seemingly everything is smart and sensors are everywhere, this may seem like an artificial constraint, but often you can’t put sensors right where you want them – for example, due to a harsh industrial environment or because they would be too cumbersome or intrusive for the wearer.

To run on low-powered edge devices and give near real-time results, the machine learning model had to be as lightweight and efficient as possible. Through careful selection of the model architecture and the optimization techniques discussed in this blog post on model optimization, we produced an AI model that could run inference in real time on a low-end smartphone.
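The post leaves the optimization details to a separate article, so the toolchain below is an assumption, not necessarily what the team used: post-training quantization with TensorFlow Lite is one common way to shrink a trained model for this kind of deployment (the model file names are hypothetical).

```python
import tensorflow as tf

# Assumed: a trained Keras punch classifier saved to disk (hypothetical file).
model = tf.keras.models.load_model("punch_classifier.keras")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Post-training quantization: weights are stored in reduced precision,
# shrinking the model and speeding up inference on low-powered devices.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("punch_classifier.tflite", "wb") as f:
    f.write(tflite_model)
```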

                    Applications

Our technology has potential applications across many domains. For example, you could extend this application to give feedback to a novice boxer and help them avoid common mistakes – the minimal hardware would make this a very portable solution. For patient monitoring, perhaps when managing chronic illness, privacy is crucial, and minimizing the number of sensors can help ensure compliance while keeping solutions minimally invasive and cost-effective.

                    In industrial settings, being able to classify in real time close to where data is being generated allows for rapid intervention. It can also help to reduce the cost and associated carbon emissions from the transfer of large volumes of data.

                    In disconnected applications, like drone operation in remote locations, this approach can be used to improve autonomy, for example allowing these drones to locate a safe place to land in an emergency, through on-board real-time video analysis.

                    Conclusion

                    Capgemini is exploring the applications of this technology in sports equipment, but we believe that this is just the beginning; this technology certainly isn’t limited to the complex movements of the ‘sweet science’. Whether based on incoming sensor, video or other data, being able to analyze inputs in real time on low powered devices has a crucial part to play in unleashing the potential of intelligent products and services.

Want to see it in action? Watch our boxing demo video.

                    Interested in finding out more? Take a look at our Intelligent Products and Services offer and follow David on LinkedIn.

                    Author

                    David Hughes

                    Head of Technical Presales, Capgemini Engineering Hybrid Intelligence
                    David has been working to help R&D organizations appropriately adopt emerging approaches to data and AI since 2004. He has worked across multiple domains to help deliver cutting edge projects and innovative digital services.