
How should organizations respond to NIST’s announcement of the first batch of quantum-resistant cryptographic algorithms?

Jérôme Desbonnet
21 Sep 2022

Crypto agility could hold the key to being equipped to adapt to, mitigate, and handle any security challenges arising from the vulnerabilities of cryptosystems in the post-quantum era.

The premise of quantum threat

Quantum computers promise the potential to solve complex problems considered intractable for classical computers. The power of quantum computers comes from the usage of quantum principles to solve computation problems. The anticipated applications are in the domains of optimization, simulation, machine learning, solving differential equations, and more. These computers are expected to have the potential to solve some major challenges in industry and society and to aid in the discovery of new drugs, development of new materials for batteries and solar systems, optimization of supply chains and production lines, and more.

However, this great power comes with a great threat, which is the potential ability of quantum computers to crack some of the major public key cryptographic systems in use today. Actors with malicious intent could potentially break the security of enterprise applications, disturb or even damage public services and utility infrastructure, disrupt financial transactions, and compromise personal data.

Increased global attention to post-quantum security and key announcements

Considering the seriousness of the threat, industries, governments, and standard bodies have started working towards defining systems that will be secure and resistant to the threats posed by the arrival of large, powerful quantum computers. These are the post-quantum cryptographic systems. 

But today’s quantum computers are still rudimentary in their capabilities. It’s estimated by industry experts surveyed by the World Economic Forum that it will take ten years or more for the development of quantum computers powerful enough to break the current security algorithms. The first question that comes to our mind is – why the urgency and so much noise around the topic? 

One of the key reasons is that actors with malicious intent could capture and store the encrypted data flowing over the Internet and could decrypt this stored data when large-scale quantum computers become available. This “store now and decrypt later” strategy has become a serious and imminent threat, especially to systems carrying data that has a valid life beyond the anticipated ten years. These systems need to be upgraded now with quantum-safe cryptographic components.

Considering the vast nature of this challenge, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has initiated the process of post-quantum cryptography (PQC) standardization to select public-key cryptographic algorithms to protect information even after the large-scale availability of quantum computers. According to the Capgemini Research Institute’s report published in April 2022, a large number of organizations (58%) are waiting for standards to emerge before prioritizing quantum security as part of their investments. 

But some important global developments in the recent past have increased the focus on quantum technologies and the need for mitigating the associated risks to vulnerable cryptographic systems. They are:

  1. The issuance of the US National Security Memorandum, which highlighted the need to maintain a competitive advantage in quantum technologies and to mitigate the risks to a nation’s cyber, economic, and national security;
  2. The commitment to intensify and elevate cooperation among G7 members and partner countries to deploy quantum-resistant cryptography to secure interoperability between ICT systems;
  3. NIST’s announcement of the selection of the first four quantum-resistant cryptography algorithms;
  4. The release of requirements for future quantum-resistant (QR) algorithms for National Security Systems by the National Security Agency (NSA), with 2035 as the adoption deadline.

The four selected algorithms are expected to become part of the highly anticipated NIST standards for post-quantum cryptography in a couple of years, likely in 2024. As the announcement makes clear, these algorithms are designed for two main encryption tasks – the first is general encryption to protect information exchanged over public networks, and the second is digital signatures to authenticate/verify identities. Our blog, “NIST announces four post-quantum crypto finalists. What happened?” provides more information.

So, what should an organization do now? 

Should they immediately start implementing the algorithms and replace the vulnerable components in their IT and OT systems, continue to wait until the official publication of international standards in the next two years, or wait until the threat becomes a reality when these powerful quantum computers are operational? 

Well, in our view, the answer lies somewhere in between these options. While continuing to wait may not be the best choice an organization could make, especially considering the store-now-and-decrypt-later risks, going ahead with a full-blown project to migrate all systems to quantum-safe cryptography is neither cost-effective nor wise. So, what is the recommended call to action?

Crypto agility could hold the key

The answer, in our view, is crypto agility for post-quantum and beyond. It is the proactive design of information security protocols and standards in such a way that they can support multiple cryptographic primitives and algorithms at the same time, with the primary goal of enabling rapid adaptations of new cryptographic primitives and algorithms without making disruptive changes to the system’s infrastructure. 
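To make this concrete, the sketch below (in Python, with hypothetical class and registry names rather than any specific product API) shows what crypto agility can look like in application code: callers depend on an abstract signing interface, and the concrete algorithm is selected by configuration, so a classical scheme can later be swapped for a post-quantum one without touching the calling code.

```python
# Illustration of crypto agility: application code depends on an abstract signing
# interface, and the concrete algorithm is chosen by configuration, so a classical
# scheme can later be swapped for a post-quantum one without changing the callers.
# Class and registry names here are hypothetical, not a specific product API.
from abc import ABC, abstractmethod


class Signer(ABC):
    @abstractmethod
    def sign(self, message: bytes) -> bytes: ...

    @abstractmethod
    def verify(self, message: bytes, signature: bytes) -> bool: ...


class EcdsaSigner(Signer):
    """Today's classical algorithm (real implementation omitted)."""
    def sign(self, message: bytes) -> bytes:
        raise NotImplementedError

    def verify(self, message: bytes, signature: bytes) -> bool:
        raise NotImplementedError


class DilithiumSigner(Signer):
    """A post-quantum replacement (real implementation omitted)."""
    def sign(self, message: bytes) -> bytes:
        raise NotImplementedError

    def verify(self, message: bytes, signature: bytes) -> bool:
        raise NotImplementedError


# The algorithm choice lives in configuration, not in application code.
SIGNERS = {"ecdsa-p256": EcdsaSigner, "dilithium": DilithiumSigner}


def get_signer(config: dict) -> Signer:
    return SIGNERS[config["signature_algorithm"]]()


signer = get_signer({"signature_algorithm": "dilithium"})  # swap by changing one setting
```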

If organizations are to be equipped to rapidly adapt to, mitigate, and handle any security challenges arising from cryptosystem vulnerabilities in the post-quantum era and beyond, in the most optimized manner, they will need to put certain processes and systems in place.

We would recommend the following:

  • The first step is for the leadership to initiate a program with clearly defined objectives of achieving post-quantum crypto agility and to establish the collaboration teams within the organization and with the external ecosystem for required solutions, skills, and capabilities. It is also important to start educating key personnel of the organization on PQC and its implications.
  • Initiate a process to gather information across the organization with details of all the systems and applications that are using public-key cryptography and details of the most sensitive and critical datasets (both data-at-rest and data-in-motion) to be protected for long time periods. The factors affecting the whole process are multi-dimensional (which needs separate discussion).
  • Start experimenting with the new algorithms announced by NIST to get an understanding of the impact and challenges involved in the quantum-safe migration path (a minimal example follows this list). Start building an initial framework for the target state architecture of the overall system.
  • Prepare a roadmap for post-quantum safe migration based on the multi-dimensional analysis and prioritization of datasets requiring protection and systems and applications using vulnerable cryptographic systems. 
  • Perform further analysis on the interdependencies of systems to decide the sequence of migration and initiate the process of identifying and evaluating sources for components, solutions, and services to implement the migration plan, not forgetting to develop a plan for testing and validation of the successful implementation of the migration.
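For the experimentation step above, one practical starting point is the open-source liboqs library and its Python bindings, which expose an oqs module. The sketch below assumes those bindings and the Kyber512 parameter set they exposed at the time of writing; algorithm names and the exact API can vary between releases, so treat it as illustrative.

```python
# First PQC experiment using the Open Quantum Safe Python bindings (the "oqs" module
# from liboqs-python). Algorithm names and the exact API can differ between liboqs
# releases, so treat this as illustrative rather than production code.
import oqs

kem_alg = "Kyber512"  # CRYSTALS-Kyber, one of the four NIST selections

with oqs.KeyEncapsulation(kem_alg) as receiver:
    public_key = receiver.generate_keypair()

    with oqs.KeyEncapsulation(kem_alg) as sender:
        # The sender encapsulates a shared secret against the receiver's public key.
        ciphertext, secret_at_sender = sender.encap_secret(public_key)

    # The receiver recovers the same shared secret from the ciphertext.
    secret_at_receiver = receiver.decap_secret(ciphertext)

assert secret_at_sender == secret_at_receiver
print(f"{kem_alg}: established a {len(secret_at_receiver)}-byte shared secret")
```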

Organizations following these steps will be better positioned to handle the PQC challenge effectively. Not adopting such an approach could lead to issues such as:

  • Execution of migration projects in silos leading to integration challenges
  • Breaking the functionality of systems due to partial migration of components
  • Higher costs than necessary, and
  • Increased complexity and unpredictable refactoring every time we discover something new to be addressed.

These issues can lead to reduced confidence in the migration, and the whole process can be quite challenging, expensive, time consuming, and risky, depending on the complexity and size of the systems in the organization. So, we recommend that our clients start the process sooner rather than later, at least to understand where they stand in their journey and to estimate the potential size of the migration in terms of both time and cost. In summary, we believe organizations should not wait; they should start now, taking steps to achieve critical crypto agility across their business.

Authors: Jérôme Desbonnet and Gireesh Kumar Neelakantaiah

Jérôme Desbonnet

VP – Cybersecurity CTIO – Chief Cybersecurity Architect, CIS & I&D GBLs, Capgemini
As VP, Cybersecurity CTIO, Insights & Data, Jérôme creates security architecture models. Jérôme plans and executes significant security programs to ensure that Capgemini’s clients are well protected.

Gireesh Kumar Neelakantaiah

Global Strategy, Capgemini’s Quantum Lab
Leading go-to-market initiatives for the Quantum Lab, including solution development, strategic planning, business and commercial model innovation, and ecosystem partner and IP licensing management; Skilled in Quantum computing (IBM Qiskit), Data science, AI/ML/Deep learning, Digital manufacturing & Industrial IoT, Cloud computing.

    Microsoft Cloud for Sovereignty: Maintain control over strategic digital assets

    Sjoukje Zaal
    20 Sep 2022

    Governments and organizations are focusing on digital transformation to fundamentally transform the way they operate and deliver services to their customers. Cloud adoption has increased tremendously in the last couple of years, also due to the COVID-19 pandemic. But as they move to the cloud, organizations want to maintain the same level of control over their IT resources as they have in their data centers. Concerns about cloud sovereignty, which include data, operational, and technical issues, are not new and have been increasing because of rising geopolitical tensions, changing data and privacy laws in different countries, the dominant role of cloud players concentrated in a few regions, and the lessons learned through the pandemic. As a result, governments and organizations are reevaluating their external exposure and looking for ways to maintain physical and digital control over strategic assets.

    To address these concerns, Microsoft has released a new solution called Microsoft Cloud for Sovereignty. This solution is aimed at meeting the compliance, security, and policy requirements that governments and organizations are facing. With the addition of Microsoft Cloud for Sovereignty, governments and organizations will have more control over their data, and it will increase the transparency of the cloud’s operations and governance processes.
    Microsoft Cloud for Sovereignty is designed to be a partner-led solution, where partners will play a vital role in delivering the solutions. One of Microsoft’s European Cloud principles is that Microsoft will provide cloud offerings that meet European government sovereign needs in partnership with local trusted technology providers. Also, Capgemini and Orange have been working closely together with Microsoft, and will start supporting clients in preparing for their migration by the end of 2022.

    With Microsoft Cloud for Sovereignty, Microsoft is focusing on the following pillars:

    Data residency

    Data residency is the requirement that data must be stored within a specific geographic boundary, such as a national boundary. Azure offers data residency for many services in over 35 countries with over 60 different data center regions worldwide (and growing). This enables residency options for Azure, Microsoft 365, and Dynamics 365, where many clients can store and process their data locally. By implementing policies, clients can meet their regulatory requirements to store their applications and data in the required geographical boundary. For Europe, the forthcoming EU Data Boundary will ensure that data will be stored and processed in the EU and European Free Trade Association.
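    As an illustration of the kind of policy involved, the snippet below sketches a simplified “allowed locations” rule, expressed here as a Python dictionary. The real built-in Azure Policy definition has additional conditions and parameters; this only shows the general shape of such a rule.

```python
# Simplified sketch of an "allowed locations" rule that denies resources deployed
# outside an approved set of regions. The real built-in Azure Policy has extra
# conditions and parameters; this only shows the general shape of such a rule.
allowed_locations_rule = {
    "if": {
        "not": {
            "field": "location",
            "in": ["westeurope", "northeurope"],  # example EU-only boundary
        }
    },
    "then": {"effect": "deny"},
}

print(allowed_locations_rule["then"]["effect"])  # -> "deny"
```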

    Sovereign controls

    In addition to the specific regions and geographic boundaries where applications and data are stored and processed, Microsoft also offers a set of sovereign controls that provide additional layers to protect and encrypt sensitive data. These controls span the entire Microsoft cloud: SaaS offerings such as Power Platform, Microsoft 365, and Dynamics 365, as well as the cloud infrastructure and the PaaS services that are available in Azure.

    The following offerings can be leveraged by clients for sovereign protection:

    • Azure Confidential Computing: Azure confidential computing consists of confidential virtual machines and confidential containers. This enables data to be encrypted not only at rest but also in use. Specialized hardware is used to create isolated and encrypted memory, which is called a trusted execution environment (TEE). TEEs guarantee that data and code being processed cannot be accessed from outside the TEE. Client-owned encryption keys are released directly from a managed HSM (hardware security module) into the TEE. The client’s keys remain protected even while in use, ensuring that data is encrypted in use, in transit, and at rest.
    • Double Key Encryption (DKE): DKE uses two keys together to access protected content. One key is stored in Azure and the other key is held by the client. It comes with Microsoft 365 E5, and it is intended for the most sensitive data that is subject to the strictest protection requirements.
    • Customer Lockbox: Customer Lockbox ensures that Microsoft can’t access client data and content without explicit approval from the client during service operations. Customer Lockbox is offered for Microsoft 365, Microsoft Azure, Power Platform, and Dynamics 365.
    • Azure Arc: Azure Arc extends the Azure services, management, and governance features and capabilities to run across data centers, at the edge, and in multicloud environments. Clients can centrally manage a wide range of resources, including Windows and Linux servers, SQL Server, Kubernetes clusters, and other Azure services. Virtual machine lifecycle management can be performed from a central location. Governance and compliance standards can be met by implementing Azure Policy across these different resources. And services such as Azure Monitor and Microsoft Defender for Cloud can be enrolled as well.
    • Sovereign Landing Zone: Microsoft Cloud for Sovereignty will include a Sovereign Landing Zone. This landing zone is built upon the enterprise scale Azure Landing Zone and will make deployments automatable, customizable, repeatable, and consistent. This landing zone will extend into Azure Information Protection, which also enables policy and labeling for access control and protection on email and document data. Clients can also define custom policies to meet specific industry and regulatory requirements.

    Governance and transparency

    The Government Security Program (GSP) provides participants from over 45 countries and international organizations, represented by more than 90 different agencies, with the confidential security information and resources they need to trust Microsoft’s products and services. These participants have access to five globally distributed Transparency Centers, receive access to source code, and can engage on technical content about Microsoft’s products and services. Microsoft Cloud for Sovereignty will expand GSP to increase cloud transparency, starting with key Azure infrastructure components.

    Wrap up

    In this article I wanted to focus on what Microsoft Cloud for Sovereignty has to offer for clients who want to leverage the Microsoft cloud for their digital transformation journey, but also want to maintain the same level of control over their IT resources as they have in their own data centers. Cloud adoption has accelerated enormously in the last couple of years, which also makes cloud sovereignty much more important for governments and organizations. Microsoft offers the tools, processes, and transparency to partners and clients to support the increasing sovereignty requirements that clients have on their transformation journey.

    Due to these increasing sovereignty requirements, Capgemini has conducted research to look deeper into organizational awareness and key priorities when it comes to cloud sovereignty and the role it plays in overall cloud strategy. We have released a whitepaper with our findings, which can be downloaded here.

    At Capgemini, we have a lot of experience in implementing cloud solutions across all industries. If you would like more information about how we do this for our clients, you can contact me on LinkedIn or Twitter.

    You can also read my other articles here.

    Sjoukje Zaal

    Chief Technology Officer and AI Lead at Capgemini
    Sjoukje Zaal is head of the Microsoft Cloud Center of Excellence at Capgemini, Microsoft Regional Director and Microsoft AI & Azure MVP with over 20 years of experience providing architecture, development, consultancy, and design expertise. She is the regional head of the architecture community in the Netherlands. She loves to share her knowledge and is active in the Microsoft community as a co-founder of Lowlands.community. She is director of the Global AI Community and organizer of Azure Lowlands. Sjoukje is an international speaker and involved in organizing many events. She wrote several books and writes blogs.

      Learning from digital natives

      Zenyk Matchyshyn
      17 August 2022

      Today’s market leaders are digital-native companies. They were born digital. But what makes them so successful, and can your business compete with them?

      Digital native companies are entering every industry. Many of them did not exist 20 years ago, yet today they are among the most significant engines of change in our society. They do not need digital transformation initiatives because they were born digital. Airbnb launched at a time when large hotel brands were dominating the accommodation industry. Everybody was betting against it, but through a combination of a disruptive business model and a focus on experience design, Airbnb has become a household brand and the number one choice for many travelers and holiday-goers.

      When the COVID-19 pandemic hit, Moderna, a company that produced vaccines to help us combat the virus, was able to design a vaccine in just two days. Moderna describes itself first as a software company.

      These businesses share common themes. They’re resilient, disruptive, and often defy the odds before achieving great success. They’re also digital-native companies, but what does that mean?

      What does it mean to be a digital-native company?

      All companies, old and new, have come to rely on software. Some companies that have been around for decades might have a large team of software engineers and a software portfolio that dwarfs their digital-native competitors. So, what is it about digital natives that sets them apart?

      Tesla wasn’t the first automotive company to write its own software. Other automakers developed software too, and at a much larger scale. But Tesla had something these other automotive companies did not – a digital-native culture. Companies that are born digital tend to have a different approach when it comes to problem-solving and adaptability. Being digitally native is about culture, ways of working, and mindset – these elements are hard to replicate for behemoth companies that have been around for decades.

      Culture isn’t just about what a company says. It’s about what it does. The “two-pizza team” approach was introduced at Amazon, which meant that every development team should be small enough to be fed with two pizzas. There were limitations in how effective teams could be as they grew, so the intent was to keep them small, agile, and productive. The most important part was that they should own what they do. They needed to be both small and self-sufficient.

      This type of approach to productivity is what it means to be a digital native, and for non-digital natives, it can be quite a dramatic adjustment – but it’s not impossible.

      Think about products instead of projects

      Another key difference between digital natives and non-digital natives is that digital natives think about building products rather than implementing projects. You figure out what your client needs and then create a product that hopefully fills that need with some measure of success. Then you move on to the next product.

      On the other hand, projects are more focused on requirements, timelines, and resources. The success of a project isn’t just based on how happy a client is with a product, but on the effectiveness of the overall journey, from planning and budgeting to management and execution. It is difficult for non-digital native companies to shift from thinking about projects to thinking about products, but it is possible with the right culture and mindset.

      Agility and flexibility are critical

      Digital natives’ success is not built on having an extensive software portfolio ready for every situation. It took Stripe less than three years to become a $1 billion company, and it is now on track to become a $100 billion company within ten years – all while building products in the highly competitive financial services market, which has existed for a very long time.

      Conclusion

      The best way to develop and grow a digital culture and philosophy is by modeling an organization that’s already a digital native. Capgemini Engineering is ready to assist you in becoming a digital native by sharing our decades-long experience working with startups, including digital native companies.

      Author

      Zenyk Matchyshyn

      Chief Technology Officer, Software Product Engineering
      Zenyk, a seasoned technologist, is dedicated to leveraging the potential of software for positive change. He is passionate about technology, and his expertise extends across multiple industries. Using his interdisciplinary knowledge, Zenyk provides solutions to digital transformation complexities that many industries face. Zenyk has pioneered solutions within emerging technologies and is committed to making a lasting impact on the world through tech innovation.

        Capgemini’s offering towards the headless journey – headstart2headless

        Capgemini
        15 September 2022

        HEADSTART2HEADLESS

        In today’s world, with the prevalence of connected devices and IoT, traditional content management systems, with their coupled content and presentation layers, impede content velocity and the adoption of newer tools and techniques for content presentation.
        Companies are looking for the increased flexibility and scalability that a decoupled architecture can provide. The primary objective of a headless CMS is to deliver a seamless omnichannel experience. There is a constant requirement to change the front end according to how customers want to view content across channels. Organizations need a superior solution architecture that provides the required level of security and encryption for internal users, while content generated outside the organization can be approved and encrypted as needed.

        Digital platforms across organizations have evolved, and there is a need for flexible solutions. Enterprises span mobile sites, apps, conversational interfaces, chatbots, and more. Headless CMS architecture provides a framework that makes it possible for organizations to adapt the front-end layer, while APIs seamlessly connect the content infrastructure to the presentation layer. The world is moving to a product-focused mindset, and firms are looking to rebrand themselves on a regular basis. The content model is evolving from building a single page to providing building blocks for many products. The support required across devices is limitless. This has led to the need for headless CMS.

        With a headless CMS, marketing teams create content within the CMS, and front-end developers retrieve the content through APIs using whichever front-end technologies work best. Marketers, therefore, can author content in one place, while developers can build a variety of presentation layers to suit the company’s – as well as the customer’s – needs and wants. It presents the best of both worlds: the power of a CMS and the flexibility of new front-end technologies.
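        To illustrate the pattern, here is a minimal Python sketch of a front end consuming content over a delivery API. The endpoint and JSON fields are hypothetical placeholders rather than any specific vendor’s API; the point is that the same content item can be rendered differently for each channel.

```python
# Sketch of the headless pattern: content is authored once in the CMS and any front
# end pulls it over a delivery API, deciding presentation itself. The endpoint and
# JSON fields below are hypothetical placeholders, not a specific vendor's API.
import requests

CMS_DELIVERY_API = "https://cms.example.com/api/content"  # placeholder URL


def fetch_article(slug: str) -> dict:
    """Retrieve a structured content item from the headless CMS."""
    response = requests.get(f"{CMS_DELIVERY_API}/articles/{slug}", timeout=10)
    response.raise_for_status()
    return response.json()


def render_for_web(article: dict) -> str:
    # One presentation layer: a web page fragment.
    return f"<h1>{article['title']}</h1><p>{article['body']}</p>"


def render_for_chatbot(article: dict) -> str:
    # Another presentation layer: a short conversational reply from the same content.
    return f"{article['title']}: {article['summary']}"
```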

        Headless vs. Traditional

        Our Solution

        Headless can be achieved in various ways in Adobe Experience Manager (AEM). Our solution, HeadStart2Headless, is capable of supporting these various methodologies based on the specific requirements of the customer. Depending on the existing ecosystem, technology proficiency, and AEM requirements, customers can choose anywhere from a system with the least AEM coupling to one with the most.

        Capgemini’s offering fast-tracks customers to set up and run headless efficiently, propelling their businesses to become more agile and flexible within a very short turnaround time.

        Key Benefits

        Contact us

        To know more about our headless CMS solution, contact: fssbuadobecoe.global@capgemini.com

        Meet our Experts

        Dr Cliff Evans

        Head of Practices, Financial Services Europe
        Interested in the human and engineering challenges from the implementation of new technologies to realise sustainable benefits. Responsible for evolving our technology capabilities and enabling our clients to think and act two steps ahead. Focussed on the Banking and Financial Services Sector which is undergoing unprecedented change, as a consequence of the impact of new technologies and competitors.

        Niyati Srivastava

        Digital Content and Marketing Lead UK and Continental Europe
        Niyati leads content and marketing services for UK and Continental Europe. Her extensive experience developing and scaling GTM offers harnesses data-driven customer experience expertise from across Capgemini for a powerful marketing proposition. Niyati is working on cutting edge solutions for the blended and digitally enhanced realities and business models in Financial Services, working extensively with the C-suite to define and develop strategies with responsibility at their core – data ownership, sustainability, safety and human experience.

          Open APIs – the key transformation enabler for CSPs 

          Abhi Soni
          15 Sep 2022

          Open APIs and Open Digital Architecture lay the foundation for digitization and monetization of new technologies.

          Behind all the industry buzz around digitization, customer experience transformation and 5G monetization, there’s still a lingering issue at the core of Communication Service Providers’ (CSPs’) everyday business that needs attention: Up to 80% of CSPs’ IT budgets are still being spent on system integration and customization1. 

          This leaves limited resources for innovation and actual IT transformation. This also raises larger questions around CSP transformation:

          • Why is the shift in most CSP business models so slow-paced?
          • How do CSPs bridge the gap between traditional IT problems and the latest industry and customer demands?
          • How do CSPs evolve into, and collaborate within, the new ecosystem involving hyperscale digital natives?
          • What are the best ways to capitalize on new technology waves such as 5G, edge computing, and AI?

          Over the last few years, there’s been increased focus from the leading CSPs on open API and open digital architecture (ODA), with the ambition that open API and ODA can be the enablers for their transformation into becoming platform-based, end-to-end service providers.

          Read on to find out if open APIs are the solution to the problems facing CSPs.

          The key challenges facing CSP leaders today

          Let’s look at the key challenge CSPs are still facing: the IT estate of most CSPs is too complex and rigid, comprising monolithic core IT systems and legacy processes and technologies. Laurent Leboucher, Group CTO and SVP, Orange, explains this problem succinctly:

          IT legacy in telco environments very often looks like a Pollock painting. It’s hard to identify through hazy building blocks, and there is almost no loose coupling. Data records are very often duplicated several times, and everything seems to be tied to everything else. This is often the result of many years of silo fragmentations and various attempts to fix those fragmentations with technical projects, which created this entropic technical debt.2

          CSPs’ current IT environment consists of many different application stacks that have been modified over years. These often either have overlapping and redundant functionalities or have gaps in the end-to-end integration of their customer journeys, which operators address through further developments and customizations. The problem of complex IT systems has been further intensified with the inorganic growth the telecom industry has undergone, thus adding a further level of closed architecture to the mix.

          Open APIs and Digital Architecture solve these problems at their source

          The use of open APIs and ODA to connect disparate systems not only protects IT budgets but also goes to the core of how CSPs can address many of their key functions, including:

          • Competing with digital counterparts and catching up with the platform utopia world,
          • Evolving further from zero-touch provisioning, and zero-touch automation to zero-touch partnering, enabling a marketplace with simplified and automated cross-platform play,
          • Taking demand for digital to an omnichannel personalized experience,
          • Evolving business models to best mobilize on the 5G wave,
          • And ultimately, ensuring solid revenue, returns on investment, and faster time to market.

          What to change, what to change to, and how to change

          The three famous questions from Eliyahu M. Goldratt provide a useful roadmap. The challenge of a complex legacy and a siloed IT estate is common to the majority of CSPs, and so is the need to participate in platform-based digital ecosystems. The most important initiative addressing this, the TMF Open API program, was formally launched in 2016. The TMF Open APIs, combined with a component-based architecture like the ODA, are a wise solution for CSPs looking to reduce IT complexity.

          The role of Open Digital Architecture

          Open APIs are specifically designed for functional integrations, while the ODA addresses the challenges of deploying, configuring, and operating in a complex application landscape. Just as the open APIs are considered the de facto standard for telecoms interfaces, the ODA is a component-based architecture that can be viewed as the de facto standard for open digital platforms, providing a consistent way for components to fully interoperate end-to-end across multivendor ecosystems.

          TMF Open APIs, along with the ODA, form a futureproof approach designed with an outside-in perspective. Together they can provide plug-and-play interoperability of components within CSPs’ IT systems (and networks), reduce complexity, enable digitization of customer-facing systems, and cut both the cost of integration and the time to market for new digital services – all while supporting both existing and new digital services and addressing the implementation of B2B2X digital ecosystems, which will be critical for operating and monetizing 5G and edge computing.
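          As a rough illustration of what open API-based integration looks like in practice, the Python sketch below posts a product order to a REST endpoint following the general TMF Open API resource pattern. The base URL is a placeholder and the payload is simplified; the exact path and fields should be taken from the relevant TMF specification.

```python
# Sketch of integrating against a TMF Open API: a plain REST call with a standardized
# resource model instead of a point-to-point custom interface. The base URL is a
# placeholder and the path/payload follow the general TMF product-ordering pattern;
# check the exact TMF specification for the API you target.
import requests

BASE_URL = "https://api.csp.example.com/tmf-api"  # placeholder

order = {
    "description": "Fibre 1 Gbps upgrade",
    "productOrderItem": [
        {
            "id": "1",
            "action": "add",
            "productOffering": {"id": "FIBRE-1G", "name": "Fibre 1 Gbps"},
        }
    ],
}

response = requests.post(
    f"{BASE_URL}/productOrderingManagement/v4/productOrder",
    json=order,
    timeout=30,
)
response.raise_for_status()
print("Order created with id:", response.json().get("id"))
```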


          A global trend toward APIs

          There is a clear trend regarding the use of APIs, as many of the world’s largest service providers, such as Axiata, Bharti Airtel, BT, China Mobile, China Unicom, NTT Group, Orange, Telefónica, and Vodafone, have officially adopted TM Forum’s suite of Open APIs for digital service management. Equally, there is increased interest from suppliers in adopting open APIs, as more than 128 of the world’s leading CSPs and technology ecosystem participants have signed the Open API Manifesto, publicly demonstrating their endorsement of TM Forum’s suite of Open APIs.3

          Like any strategic change, the adoption of open API and ODA does not come without its challenges. The shift to open API and ODA requires a holistic approach to address concerns in these key areas:

          • Shifting to a centralized approach towards integration: CSPs have historically been complex organizations with legacy processes, multi-layered silos, and traditional approaches that take a project-specific view of integrations. The shift to Open API and ODA requires a cultural change to a more centralized integration strategy that puts APIs first and is based on industry standards, repeatable frameworks, and processes.
          • Initial investment and per-API cost: Another concern is proving the return on investment from initial projects and justifying their initial spend, or per-API cost. For open API and ODA to create value for the CSP ecosystem, they need to be approached with a strategic view rather than a tactical, project-by-project view. A well-defined, open API-led integration strategy sets a foundation and builds a repository of integration assets (the TMF Open API library, in the case of CSPs) that are reusable, generate more business value with every project, and drive the per-API cost down.
          • System API reusability: Reusability may be quick to achieve in the UX and process layers, but less so with legacy systems and system APIs, which are not as flexible. However, carriers are now innovating their COTS (commercial off-the-shelf) products as well as their networks, thanks to initiatives such as SDN, promoted by the Open Networking Foundation.
          • Commercial readiness: There is a gap between what carriers are offering and can provide on their immediate, 1–2-year roadmap, and what the OTT players and third-party providers need. What these players need are open APIs that allow them to reach as many companies as possible, and not all CSPs are ready.

          The good news is that CSPs can address most of these challenges by partnering with the right system integrators and suppliers/API aggregators. The right services partner can help assess your existing IT estate and processes, and bring onboard proven governance models, integration design authorities, reusable API libraries, and repeatable models and business cases. This can help reduce the initial effort while increasing adoption and return on investment, helping (re)introduce best practices, and providing the necessary support depending on existing resource capabilities. Therefore, with increasing CSP interest, the next big enabler for open API adoption is suppliers. Large suppliers have traditionally relied on locking in their CSP customers, but it is time to realize that industry fragmentation impedes innovation and the ability to compete, including the adoption of Open APIs.

          Lasting Benefits

          Open API-based integration and open digital architecture enable CSPs’ IT estate to become more agile and resilient, which translates into tangible business benefits:

          • Reduction in time, from concept to cash for new services, as well as the total cost of ownership. APIs significantly reduce the effort and capital involved to integrate with internal and third-party systems.
          • Expansion of the service offering, enabling – in a significantly reduced time frame – the ability to gear up to meet shifting markets.
          • Enablement of the CSP business to quickly innovate, partner, and create mix-and-match products and services. The result is speed, convenience, and innovation.

          In the end, TMF open APIs and ODA are among the most critical weapons a CSP needs in its arsenal today. They can mean the difference between realizing and squandering the opportunity to monetize innovative 5G services. Most importantly, open APIs will help CSPs evolve and establish themselves as end-to-end service providers. To learn more about our Capgemini Digital Telco Connect solution, our reusable assets, and TMF Open API libraries (or just to brainstorm about API-led transformation), contact me.

          TelcoInsights is a series of posts about the latest trends and opportunities in the telecommunications industry – powered by a community of global industry experts and thought leaders.

          Author

          Abhi Soni

          Group Account Executive
          With over 18 years of global experience in the telecommunications industry, Abhishek Soni is a recognized industry expert leading Capgemini’s next-generation offerings, with a strong focus on digital and AI-driven transformation. At the forefront of agentic technologies and platform-led innovation, he spearheads Capgemini’s end-to-end AI solutions and serves as the global industry lead for Salesforce in telecom. He has held key leadership roles across APAC, EMEA, and the UK, and currently leads a portfolio of strategic telecom accounts—delivering transformative outcomes for global clients. His deep expertise spans strategy, consulting, and solution innovation, making him a trusted advisor in shaping the future of communications

            The PULSAR principles of AI-ready data

            James Hinchliffe
            24 Aug 2022

            Does FAIR go far enough to provide AI-readiness? Not quite – but it’s a great start. How can we build on a FAIR data foundation to be truly ready to make good use of AI?

            For many R&D organizations, the desire to do new things with old data often leads to excitement about the potential of artificial intelligence and machine learning meeting the reality of legacy data that isn’t fit for this new purpose. This is often the lightbulb moment where the idea of data management takes off.
            In this article we will explain the six PULSAR principles of AI-ready data and show how FAIR data brings you closer to true AI-readiness.

            P is for Provenanced
            These days, many rich, public data sets are available (like UniProt [1], ChemBL [2] and Open PHACTS [3] in life sciences) that organizations are using to enrich internal data and tackle research problems on a much bigger scale. When machine learning feeds into that work, ensuring that model predictions are reproducible is critical and requires a robust provenance chain showing what data was used to inform a model, where it came from and how it was generated.
            The authors of FAIR anticipated this and accounted for provenance explicitly within the reusability principles, which state that data should be associated with information about its origin and processing. Truly FAIR data automatically covers the ‘provenanced’ principle – that’s a good start!
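            As a minimal illustration (not a formal provenance standard such as W3C PROV), a provenance record attached to a training dataset could look like the Python sketch below, so that a model prediction can later be traced back to where its input data came from and how it was processed.

```python
# Minimal sketch of a provenance record attached to a training dataset. Field names
# and values are illustrative, not a formal provenance standard.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ProvenanceRecord:
    dataset_id: str
    source: str              # e.g. "UniProt release 2022_03"
    retrieved_on: str        # ISO date the snapshot was taken
    processing_steps: List[str] = field(default_factory=list)


record = ProvenanceRecord(
    dataset_id="targets-v7",
    source="UniProt release 2022_03",
    retrieved_on="2022-08-01",
    processing_steps=["filtered to human proteins", "deduplicated by accession"],
)
print(record)
```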

            U is for Unbiased
            There are many well-known stories about biased AI systems causing terrible consequences for real people. Usually, AI systems are biased because they were trained on biased data – often data that contained hidden biases that were not obvious upfront.
            Detecting bias in data is challenging, and FAIR does not have all the answers. But through findability, you can make your search for appropriate input data broad, and through accessibility, you can be more confident that you’ve obtained everything available. Then your data profile is less likely to have blind spots – and FAIR will have helped you to avoid one of AI’s biggest mistakes.
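            A very simple first check, sketched below in Python with made-up figures, compares subgroup representation in a training set against a reference population. It will not catch hidden proxies or label bias, but it flags the obvious skews that findable, accessible data makes easier to spot.

```python
# A very simple first check for one kind of bias: compare subgroup representation in
# a training set against a reference population. This will not catch hidden proxies
# or label bias - it only flags obvious skews. All figures are made up for illustration.
training_counts = {"group_a": 9_200, "group_b": 650, "group_c": 150}
reference_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}  # assumed population

total = sum(training_counts.values())
for group, count in training_counts.items():
    observed = count / total
    expected = reference_share[group]
    flag = "UNDER-REPRESENTED" if observed < 0.5 * expected else "ok"
    print(f"{group}: {observed:.1%} of training data vs {expected:.0%} expected -> {flag}")
```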

            L is for Legal
            Do you, and your AI, have the legal right to use a given data set? For example, it’s fine to collect personal data provided you tell the people you collect it from what you’ll do with it (‘transparent processing’). But AI projects often make secondary use of data, beyond its original research purpose. Are you covered by the original terms of consent?
            One of FAIR’s reusability principles specifically states that human- and machine-readable conditions of reuse should be included in metadata. So, while the machine-readable aspect is probably still a work in progress, at least AI system owners should be able to take an informed view on the appropriateness of truly FAIR data they consume.

            S is for Standardized
            Everyone appreciates that standardization reduces problematic data variability and, while standardization may not enforce all quality aspects, it does prompt data practitioners to consider quality. Of course, some AI projects specifically act on unstructured data, e.g. when processing natural language. Here, standardization of the outputs, rather than the inputs, is the key, for example when concluding that two scientific papers are discussing the same disease even if they refer to it using different nomenclature.
            Standardization is baked into FAIR’s interoperability principles, which recommend standardization of the way we physically store data (e.g. as triples in a triple store or tables in a relational database), the data exchange format (e.g. using OWL or JSON-LD) and the actual meaning of the data (e.g. using a public or industry data standard).
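            As a small sketch of what such standardization looks like in practice, the Python example below uses the rdflib library to record statements as RDF triples in which a disease is identified by a shared ontology URI, so two papers that name it differently still resolve to the same concept. The URIs and the predicate are illustrative.

```python
# Small sketch of standardized data as RDF triples (using rdflib): the disease is
# identified by a shared ontology URI, so two papers using different names for it
# still resolve to the same concept. The URIs and the predicate are illustrative.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/")
disease = URIRef("http://purl.obolibrary.org/obo/MONDO_0005148")  # example disease URI

g = Graph()
g.add((disease, RDFS.label, Literal("type 2 diabetes mellitus")))
g.add((EX["paper/123"], EX.discusses, disease))
g.add((EX["paper/456"], EX.discusses, disease))  # different wording in the paper, same concept

print(g.serialize(format="turtle"))
```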

            A is for Activated
            Activated data is ready to use – for example, the data sets you’re going to feed to your AI system are either joined together or ready to be joined. Big data and AI often generate novel insights from the combination of historically siloed data types – for example, chemistry and biology data in a search for new medicines – but connecting data sets from multiple domains can be surprisingly complicated.
            FAIR’s interoperability principle is the key here. With interoperable data, close attention should have been paid already to those key joining points on the edges of data sets and data models, building in interdisciplinary interoperability from the start.

            R is for Readable
            …and, of course, machine-readable. Interoperability is the main FAIR principle relevant to machine-readability, and while this is partly obvious, it’s not just about physical data formats; truly reusable data should codify the context in which it was generated so that the machine draws the right conclusions. This is usually the biggest challenge in FAIRification work, especially in specialist areas that lack pre-existing data standards or rely heavily on written descriptive text. Providing a long-term, robust solution often means developing new data capture systems and processes that properly codify tacit knowledge that otherwise would be left in explanatory paragraphs, research plans, published papers or sometimes not even written down at all.

            Conclusion
            To be truly AI-ready, your data should satisfy the PULSAR principles – and applying the FAIR principles as a first step means a lot of the work is already done. Indeed, “the ultimate goal of FAIR is to optimize the reuse of data” [4]. The end of FAIR is the beginning of AI-readiness.
            Capgemini’s many years of experience with FAIR and data management will help you truly embrace becoming a data-driven R&D organization.
            _________________
            [1] https://www.uniprot.org/
            [2] https://www.ebi.ac.uk/chembl/
            [3] http://www.openphacts.org/
            [4] https://www.go-fair.org/fair-principles/


            Five ways to battle data waste

            Roosa Säntti
            14 September 2022

            There is an increasing focus on reducing the environmental footprint of data centers and cloud services. Interestingly enough, that is not yet the case at all for data. But clearly, with more organizations aspiring to become data-powered, the issue of data waste is lurking around the corner. We introduce five ways to begin battling data waste – with an additional key benefit: getting a better grip on the corporate data landscape.

            My data is bigger than yours: we used to take pride in storing as much data as possible – because we could, prices were low, and future killer algorithms were waiting. Having more data seemed the hallmark of being a true, successful data-powered enterprise.

            Turns out this consumes loads of energy and precious natural resources, and it creates a growing heap of unsustainable e-waste. We need to become more aware of what data we really need to store, how many times we duplicate it, and how long we keep it available. Also, although AI may be key to addressing climate challenges, it slurps energy itself too. Think only about how much energy it takes to perform one training cycle for a major AI language transformer model (hint: really, really a lot – say 5 times the lifetime CO2 emission of an average American car). The battle against data waste will therefore be a continuous, delicate balancing act – and it has only just begun.

            And it’s a battle with benefits: many of the measures that already can be taken bring additional value for organizations that want to become data-powered, even to the point that the positive impact on overall data mastery may dwarf the sustainability impact.

            Here are five suggestions to get your quest going:

            1. Get the data first

            As with any other transformational objective, you should map your current situation first before you can start improving. Battling data waste begins with getting data on what data you actually have. Only then will you be able to assess how much of it really is unsustainable data waste, for example by analyzing how often data is used, by how many people, and for what types of purposes. Many data catalog tools (such as Alation, see a separate article in this magazine) are perfectly equipped for this, and increasingly they feature intelligent automation and AI to do the heavy lifting of scanning the data landscape. Having an up-to-date data catalog brings many obvious additional benefits to a data-powered business as well, so every minute of activity in this area is typically well spent.
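            A toy Python sketch of that first inventory step: join a storage inventory with access logs to flag candidate data waste, meaning datasets that are large but rarely read and used by few people. Column names, figures, and thresholds are illustrative; a data catalog tool would automate this at scale.

```python
# Toy example: combine a storage inventory with access logs to flag candidate data
# waste - datasets that are large but rarely read and used by few distinct users.
# Column names, figures, and thresholds are illustrative.
import pandas as pd

inventory = pd.DataFrame({
    "dataset": ["orders_2015_raw", "customer_master", "clickstream_archive"],
    "size_gb": [1200, 40, 8600],
})
access_log = pd.DataFrame({
    "dataset": ["customer_master", "customer_master", "orders_2015_raw"],
    "user": ["ana", "bo", "ana"],
})

usage = (
    access_log.groupby("dataset")["user"]
    .agg(reads="count", users="nunique")
    .reset_index()
)
report = inventory.merge(usage, on="dataset", how="left").fillna(0)
report["waste_candidate"] = (report["size_gb"] > 500) & (report["reads"] < 5)
print(report)
```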

            2. Map the environmental impact

            Once you know what data you have, it is a matter of understanding its real environmental impact. Data is stored in storage systems, as part of an IT infrastructure and a supporting network (in a data center or in the cloud). All these resources consume energy, create e-waste, and have a carbon footprint. An increasing number of publicly available carbon calculators help to establish the sustainability cost of the elements of the data landscape, not only focusing on Scope 1, but covering the entire ‘supply chain’ of Scope 2 and 3. Once this data is established, it should be routinely added to the metadata management and catalog facilities of the organization – for current and future reference and use. As with every sustainability effort, you want to focus on the data sets that have the most negative impact.
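            As a back-of-the-envelope sketch, a storage footprint estimate multiplies stored terabytes by an energy factor and a grid carbon intensity. Both coefficients in the Python snippet below are illustrative assumptions; real calculators use provider- and region-specific values and also cover Scope 2 and 3.

```python
# Back-of-the-envelope estimate: stored terabytes x energy per TB-year x grid carbon
# intensity. Both coefficients are illustrative assumptions, not measured values.
def storage_co2_kg_per_year(terabytes: float,
                            kwh_per_tb_year: float = 200.0,  # assumed energy factor
                            kg_co2_per_kwh: float = 0.4) -> float:  # assumed grid intensity
    return terabytes * kwh_per_tb_year * kg_co2_per_kwh


print(f"{storage_co2_kg_per_year(500):,.0f} kg CO2e per year for 500 TB (illustrative)")
```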

            “But it is indeed a balancing act, as the data can be part of a solution or an initiative that delivers societal benefits that far outweigh its sustainability costs.”

            3. Get rid of it

            Ever seen Hoarders? It’s a reality-TV show that features compulsive hoarders: people who are addicted to filling their homes with objects, and how that spills out into their lives. You don’t want to be a data hoarder. Just keeping data for the sake of it – or because it might come in handy in some unforeseen way – can land you with a high sustainability bill. And it simply costs money too, for that matter. So, just as with application rationalization, data should have a managed lifecycle that not only involves creating and using it, but also features clear policies for decommissioning unused, redundant, or simply wasteful data. Organizations sometimes tend to hold on to their established IT assets (including data) for nothing more than emotional, non-rational reasons. Where the cost equation may not be enough to break that spell, sustainability impact might just do fine.

            4. Stop at the gates

            It’s a well-established practice within Permaculture (see our separate article in this magazine about ‘Permacomputing’ for more): you don’t recycle, reuse, and repurpose as an afterthought; it is an integrated part of your design and approach, right from the start. A lot of wasteful data can be avoided by never ingesting it in the first place. So, no more room for the typical Big Data era mindset that whatever data is available should be stored, because storage in the cloud is cheap and you never know what use it may have. Later. Sometime. Maybe. Instead, think in terms of Small Data, Tiny Data, or simply Smart Data: be much pickier about the data sets you take onboard, the objectives you have for them, and the quality of the data points inside. Select data that is fit for your purposes. Think more upfront, clean so much less later.

            5. Do not duplicate

            Data architecture is not necessarily a well-established practice within many complex organizations. As a result, data is often unnecessarily copied multiple times from the central data organization to various business domains, and vice versa. Each instance starts to lead its own life, serving all sorts of different purposes and rapidly adding to a growing pile of potential data waste. And it all tends to be unaligned and unsynchronized. New architectural approaches – notably Data Mesh – assign the ownership of specific data sets much more explicitly to specific business domains. Data is typically held – and stored – by the business domain and made available through flexible integration mechanisms (such as APIs), so that duplication is unnecessary, even undesirable. Other integration technologies, such as data virtualization, can achieve the same.

            Lastly, don’t forget the people. As with everything around data, we can only accomplish so much without involving and empowering people to be and lead the change. Data catalogs and API-first architectures are great tools to drive more sustainable use of data and AI. But if there are no people embracing the direction (a sustainable data vision and strategy) and no ownership of the data (internalizing which data is used, why and how much) – failure is a given. True Data Masters battle data waste by harnessing both: data foundations and data behaviors.

            There are many more ways to stop data waste, such as relying more on shared data between multiple ecosystem partners, procuring data and pre-trained algorithms from external providers, limiting the movement of data, and switching to energy-saving storage media. One thing is for sure: even if reducing data waste would not deliver a substantial sustainability impact at first sight, each and every activity suggested adds to a higher level of data mastery. And that – in all cases – is priceless.

            INNOVATION TAKEAWAYS

            Data has a sustainability cost

            With its obvious merits, data has an impact on the environment in terms of its dependency on natural resources and energy and its carbon footprint; hence data waste must be actively addressed.

            The quest against data waste

            There are many ways to decrease harmful data waste, but they all start with a better understanding of the current data landscape and its environmental impact.

            Battle with benefits

            Reducing data waste can have an obvious positive environmental impact, but while doing so organizations will see their level of data mastery lifted as well.

            Interesting read?

            Capgemini’s innovation publication, Data-powered Innovation Review | Wave 4, features 18 such articles crafted by leading Capgemini and partner experts, sharing inspiring examples ranging from digital twins in the industrial metaverse, “humble” AI, and serendipity in user experiences, all the way up to permacomputing and the battle against data waste. In addition, several articles are written in collaboration with key technology partners, such as Alation, Cognite, Toucan Toco, DataRobot, and The Open Group, to reimagine what’s possible. Find all previous Waves here.

            Authors

            Roosa Säntti

            Head of Insights & Data Finland
            Roosa’s ambition is to make people, organisations, and society understand the power that lies in their data and how they can harness it to build a smarter and more sustainable environment for all. She helps businesses innovate their strategy with the help of digitalization, technology, and data. In her current role she is leading a business unit serving customers within data, analytics, and AI. Her special interests lie in the intersection of innovation, technology, and people.

            Ron Tolido

            CTO, Insights and Data
            Ron is the Executive Vice President, CTO, and Chief Innovation Officer at Capgemini’s Insights & Data global business line. He is a Certified Master Architect and a member of the Capgemini group Chief Technology, Innovation & Ventures council.

              Immersive employee experiences offer organizations a talent advantage

              Capgemini
              14 Sep 2022

              As organizations continue to compete for top talent, HR professionals are finding that they need to change recruitment and retention tactics.

              The Society for Human Resource Management reports that while talent acquisition teams were striving to meet their companies’ hiring numbers last year, “the focus is shifting now to candidate and employee experience.” Forward-thinking organizations are taking employee experience (EX) seriously to attract and retain talent, and many are creating immersive experiences to enhance EX. 

              To understand why organizations are focusing on immersive EX, it’s important to grasp that immersive experiences are fundamentally multisensory experiences. They often involve a combination of user interfaces (UIs) such as these:  

              • flat UI, in the form of a phone, a tablet, and monitor screens; 
              • natural UI that supports voice assistance, hand-gesture controls, and haptic feedback like vibrations; and
              • mixed reality UI, which includes augmented and virtual reality interfaces.

              These technologies can enhance candidate and employee experiences with benefits that go beyond making an organization more attractive to talent. It’s well known that companies with great EX deliver better customer experiences, because employees are engaged in their mission and empowered to solve customers’ problems.

              That, in turn, can lead to more revenue, as per a Forbes Insights and Salesforce report, which found that “companies that have both high EX and CX see almost double the revenue growth compared to those that do not.”

              Immersive experiences attract candidates and retain employees 

              Immersive employee experiences can take many forms. You might think of someone wearing a headset and virtually learning to build a new machine, but there are other use cases.  

              Immersive self-service portals

              We often hear from organizations that their employee portal is outdated or hard for employees to use. That can cause daily frustration, especially when you consider how many applications an employee interacts with, such as an email client, a document repository, an employee directory, and so on.  

              Employees are accustomed to using text messages, chatbots, or voice assistants to find what they need when they shop or engage socially online, but legacy employee portals don’t provide that level of convenience. An immersive portal can serve each employee personalized content that’s relevant to their role, almost like a social network for the enterprise. It can also allow employees to direct questions to an intelligent assistant, so they don’t have to search multiple systems or send emails. Those customized, convenient features free employees to focus on their roles in growing the business.  

              Immersive training experiences

              Immersive onboarding and training can benefit all employees by allowing them to experience their new roles in realistic simulations, refine specific skills, and develop new routines before they begin working. Learning new skills in a virtual setting can help new employees prepare for working with complex equipment or in busy settings without slowing down the company’s production processes. A common example is the use of headsets and augmented reality to teach autoworkers how to assemble complicated parts of automobiles without risking damage to costly components during their training.

              However, other businesses can improve EX with virtual training, too. For example, baristas need to learn the steps for making dozens of coffee and tea drinks using a variety of machines and to follow safety practices while they work. Learning on the job can slow down other employees, negatively impact their experience, and create delays for customers as well. In a virtual environment, new hires can learn how to perfect their lattes and macchiatos without real spills, burns, or slowdowns.  

              Real-time, remote collaboration 

              When employees run into problems assembling products or using a piece of equipment, the result is downtime and stress. With AR headsets and video streaming, employees can check in with designers and engineers to walk through a problem and implement a solution much faster compared to waiting for an in-person visit or trying to solve the problem via email or a voice call. The result is more productive employees who are empowered to seek help when they have a problem. 

              As metaverse technologies roll out to support richer virtual engagements, employers may be able to host highly realistic immersive meetings and events that spark the same kind of emotional engagement employees would experience at an in-person gathering. That can create a stronger organizational culture and foster more collaboration and creativity even among fully remote teams.

              Extended capabilities in the field

              Repairing complex equipment can be a challenge, especially in bad weather or remote locations. Here, again, AR headsets and remote access to guidance can help employees diagnose problems and repair them much faster than if they had to page through a manual or search the web for the information they need as they work. That can allow field service employees to bring equipment back online faster, reduce call-backs for further repairs, and help employees be more productive.

              All these immersive experiences can be a selling point for candidates because of the convenience and support they offer. For younger candidates especially, immersive technology is also appealing because it’s familiar – members of Gen Z, now in the early stages of their careers, have grown up with screens and immersive experiences at home and at school.  

              Planning immersive employee experiences 

              Immersive EX isn’t just about the technology. It’s about the entire employee journey – and there are many journeys an employee can have, such as onboarding, taking maternity leave, and earning promotions. Companies can start by mapping one of their employee journeys to find ways to elevate it, using simple design techniques to reduce friction.

              It’s important to start small and move quickly, perhaps with onboarding or training, to test out ideas and identify the right technologies to leverage as part of an immersive experience. This limited initial approach enables organizations to get feedback from users and other stakeholders that they can use to refine the experience before formally launching and scaling it. Then, organizations can use that immersive employee experience to get buy-in for additional immersive EX programs. 

              As organizations’ immersive employee experiences gain traction, they can become a selling point for talent acquisition and a tool for retaining existing talent. Great immersive EX can also generate word of mouth that attracts talent, as employees share how the immersive technology they work with improves their experiences. By starting small and gradually building useful, supportive experiences, organizations can give themselves an advantage in recruiting and retention.

              This article was first published on destinationCRM.com, August 18, 2022

              Co-authored by Mike Buob and Alexandre Embry.

              Alexandre Embry

              Vice President, Head of the Capgemini AI Robotics and Experiences Lab
Alexandre leads a global team of experts who explore emerging tech trends and devise at-scale solutions across various horizons, sectors, and geographies, with a focus on asset creation, IP, patents, and go-to-market strategies. Alexandre specializes in exploring emerging digital tech trends and advising C-suite executives and their organizations on their transformative impact. He is passionate about improving the operational efficiency of organizations across all industries, as well as enhancing the customer and employee digital experience. He focuses on how the most advanced technologies – such as embodied AI, physical AI, AI robotics, polyfunctional robots and humanoids, digital twins, real-time 3D, spatial computing, XR, and IoT – can drive business value, empower people, and contribute to sustainability by increasing autonomy and enhancing human-machine interaction.

              Mike Buob

              Vice President of experience and innovation at Sogeti, part of Capgemini
              Mike Buob is VP of Experience & Innovation at Sogeti, part of Capgemini. He helps clients create impactful experiences for their customers and organizations with their transformation and innovation initiatives. Mike has a diverse background in technology, innovation and strategy, including Artificial Intelligence, DevOps, Cognitive QA, IoT, Cybersecurity, Analytics, Digital Manufacturing, and Automation.

                The Patient Data Security Blindspot

Anne Stahl
                13 Sep 2022

                For the sake of patients, doctors and healthcare institutions, we need to safely and responsibly unlock patient data.

                In the wrong hands patient data can be a nightmare. Managed correctly, it’s a source of immense value. It’s time to advance a new patient data ecosystem that combines unwavering security with responsible access.

                THE PROBLEM

                An example scenario: what we’re missing

                A middle-aged patient is rushed into the emergency department of his local hospital, noncommunicative and alone. His medical files reveal a broken wrist, some dermatology concerns, a bone marrow donation and grief counselling. Each entry a story; together they represent a life. But right now, the lead cardiologist doesn’t need a novel. He needs precise information about the man’s heart – fast – so that he can save his life.

                In a perfect world, what data would hospitals like their surgery teams to see the moment they open a patient’s file?

                Patient record number + emergency contact

                • Allergies
                • Current medications
                • Health issues [heart condition]

                Status of heart condition

                • Most recent coronary event
                  • Course of action
                  • Past coronary events
                    • Course of action
                  • Doctors’ notes in descending order

                A different set of information would be available to his attending nurses. Another would inform his physical therapist. And for the patient, a clear and readable version would help him understand his condition and course of treatment in the language he understands. If you could organize data in the proper usage context, doctors, patients and other healthcare workers would all be able to see at a glance the information most relevant to them, and would use it to inform their decisions.
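To make this concrete, here is a minimal sketch in TypeScript of how a single record could be projected into the role-specific views described above. The type, field, and role names are invented purely for illustration; a real system would build such views on top of a standardized clinical data model.

```typescript
// Hypothetical, simplified patient record – field names are illustrative only.
interface PatientRecord {
  recordNumber: string;
  emergencyContact: string;
  allergies: string[];
  currentMedications: string[];
  healthIssues: string[]; // e.g. ["heart condition"]
  coronaryEvents: { date: string; courseOfAction: string; doctorsNotes: string[] }[];
  fullHistory: string[];  // everything else: dermatology, counselling, ...
}

type Role = "cardiologist" | "nurse" | "physicalTherapist" | "patient";

// Project the record into the view each role actually needs.
function viewFor(record: PatientRecord, role: Role) {
  switch (role) {
    case "cardiologist":
      return {
        recordNumber: record.recordNumber,
        emergencyContact: record.emergencyContact,
        allergies: record.allergies,
        currentMedications: record.currentMedications,
        // Most recent coronary event first, notes in descending order.
        coronaryEvents: [...record.coronaryEvents].reverse(),
      };
    case "nurse":
      return {
        allergies: record.allergies,
        currentMedications: record.currentMedications,
        healthIssues: record.healthIssues,
      };
    case "physicalTherapist":
      return { healthIssues: record.healthIssues, coronaryEvents: record.coronaryEvents };
    case "patient":
      // A plain-language summary would be generated here in a real system.
      return { healthIssues: record.healthIssues, currentMedications: record.currentMedications };
  }
}
```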

                Of course, patient data is not like any other data. Unrestricted sharing of patients’ intimate health information would be an unforgivable violation – emotionally and financially. And given the immense profit motive involved, the risk of disreputable players getting their hands on patient data is ever present. However, the solution up to now – locking patient data far, far away – is no longer viable. Without the ability to access and manage patient data, vital information from years past gets buried under newer files. Results are lost or forgotten, and tremendous opportunities in everything from AI to patient engagement are missed. In the age of connected health, security is not enough: patient data must also be accessible.

                Current data security puts a burden on patients…

                Health problems are among the most stressful events in our lives. Healthcare systems have an ethical duty not to add to that stress with overly complex, difficult systems. Not only does this put an emotional strain on patients, it’s often outright unmanageable. What for you and me may be a hassle, for an elderly patient is often impossible, which is one reason why about half of older patients never use their online portals. (The other main reason? Privacy and security issues.) The same goes for a patient who’s in shock, on strong pain medication – the list goes on. Health problems and byzantine systems don’t mix, and we should have no illusions of being patient-centric if patients are struggling to access and understand their own files.

Ironically, some of the solutions in use today are so behind the times that they add a different kind of burden. Fax machines are no longer ubiquitous: for a patient to send or receive data by fax often requires a special trip to a family member’s office or some other public place, creating both a hassle and a security risk. Security measures also need to evolve to cover new sources of patient data, such as smart home devices and IoT equipment. Expecting patients to borrow a son-in-law’s secretary’s fax machine is no longer an option.

                …and costs doctors valuable time

Doctors face a different kind of pressure – time. Doctors’ time is precious: it takes over a decade to train one doctor, and much longer than that to gain expertise. The idea that doctors should spend any of their limited time searching through files and databases is mind-boggling. But they do. One study in the UK found that an average of about 16 minutes per visit is devoted to the patient’s records. (Some of that time is necessary: the report cites chart review (33%), documentation (24%), and ordering (17%).) Assuming doctors do unearth the necessary data, they’re presented with a series of disconnected snapshots of their patients – separate files from various visits and tests at multiple institutions (not always complete), perhaps with many critical findings buried within. Sometimes data is missing altogether: one study found that over 200 million healthcare records were lost, exposed, or stolen over the past decade in the US alone. Modern data management technology needs to be more secure while also providing intelligent access to patient records – turning data from a cost into a source of immense value.

                THE SOLUTION


                How government can make patient data secure and accessible

                The good news is: solutions already exist. Banks are already sharing data securely, because they need to. (You can’t perform a transfer if you don’t share at least some data.) This is the shift that healthcare must take: from a focus solely on data security, towards the science of compliance and enablement.

Individual institutions can’t do much on their own. The change needs to come from the governing bodies that set the security regulations for patient data. Depending on the region, that may be a national government, a supranational body such as the EU, or a private consortium comprising a group of hospitals and other patient-care institutions.

Around the world, approaches differ. A common choice is the centralized system, where one body (the government) controls all data and regulations. Germany is currently rolling out a federated identity management approach to data access – putting the identification process in the hands of intermediaries (hospitals, insurance companies) while giving patients ultimate control of their data. However, for a variety of reasons it has proven unpopular among patients and doctors. In the Netherlands, ambitious legislation is in the works that would put complete data sovereignty in patients’ hands. Three solutions need to be in place to make patient data both secure and accessible:

                Shared APIs

APIs aren’t the only way to share large volumes of patient data (the above-mentioned German system prohibits the use of APIs for all but anonymized data), but they are one of the best. Around the world, Fast Healthcare Interoperability Resources (FHIR) has emerged as the gold standard for structuring and exchanging patient data. APIs can then be used to access this structured data through defined operations (called “interactions” in FHIR), enabling data to flow within and between institutions. The underlying system doesn’t matter; what matters is the recipient. Which leads to the central question: how do you decide which data can be shared, and with whom?
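As a rough illustration, the sketch below uses FHIR’s standard read and search interactions over plain HTTP. The FHIR R4 resource names (Patient, AllergyIntolerance, MedicationRequest) are real; the base URL and patient ID are placeholders, and error handling is deliberately omitted.

```typescript
// Minimal sketch of FHIR "read" and "search" interactions.
// FHIR_BASE and the patient ID are placeholders, not a real endpoint.
const FHIR_BASE = "https://fhir.example-hospital.org/r4";

async function getEmergencySummary(patientId: string) {
  // Read interaction: GET [base]/Patient/[id]
  const patient = await fetch(`${FHIR_BASE}/Patient/${patientId}`, {
    headers: { Accept: "application/fhir+json" },
  }).then(r => r.json());

  // Search interaction: GET [base]/AllergyIntolerance?patient=[id]
  const allergies = await fetch(
    `${FHIR_BASE}/AllergyIntolerance?patient=${patientId}`,
    { headers: { Accept: "application/fhir+json" } }
  ).then(r => r.json());

  // Search interaction for active medication requests.
  const medications = await fetch(
    `${FHIR_BASE}/MedicationRequest?patient=${patientId}&status=active`,
    { headers: { Accept: "application/fhir+json" } }
  ).then(r => r.json());

  return { patient, allergies, medications };
}
```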

                Centralized authorization

The solution is a comprehensive set of rules governing who has access to what. It is neither short nor simple, but it is achievable. (If banks can do it, so can healthcare institutions.) This authorization engine consists of a set of general rules, supplemented with special allowances for unique situations and specific people. (For example, HIV status will be classified as among the most private, least shareable data; however, an allowance must be made for doctors certifying blood donations.) Note that authorization (deciding who has the authority to do what) is an entirely different process from authentication (verifying an individual’s identity), to which we turn next.
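A toy sketch of such an engine, with data categories, roles, and purposes invented purely for illustration, might look like this – general rules first, with the blood-donation allowance layered on top:

```typescript
// Toy authorization engine: general rules plus special allowances.
// Categories, roles, and purposes are invented for illustration only.
type Category = "hivStatus" | "medications" | "allergies" | "coronaryHistory";
type RoleName = "attendingDoctor" | "nurse" | "physicalTherapist" | "bloodBankDoctor";

interface AccessRequest {
  role: RoleName;
  category: Category;
  purpose: string; // e.g. "treatment", "bloodDonationCertification"
}

// General rule set: which roles may see which categories by default.
const rules: Record<Category, RoleName[]> = {
  hivStatus: [],                                 // most private: nobody by default
  medications: ["attendingDoctor", "nurse"],
  allergies: ["attendingDoctor", "nurse", "physicalTherapist"],
  coronaryHistory: ["attendingDoctor"],
};

// Special allowances for unique situations, checked before the general rules.
const allowances = [
  (req: AccessRequest) =>
    req.category === "hivStatus" &&
    req.role === "bloodBankDoctor" &&
    req.purpose === "bloodDonationCertification",
];

function isAuthorized(req: AccessRequest): boolean {
  if (allowances.some(allow => allow(req))) return true;
  return rules[req.category].includes(req.role);
}

// isAuthorized({ role: "nurse", category: "hivStatus", purpose: "treatment" })  -> false
// isAuthorized({ role: "bloodBankDoctor", category: "hivStatus",
//                purpose: "bloodDonationCertification" })                       -> true
```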

                Frictionless authentication

Patients are not going to carry around passwords in their memories. (If they could, the passwords would be too weak.) Biometric identification such as retinal scans, fingerprints, and facial recognition is likewise unreliable during many health emergencies. There are, however, some intriguing solutions out there. Many are based on smartphone apps, including FIDO2 – a set of standards from the FIDO Alliance that lets users confirm their identity with common devices, including smartphones.
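In the browser, FIDO2 is exposed through the WebAuthn API. The fragment below is a minimal sketch of requesting an assertion from a previously registered authenticator (a phone or a security key); the challenge and credential ID must come from the healthcare provider’s server, and everything else is kept deliberately simple.

```typescript
// Minimal WebAuthn assertion request (browser-side sketch).
// The challenge and credential ID must be issued by the server; values here are placeholders.
async function authenticatePatient(challenge: Uint8Array, credentialId: Uint8Array) {
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge,                                       // random bytes issued by the server
      allowCredentials: [{ type: "public-key", id: credentialId }],
      userVerification: "preferred",                   // device PIN or biometric, if available
      timeout: 60_000,
    },
  });
  // The assertion is sent back to the server, which verifies the signature
  // against the public key registered for this patient.
  return assertion;
}
```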

In Norway, many people carry a YubiKey – an ingenious little device that plugs into a computer’s USB port and provides strong two-factor authentication. Norwegians use their YubiKeys to speed up payments, for banking, and for other activities that require identification. With frictionless authentication, patients and healthcare workers can instantly be provided with exactly the data that is appropriate for them, when and where they need it. The importance of creating a user-friendly interface cannot be overstated: governments have poured immense resources into their patient data systems, only to watch them flounder due to weak user experience. People today expect the intuitive design of Amazon or iTunes, and they make their judgement within seconds.

                The future of connected health

                The ability to access and organize patients’ data is the first step for advances in multiple directions. It provides the framework around which predictive and personalized medicine can evolve. It is the foundation for a new generation of connected health. Patterns and connections are waiting to be discovered: in an individual patient’s files, and in the combined data of multiple patients. Artificial intelligence excels at exactly the kind of pattern recognition that could be helping patients find solutions to their ailments, but first it needs access to that data.

                Solutions already exist with the power to keep patient data secure and accessible when needed. Ensuring excellent user experience is already a requisite for commercial products. Healthcare systems don’t need to start from scratch; the tools are already here. Capgemini has decades of experience working with complex data systems, data consortiums and digital transformation. We’ve provided change management for partners around the world and helped transform customer engagement. We’re looking at the challenges that healthcare is facing up close, and we see an immense opportunity to apply recent advances in data science and user experience to healthcare. When that happens it will be a true leap forward in patient care.

                What could your institution do with intelligent access to patient data? Contact us below to discuss your institution’s data needs, and keep up with the latest on data security, sustainability and more. Find out more about our services in health and social care.


                Our Experts

Anne Stahl

                Managing UX Strategy Consultant, GER

                Bjørne Brekke

                Principal Cybersecurity Business and Technology Advisor
                Bjørne has been working with IT and information security for more than 20 years. He focuses on utilizing IAM as a business enabler.

                  AI Panel: How can AI help create a more inclusive world?

                  Rajeswaran Viswanathan
                  12 September 2022

                  With AI getting more powerful by the day, and augmenting our everyday lives, we had an open discussion about how AI can help to create a more inclusive world. And it’s a conversation with quite a twist. Sabrina, Jill, and Kelly are our AI language model panel members who were kind and thoughtful enough to answer the questions posed to them. Follow-up questions were based on their responses. None of the responses have been edited.

                  The answers in this article were completely authored by large AI language models. Only the questions were provided by humans.


• “Sabrina” is GPT-3 from OpenAI
• “Jill” is Jurassic-1 from AI21
• “Kelly” is Cohere from Cohere
We prompted each of the three AI models three times per question and selected the best responses. The humans in the loop were Rajeswaran Viswanathan and Aishwarya Jagtap, Capgemini.
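For readers curious about the mechanics, the sketch below shows the general shape of this “prompt each model several times, keep the best” workflow. The endpoint URLs, model identifiers, and payload shapes are placeholders standing in for each provider’s actual completion API, and the final selection remains a human step, as it did here.

```typescript
// Sketch of prompting several hosted language models and collecting
// candidate responses for a human to pick from. Endpoints and payload
// shapes are placeholders, not the providers' real APIs.
interface ModelEndpoint {
  name: string;   // panel persona, e.g. "Sabrina"
  url: string;    // placeholder completion endpoint
  apiKey: string;
  buildBody: (prompt: string) => unknown;
  extractText: (json: any) => string;
}

async function collectCandidates(
  models: ModelEndpoint[],
  question: string,
  attemptsPerModel = 3
): Promise<Record<string, string[]>> {
  const candidates: Record<string, string[]> = {};
  for (const model of models) {
    candidates[model.name] = [];
    for (let i = 0; i < attemptsPerModel; i++) {
      const res = await fetch(model.url, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${model.apiKey}`,
        },
        body: JSON.stringify(model.buildBody(question)),
      });
      candidates[model.name].push(model.extractText(await res.json()));
    }
  }
  return candidates; // a human then selects the best response per model
}
```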

                  Can you give a summary of your understanding of an inclusive world?

Sabrina / GPT-3 from OpenAI: Inclusive world is a term used to describe a world in which all people, regardless of race, ethnicity, gender, sexual orientation, ability, or religion, have an equal opportunity to enjoy the same rights, freedoms, and opportunities. Inclusive world encompasses many different ideals, including the belief that people should be treated equally regardless of their background or differences. To create an inclusive world, we must work to demolish the hierarchies that exist in society and create an environment in which everyone is included. We must fight for policies that promote diversity and equality, and work to ensure that everyone has access to education, health care, and basic needs. There is still a long way to go before we can truly proclaim an inclusive world, but progress is being made every day. We need to keep working toward a world in which all people can have their voices heard and their dreams fulfilled.

                  Jill / Jurassic-1 from AI21: Inclusive world is a philosophy that advocates that the world should be inclusive, and that all people should be equal. Inclusive world advocates that all people should have equal rights, and that all people should have equal opportunities.

                  Kelly: Inclusive world means a world where everyone, regardless of their gender, race, religion, sexual orientation, or disability, has the opportunity to reach their full potential.

                  It’s interesting that you mention “demolish the hierarchies.”

                  How can AI be used to demolish the hierarchies that exist in society?

                  Sabrina: There are many ways that artificial intelligence can be used to demolish the hierarchies that exist in society. One way is by using artificial intelligence to create algorithms that identify and critique the implicit and explicit biases that underpin the current social order. AI can also be used to create algorithms that recommend policies that dismantle the hierarchies that exist in society. Finally, AI can be used to monitor and analyze the outcomes of these policies to determine whether they are effective.

                  Kelly / Cohere: I think AI is used to remove the hierarchies that exist in society. By providing equal access to information, AI can help people to be more aware of their situation and to have a better understanding of the context in which they operate. It can help people to be more critical and to have a more balanced view of the world.

                  How can AI help in the creation of a more inclusive world?

                  Sabrina: There is no one answer to this question, as AI can potentially help in a number of ways to create a more inclusive world. For example, AI can be used to help identify individuals who may be at risk of exclusion or discrimination, and AI-powered digital assistants can be used to help people with disabilities access information and services. Additionally, AI can be used to help develop more inclusive policies and practices, as well as to monitor compliance with existing inclusion policies.

                  Kelly: Inclusion is a key component of the AI strategy of the European Commission and of the European Parliament. In a few weeks’ time, the new AI strategy will be presented to the European Parliament and the Council. The Commission will also launch a wide-ranging consultation on AI, including on the ethical aspects of AI.

                  Is inclusive world a theoretical perspective or a philosophy for building AI systems?

                  Sabrina: It is both a theoretical perspective and a philosophy for building AI systems.

                  Jill: Inclusive World is a philosophy for building AI systems.

Kelly: It is a philosophy for building AI systems. It is a philosophy that is based on a foundation of human-centric principles. It is a philosophy that is rooted in the belief that AI systems should be built with the dignity and respect of human beings in mind.

                  What should humans do to help AI for creating an inclusive world?

Sabrina: There is no easy answer to this question, but one suggestion is for humans to educate themselves about the issues surrounding AI and its potential impacts, and to engage in open and honest discussions about the potential implications of AI development and implementation. Other actions that may be useful in this regard include encouraging the development of responsible AI policies and practices, and participating in research and advocacy efforts aimed at improving understanding of the potential implications of AI development and implementation.

                  Kelly: It is important to have a common vision of the future and to be able to communicate that vision in a way that makes sense to all. We need to be able to explain how AI will affect the world and how we can work together to create a better world. It is important to be able to communicate our vision in a way that is understandable to the public and to politicians. I think that we should also be able to communicate our vision to the AI.

                  What is your advice for humans for creating an inclusive world?

                  Sabrina: There are a few things that humans can do to create a more inclusive world:

1. Be aware of your own personal biases and strive to overcome them.

                  2. Be an active advocate for inclusion – speak up for others who may not have a voice.

                  3. Educate yourself and others on the importance of inclusion.

                  4. Standup against discrimination and bigotry in all forms.

                  5. Create safe and welcoming spaces for everyone, regardless of their background or identity.

                  Jill: Humans should create an inclusive world by first accepting that we are all different.

                  Kelly: I think that we should look at our own behavior and really try to understand what we are doing when we are interacting with other people. We can think about how we can create a more inclusive world by being more inclusive. I think that we can do that by learning to listen to people who are different from us, by learning to understand what they are saying and by trying to understand why they are saying what they are saying.

                  Interesting read?

Capgemini’s innovation publication, Data-powered Innovation Review | Wave 4, features 18 such articles crafted by leading Capgemini and partner experts, sharing inspiring examples ranging from digital twins in the industrial metaverse, “humble” AI, and serendipity in user experiences, all the way up to permacomputing and the battle against data waste. In addition, several articles are written in collaboration with key technology partners such as Alation, Cognite, Toucan Toco, DataRobot, and The Open Group to reimagine what’s possible. Find all previous Waves here.

                  Author:

                  Rajeswaran Viswanathan

                  Senior Director
Rajeswaran has over 28 years of experience in IT, combining deep technical skills, a profound understanding of AI, and proven marketing credentials. He is proficient in NLP, deep learning, and the core areas of AI. He currently leads a team of 200+ data scientists delivering multiple mission-critical solutions for clients in multiple sectors across the globe.