
Why Europe’s provision of eHealth services needs a digital injection

Niels van der Linden – EU lead
23 Sep 2022

The online maturity of eHealth services was rated at just 63 percent in the European Commission’s 2022 eGovernment Benchmark survey of businesses and citizens. Clearly, there is room for improvement in Europe’s delivery of digital health services.

Health and wellbeing have been uppermost in our minds in the past two years as we’ve all learned to live with the coronavirus pandemic. Who hasn’t used at least one digital health service, such as booking a vaccination via a mobile phone or requesting a medical consultation online? For public sector health organizations, the pandemic has accelerated the shift to online services provision, with digital now increasingly the norm.

But how do eHealth services match up to other areas of eGovernment service delivery? What’s working well, and where is there room for improvement? The European Commission has recently published the eGovernment Benchmark 2022, in which we find answers to these questions. Although this is the 19th edition of the report, it is the first time that services relating specifically to health have been included in the annual assessment of digital government services. The findings are based on a survey of businesses and citizens from 35 participating countries, comprising 27 EU Member States and eight other countries. They visited more than 14,000 public sector websites to rank online government services based on a combination of factors (user centricity, transparency, key enablers, and cross-border services). This enabled the report authors, led by Capgemini, to allocate digital maturity scores and recommend next steps on the eGovernment journey.

eHealth services – laggards and leaders

So, just how mature is the delivery of eHealth services across Europe? Before we delve into this, let’s first clarify what we mean by eHealth services in this context. The eGovernment Benchmark report measured services related to obtaining basic healthcare, searching relevant healthcare providers, applying for the European Health Insurance Card (EHIC), e-consultations and medical records.

An average maturity score of 63 percent shows eHealth services falling short when compared to other eGovernment services. For example, the level of online services when moving home was ranked at 71 percent. It’s clear that online options for people in need of healthcare services have room to improve. As always, there are leaders and laggards, with Luxembourg (97 percent), Estonia (93 percent) and Malta (91 percent) setting a precedent for the maturity of eHealth services. Turkey too is among those paving the way with well-integrated solutions. It integrates multiple healthcare services via the mhrs.gov.tr portal and enabiz.gov.tr health records. This enables multiple healthcare institutions to provide their services via a single platform and health data to be securely accessed. In another best practice example, we see ePrescriptions being proactively shared between doctors and the pharmacy without the need for any user action (except for picking up the medicine) in Estonia, the Netherlands, and Sweden.

At the other end of the scale, eight countries have a maturity score below 50 percent, meaning that citizens in these countries still need to refer to non-digital means to access government health services. And while in almost nine out of ten countries (88%) citizens can apply for and access their personal health records online, the completeness of these online health records differs. In some countries citizens can access their entire medical history. In others only minimal information about vaccinations and medical visits is available. And in some, such as the Netherlands, patients cannot access their full health records online – only partial records from a single provider.

Highs and lows of eHealth

Just as some countries are ahead of the digital curve, so too are certain aspects of eHealth service delivery. There is generally a difference in maturity between administrative eHealth services, such as booking an appointment, and actual healthcare services, such as e-consultations.

The administrative procedures around healthcare, such as looking for information about where and how you can get healthcare, are now to a large extent digitalized. Three out of four health-related services are available online, with 93 percent of those being mobile friendly. Another high note is that websites where users can obtain e-prescriptions are almost always mobile friendly (99 percent). However, compare these high scores with just 64 percent for the transparency of health data and an even lower 53 percent for the transparency of service design on government health portals. The use of eDocuments is something of a success story. The ability to download or submit documentation electronically (eDocuments), rather than having to send them in physical format, is possible in almost nine out of ten services. However, while being able to authenticate yourself using electronic identity (eID) is possible in 76 percent of services, this drops to just 58 percent when it comes to applying for e-consultations with a hospital doctor, with 22 percent of countries still requiring authentication offline and in person. Here we see the difference in maturity between administrative and actual healthcare services noted above.

One area in particular stands out as in need of improvement: the ease of cross-border access to online public eHealth services. Currently, just 42 percent of services are entirely available online, while users from another country can find information about only 14 percent of the services in a language other than the national language of the respective country. Further, for over 40 percent of health-related services, cross-border users do not know how to obtain the service, nor can they find any information about the service online. In May 2022, the European Commission launched the European Health Data Space, the first common EU data space for health data. It promises to create a single market for electronic health record systems, which should help EU Member States address this issue.

Recommendations for improving eHealth services

So, how should healthcare organizations ramp up their delivery of eHealth services? The eGovernment Benchmark report offers a number of recommendations. These are designed to help healthcare organizations achieve the EC’s three health priorities of giving citizens secure access to their health data, including across borders, providing personalized medicine through shared European data infrastructure, and empowering citizens with digital tools for user feedback and citizen-centered care.

The following extrapolates health-specific guidance from the broader eGovernment services recommendations:

  • Realign the citizen journey and create a well-aligned ecosystem. Health interventions often involve multiple service providers who must reorganize themselves to fulfill the entire patient journey online. Currently, 71% of the health entities assessed offer a digital mailbox solution. These help citizens to safely communicate with their governments and find all relevant communication in a single online environment.
  • Respect both national and cross-border patient needs. Offer interoperable services in multiple languages and accept interoperable eIDs — currently the ability to register and (re)schedule a hospital appointment is possible for only 34 percent of cross border users.
  • Co-create services with users. In line with the EU’s Declaration on European Digital Rights and Principles, citizens should be able to engage in policy-making processes online and help to design online government services.
  • Adopt more data-driven service processes. By reusing previously provided information, more citizen journey services can be provided proactively. Prefilled personal information is at its best in services for obtaining an e-prescription from a hospital doctor and applying for electronic health records (prefilled in 93% of the countries for both services).

The coronavirus pandemic has highlighted the need for digital health services. Mature eHealth services also provide wider environmental and societal benefits – for example, they can reduce unnecessary travel for patients to care providers, while making it easier for informal caregivers to provide support.

Online health portals and well-orchestrated patient journeys. Transparent and inclusive health service design. Interoperable cross-border services. The eGovernment Benchmark 2022 discusses the different aspects of eHealth transformation currently underway and benchmarks eHealth services maturity against other eGovernment service areas. How mature are your organization’s digital services? Read the full report to see the bigger picture and benchmark your organization.


Authors


Niels van der Linden

Vice President and EU Lead at Capgemini Invent
“Making it easy for citizens and businesses to engage with government increases the uptake of cost-effective and more sustainable digital services. Currently, however, many governments do not yet share service data, missing out on the one-government experience and preventing them from deriving actionable insights from monitoring and evaluating the state of play. We help to design, build, and run trusted, interoperable data platforms and services built around the needs of citizens and businesses.”

Sem Enzerink

Senior Manager and Digital Government Expert, Capgemini Invent
“Let’s shape digital governments that are well-connected. Well-connected to their users, to each other, and to the latest technologies. Europe is ready for a new generation of digital government services to impact and ease the lives of citizens and entrepreneurs.”

Nicole Cienskowski

Account Manager Public Health

Richard Bussink

Director – Lead Health at Capgemini Invent
“Health is a changing sector with more focus than ever on digital transformation. This transformation is driven by changing demographics, a shortage of health professionals, the increased expectations of citizens, and the potential of data-driven health. Healthcare will increasingly become remote care, supported by health data ecosystems, with a focus on prevention, sustainability, and new ways of working through innovation.”

    Putting the “quantum” into machine learning

    Barry Reese
    22 Sep 2022

    Applying quantum to machine learning for quality assessment, part of the BMW Group’s Quantum Computing Challenge, we found that combining classic and quantum approaches makes it possible to build a more accurate model, using less training data.

    In the first article in this series, Our holistic approach to the BMW Group’s quantum computing challenge, we outlined Capgemini’s fruitful participation in the BMW Group’s Quantum Computing Challenge, and our holistic approach to applying quantum to automated quality assessment. In this blog, I’ll focus on our work around the BMW Group’s specific requirement to investigate the relevance of quantum techniques and technologies to machine learning (ML) in the context of quality assessment.

    I was asked to lead this element of the project because I’ve worked for more than 10 years on artificial intelligence and ML, implementing them within the automotive industry, among other sectors. For the published use case, I collaborated with members of Capgemini’s established community of quantum experts as well as automotive specialists.

    Can quantum techniques enhance machine learning?

    Vehicle manufacturing plants need to check all industrial components for flaws such as cracks. These flaws are rare, but the impact of missing one in the multi-stage manufacturing process would be serious. Classical (i.e. non-quantum) ML can partially automate the task by reviewing camera or infrared images and assigning each one to either “good” or “defective” categories.

    We wanted to know whether quantum machine learning (QML) could help overcome two major limitations of classical ML in this context. One limitation is that auto manufacturers are typically looking for exceedingly small defects in exceedingly large high-resolution images, which is computationally expensive. A second limitation relates to the fact that any ML model has to be fed with appropriate data in order to learn – but because current automotive quality processes are already so good (though not perfect), it can take years to accumulate enough real-life flaw data examples to train a classical ML model.

    Combining quantum and classical

    To explore whether quantum can help, we took a classical convolutional neural network (CNN) – today’s most common image classification tool – and combined it with a “quanvolutional” neural network (QNN) as explained by Reese et al. We decided to split each large image of a component into small parts. All these parts are fed into a QNN layer that pre-processes the image. The output from this layer is in turn fed into a classical CNN that predicts whether the part is defective.

    The benefit of pre-processing in a QNN layer is that it allows us to take advantage of quantum concepts, such as entanglement, to add “depth” to the image by embedding features selected through the quantum circuit. Using quantum kernels instead of classical ones, we can find patterns that would otherwise go undetected. In the following images (Figure 1), one can see the effect of the quantum pre-processing in the enriched quantum images. This means that the ML model can consider many additional possibilities and can learn faster from the enhanced data.

    The reason for splitting the image is to overcome the fact that large volumes of data can’t be loaded into a quantum device without sacrificing resolution. This is a current hardware limitation that is likely to remain for the foreseeable future.
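    To make the pipeline concrete, here is a minimal sketch of a quanvolutional pre-processing layer written with PennyLane. The encoding, entangling pattern, and 2×2 patch size are illustrative assumptions rather than the exact circuit used in the challenge; the point is simply that each image patch passes through a small quantum circuit whose expectation values become additional feature channels for the classical CNN.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4  # one qubit per pixel of a 2x2 patch (illustrative choice)
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quanv_circuit(patch):
    # Encode normalized pixel values as rotation angles
    for i, pixel in enumerate(patch):
        qml.RY(np.pi * pixel, wires=i)
    # Entangle the qubits so each output channel mixes information from the whole patch
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    # One expectation value per qubit -> one output channel per qubit
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

def quanv_layer(image, patch_size=2):
    """Slide a non-overlapping window over a grayscale image and replace each
    patch with the circuit's expectation values (the 'enriched' channels)."""
    h, w = image.shape
    out = np.zeros((h // patch_size, w // patch_size, n_qubits))
    for r in range(0, h, patch_size):
        for c in range(0, w, patch_size):
            patch = image[r:r + patch_size, c:c + patch_size].flatten()
            out[r // patch_size, c // patch_size] = quanv_circuit(patch)
    return out  # fed into a classical CNN for the good/defective prediction
```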

    Figure 1. The effect of image splitting and our quantum circuit

    An innovative approach to image processing

    Tested against the benchmark of an unenhanced classical CNN, the combined model exceeds expectations. It learns to generalize faster based on less input data and achieves above 97% accuracy, compared to 80% in our benchmark classical model. Our quantum model reaches this level of accuracy with a 40% train / 60% test split, whereas a typical classical model needs 70% train / 30% test. Because it needs fewer training images and locates flaws more accurately with less data, it overcomes the two limitations of classical ML that concerned us: the most expensive parts of the process are obtaining the training data and then locating the crack once a defect is identified.

    This sustainable, efficient, and innovative approach to image processing could be useful in any situation where the outcomes are critically important and training data is expensive or scarce, or where there’s a new pattern of failure (for example, in a new component).

    Since only pre-processing takes place on a quantum platform, this quantum-inspired solution is effectively a hybrid one and can be implemented mostly on classical machines. Therefore, we believe that using our approach, clients can benefit from quantum thinking and experimentation before quantum hardware matures fully.

    Proud though we are of our QML achievement, the real power of our approach derives from our holistic perspective on the BMW Group’s challenge, which revealed unforeseen opportunities for application of other aspects of quantum technology. Our next blog will look at one of the most important of those opportunities: enhancing image capture and sensing through quantum.

    Barry Reese

    Quantum Machine Learning Lead
    As a Quantum Machine Learning expert, my passion is finding solutions that improve the life and work of people. My mission is to investigate and build quantum applications on near-term quantum devices and to understand how quantum can be transformative for the future of computing. Working with quantum technologies, I learn something new every day.

      How should organizations respond to NIST’s announcement of the first batch of quantum-resistant cryptographic algorithms?

      Jérôme Desbonnet
      21 Sep 2022

      Crypto agility could hold the key to being equipped to adapt, mitigate, and handle any security challenges arising from vulnerabilities of cryptosystems in the post-quantum era.

      The premise of quantum threat

      Quantum computers promise the potential to solve complex problems considered intractable for classical computers. The power of quantum computers comes from the usage of quantum principles to solve computation problems. The anticipated applications are in the domains of optimization, simulation, machine learning, solving differential equations, and more. These computers are expected to have the potential to solve some major challenges in industry and society and to aid in the discovery of new drugs, development of new materials for batteries and solar systems, optimization of supply chains and production lines, and more.

      However, this great power comes with a great threat, which is the potential ability of quantum computers to crack some of the major public key cryptographic systems in use today. Actors with malicious intent could potentially break the security of enterprise applications, disturb or even damage public services and utility infrastructure, disrupt financial transactions, and compromise personal data.

      Increased global attention to post-quantum security and key announcements

      Considering the seriousness of the threat, industries, governments, and standard bodies have started working towards defining systems that will be secure and resistant to the threats posed by the arrival of large, powerful quantum computers. These are the post-quantum cryptographic systems. 

      But today’s quantum computers are still rudimentary in their capabilities. It’s estimated by industry experts surveyed by the World Economic Forum that it will take ten years or more for the development of quantum computers powerful enough to break the current security algorithms. The first question that comes to our mind is – why the urgency and so much noise around the topic? 

      One of the key reasons is that actors with malicious intent could capture and store the encrypted data flowing over the Internet and could decrypt this stored data when large-scale quantum computers become available. This “store now and decrypt later” strategy has become a serious and imminent threat, especially to systems carrying data that has a valid life beyond the anticipated ten years. These systems need to be upgraded now with quantum-safe cryptographic components.

      Considering the vast nature of this challenge, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has initiated the process of post-quantum cryptography (PQC) standardization to select public-key cryptographic algorithms to protect information even after the large-scale availability of quantum computers. According to the Capgemini Research Institute’s report published in April 2022, a large number of organizations (58%) are waiting for standards to emerge before prioritizing quantum security as part of their investments. 

      But some important global developments in the recent past have increased the focus on quantum technologies and the need for mitigating the associated risks to vulnerable cryptographic systems. They are:

      1. The issue of the National Security Memorandum, which highlighted the need to maintain a competitive advantage in quantum technologies and to mitigate the risks to a nation’s cyber, economic, and national security;
      2. The commitment to intensify and elevate cooperation among G7 members and partner countries to deploy quantum-resistant cryptography to secure interoperability between ICT systems;
      3. NIST’s announcement of the selection of the first four quantum-resistant cryptography algorithms;
      4. The release of Requirements for Future Quantum-Resistant (QR) Algorithm for National Security Systems by the National Security Agency (NSA), with 2035 as the adoption deadline.

      The four selected algorithms are expected to become part of the highly anticipated NIST standards for post-quantum cryptography in a couple of years, likely in 2024. As the announcement makes clear, these algorithms are designed for two main encryption tasks – the first is general encryption to protect information exchanged over public networks, and the second is digital signatures to authenticate/verify identities. Our blog, “NIST announces four post-quantum crypto finalists. What happened?” provides more information.

      So, what should an organization do now? 

      Should they immediately start implementing the algorithms and replace the vulnerable components in their IT and OT systems, continue to wait until the official publication of international standards in the next two years, or wait until the threat becomes a reality when these powerful quantum computers are operational? 

      Well, in our view, the answer lies somewhere in between these options. While continuing to wait may not be the best choice an organization could make, especially considering the store-now-and-decrypt-later risks, migrating all systems to quantum-safe cryptography in one full-blown project is neither cost-effective nor wise. So, what is the recommended call to action?

      Crypto agility could hold the key

      The answer, in our view, is crypto agility for post-quantum and beyond. It is the proactive design of information security protocols and standards in such a way that they can support multiple cryptographic primitives and algorithms at the same time, with the primary goal of enabling rapid adaptations of new cryptographic primitives and algorithms without making disruptive changes to the system’s infrastructure. 
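      In practice, crypto agility is largely an architectural pattern: application code depends on an abstract cryptographic interface and selects an algorithm by configured name, so a vulnerable primitive can be swapped for a post-quantum one without touching the callers. The sketch below illustrates the idea for key encapsulation; the class and algorithm names are illustrative placeholders, not a specific library’s API.

```python
from abc import ABC, abstractmethod

class KemAlgorithm(ABC):
    """Common interface every key-encapsulation mechanism (KEM) must implement."""

    @abstractmethod
    def generate_keypair(self) -> tuple[bytes, bytes]: ...

    @abstractmethod
    def encapsulate(self, public_key: bytes) -> tuple[bytes, bytes]: ...

    @abstractmethod
    def decapsulate(self, secret_key: bytes, ciphertext: bytes) -> bytes: ...

# Registry mapping configured algorithm names to implementations. A classical
# wrapper can be registered today and a post-quantum one added later; the
# names used by callers are configuration values, not hard-coded choices.
KEM_REGISTRY: dict[str, type[KemAlgorithm]] = {}

def register(name: str):
    def decorator(cls: type[KemAlgorithm]) -> type[KemAlgorithm]:
        KEM_REGISTRY[name] = cls
        return cls
    return decorator

def get_kem(name_from_config: str) -> KemAlgorithm:
    """Callers request a KEM by its configured name only, so migrating from,
    say, a classical KEM to a post-quantum one is a configuration change,
    not a code change."""
    return KEM_REGISTRY[name_from_config]()
```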

      If organizations are to be equipped to rapidly adapt, mitigate, and handle any security challenges arising from vulnerabilities of cryptosystems in the post-quantum era and beyond in the most optimized manner, they will need to put in place certain processes and systems.

      We would recommend the following:

      • The first step is for the leadership to initiate a program with clearly defined objectives of achieving post-quantum crypto agility and to establish the collaboration teams within the organization and with the external ecosystem for required solutions, skills, and capabilities. It is also important to start educating key personnel of the organization on PQC and its implications.
      • Initiate a process to gather information across the organization with details of all the systems and applications that are using public-key cryptography and details of the most sensitive and critical datasets (both data-at-rest and data-in-motion) to be protected for long time periods. The factors affecting the whole process are multi-dimensional (which needs separate discussion).
      • Start experimenting with the new algorithms announced by NIST to get an understanding of the impact and challenges involved in the quantum-safe migration path. Start building an initial framework for the target state architecture of the overall system.
      • Prepare a roadmap for post-quantum safe migration based on the multi-dimensional analysis and prioritization of datasets requiring protection and systems and applications using vulnerable cryptographic systems. 
      • Perform further analysis on the interdependencies of systems to decide the sequence of migration and initiate the process of identifying and evaluating sources for components, solutions, and services to implement the migration plan, not forgetting to develop a plan for testing and validation of the successful implementation of the migration.

      Organizations following these steps will be better positioned to handle the PQC challenge more effectively. Not adopting such an approach could lead to issues such as:

      • Execution of migration projects in silos leading to integration challenges
      • Breaking the functionality of systems due to partial migration of components
      • Higher costs than necessary, and
      • Increased complexity and unpredictable refactoring every time something new is discovered that needs to be addressed.

      These issues can lead to reduced confidence in the migration, and the whole process can become challenging, expensive, time consuming, and risky, depending on the complexity and size of the systems in the organization. So, we recommend that our clients start the process sooner rather than later, at least to understand where they stand in their journey and to estimate the potential size of the migration in terms of both time and cost. In summary, we believe organizations should not wait but start now, taking steps to achieve critical crypto agility across their business.

      Authors: Jérôme Desbonnet and Gireesh Kumar Neelakantaiah

      Jérôme Desbonnet

      VP – Cybersecurity CTIO – Chief cybersecurity Architect CIS & I&D GBL's, Capgemini
      As VP, Cybersecurity CTIO, Insights & Data, Jérôme creates security architecture models. Jérôme plans and executes significant security programs to ensure that Capgemini’s clients are well protected.

      Gireesh Kumar Neelakantaiah

      Global Strategy, Capgemini’s Quantum Lab
      Leading go-to-market initiatives for the Quantum Lab, including solution development, strategic planning, business and commercial model innovation, and ecosystem partner and IP licensing management; Skilled in Quantum computing (IBM Qiskit), Data science, AI/ML/Deep learning, Digital manufacturing & Industrial IoT, Cloud computing.

        Microsoft Cloud for Sovereignty: Maintain control over strategic digital assets

        Sjoukje Zaal
        20 Sep 2022

        Governments and organizations are focusing on digital transformation to fundamentally transform the way they operate and deliver services to their customers. Cloud adoption has increased tremendously in the last couple of years, also due to the COVID-19 pandemic. But as they move to the cloud, organizations want to maintain the same level of control over their IT resources as they have in their data centers. Concerns about cloud sovereignty, which include data, operational, and technical issues, are not new and have been increasing because of rising geopolitical tensions, changing data and privacy laws in different countries, the dominant role of cloud players concentrated in a few regions, and the lessons learned through the pandemic. As a result, governments and organizations are reevaluating their external exposure and looking for ways to maintain physical and digital control over strategic assets.

        To address these concerns, Microsoft has released a new solution called Microsoft Cloud for Sovereignty. This solution aims to meet the compliance, security, and policy requirements that governments and organizations face. With Microsoft Cloud for Sovereignty, governments and organizations will have more control over their data, and it will increase the transparency of the cloud’s operations and governance processes.

        Microsoft Cloud for Sovereignty is designed as a partner-led solution, with partners playing a vital role in delivering it. One of Microsoft’s European Cloud principles is that Microsoft will provide cloud offerings that meet European governments’ sovereignty needs in partnership with local trusted technology providers. Capgemini and Orange have been working closely with Microsoft and will start supporting clients in preparing for their migration by the end of 2022.

        With Microsoft Cloud for Sovereignty, Microsoft is focusing on the following pillars:

        Data residency

        Data residency is the requirement that data must be stored within a specific geographic boundary, such as a national boundary. Azure offers data residency for many services in over 35 countries, with over 60 data center regions worldwide (and growing). This enables residency options for Azure, Microsoft 365, and Dynamics 365, where many clients can store and process their data locally. By implementing policies, clients can meet their regulatory requirements to keep their applications and data within the required geographical boundary. For Europe, the forthcoming EU Data Boundary will ensure that data is stored and processed in the EU and the European Free Trade Association.
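        As a simplified illustration of how such a policy can be expressed, the rule below is modelled on the idea behind Azure Policy’s built-in “Allowed locations” definition: any resource whose location falls outside an approved list is denied. The exact structure and parameter names of the built-in policy differ slightly, so treat this as a sketch rather than a copy of Microsoft’s definition.

```python
# Simplified policy rule, modelled on Azure Policy's "Allowed locations" idea:
# deny any resource deployed outside the approved regions.
allowed_locations_rule = {
    "if": {
        "not": {
            "field": "location",
            "in": "[parameters('listOfAllowedLocations')]",
        }
    },
    "then": {"effect": "deny"},
}

# Example parameter value restricting deployments to two EU regions.
policy_parameters = {
    "listOfAllowedLocations": {"value": ["westeurope", "northeurope"]}
}
```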

        Sovereign controls

        In addition to the specific regions and geographic boundaries where applications and data are stored and processed, Microsoft also offers a set of sovereign controls that provide additional layers to protect and encrypt sensitive data. These controls span the entire Microsoft cloud: SaaS offerings such as Power Platform, Microsoft 365, and Dynamics 365, as well as the cloud infrastructure and the PaaS services that are available in Azure.

        The following offerings can be leveraged by clients for sovereign protection:

        • Azure Confidential Computing: Azure confidential computing consists of confidential virtual machines and confidential containers. This enables data to be encrypted not only at rest but also in use. Specialized hardware is used to create isolated and encrypted memory, called a trusted execution environment (TEE). TEEs guarantee that the data and code being processed cannot be accessed from outside the TEE. Client-owned encryption keys are released directly from a managed HSM (hardware security module) into the TEE. The client keys are secured even while in use, ensuring that data is encrypted in use, in transit, and at rest.
        • Double Key Encryption (DKE): DKE uses two keys together to access protected content. One key is stored in Azure and the other key is held by the client. It comes with Microsoft 365 E5, and it is intended for the most sensitive data that is subject to the strictest protection requirements.
        • Customer Lockbox: Customer Lockbox ensures that Microsoft can’t access client data and content without explicit approval from the client during service operations. Customer Lockbox is offered for Microsoft 365, Microsoft Azure, Power Platform, and Dynamics 365.
        • Azure Arc: Azure Arc extends the Azure services, management, and governance features and capabilities to run across data centers, at the edge, and in multicloud environments. Clients can centrally manage a wide range of resources, including Windows and Linux servers, SQL Server, Kubernetes clusters, and other Azure services. Virtual machine lifecycle management can be performed from a central location. Governance and compliance standards can be met by implementing Azure Policy across these different resources. And services such as Azure Monitor and Microsoft Defender for Cloud can be enrolled as well.
        • Sovereign Landing Zone: Microsoft Cloud for Sovereignty will include a Sovereign Landing Zone. This landing zone is built upon the enterprise scale Azure Landing Zone and will make deployments automatable, customizable, repeatable, and consistent. This landing zone will extend into Azure Information Protection, which also enables policy and labeling for access control and protection on email and document data. Clients can also define custom policies to meet specific industry and regulatory requirements.

        Governance and transparency

        The Government Security Program (GSP) provides participants from over 45 countries and international organizations, represented by more than 90 different agencies, with the confidential security information and resources they need to trust Microsoft’s products and services. These participants have access to five globally distributed Transparency Centers, receive access to source code, and can engage on technical content about Microsoft’s products and services. Microsoft Cloud for Sovereignty will expand GSP to increase cloud transparency, starting with key Azure infrastructure components.

        Wrap up

        In this article I wanted to focus on what Microsoft Cloud for Sovereignty has to offer for clients who want to leverage the Microsoft cloud for their digital transformation journey, but also want to maintain the same level of control over their IT resources as they have in their own data centers. Cloud adoption has accelerated enormously in the last couple of years, which also makes cloud sovereignty much more important for governments and organizations. Microsoft offers the tools, processes, and transparency to partners and clients to support the increasing sovereignty requirements that clients have on their transformation journey.

        Due to these increasing sovereignty requirements, Capgemini has conducted research to look deeper into organizational awareness and key priorities when it comes to cloud sovereignty and the role it plays in overall cloud strategy. We have released a whitepaper with our findings, which can be downloaded here.

        At Capgemini, we have a lot of experience in implementing cloud solutions across all industries. If you would like more information about how we do this for our clients, you can contact me on LinkedIn or Twitter.

        You can also read my other articles here.

        Sjoukje Zaal

        Chief Technology Officer and AI Lead at Capgemini
        Sjoukje Zaal is head of the Microsoft Cloud Center of Excellence at Capgemini, Microsoft Regional Director and Microsoft AI & Azure MVP with over 20 years of experience providing architecture, development, consultancy, and design expertise. She is the regional head of the architecture community in the Netherlands. She loves to share her knowledge and is active in the Microsoft community as a co-founder of Lowlands.community. She is director of the Global AI Community and organizer of Azure Lowlands. Sjoukje is an international speaker and involved in organizing many events. She wrote several books and writes blogs.

          Learning from digital natives

          Zenyk Matchyshyn
          17 August 2022

          Today’s market leaders are digital-native companies. They were born digital. But what makes them so successful, and can your business compete with them?

          Digital native companies are entering every industry. Many of them did not exist 20 years ago, yet today they are among the most significant engines of change in our society. They do not need digital transformation initiatives because they were born digital. Airbnb launched at a time when large hotel brands were dominating the accommodation industry. Everybody was betting against it, but through a combination of a disruptive business model and a focus on experience design, Airbnb has become a household brand and the number one choice for many travelers and holiday-goers.

          When the COVID-19 pandemic hit, Moderna, a company that produced vaccines to help us combat the virus, was able to design a vaccine in just two days. Moderna describes itself first as a software company.

          These businesses share common themes. They’re resilient, disruptive, and often defy the odds before achieving great success. They’re also digital-native companies, but what does that mean?

          What does it mean to be a digital-native company?

          All companies, old and new, have come to rely on software. Some companies that have been around for decades might have a large team of software engineers and a software portfolio that dwarfs their digital-native competitors. So, what is it about digital natives that sets them apart?

          Tesla wasn’t the first automotive company to write its own software. Other automakers developed software too, and at a much larger scale. But Tesla had something these other automotive companies did not – a digital-native culture. Companies that are born digital tend to have a different approach when it comes to problem-solving and adaptability. Being digitally native is about culture, ways of working, and mindset – these elements are hard to replicate for behemoth companies that have been around for decades.

          Culture isn’t just about what a company says. It’s about what it does. The “two-pizza team” approach was introduced at Amazon, which meant that every development team should be small enough to be fed with two pizzas. There were limitations in how effective teams could be as they grew, so the intent was to keep them small, agile, and productive. The most important part was that they should own what they do. They needed to be both small and self-sufficient.

          This type of approach to productivity is what it means to be a digital native, and for non-digital natives, it can be quite a dramatic adjustment – but it’s not impossible.

          Think about products instead of projects

          Another key difference between digital natives and non-digital natives is that digital natives think about building products rather than implementing projects. You figure out what your client needs and then create a product that hopefully fills that need with some measure of success. Then you move on to the following product.

          On the other hand, projects are more focused on requirements, timelines, and resources. The success of a project isn’t just based on how happy a client is with a product, but on the effectiveness of the overall journey, from planning and budgeting to management and execution. It is difficult for non-digital-native companies to think about products instead of projects, but it is possible with the right culture and mindset.

          Agility and flexibility are critical

          Digital natives’ success is not built on having an extensive software portfolio ready for every situation. It took Stripe less than three years to become a $1 billion company, and it is now on track to become a $100 billion company within 10 years, all while building products in the highly competitive financial services market, a market that has existed for a very long time.

          Conclusion

          The best way to develop and grow a digital culture and philosophy is by modeling an organization that’s already a digital native. Capgemini Engineering is ready to assist you in becoming a digital native by sharing our decades-long experience working with startups, including digital native companies.

          Author

          Zenyk Matchyshyn

          Chief Technology Officer, Software Product Engineering
          Zenyk, a seasoned technologist, is dedicated to leveraging the potential of software for positive change. He is passionate about technology, and his expertise extends across multiple industries. Using his interdisciplinary knowledge, Zenyk provides solutions to digital transformation complexities that many industries face. Zenyk has pioneered solutions within emerging technologies and is committed to making a lasting impact on the world through tech innovation.

            Capgemini’s offering towards the headless journey – headstart2headless

            Capgemini
            15 September 2022

            HEADSTART2HEADLESS

            In today’s world, with the prevalence of connected devices and IoT, traditional content management systems with their coupled content and presentation layers impede content velocity and the adoption of newer tools and techniques for content presentation.

            Companies are looking for the increased flexibility and scalability that a decoupled architecture can provide. The primary objective of a headless CMS is to provide a seamless omnichannel experience. There is a constant requirement to change the front end according to how customers want to view it across channels. Organizations also need a solution architecture that provides the required level of security and encryption for internal users, while content generated outside the organization can be approved and encrypted as needed.

            Digital platforms across organizations have evolved, and there is a need for flexible solutions. Enterprises now operate across mobile sites, apps, conversational interfaces, chatbots, and more. Headless CMS architecture provides a framework that makes it possible for organizations to adapt the front-end layer, while APIs seamlessly connect the content infrastructure to the presentation layer. The world is moving to a product-focused mindset, and firms are looking to rebrand themselves on a regular basis. The content model is evolving from building a single page to providing building blocks for many products. The support required across devices is almost limitless. This has led to the need for headless CMS.

            In a headless CMS, marketing teams create content within the CMS, and front-end developers retrieve the content through APIs using whichever front-end technologies work best. Marketers, therefore, can author content in one place, while developers can build a variety of presentation layers to suit the company’s – as well as the customer’s – needs and wants. It presents the best of both worlds – the power of a CMS and the flexibility of new front-end technologies.
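            The developer side of that workflow can be as simple as an HTTP call to the CMS’s delivery API, with each channel deciding how to render the returned JSON. The endpoint and field names below are hypothetical placeholders, not a real Adobe or Capgemini URL; they only illustrate the decoupling of content from presentation.

```python
import requests

# Hypothetical headless delivery endpoint exposing authored content as JSON.
CONTENT_URL = "https://cms.example.com/content/articles/launch-offer.json"

def fetch_content(url: str) -> dict:
    """Fetch channel-agnostic content once; web, app, or chatbot front ends
    each apply their own presentation layer to the same payload."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    article = fetch_content(CONTENT_URL)
    # Field names depend on the content model the marketing team defines.
    print(article.get("title"), "-", article.get("body"))
```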

            Headless vs. Traditional

            Our Solution

            Headless can be achieved in various ways in Adobe Experience Manager (AEM). Our solution, HeadStart2Headless, is capable of supporting the various methodologies based on the specific requirements of the customer. Depending on the existing ecosystem, technology proficiency, and AEM requirements, customers can choose from a system which has the least AEM coupling to a system having the most.

            Capgemini’s offering helps customers set up and run headless efficiently and in the shortest possible time, making their businesses more agile and flexible within a very short turnaround time.

            Key Benefits

            Contact us

            To know more about our headless CMS solution, contact: fssbuadobecoe.global@capgemini.com

            Meet our Experts

            Dr Cliff Evans

            Head of Practices, Financial Services Europe
            Interested in the human and engineering challenges from the implementation of new technologies to realise sustainable benefits. Responsible for evolving our technology capabilities and enabling our clients to think and act two steps ahead. Focussed on the Banking and Financial Services Sector which is undergoing unprecedented change, as a consequence of the impact of new technologies and competitors.

            Niyati Srivastava

            Digital Content and Marketing Lead UK and Continental Europe
            Niyati leads content and marketing services for UK and Continental Europe. Her extensive experience developing and scaling GTM offers harnesses data-driven customer experience expertise from across Capgemini for a powerful marketing proposition. Niyati is working on cutting edge solutions for the blended and digitally enhanced realities and business models in Financial Services, working extensively with the C-suite to define and develop strategies with responsibility at their core – data ownership, sustainability, safety and human experience.

              Open APIs – the key transformation enabler for CSPs 

              Abhi Soni
              15 Sep 2022

              Open APIs and Open Digital Architecture lay the foundation for digitization and monetization of new technologies.

              Behind all the industry buzz around digitization, customer experience transformation and 5G monetization, there’s still a lingering issue at the core of Communication Service Providers’ (CSPs’) everyday business that needs attention: Up to 80% of CSPs’ IT budgets are still being spent on system integration and customization1. 

              This leaves limited resources for innovation and actual IT transformation. This also raises larger questions around CSP transformation:

              • Why is the right shift of most CSP business models so slow-paced?
              • How do CSPs bridge this gap between traditional IT problems and the latest industry and customer demands?
              • How do CSPs evolve into, and collaborate within, the new ecosystem involving hyperscale digital natives?
              • What are the best ways to capitalize on new technology waves such as 5G, edge computing, and AI?

              Over the last few years, there’s been increased focus from the leading CSPs on open API and open digital architecture (ODA), with the ambition that open API and ODA can be the enablers for their transformation into becoming platform-based, end-to-end service providers.

              Read on to find out if open APIs are the solution to the problems facing CSPs.

              The key challenges facing CSP leaders today

              Let’s look at the key challenge CSPs are still facing: the IT estate of most CSPs is too complex and rigid, comprised of monolithic core IT systems and legacy processes and technologies. Laurent Leboucher, Group CTO and SVP, Orange explains this problem succinctly:

              IT legacy in telco environments very often looks like a Pollock painting. It’s hard to identify through hazy building blocks, and there is almost no loose coupling. Data records are very often duplicated several times, and everything seems to be tied to everything else. This is often the result of many years of silo fragmentations and various attempts to fix those fragmentations with technical projects, which created this entropic technical debt.2

              CSPs’ current IT environment consists of many different application stacks that have been modified over years. These often either have overlapping and redundant functionalities or have gaps in the end-to-end integration of their customer journeys, which operators address through further developments and customizations. The problem of complex IT systems has been further intensified by the inorganic growth the telecom industry has undergone, adding a further level of closed architecture to the mix.

              Open APIs and Digital Architecture solve these problems at their source

              The use of open APIs and ODA to connect disparate systems not only protects IT budgets but also goes to the core of how CSPs can address many of their key priorities, including:

              • Competing with digital counterparts and catching up with the platform utopia world,
              • Evolving further from zero-touch provisioning, and zero-touch automation to zero-touch partnering, enabling a marketplace with simplified and automated cross-platform play,
              • Taking demand for digital to an omnichannel personalized experience,
              • Evolving business models to best mobilize on the 5G wave,
              • And ultimately, ensuring solid revenue, returns on investment, and faster time to market.

              What to change, what to change to, and how to change

              The three famous questions from Eliyahu M. Goldratt provide a useful roadmap. The challenge of a complex legacy and a siloed IT estate is common to the majority of CSPs, as is the need to participate in platform-based digital ecosystems. The most important initiative in this space, the TMF Open API program, was formally launched in 2016. The TMF Open APIs, combined with a component-based architecture like the ODA, are a wise solution for CSPs looking to reduce IT complexity.

              The role of Open Digital Architecture

              Open APIs are specifically designed for functional integrations, while the ODA addresses the challenges of deploying, configuring, and operating in a complex application landscape. Just as the open APIs are considered the de facto standard for telecoms interfaces, the ODA is a component-based architecture that can be viewed as the de facto standard for open digital platforms, providing a consistent way for components to fully interoperate end-to-end across multivendor ecosystems.

              TMF Open APIs, along with the ODA, form a futureproof approach designed with an outside-in perspective. They can provide plug-and-play interoperability of components within CSPs’ IT systems (and networks), reduce complexity, enable the digitization of customer-facing systems, and reduce the cost of integration as well as the time to market for new digital services. All this while also supporting both existing and new digital services and addressing the implementation of B2B2X digital ecosystems, which will be critical for operating and monetizing 5G and edge computing.
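              As a flavour of what this interoperability looks like in practice, the sketch below posts a product order to a TMF-style REST endpoint. The base URL is a placeholder, and the path and payload follow the convention of the TM Forum Product Ordering Open API (TMF622); the exact version, path, and required fields depend on the operator’s deployment and should be taken from its API catalog.

```python
import requests

# Placeholder base URL; real deployments publish their own host and API version.
BASE_URL = "https://api.example-operator.com/tmf-api/productOrderingManagement/v4"

# Minimal, illustrative order payload in the TMF622 style.
order = {
    "externalId": "PO-2022-0001",
    "productOrderItem": [
        {"action": "add", "productOffering": {"id": "5g-business-edge-250"}}
    ],
}

response = requests.post(f"{BASE_URL}/productOrder", json=order, timeout=10)
response.raise_for_status()
print("Order accepted with id:", response.json().get("id"))
```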


              A global trend toward APIs

              There is a clear trend regarding the use of APIs, as many of the world’s largest service providers such as Axiata, Bharti Airtel, BT, China Mobile, China Unicom, NTT Group, Orange, Telefónica and Vodafone have officially adopted TM Forum’s suite of Open APIs for digital service management. Equally, there is increased interest from suppliers in adopting open APIs, as more than 128 of the world’s leading CSPs and technology ecosystem participants have signed the Open API Manifesto, publicly demonstrating their endorsement of TM Forum’s suite of Open APIs.3

              Like any strategic change, the adoption of open API and ODA does not come without its challenges. The shift to open API and ODA requires a holistic approach to address key concerns around these key areas:

              • Shifting to a centralized approach towards integration: CSPs have historically been complex organizations with legacy processes, multi-layered silos and traditional approaches that take a project-specific view on integrations. Shift to Open API and ODA requires a cultural change to a more centralized integration strategy that puts API first, and is based on industry standards, repeatable frameworks, and processes.
              • Initial investment and per-API cost: Another concern is proving the return on investment from initial projects and justifying their initial spend or per-API cost. For open API and ODA to create value for the CSP ecosystem, they need to be approached with a strategic view rather than a tactical project view. A well-defined open API-led integration strategy sets a foundation and builds a repository of integration assets (the TMF Open API library in the case of CSPs) that are reusable, generate more business value with every project, and drive down the per-API cost.
              • System API reusability: Reusability may be quick to achieve in the UX and process layers, but legacy systems and system APIs are not that flexible. However, carriers are now innovating their COTS (commercial off-the-shelf) products as well as their networks due to initiatives such as SDN, promoted by the Open Networking Foundation.
              • Commercial readiness: There is a gap between what carriers are offering and can provide in their immediate, 1–2-year roadmap, and what OTT players and third-party providers need. What these players need are open APIs that allow them to reach as many companies as possible, and not all CSPs are ready.

              The good news is that CSPs can address most of these challenges by partnering with the right system integrators and suppliers/API aggregators. The right services partner can help assess your existing IT estate and processes, and bring onboard proven governance models, integration design authorities, reusable API libraries, and repeatable models and business cases. This can help reduce the initial effort while increasing adoption and return on investment, (re)introducing best practices, and providing the necessary support depending on existing resource capabilities. With CSP interest increasing, the next big enabler for open API adoption is suppliers. Large suppliers have traditionally relied on locking in their CSP customers, but it is time to realize that industry fragmentation impedes innovation and the ability to compete, including through the adoption of Open APIs.

              Lasting Benefits

              Open API-based integration and open digital architecture enable CSPs’ IT estate to become more agile and resilient, which translates into tangible business benefits:

              • Reduction in the time from concept to cash for new services, as well as in the total cost of ownership. APIs significantly reduce the effort and capital involved in integrating with internal and third-party systems.
              • Expansion of the service offering, enabling – in a significantly reduced time frame – the ability to gear up to meet shifting markets.
              • Enablement of the CSP business to quickly innovate, partner, and create mix-and-match products and services. The result is speed, convenience, and innovation.

              In the end, TMF open API and ODA are among the most critical weapons a CSP needs in its arsenal today. This means the difference between realizing and squandering the opportunity to monetize innovative 5G services. Most importantly, open APIs will help CSPs evolve and establish themselves as end-to-end service providers. To learn more about our Capgemini Digital Telco Connect solution, our reusable assets, and TMF Open API libraries (or just to have a brainstorming on API-led transformation), contact me.

              TelcoInsights is a series of posts about the latest trends and opportunities in the telecommunications industry – powered by a community of global industry experts and thought leaders.

              Author

              Abhi Soni

              Group Account Executive
              With over 18 years of global experience in the telecommunications industry, Abhishek Soni is a recognized industry expert leading Capgemini’s next-generation offerings, with a strong focus on digital and AI-driven transformation. At the forefront of agentic technologies and platform-led innovation, he spearheads Capgemini’s end-to-end AI solutions and serves as the global industry lead for Salesforce in telecom. He has held key leadership roles across APAC, EMEA, and the UK, and currently leads a portfolio of strategic telecom accounts—delivering transformative outcomes for global clients. His deep expertise spans strategy, consulting, and solution innovation, making him a trusted advisor in shaping the future of communications

                The PULSAR principles of AI-ready data

                James Hinchliffe
                24 Aug 2022

                Does FAIR go far enough to provide AI-readiness? Not quite – but it’s a great start. How can we build on a FAIR data foundation to be truly ready to make good use of AI?

                For many R&D organizations, the desire to do new things with old data often sees excitement about the potential of artificial intelligence and machine learning collide with the reality of legacy data that isn’t fit for this new purpose. This is often the lightbulb moment where the idea of data management takes off.

                In this article we will explain the six PULSAR principles of AI-ready data and show how FAIR data brings you closer to true AI-readiness.

                P is for Provenanced
                These days, many rich, public data sets are available (like UniProt [1], ChemBL [2] and Open PHACTS [3] in life sciences) that organizations are using to enrich internal data and tackle research problems on a much bigger scale. When machine learning feeds into that work, ensuring that model predictions are reproducible is critical and requires a robust provenance chain showing what data was used to inform a model, where it came from and how it was generated.
                The authors of FAIR anticipated this and accounted for provenance explicitly within the reusability principles, which state that data should be associated with information about its origin and processing. Truly FAIR data automatically covers the ‘provenanced’ principle – that’s a good start!

                U is for Unbiased
                There are many well-known stories about biased AI systems causing terrible consequences for real people. Usually, AI systems are biased because they were trained on biased data – often data that contained hidden biases that were not obvious upfront.
                Detecting bias in data is challenging, and FAIR does not have all the answers. But through findability, you can make your search for appropriate input data broad, and through accessibility, you can be more confident that you’ve obtained everything available. Then your data profile is less likely to have blind spots – and FAIR will have helped you to avoid one of AI’s biggest mistakes.
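As one illustration of how hidden bias can be surfaced, here is a small sketch that compares the distribution of a sensitive attribute in a training set against a reference population. The attribute, the reference shares, and the tolerance are all hypothetical.

```python
# Very basic bias check: compare the share of each group in the training data
# with an expected reference share. Values are illustrative only.
from collections import Counter

training_rows = [{"sex": "F"}, {"sex": "M"}, {"sex": "M"}, {"sex": "M"}]
reference_share = {"F": 0.5, "M": 0.5}   # hypothetical reference population

counts = Counter(row["sex"] for row in training_rows)
total = sum(counts.values())
for group, expected in reference_share.items():
    observed = counts.get(group, 0) / total
    if abs(observed - expected) > 0.1:   # illustrative tolerance
        print(f"Potential sampling bias: {group} is {observed:.0%} of training data, expected ~{expected:.0%}")
```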

                L is for Legal
                Do you, and your AI, have the legal right to use a given data set? With personal data, for example, it’s fine to collect it provided you tell the people you collect it from what you’ll do with it (‘transparent processing’). But AI projects often make secondary use of data, beyond its original research purpose. Are you covered by the original terms of consent?
                One of FAIR’s reusability principles specifically states that human- and machine-readable conditions of reuse should be included in metadata. So, while the machine-readable aspect is probably still a work in progress, at least AI system owners should be able to take an informed view on the appropriateness of truly FAIR data they consume.
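As a sketch of what machine-readable reuse conditions could look like, the snippet below embeds a license URI and consented purposes in data set metadata and checks them before a secondary use. The field names and purpose labels are assumptions for illustration, not a published standard.

```python
# Illustrative only: machine-readable reuse conditions in data set metadata,
# plus a check an AI pipeline could run before secondary use.

dataset_metadata = {
    "dataset_id": "patient-survey-2020",
    "license": "https://creativecommons.org/licenses/by-nc/4.0/",  # reuse conditions as a resolvable URI
    "consented_purposes": ["original_study", "quality_monitoring"],
}

def reuse_allowed(metadata: dict, intended_purpose: str) -> bool:
    """Return True only if the intended (possibly secondary) use is covered."""
    return intended_purpose in metadata.get("consented_purposes", [])

print(reuse_allowed(dataset_metadata, "model_training"))  # False: secondary use not covered by consent
```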

                S is for Standardized
                Everyone appreciates that standardization reduces problematic data variability and, while standardization may not enforce every aspect of quality, it does prompt data practitioners to consider quality. Of course, some AI projects specifically act on unstructured data, e.g. when processing natural language. Here, standardization of the outputs, rather than the inputs, is key – for example, when concluding that two scientific papers are discussing the same disease even if they refer to it using different nomenclature.
                Standardization is baked into FAIR’s interoperability principles, which recommend standardization of the way we physically store data (e.g. as triples in a triple store or tables in a relational database), the data exchange format (e.g. using OWL or JSON-LD) and the actual meaning of the data (e.g. using a public or industry data standard).
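The snippet below sketches those three layers for a single illustrative statement: the same fact as a subject–predicate–object triple (physical storage), as a JSON-LD-style document (exchange format), and as two synonyms resolving to one standard code (meaning). The subject URI and data values are hypothetical; the Dublin Core predicate and the Disease Ontology identifier are real public terms used here only for illustration.

```python
# 1. Physical storage: the statement as a triple, as it might sit in a triple store.
triple = (
    "https://example.org/dataset/assay-42",          # hypothetical subject
    "http://purl.org/dc/terms/subject",              # Dublin Core 'subject' predicate
    "http://purl.obolibrary.org/obo/DOID_9352",      # Disease Ontology term (type 2 diabetes mellitus)
)

# 2. Exchange format: the same statement as a JSON-LD-style document.
json_ld_document = {
    "@context": {"subject": "http://purl.org/dc/terms/subject"},
    "@id": "https://example.org/dataset/assay-42",
    "subject": {"@id": "http://purl.obolibrary.org/obo/DOID_9352"},
}

# 3. Meaning: two papers using different nomenclature resolve to the same code.
synonyms = {"type 2 diabetes": "DOID:9352", "adult-onset diabetes": "DOID:9352"}
assert synonyms["type 2 diabetes"] == synonyms["adult-onset diabetes"]
```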

                A is for Activated
                Activated data is ready to use – for example, the data sets you’re going to feed to your AI system are either joined together or ready to be joined. Big data and AI often generate novel insights from the combination of historically siloed data types – for example, chemistry and biology data in a search for new medicines – but connecting data sets from multiple domains can be surprisingly complicated.
                FAIR’s interoperability principle is the key here. With interoperable data, close attention should have been paid already to those key joining points on the edges of data sets and data models, building in interdisciplinary interoperability from the start.
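As a minimal sketch of what ‘activated’ looks like in practice, assuming pandas is available, the example below joins a made-up chemistry table and biology table that share a standardized compound identifier, so combining them needs no mapping project. All values are illustrative.

```python
# Two historically siloed data sets join cleanly because they share a
# standardized key. Column names and measurements are hypothetical.
import pandas as pd

chemistry = pd.DataFrame({
    "compound_id": ["CHEMBL25", "CHEMBL112"],
    "solubility_mg_ml": [4.6, 21.0],
})
biology = pd.DataFrame({
    "compound_id": ["CHEMBL25", "CHEMBL112"],
    "target": ["PTGS2", "PTGS1"],
    "ic50_nm": [120.0, 85.0],
})

# Because both sets use the same identifier scheme, joining is a one-liner.
combined = chemistry.merge(biology, on="compound_id", how="inner")
print(combined)
```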

                R is for Readable
                …and, of course, machine-readable. Interoperability is the main FAIR principle relevant to machine-readability, and while this is partly obvious, it’s not just about physical data formats; truly reusable data should codify the context in which it was generated so that the machine draws the right conclusions. This is usually the biggest challenge in FAIRification work, especially in specialist areas that lack pre-existing data standards or rely heavily on written descriptive text. Providing a long-term, robust solution often means developing new data capture systems and processes that properly codify tacit knowledge that otherwise would be left in explanatory paragraphs, research plans, published papers or sometimes not even written down at all.
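A tiny sketch of the difference this makes: the same experimental context left in free text versus codified in fields a machine can act on directly. The field names and values are hypothetical.

```python
# Context buried in prose vs. context codified for a machine (illustrative fields).

free_text_only = {
    "result": 42.0,
    "notes": "Measured after overnight incubation at room temp, duplicate of run 7.",
}

codified = {
    "result": 42.0,
    "incubation_hours": 16,
    "temperature_c": 21,
    "replicate_of_run": 7,
}

# The codified record can be filtered, joined, and compared directly; the free
# text would first need error-prone text mining to yield the same information.
comparable_runs = [r for r in [codified] if r.get("incubation_hours", 0) >= 12]
print(comparable_runs)
```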

                Conclusion
                To be truly AI-ready, your data should satisfy the PULSAR principles – and applying the FAIR principles as a first step means a lot of the work is already done. Indeed, “the ultimate goal of FAIR is to optimize the reuse of data” [4]. The end of FAIR is the beginning of AI-readiness.
                Capgemini’s many years of experience with FAIR and data management can help you truly embrace becoming a data-driven R&D organization.
                _________________
                [1] https://www.uniprot.org/
                [2] https://www.ebi.ac.uk/chembl/
                [3] http://www.openphacts.org/
                [4] https://www.go-fair.org/fair-principles/


                Five ways to battle data waste

                Roosa Säntti
                14 September 2022

                There is an increasing focus on reducing the environmental footprint of data centers and cloud services. Interestingly enough, that is not yet the case for data itself. But clearly, with more organizations aspiring to become data-powered, the issue of data waste is lurking around the corner. We introduce five ways to begin battling data waste – with an additional key benefit: getting a better grip on the corporate data landscape.

                My data is bigger than yours: we used to take pride in storing as much data as possible – because we could, prices were low, and future killer algorithms were waiting. Having more data seemed the hallmark of being a true, successful data-powered enterprise.

                Turns out this consumes loads of energy and precious natural resources, and it creates a growing heap of unsustainable e-waste. We need to become more aware of what data we really need to store, how many times we duplicate it, and how long we keep it available. Also, although AI may be key to addressing climate challenges, it consumes plenty of energy itself. Just think about how much energy it takes to perform one training cycle for a major AI language transformer model (hint: really, really a lot – say, five times the lifetime CO2 emissions of an average American car). The battle against data waste will therefore be a continuous, delicate balancing act – and it has only just begun.

                And it’s a battle with benefits: many of the measures that can already be taken bring additional value for organizations that want to become data-powered, even to the point that the positive impact on overall data mastery may dwarf the sustainability impact.

                Here are five suggestions to get your quest going:

                 1. Get the data first

                As with any other transformational objective, you should map your current situation before you can start improving. Battling data waste begins with getting data on what data you actually have. Only then will you be able to assess how much of it really is unsustainable data waste, for example by analyzing how often data is used, by how many people, and for what type of purposes. Many data catalog tools (such as Alation; see the separate article in this magazine) are perfectly equipped for this, and increasingly they feature intelligent automation and AI to do the heavy lifting of scanning the data landscape. An up-to-date data catalog brings many obvious additional benefits to a data-powered business as well, so every minute of activity in this area is typically well spent.
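As a sketch of the kind of usage profiling such a catalog can automate, the snippet below derives access frequency and the number of distinct users per data set from made-up access-log records, and flags rarely used sets for review. The thresholds and field names are assumptions.

```python
# Estimate how often each data set is used and by how many distinct people,
# to flag candidates for data waste review. Records are illustrative; in
# practice they would come from platform audit logs via the catalog tool.
from collections import defaultdict
from datetime import date

access_log = [
    {"dataset": "sales_2019_raw", "user": "ana", "date": date(2022, 1, 3)},
    {"dataset": "sales_2019_raw", "user": "ana", "date": date(2022, 6, 9)},
    {"dataset": "customer_master", "user": "ben", "date": date(2022, 9, 1)},
    {"dataset": "customer_master", "user": "ana", "date": date(2022, 9, 2)},
    {"dataset": "customer_master", "user": "cho", "date": date(2022, 9, 5)},
]

usage = defaultdict(lambda: {"accesses": 0, "users": set()})
for record in access_log:
    usage[record["dataset"]]["accesses"] += 1
    usage[record["dataset"]]["users"].add(record["user"])

for dataset, stats in usage.items():
    flag = "review" if stats["accesses"] < 3 else "keep"   # illustrative threshold
    print(dataset, stats["accesses"], len(stats["users"]), flag)
```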

                2. Map the environmental impact

                Once you know what data you have, it is a matter of understanding its real environmental impact. Data is stored in storage systems, as part of an IT infrastructure and a supporting network (in a data center or in the cloud). All these resources consume energy, create e-waste, and have a carbon footprint. An increasing number of publicly available carbon calculators help to establish the sustainability cost of the elements of the data landscape, not only focusing on Scope 1 but covering the entire ‘supply chain’ of Scope 2 and 3. Once this data is established, it should be routinely added to the organization’s metadata management and catalog facilities – for current and future reference and use. As with every sustainability effort, you want to focus on the data sets that have the most negative impact.

                “But it is indeed a balancing act, as the data can be part of a solution or an initiative that delivers societal benefits that far outweigh its sustainability costs.”
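The sketch below shows, in a deliberately simplified way, how an environmental-impact estimate could be attached to a catalog entry. The energy and carbon coefficients are placeholders, not published figures; in practice they would come from your provider’s carbon calculator and cover Scope 2 and 3 as well.

```python
# Simplified order-of-magnitude estimate of storage-related CO2 for one data set,
# added to its catalog entry. All coefficients below are placeholders.

KWH_PER_GB_YEAR = 0.01          # placeholder: storage energy per GB per year
GRID_KG_CO2_PER_KWH = 0.4       # placeholder: grid carbon intensity
REPLICATION_FACTOR = 3          # copies kept for redundancy (illustrative)

def estimated_storage_co2_kg(size_gb: float, years: float) -> float:
    """Rough estimate of storage-related CO2 for one data set."""
    return size_gb * REPLICATION_FACTOR * KWH_PER_GB_YEAR * years * GRID_KG_CO2_PER_KWH

catalog_entry = {
    "dataset": "clickstream_archive",
    "size_gb": 120_000,
    "estimated_co2_kg_per_year": estimated_storage_co2_kg(120_000, 1),
}
print(catalog_entry)
```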

                3. Get rid of it

                Ever seen Hoarders? It’s a reality TV show about compulsive hoarders: people who are addicted to filling their homes with objects, and how that spills over into their lives. You don’t want to be a data hoarder. Just keeping data for the sake of it – or because it might come in handy in some unforeseen way – can leave you with a high sustainability bill. And it simply costs money too, for that matter. So, just as with application rationalization, data should have a managed lifecycle that not only involves creating and using it, but also features clear policies for decommissioning unused, redundant, or simply wasteful data. Organizations sometimes tend to hold on to their established IT assets (including data) for nothing more than emotional, non-rational reasons. Where the cost equation may not be enough to break that spell, the sustainability impact might just do the trick.
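A minimal sketch of such a decommissioning rule in action: flag data sets that have not been accessed within their retention window. The tiers, retention periods, and field names are hypothetical and would come from your own lifecycle policy.

```python
# Flag data sets whose idle time exceeds the retention window for their tier.
# Tiers, windows, and records are illustrative.
from datetime import date, timedelta

RETENTION = {"raw": timedelta(days=365), "curated": timedelta(days=3 * 365)}

datasets = [
    {"name": "sales_2019_raw", "tier": "raw", "last_accessed": date(2020, 2, 1)},
    {"name": "customer_master", "tier": "curated", "last_accessed": date(2022, 9, 1)},
]

today = date(2022, 9, 14)
for ds in datasets:
    idle_too_long = today - ds["last_accessed"] > RETENTION[ds["tier"]]
    action = "propose decommissioning" if idle_too_long else "retain"
    print(ds["name"], action)
```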

                4. Stop at the gates

                It’s a well-established practice within Permaculture (see our separate article in this magazine about ‘Permacomputing’ for more): you don’t recycle, reuse, and repurpose as an afterthought; it is an integrated part of your design and approach, right from the start. A lot of wasteful data can be avoided by never ingesting it in the first place. So, there is no more room for the typical Big Data-era mindset that whatever data is available should be stored, because storage in the cloud is cheap and you never know what use it may have. Later. Sometime. Maybe. Instead, think in terms of Small Data, Tiny Data, or simply Smart Data: be much pickier about the data sets you bring on board, the objectives you have for them, and the quality of the data points inside. Select data that is fit for your purposes. Think more upfront, clean much less later.
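As an illustration, the sketch below implements a simple ingestion gate: a candidate data set is admitted only if it has a declared purpose, an owner, and a minimum level of completeness. The rules are assumptions; real gates would reflect your own governance standards.

```python
# Admit a data set at the gate only if it has a purpose, an owner, and passes a
# basic quality check. Thresholds and fields are illustrative.

def admit_for_ingestion(candidate: dict) -> bool:
    has_purpose = bool(candidate.get("purpose"))
    has_owner = bool(candidate.get("owner"))
    good_enough = candidate.get("completeness", 0.0) >= 0.95  # share of non-null values
    return has_purpose and has_owner and good_enough

candidate = {
    "name": "iot_sensor_feed",
    "purpose": "",              # "might be useful later" does not count as a purpose
    "owner": "plant-analytics",
    "completeness": 0.99,
}
print(admit_for_ingestion(candidate))  # False: no declared purpose, so it stays at the gate
```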

                5. Do not duplicate

                Data architecture is not necessarily a well-established practice within many complex organizations. As a result, data is often unnecessarily copied multiple times from the central data organization to various business domains, and vice versa. Each instance starts to lead its own life, serving all sorts of different purposes and rapidly adding to a growing pile of potential data waste. And it all tends to be unaligned and unsynchronized. New architectural approaches – notably Data Mesh – assign the ownership of specific data sets much more explicitly to specific business domains. Data is typically held – and stored – by the business domain and made available through flexible integration mechanisms (such as APIs), so that duplication is unnecessary, even undesirable. Other integration technologies, such as data virtualization, can achieve the same.
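A minimal sketch of the “serve, don’t copy” idea, assuming FastAPI is available: the owning domain exposes its data set through an API so consumers query the single governed copy instead of replicating it. The endpoint, fields, and in-memory store are hypothetical.

```python
# A domain-owned data product served over an API rather than exported in bulk.
# Everything here (endpoint, fields, store) is illustrative.
from fastapi import FastAPI

app = FastAPI(title="customer-domain data product")

_CUSTOMERS = {  # stands in for the domain's governed data store
    "c-001": {"id": "c-001", "segment": "enterprise", "country": "FI"},
}

@app.get("/customers/{customer_id}")
def read_customer(customer_id: str) -> dict:
    """Serve one record on demand; nothing is exported or replicated downstream."""
    return _CUSTOMERS.get(customer_id, {})

# Run locally with: uvicorn data_product:app --reload  (module name is illustrative)
```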

                Lastly, don’t forget the people. As with everything around data, we can only accomplish so much without involving and empowering people to be and lead the change. Data catalogs and API-first architectures are great tools to drive more sustainable use of data and AI. But if there are no people embracing the direction (a sustainable data vision and strategy) and no ownership of the data (internalizing which data is used, why and how much) – failure is a given. True Data Masters battle data waste by harnessing both: data foundations and data behaviors.

                There are many more ways to stop data waste, such as relying more on shared data between multiple ecosystem partners, procuring data and pre-trained algorithms from external providers, limiting the movement of data, and switching to energy-saving storage media. One thing is for sure: even if reducing data waste would not deliver a substantial sustainability impact at first sight, each and every activity suggested adds to a higher level of data mastery. And that – in all cases – is priceless.

                INNOVATION TAKEAWAYS

                Data has a sustainability cost

                With its obvious merits, data has an impact on the environment in terms of its dependency on natural resources and energy and its carbon footprint; hence data waste must be actively addressed.

                The quest against data waste

                There are many ways to decrease harmful data waste, but they all start with a better understanding of the current data landscape and its environmental impact.

                Battle with benefits

                Reducing data waste has an obvious positive environmental impact, and while doing so, organizations will see their level of data mastery lifted as well.

                Interesting read?

                Capgemini’s innovation publication, Data-powered Innovation Review | Wave 4, features 18 such articles crafted by leading Capgemini and partner experts, sharing inspiring examples ranging from digital twins in the industrial metaverse, “humble” AI, and serendipity in user experiences, all the way up to permacomputing and the battle against data waste. In addition, several articles are written in collaboration with key technology partners such as Alation, Cognite, Toucan Toco, DataRobot, and The Open Group to reimagine what’s possible. Find all previous Waves here.

                Authors

                Roosa Säntti

                Head of Insights & Data Finland
                Roosa’s ambition is to make people, organizations, and society understand the power that lies in their data and how they can harness it to build a smarter and more sustainable environment for all. She helps businesses innovate their strategy with the help of digitalization, technology, and data. In her current role she leads a business unit serving customers within data, analytics, and AI. Her special interests lie at the intersection of innovation, technology, and people.

                Ron Tolido

                CTO, Insights and Data
                Ron is the Executive Vice President, CTO, and Chief Innovation Officer of Capgemini’s Insights & Data global business line. He is a Certified Master Architect and a member of the Capgemini Group Chief Technology, Innovation & Ventures council.

                  Immersive employee experiences offer organizations a talent advantage

                  Capgemini
                  14 Sep 2022

                  As organizations continue to compete for top talent, HR professionals are finding that they need to change recruitment and retention tactics.

                  The Society for Human Resource Management reports that while talent acquisition teams were striving to meet their companies’ hiring numbers last year, “the focus is shifting now to candidate and employee experience.” Forward-thinking organizations are taking employee experience (EX) seriously to attract and retain talent, and many are creating immersive experiences to enhance EX. 

                  To understand why organizations are focusing on immersive EX, it’s important to grasp that immersive experiences are fundamentally multisensory experiences. They often involve a combination of user interfaces (UIs) such as these:  

                  • flat UI, in the form of a phone, a tablet, and monitor screens; 
                  • natural UI that supports voice assistance, hand-gesture controls, and haptic feedback like vibrations; and
                  • mixed reality UI, which includes augmented and virtual reality interfaces.

                  These technologies can enhance candidate and employee experiences with benefits that go beyond making an organization more attractive to talent. It’s well known that companies with great EX deliver better customer experiences, because employees are engaged in their mission and empowered to solve customers’ problems.

                  That, in turn, can lead to more revenue, as per a Forbes Insights and Salesforce report, which found that “companies that have both high EX and CX see almost double the revenue growth compared to those that do not.”

                  Immersive experiences attract candidates and retain employees 

                  Immersive employee experiences can take many forms. You might think of someone wearing a headset and virtually learning to build a new machine, but there are other use cases.  

                  Immersive self-service portals

                  We often hear from organizations that their employee portal is outdated or hard for employees to use. That can cause daily frustration, especially when you consider how many applications an employee interacts with, such as an email client, a document repository, an employee directory, and so on.  

                  Employees are accustomed to using text messages, chatbots, or voice assistants to find what they need when they shop or engage socially online, but legacy employee portals don’t provide that level of convenience. An immersive portal can serve each employee personalized content that’s relevant to their role, almost like a social network for the enterprise. It can also allow employees to direct questions to an intelligent assistant, so they don’t have to search multiple systems or send emails. Those customized, convenient features free employees to focus on their roles in growing the business.  

                  Immersive training experiences

                  Immersive onboarding and training can benefit all employees by allowing them to experience their new roles in realistic simulations, refine specific skills, and develop new routines before they begin working. Learning new skills in a virtual setting can help new employees prepare for working with complex equipment or in busy settings without slowing down the company’s production processes. A common example is the use of headsets and augmented reality to teach autoworkers how to assemble complicated parts of automobiles without risking damage to costly components during their training.

                  However, other businesses can improve EX with virtual training, too. For example, baristas need to learn the steps for making dozens of coffee and tea drinks using a variety of machines and to follow safety practices while they work. Learning on the job can slow down other employees, negatively impact their experience, and create delays for customers as well. In a virtual environment, new hires can learn how to perfect their lattes and macchiatos without real spills, burns, or slowdowns.  

                  Real-time, remote collaboration 

                  When employees run into problems assembling products or using a piece of equipment, the result is downtime and stress. With AR headsets and video streaming, employees can check in with designers and engineers to walk through a problem and implement a solution much faster compared to waiting for an in-person visit or trying to solve the problem via email or a voice call. The result is more productive employees who are empowered to seek help when they have a problem. 

                  As metaverse technologies roll out to support richer virtual engagements, employers may be able to host highly realistic immersive meetings and events that spark the same kind of emotional engagement employees would experience at an in-person gathering. That can create a stronger organizational culture and foster more collaboration and creativity even among fully remote teams.

                  Extended capabilities in the field

                  Repairing complex equipment can be a challenge, especially in bad weather or remote locations. Here, again, AR headsets and remote access to guidance can help employees diagnose problems and repair them much faster than if they had to page through a manual or search the web for the information they need as they work. That can allow field service employees to bring equipment back online faster, reduce call-backs for further repairs, and help employees be more productive.

                  All these immersive experiences can be a selling point for candidates because of the convenience and support they offer. For younger candidates especially, immersive technology is also appealing because it’s familiar – members of Gen Z, now in the early stages of their careers, have grown up with screens and immersive experiences at home and at school.  

                  Planning immersive employee experiences 

                  Immersive EX isn’t just about the technology. It’s about the entire employee journey – and there are many journeys an employee can have, such as onboarding, taking maternity leave, and earning promotions. Companies can start by mapping one of their employee journeys to find ways to elevate it, using simple design techniques to reduce friction.

                  It’s important to start small and move quickly, perhaps with onboarding or training, to test out ideas and identify the right technologies to leverage as part of an immersive experience. This limited initial approach enables organizations to get feedback from users and other stakeholders that they can use to refine the experience before formally launching and scaling it. Then, organizations can use that immersive employee experience to get buy-in for additional immersive EX programs. 

                  As organizations’ immersive employee experiences gain traction, they can become a selling point for talent acquisition and a tool for retaining existing talent. Great immersive EX can also generate word of mouth that attracts talent, as employees share how the immersive technology they work with improves their experiences. By starting small and gradually building useful, supportive experiences, organizations can give themselves an advantage in recruiting and retention.

                  This article was first published on destinationCRM.com, August 18, 2022

                  Co-authored by Mike Buob and Alexandre Embry.

                  Alexandre Embry

                  Vice President, Head of the Capgemini AI Robotics and Experiences Lab
                  Alexandre leads a global team of experts who explore emerging tech trends and devise at-scale solutioning across various horizons, sectors and geographies, with a focus on asset creation, IP, patents and go-to market strategies. Alexandre specializes in exploring and advising C-suite executives and their organizations on the transformative impact of emerging digital tech trends. He is passionate about improving the operational efficiency of organizations across all industries, as well as enhancing the customer and employee digital experience. He focuses on how the most advanced technologies, such as embodied AI, physical AI, AI robotics, polyfunctional robots & humanoids, digital twin, real time 3D, spatial computing, XR, IoT can drive business value, empower people, and contribute to sustainability by increasing autonomy and enhancing human-machine interaction.

                  Mike Buob

                  Vice President of experience and innovation at Sogeti, part of Capgemini
                  Mike Buob is VP of Experience & Innovation at Sogeti, part of Capgemini. He helps clients create impactful experiences for their customers and organizations with their transformation and innovation initiatives. Mike has a diverse background in technology, innovation and strategy, including Artificial Intelligence, DevOps, Cognitive QA, IoT, Cybersecurity, Analytics, Digital Manufacturing, and Automation.