
Quantum computing: The hype is real—how to get going?

Capgemini
27 Apr 2023

We are witnessing remarkable advancements in quantum computing, in hardware as well as in theory and applications.

Now is the age of exploration: How, for example, will quantum machine learning differ from classical machine learning, and will it be beneficial or harmful for cyber security? Together with Fraunhofer and the German Federal Office for Information Security (BSI), we explored that unsettled question and found something sensible to do today. There are two effective ways in which organizations can start preparing for the quantum revolution.

The progress in quantum computing is accelerating

The first quantum computers were introduced 25 years ago (2 and 3 qubits), and the first commercially available annealing systems are now 10 years old. During the last 5 years, we have seen bigger steps forward, for example systems with more than twenty qubits. Recent developments include the Osprey chip with 433 qubits by IBM, first results on quantum error correction by Google, as well as important results in interconnecting quantum chips announced by MIT.

From hype to realistic expectations

Where some see steady progress and concrete steps forward, others remain skeptical and point out missing results or unkept promises—the most prominent of which lies in the field of factoring large numbers: there is still a complete lack of tangible results in breaking the RSA cryptosystem.

However, development in quantum computing has already passed various important milestones. Dismissing it as mere hype that will pass eventually now becomes increasingly difficult. In all likelihood, this discussion can soon be laid to rest, or at least refocused towards very specific quantum computing frontiers.

The domain of machine learning has a natural symbiosis with quantum computing. Especially from a theoretical perspective, research in this field is considered fairly advanced. Various research directions and study routes have been taken, and a multitude of results are available. While much research is done through the simulation of quantum computers, there are also various results of experiments run on actual, non-simulated quantum devices.

As both the interest in and the potential of quantum machine learning are remarkably high, Capgemini and the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS have delved deeply into this topic. At the request of the German Federal Office for Information Security (BSI), we went as far as analyzing the potential use for, as well as against, cyber security. One of the major results of this collaboration is the report “Quantum Machine Learning in the Context of IT Security”, published by the BSI. Current developments indicate that there is trust in quantum machine learning as a research direction and its (perceived) future potential.

Laggards increasingly lack opportunities

The ever-growing stream of better and more efficient IT technologies and products is not always reasonable to adopt and is often difficult to mirror in an organization. Nevertheless, innovation means that a certain “technology inflation” constantly devalues existing solutions. Therefore, an important responsibility of every IT department is to keep up with this inflation by implementing upgrades and deploying new technologies.

Let us consider a company that still delays the adoption of cloud computing. While this may have been reasonable for some in the early days, the technology has matured. Over time, companies that have shied away from adoption have missed out on various cloud computing benefits while others took the chance to gain a competitive advantage. What is more, the longer the adoption was delayed or the slower it was conducted, the further the company has allowed itself to fall behind.

Time to jump on the quantum computing bandwagon?

Certainly, quantum technology is still too new, too unstable, and too limited today to adopt it in a productive environment right away. In that sense, a pressure to design and implement plans for incorporating quantum computing into the day-to-day business does not exist today.

However, is that the whole story? Let us consider two important pre-implementation aspects: The first of these is to ensure everyone’s attention for the topic: For an eventual adoption, a widespread appreciation for what might be gained is crucial to get people on board. Without it, there is a high risk of failing—after all, every new technology comes with various challenges and demands some dedication. But developing the motivation to adopt something new and tackle the challenges takes time. So, it’s best to start early with building awareness and a basic understanding of the benefits throughout all levels and (IT) departments.

The second aspect is even more difficult to achieve: experience. This translates to know-how, participation, and practice within the organization to prepare for the adoption of technologies once they are ready for productive deployment. In the case of quantum computing, gaining experience is harder than with other recent innovations: In contrast, for example, to cloud computing—which constitutes a different way of doing the same thing, and thus allows companies to get used to it slowly—quantum technologies represent a fundamentally new way of computation, as well as a completely new approach to solving problems and answering questions.

The key to the coming quantum revolution is a quantum of agility

Bearing in mind the scale of both pre-implementation aspects and the uncertainty of when exactly quantum is going to deliver advantage in the real world, organizations need to start getting ready now. On a technical level, and in the realm of security, the solution to the threat of quantum cryptanalysis is the deployment of post-quantum cryptography. However, on an organizational level, the solution is crypto agility: having done the necessary homework to be able to adapt quickly to the changes, whenever they come. Applying the same concept, quantum agility represents having the means to adapt quickly to the fundamental transformations that will come with quantum computing.

Thus, building awareness and changing minds now will have a considerable pay-off in the future. But how can organizations best initiate this shift in mindset towards quantum? Building awareness is a gradual process that can be promoted by a working group even with small investments. This core group might for example look out for possible use cases specific to the respective sector. Through various paths of internal communication, they can spread the information in the proper form and depth to all functions across the organization.

To build up knowledge and experience, the focus should not be on viable products aiming to replace existing solutions within the company. Instead, it is about playing around with new possibilities, venturing down paths that might never yield any tangible results, with the aim of discovering guard rails specific to each corporation and examining fields where quantum computing might eventually lead to substantial competitive advantages.

Frontrunners are gaining experience in every sector

For example, some financial institutions are already exploring the use of quantum computing for portfolio optimization and risk analysis, which will enable them to make better financial predictions in the future. Within the pharma sector, similar efforts are made, gauging the potential of new ways of drug discovery.

In the space of quantum cyber security, together with the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, Capgemini has built a quantum demonstration: performing spam filtering on a quantum computer. While this might be the most overpriced—and under-engineered—spam filter ever, it is a functioning proof of concept.
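For readers who want a feel for what such a quantum classifier involves, below is a minimal sketch of a variational quantum-style classifier, simulated classically in Python/NumPy. It is purely illustrative: the two-qubit circuit, the toy features (suspicious-word ratio and link density), and the training loop are our own assumptions for this blog, not the demonstrator built with Fraunhofer IAIS.

```python
# Minimal sketch of a variational quantum classifier for spam filtering,
# simulated classically with NumPy (2-qubit state-vector simulation).
# Illustration of the general idea only -- NOT the Capgemini / Fraunhofer IAIS
# demonstrator; the features, circuit, and training loop are assumptions.
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
Z0 = np.kron(np.diag([1, -1]), np.eye(2))  # Pauli-Z on qubit 0

def expectation(x, params):
    """Encode two features as RY angles, entangle, apply trainable RYs, measure <Z0>."""
    state = np.zeros(4); state[0] = 1.0                     # start in |00>
    state = np.kron(ry(x[0]), ry(x[1])) @ state             # data encoding
    state = CNOT @ state                                     # entangling layer
    state = np.kron(ry(params[0]), ry(params[1])) @ state   # trainable layer
    return float(state @ (Z0 @ state))

def predict(x, params):
    return 1 if expectation(x, params) < 0 else 0            # 1 = spam, 0 = ham

def loss(params):
    # squared error against targets +1 (ham) / -1 (spam)
    return np.mean([(expectation(x, params) - (1 - 2 * t)) ** 2 for x, t in zip(X, y)])

# Toy data: features = (suspicious-word ratio, link density), scaled to [0, pi]
X = np.array([[0.2, 0.1], [0.3, 0.2], [2.6, 2.9], [2.8, 2.4]])
y = np.array([0, 0, 1, 1])

params, lr, eps = np.array([0.1, 0.1]), 0.3, 1e-4
for _ in range(200):                                         # crude numeric gradient descent
    grad = np.zeros_like(params)
    for i in range(len(params)):
        shift = np.zeros_like(params); shift[i] = eps
        grad[i] = (loss(params + shift) - loss(params - shift)) / (2 * eps)
    params -= lr * grad

print([predict(x, params) for x in X])                       # expected: [0, 0, 1, 1]
```

On real hardware, the same circuit would run on a quantum backend and the expectation value would be estimated from repeated measurements, but the structure—encode, entangle, apply trainable rotations, measure—stays the same.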

Justifying investment in quantum computing requires long-term thinking

The gap between companies in raising organizational awareness and gaining experience with the new technology is gradually growing. Laggards run a considerable risk of experiencing the coming quantum computing revolution as a steamroller, flattening everyone who finds themselves unprepared.

The risks and challenges associated with quantum technology certainly include the cost of adoption, the availability of expertise and knowledgeable talent, as well as the high potential of unsuccessful research approaches. However, the cost of doing nothing would be the highest. So, it’s best to start now.

We don’t know when exactly the quantum revolution will take place, but it’s obvious that IBM, Google, and many more are betting on it—and in Capgemini’s Quantum Lab, we are exploring the future as well.

Christian Knopf

Senior Manager Cyber Security
Christian Knopf is a cyber defence advisor and security architect at Capgemini and has a particular consulting focus on security strategy. Future innovations such as quantum algorithms are also in his field of interest, as are the recent successes of deep neural networks and their implications for the security of the clients he works with.

    Future IT: FinOps, GreenOps and sustainable cloud strategies

    Güncel Düzgün 
    26 Apr 2023

    Sustainable IT starts with public cloud…

    Driven largely by the Paris Agreement of 2015, businesses of all sizes are increasingly committed to sustainability. From reducing carbon emissions to implementing eco-friendly practices, there is a growing push towards sustainable business operations.

    This comes at a time when rapid growth of the digital economy is significantly increasing energy consumption and carbon emissions. The rise of computing has led to concerns about its environmental impact. As companies strive to become more environmentally conscious, a green concept for future IT has become a critical component of sustainable business operations.

    The role of public cloud will be central to sustainable IT. However, when managed haphazardly, public cloud can rapidly lead to waste and inefficiency – virtually bottomless and out of sight, public cloud tempts employees to shoot first and ask questions later. On the other hand, properly managed public cloud provides consistent energy savings in comparison to traditional on-premises data center solutions. This comes down to a conscious choice that each organization faces.

    …and continues with FinOps

    Leveraging energy-efficient technologies, optimizing resource usage, promoting remote work, developing green computing practices, and enabling collaboration among users and organizations, public cloud has the potential to significantly reduce carbon emissions – by up to 95% – compared to traditional on-premises IT infrastructure.

    Therefore, when I work with organizations to help them reduce their carbon footprints, one of my first recommendations is migrating their workloads to public cloud (if an organization hasn’t made that move yet). However, the cloud sustainability journey doesn’t finish with an eco-friendly public cloud environment; this is where it begins. Once the workloads are landed in the public cloud, it is important to run them with a sustainable cloud operation approach. That’s where FinOps comes in.         

    The role of FinOps in cloud sustainability

    FinOps (short for financial operations) is a cloud financial management discipline and cultural practice that focuses on tracking and optimizing cloud spending. It does so by bringing together IT, finance, engineering, and business teams, and providing a cultural mechanism for teams to manage their cloud costs, where everyone takes accountability for their cloud usage.

    FinOps – when applied correctly – ensures the optimal usage of cloud resources, which in turn promotes not only cost efficiency but also energy savings and carbon emissions reductions. In other words, if you do not build such a cost-usage-control framework in your public cloud environment – where resource deployment is unlimited due to a pay-as-you-go cloud consumption model – this suboptimal cloud usage leads to underutilization and waste in the form of excess energy consumption and carbon emissions. Therefore, FinOps is sustainable by design.

    FinOps helps organizations achieve their sustainability goals in the cloud by implementing cloud cost optimization techniques like:

    • Establishing FinOps governance and policies
    • Rightsizing and autoscaling of elastic resources
    • Shifting workloads to containers
    • Automated scheduling of compute services
    • Decommissioning of idle and unused resources (see the sketch after this list)
    • Reducing log ingestion
    • Optimizing storage tier and redundancy
    • Refactoring and mutualizing of applications by cloud-native architectures (e.g., platform as a service)
    • Budgeting, forecasting, and overspending alerts
    • Managing anomalies
    • Establishing FinOps culture and accountability
    • Switching usage-based pricing to new consumption models (reservations, saving plans, spot instances, etc.) for more efficient resource usage 
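
    As a concrete illustration of one item above – decommissioning of idle and unused resources – the sketch below shows what such a check might look like on AWS with boto3. It is a minimal, assumed example: the CPU threshold, the lookback window, and the decision to simply stop the instance are placeholders, not a production FinOps policy.

```python
# Minimal sketch: flag and stop idle EC2 instances -- the "decommissioning of
# idle and unused resources" technique above. Assumed example only: the CPU
# threshold, lookback window, and the decision to stop (rather than ticket)
# are illustrative placeholders, not a production FinOps policy.
import datetime as dt
import boto3

CPU_THRESHOLD = 2.0     # percent average CPU; assumed definition of "idle"
LOOKBACK_DAYS = 14

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")
end = dt.datetime.utcnow()
start = end - dt.timedelta(days=LOOKBACK_DAYS)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        datapoints = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=end,
            Period=86400,                 # one datapoint per day
            Statistics=["Average"],
        )["Datapoints"]
        if not datapoints:
            continue
        avg_cpu = sum(p["Average"] for p in datapoints) / len(datapoints)
        if avg_cpu < CPU_THRESHOLD:
            print(f"{instance_id}: avg CPU {avg_cpu:.1f}% over {LOOKBACK_DAYS} days -> decommission candidate")
            ec2.stop_instances(InstanceIds=[instance_id])
```

    In practice, such a script would run on a schedule, notify the owning team, and feed its findings into the FinOps reporting loop rather than acting unilaterally.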

    Brothers-in-arms – how FinOps supports GreenOps

    FinOps and GreenOps are two distinct methodologies that – when used together – form a powerful strategy for organizations looking to optimize their cloud usage and reduce their environmental impact.

    While FinOps is focused on managing cloud consumption, GreenOps is focused on implementing sustainable practices within an organization’s operations. It involves developing strategies that prioritize environmental sustainability while still achieving business objectives like reducing waste, promoting renewable energy sources, using eco-friendly materials and processes, and promoting a culture of sustainability and environmental responsibility. By identifying and mitigating carbon emissions, organizations can reduce their impact on the environment and contribute to a more sustainable future.

    There is a bit more to it though. GreenOps encompasses everything an organization does to reduce the carbon footprint of their cloud program. This might include relocating resources to companies or regions where sustainable energy is used or using code that requires less energy to run. But the greatest contribution by far is usually – in my experience – made by FinOps.

    The most obvious way is through optimization of cloud resource consumption, which naturally reduces energy consumption. FinOps can also help companies choose the most environmentally friendly cloud service providers. Best of all, the gains made by FinOps allow for continuous improvement and ensure that a company remains on track to achieve its sustainability targets.

    FinOps and GreenOps can create common work areas with unified dashboards, enabling teams to make joint decisions. A great example of how FinOps and GreenOps can work together is through the optimization of supply chains. Together, teams can identify areas where supply chain inefficiencies are leading to unnecessary energy consumption so that organizations can reduce their carbon emissions and save money. FinOps can be used to identify supply chain inefficiencies, while GreenOps can be used to implement sustainable supply chain solutions.

    Future IT: Sustainable and cost-effective cloud operations

    Sustainability is a natural benefit of FinOps – one of many. In fact, FinOps can be an essential framework for organizations looking to reduce their environmental impact while saving money. Moreover, when combined, FinOps and GreenOps can create a more holistic approach for companies to manage their cloud resources in a sustainable and cost-effective way while also maximizing the business value of public cloud.

    Incorporating FinOps recommendations into cloud operations can help drive both sustainability and cost-effectiveness. Creating a vision for end-to-end cloud operations and decarbonizing procurement are essential first steps that enable teams to identify and optimize savings opportunities. Continuous improvement and automation, empowering teams to take action, and promoting a culture of green accountability is crucial for long-term success. By adopting these measures, businesses can achieve their financial goals while also contributing to a more sustainable future.

    Looking to go deeper into FinOps? Check out our FinOps services & download the whitepaper developed by IDC and commissioned by Capgemini to explore the full potential of FinOps and the value it brings to every stage of the cloud journey.

    Meet our expert

    Güncel Düzgün 

    Global FinOps Offer Co-Lead

      Ensuring Aerospace & Defense supplier resilience & sustainability in a volatile world

      Gilles Bacquet
      24 Apr 2023

      Over the last few decades, big manufacturing companies have created vast global networks of suppliers, perfectly set up to deliver critical parts just-in-time.

      But, supply chains have proven more fragile than once thought. The global pandemic showed big shocks can create dramatic change, which suppliers may not be ready for. And when we emerge from that change – whether a day or a year later – individual suppliers may not be the same. They may have gone bust, lost key staff, pivoted to different customers, or found they are simply not setup for the new situation before them.

      Hopefully the next pandemic is years away. But shocks are becoming more common and the dream of ‘borderless’ supply chains is fading. Whether it’s Brexit, Ukraine, lockdowns, protectionism, sanctions, falling-outs, or someone getting their boat stuck in the Suez Canal, the world is less stable for suppliers, which means those who rely on them need to be more vigilant.

      This is felt particularly strongly in Aerospace and Defence, industries where around 80% of any finished product comes from the supply chain, and which are currently expected to scale up in this time of supplier uncertainty.

      Forecasts suggest 40,000 new planes will be needed in the next 20 years, off the back of a rapid scale-down during the pandemic. Defence had been reducing spend for years and discontinued many products, but now must restock and reinvent as military threats rise and support for Ukraine depletes supplies.

      As they build and evolve supply chains, they will face new challenges. For example, materials from aluminium to chips are in high demand from automotive, a far more material-intensive industry. And they will need to pay more attention to sustainability – new rules and pressures may mean that polluting suppliers will not be allowed to stay on the books for long.

      So, they need sustainable supply chains, with long-term certainty, to fuel a massive production increase, in the face of stiff competition for resources, in an unpredictable world.

      What should they do?

      Building resilient & sustainable supply chains from the bottom up

      At the top level, a resilient supply chain might be seen as a control tower, with visibility of stock levels and events, backed by whizzy AI making real-time optimisations and recommendations. And indeed Capgemini has insight on how to build such a system.

      But such a system is only as good as the information that goes into it. In a volatile world, that means understanding the risks facing suppliers themselves, so you can make decisions at an individual supplier level, which embeds resilience across your supply chain. 

      What are the risks to resilience in your supply chain?

      Some risks lead to problems that are highly specific. A broken machine will delay orders. If the supplier is slow to act, the solution may literally be to send your auditor back to oversee the ordering of replacement parts and repair.

      Others may be more structural. If suppliers are becoming reluctant to sell you aluminium – e.g. because of a slowdown in global supply – you may need to change your approach, such as moving your commitment from six months to five years, bidding higher to secure priority, or adding new suppliers. That needs careful analysis of your needs. If you make a commitment and the supplier goes bust or you change to a non-aluminium design, you lose out. But if you do nothing, you may not have the materials to make your product.

      Increasingly, being resilient means being sustainable. As new climate regulations emerge and consumers pile on pressure, working with polluting suppliers will cease to be viable. If those suppliers are important to your product, then that poses a threat to resilience, meaning you need to make them change, or find new ones.

      The first action is to aggregate all GHG emissions from the supply chain (from raw materials to Tier 1 suppliers) in order to calculate inbound emissions and generate Scope 3 reports – covering all indirect emissions (not included in Scope 2) that occur in the value chain of the reporting company, both upstream and downstream. Defining industry standards and tools for this reporting remains a challenge to be tackled.
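
      To make the aggregation step concrete, the sketch below rolls supplier-reported emissions up into inbound (upstream Scope 3) totals per sourced product, falling back to a spend-based estimate where no reported figure exists. The data layout and the emission factor are illustrative assumptions, not an industry standard.

```python
# Minimal sketch: aggregate supplier GHG emissions (raw materials to Tier 1)
# into inbound (upstream Scope 3) totals per sourced product. The data layout
# and the spend-based fallback factor are illustrative assumptions.
from collections import defaultdict

deliveries = [
    {"supplier": "AluCo",   "product": "wing rib",  "tier": 1, "reported_tco2e": 120.0, "spend_eur": 0},
    {"supplier": "ChipInc", "product": "avionics",  "tier": 1, "reported_tco2e": None,  "spend_eur": 2_500_000},
    {"supplier": "OreMine", "product": "wing rib",  "tier": 3, "reported_tco2e": 340.0, "spend_eur": 0},
]
SPEND_FACTOR = 0.00025  # assumed tCO2e per EUR for suppliers without reported data

def emissions(record):
    # prefer supplier-reported figures, fall back to a spend-based estimate
    if record["reported_tco2e"] is not None:
        return record["reported_tco2e"]
    return record["spend_eur"] * SPEND_FACTOR

inbound_by_product = defaultdict(float)
for rec in deliveries:
    inbound_by_product[rec["product"]] += emissions(rec)

print("Inbound Scope 3 emissions by product (tCO2e):", dict(inbound_by_product))
print(f"Total inbound Scope 3: {sum(inbound_by_product.values()):.1f} tCO2e")
```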

      Understand the risks, and make changes to build resilience

      Understanding where you have these and other problems means doing the hard work of visiting suppliers. This may be part of a continuous auditing process, part of onboarding, or a specific intervention following a shock. For example, after the pandemic reopening, we found ourselves working with a client to audit their entire 200-company supply chain in just a couple of weeks, to assess readiness to scale in an uncertain time.

      Either way, it means sending experts to suppliers’ sites to understand their situation, and gather info. These must be people who can spot problems or risks that would not show up in a call or data analysis, such as lower than reported stock, incorrect storage, lack of skills, crumbling machinery, or higher than claimed emissions.

      Individual supplier resilience data can be combined with Business Intelligence (BI) tools, risk models and expert analysis to build up full situational awareness across the supply chain – a moderate risk at one supplier may be manageable, but if that risk is replicated across all suppliers of that product, that may be a red flag that needs addressing.
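
      To illustrate that “replicated risk” point, here is a small, assumed sketch of how individual supplier risk scores might be rolled up per sourced part to flag single points of failure; the scoring scale and the flagging rule are placeholders, not a recommended risk model.

```python
# Minimal sketch: roll individual supplier risk scores up per sourced part and
# flag parts where no low-risk supplier remains. The 0/1/2 scoring scale and
# the flagging rule are assumed for illustration, not a recommended risk model.
supplier_risks = [
    {"part": "hydraulic pump", "supplier": "A", "risk": 1},
    {"part": "hydraulic pump", "supplier": "B", "risk": 1},  # same moderate risk everywhere
    {"part": "fastener",       "supplier": "C", "risk": 2},
    {"part": "fastener",       "supplier": "D", "risk": 0},  # at least one healthy alternative
]

risks_by_part = {}
for entry in supplier_risks:
    risks_by_part.setdefault(entry["part"], []).append(entry["risk"])

for part, risks in risks_by_part.items():
    if min(risks) >= 1:  # no low-risk supplier left for this part
        print(f"RED FLAG: every supplier of '{part}' carries elevated risk {risks}")
    else:
        print(f"OK: '{part}' still has at least one low-risk supplier {risks}")
```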

      Having understood the problems and risks, you can identify a remediation or recovery plan. That may include upgrades, training, new processes, data collection and reporting. You must also ensure your supplier implements it, using an appropriate mix of carrots (committed orders, investment) and sticks (threats to take business elsewhere).

      Even with the best laid plans, suppliers fail, and many companies take months or years to onboard new ones. In a volatile world, these processes need to be revisited to allow much quicker onboarding and transferring of work. This is why implementing a sustainable procurement strategy now is critical: to select only low-emission new suppliers and to develop current ones so that they improve their GHG emissions in anticipation of future restrictions.

      Managing everyday issues

      Supply chains are complicated and even reliable suppliers get things wrong. Another key part of supplier resilience is having quick and efficient processes for resolving day-to-day problems, which can quickly add up to a large cost of doing business.

      These are issues like rejecting defective goods. Often companies handle this at a local level, but a central team with a dedicated company-wide platform is usually more efficient. When there is an issue at any level, it is flagged in the system and resolved by dedicated experts, who process the issue, reorder, resolve payments, and close it out.

      Additionally, the key to getting this right is having people with technical knowledge, change management skills, and the soft skills to get suppliers to listen and act. Having these skills in local teams is also important, as knowledge of language and culture are critical to getting results. Local teams are also vital to maintaining a low carbon footprint.

      The key to success here is mixing central expertise with a network of local experts.

      Conclusion

      In an ideal world – something like the film Minority Report – we would predict exactly what will happen and act in advance, sending instructions straight to suppliers to change focus and adjust for the upcoming shock.

      This is not realistic. The world is too complex to predict every shock, from a machine breaking at a critical moment to a global lockdown. But we can be ready for problems, with resilient suppliers and processes to help them adapt and respond, including the integration of environmental protection (GHG) as a key pillar. That means a combination of on-the-ground experts, processes, and technology.

      Meet our expert

      Gilles Bacquet

      Senior Portfolio & Product Manager, Resilient & Sustainable Supply Chain offers owner
      Gilles is a Production & Supply Chain engineer who joined the Capgemini group in 2001. Starting as a consultant expert in Supplier Quality Management for the automotive and aeronautics industries, he extended his responsibilities to creating the Supply Chain offer and developing business overseas. Today he leads the Resilient & Sustainable Supply Chain offers for Capgemini Engineering.

        One platform to rule them all – or not?

        Gustaf Soderlund
        24 April 2023

        Around 15 years ago, I was involved in a large ERP (Enterprise Resource Planning) system deal. It involved HR capabilities (including talent management, compensation, etc.), financial capabilities (like accounts payable and receivable), logistics capabilities (stocks and inventory), and procurement capabilities (vendor management, etc.). At the time, the obvious choice and the best solution was to have all these capabilities integrated into one platform, and the coolest at the time was the SAP system (possibly it still is). The major advantage was having it all connected: with a huge number of transactions handled on one platform by everyone, the cost per transaction went down considerably (economies of scale).

        Could this approach of amalgamation onto one platform work in the DPA world as well? Does it make sense to run all the processes on one platform in order to get the economies of scale? For many years, organizations have decided to stick with one platform to see cost per transaction go down, a simplification of the architecture, and the advantage of only building skills on one platform. And, if you trust everything the polished vendor slides are saying, you should use the same platform for CRM, Customer Service, and Sales to really reap all the benefits. The story of one platform (to rule them all) is indeed quite convincing! And this trend (especially the use of a platform for many purposes) has been going on for some time now (just look at the waves and quadrants). The big question is: how come not all organizations are ‘buying it’?

        In my view it all comes down to two other strategies, ‘best-of-breed’ and ‘fit-for-purpose’. Adopting a best-of-breed strategy means you want the best technical capabilities from the leading vendor in each specific area, be it DPA or CRM or Marketing automation. This is not new and explains why the ERP concept wasn’t as successful in the CX area. However, having a fit-for-purpose approach is newer.

        Imagine you bought a state-of-the-art DPA platform (like Pega or Appian) to manage your super complex and highly regulated payment processes with embedded business rules for payment investigations and disputes. The platform might be quite expensive, but considering what you’re saving in quality, compliance, and automation – ‘it’s so worth it!’. For this exact purpose, the leading DPA platform would be a perfect fit, especially if it’s a framework (like a regulatory scheme) that the vendor consistently updates twice a year. Now imagine a few years down the line, you’re implementing GDPR. Still highly regulated and somewhat complex. The leading DPA solution might still be fit-for-purpose here. But what happens if the bank wants to send out a survey about the new mobile app and is offering each respondent two movie tickets, sent to their home address? Probably, the leading DPA platform is a bit of an overkill for this purpose, while a simple low-code tool or a basic case management tool could be fit-for-purpose here.

        In the last month alone, I’ve come across three very large banks that are using several process platforms at the same time, as part of their strategy. On the other hand, medium-sized or smaller organizations may not be able to afford three parallel technologies based on different use cases. There, it would make more sense to see if there are platforms that are good enough for all (or at least most) of the prospective use cases and usage areas. The advantages of using one platform for DPA, CRM, Customer Service, and Sales can also be substantial there. Especially since you have only one technology to maintain, one technology skill to build capabilities on, etc., which would argue the case for ‘one platform to rule them all’. Are you unsure of where your new use case would fit best? Feel free to reach out to us!

        Author

        Gustaf Soderlund

        Global VP Public Sector Sweden, Nordics
        Gustaf has many years of experience selling, delivering, and leading business process and customer engagement solutions in a variety of industries, including banking and insurance. Gustaf currently leads Pega globally and is the Augmented Services leader for Financial Services.


          Truck OEMs and sustainability:
          Realizing the ambition

          Fredrik Almhöjd
          21 Apr 2023

          Net zero targets are a great start, but many commercial vehicle manufacturers have yet to put together a credible strategy for reaching them. A holistic approach is key, believes Fredrik Almhöjd, Director, Automotive & Manufacturing at Capgemini and the company’s go-to-market lead for commercial vehicles in the Nordics.

          Climate change is now widely recognized by commercial vehicle (CV) manufacturers as one of our generation’s biggest challenges, and most companies seem determined to tackle it. However, many find that, while it’s relatively easy to define net zero targets, creating a coherent strategy for achieving them is trickier.

          This article takes a general look at the CV industry’s sustainability ambitions and concerns and proposes a holistic response. In subsequent articles, I’ll delve deeper into some key aspects of this topic.

          A strategic imperative

          Until recently, automotive OEMs tended to view the “sustainability agenda” as a box to be ticked for PR purposes. That picture has now changed drastically. With transportation accounting for 37% of global CO2 emissions in 2021 according to the International Energy Agency, stakeholders including regulators, customers, and the public are piling on the pressure for OEMs to lower emissions in line with the Paris Agreement and similar targets.

          As a result, automotive industry boards now recognize the strategic importance of sustainability and have put it at or near the top of their agendas. One sign of this recognition is that more and more corporations are appointing Chief Sustainability Officers. The Harvard Business Review recently reported that in 2021 more CSOs were appointed than in the previous five years together – that’s for all industries but we see a similar trend in automotive.

          In line with this trend, all the major truck OEMs communicate clear, ambitious goals. Many of these companies have signed up for the Science Based Targets initiative (SBTi) to help them achieve Paris Agreement objectives, for example.

          Truck OEMs’ goals include phasing out diesel in favour of fossil-free trucks within the next decade. While there’s general agreement that this needs to happen – at least in most markets – many OEMs have yet to formulate a clear strategy regarding battery electric vehicles (BEVs) and fuel cell electric vehicles (FCEVs). BEV will most probably be appropriate for regional and distribution trucks, while FCEV will be the usual choice for the long haul – and so both are likely to be in the portfolio.

          What’s still missing?

          While OEMs’ product plans for zero-emission vehicles are already well advanced, they are not yet able to realize their overall sustainability vision. That’s because most companies do not yet have a holistic, systematic approach. Such an approach needs to look beyond the product portfolio and address the whole automotive product lifecycle, and much more besides.

          The other thing that’s lacking is speed. To stand a chance of reaching the Paris Agreement and similar targets, the industry urgently needs to move from talking the talk to taking meaningful action.

          A holistic approach to Commercial Vehicle Sustainability

          In planning the approach, it’s helpful to think in terms of three building blocks:

          1. Sustainability culture

          Companies need to work toward sustainability across the end-to-end lifecycle. This requires the creation of a whole portfolio of sustainable products and services, with an emphasis on the circular economy.

          For that to happen, the whole organization – and its ecosystem – needs to move to a sustainability-aware culture. Senior management should communicate clear targets and KPIs that support sustainability ambitions. These targets and KPIs must be translated into meaningful goals and incentives for everyone involved, from the boardroom to the shop floor.

          With the right culture in place, the journey to sustainability will rapidly gather momentum, as leading OEMs are already discovering.

          2. Reliable analysis and reporting

          To navigate and manage the journey, it is critical to be able to measure progress. Sound metrics are also vital to substantiate sustainability claims and fend off accusations of greenwashing.

          OEMs, therefore, need to gather accurate, up-to-date data about all activities and projects. They also need to put in place the analytic tools to report progress against baselines and targets at any required level, as well as to deliver comprehensive ESG reports.

          The right data and connectivity architecture is critical because real-time or near-real-time data may be needed on occasion. Our report Driving the Future with Sustainable Mobility makes the case for implementing an “intelligence nerve centre” to address this requirement.

          3. Methodical innovation

          Innovation is a key enabler of sustainability, and OEMs need to have clear strategies for achieving it – whether in-house, via partnerships, or most likely through a combination of methods.

          It’s not just technical innovation that’s needed. New business models will also be required – particularly circular economy models.

          The need for collaboration

          Sustainability can’t be achieved by any one company in isolation. Let’s look at just a few examples where collaboration with other organizations is essential.

          1. Working with governments

          OEMs should lobby governments to incentivize the take-up of EVs, as well as to put in place low-emission zones and similar restrictions. Governments also have a part to play in establishing the necessary infrastructure for BEVs and FCEVs.

          2. Working with ecosystem partners

          Charging and power companies can make a big contribution to sustainable transportation. Less obviously, perhaps, the same is true of technology providers developing digital services for customers, because such services can facilitate more sustainable vehicle use.

          OEMs also need to collaborate with parts suppliers to ensure that inputs are produced and delivered as sustainably as possible.

          Don’t forget the opportunities

          This article has focused on the challenges of sustainability, but there are great opportunities as well, not least in terms of securing access to investment by demonstrating compliance with stakeholders’ ESG targets. Circular economy models, too, have the potential to generate new revenues, as well as help companies overcome sustainability challenges.

          These opportunities will be covered in more depth in future articles, as will the topic of collaboration and the need for a sustainability culture.

          Meanwhile, please contact me if you’d like to discuss any of the issues raised here or learn how Capgemini can support your sustainability journey.

          About Author


          Fredrik Almhöjd

          Director, Capgemini Invent
          Fredrik Almhöjd is Capgemini’s Go-to-Market Lead for Commercial Vehicles in the Nordics, with 25+ years of sector experience plus extensive knowhow in Sales & Marketing and Customer Services transformation.

            How governments are using IT to shrink their carbon footprints

            Gunnar Menzel
            19 Apr 2023

            The public sector has a mandate to lead on sustainability, and this includes their IT footprint. How can governments achieve digital growth while cutting carbon emissions?

            In my experience working with public sector leaders, an often-overlooked source of emissions comes from IT. The global IT industry is responsible for approximately 3% of total CO2 emissions, and if left unchecked, this could grow to as much as 10%. Fortunately, more and more organizations, public and private, are taking the necessary steps to reduce their carbon footprints:

            1. Reducing the energy needs of an IT system and making sure that only sustainable energy sources are being used
            2. Using IT and data technologies to help drive down an organization’s total CO2 emissions, with potential reductions of as much as 20%.

            Let’s start with some simple tools that public sector leaders can use to control their IT systems’ carbon footprints, and then I’ll turn to some larger sustainability strategies made possible by new technologies.

            Sustainable IT is observable IT

            The first step to managing your IT CO2 footprint is understanding how your IT system works, and where it draws energy. This can be done with a tool like an application portfolio manager, by conducting a carbon audit, or by using a carbon calculator. For public sector leaders interested in going a step further, you can’t do better than a digital twin. A digital twin is a virtual replica of a system (in this case, an IT system), which provides a team with an x-ray view into how it functions. Digital twins make it possible to experiment with changes virtually, before implementing them in real life, and they’re an outstanding tool for general understanding. Governments are often saddled with legacy technologies, which require inordinately high costs in the form of energy and maintenance. Optimizing public sector IT systems should be step one for any department that can spare the up-front investment. The benefits are immediate and lasting.
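
            For a feel of what even a basic carbon calculation involves, the sketch below estimates the annual footprint of a small on-premises server estate from power draw, data-center overhead (PUE), and grid carbon intensity. All input figures are assumed, illustrative values, not reference data.

```python
# Minimal sketch of an IT carbon calculator: annual CO2e of an on-premises
# server estate from power draw, data-center overhead (PUE), and grid carbon
# intensity. All input values are illustrative assumptions, not reference data.
servers = [
    {"name": "legacy-erp-db",  "count": 12, "avg_power_w": 450},
    {"name": "file-servers",   "count": 30, "avg_power_w": 300},
    {"name": "virtualisation", "count": 8,  "avg_power_w": 600},
]
PUE = 1.8              # assumed power usage effectiveness of the data center
GRID_INTENSITY = 0.35  # assumed kg CO2e per kWh of grid electricity
HOURS_PER_YEAR = 24 * 365

total_kwh = sum(s["count"] * s["avg_power_w"] for s in servers) / 1000 * HOURS_PER_YEAR * PUE
total_tco2e = total_kwh * GRID_INTENSITY / 1000

print(f"Estimated annual energy use: {total_kwh:,.0f} kWh")
print(f"Estimated annual footprint:  {total_tco2e:,.1f} tCO2e")
```

            The same arithmetic, applied per application with data from an application portfolio manager or a digital twin, is what turns observability into a prioritized optimization backlog.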

            How data can help governments radically reduce carbon footprints

            Now, what happens if you turn those same optimization tools on your overall carbon footprint? In my work with the public sector and private sector, I’m often struck by two things: how little people know about their organization’s energy usage, and how easy some of the gains can be.

            Once an organization’s energy usage is observable, the doors are open to optimization. Among the specific tools that can lower an organization’s carbon footprint are:

            • Going digital
            • Moving to the cloud
            • IoT

            Let’s look at each in turn.

            Cutting carbon by going digital 

            While digitization can be a valuable tool in the pursuit of sustainability, it’s important to approach it with a nuanced understanding of the potential risks and benefits. On one hand, digital systems reduce the need for paper-based processes and streamline operations, leading to a more efficient use of resources and reduced labor. They can also enable governments to collect and analyze data, providing better visibility into their sustainability efforts.

            However, there are also potential downsides to digitization. For example, gathering and processing more data requires energy, which can contribute to carbon emissions. Additionally, the proliferation of digital devices can lead to a rise in e-waste if not managed responsibly. One report by the Capgemini Research Institute found that less than half of executives are aware of their companies’ IT carbon footprints, or of the steps they might take to reduce them.

            The key is planning. Digitization is a valuable tool for governments committed to sustainability, when combined with a holistic understanding of its energy usage, and when management take steps to mitigate any negative effects. With the right strategy in place, digitization becomes a powerful enabler of decarbonization and resource efficiency.

            Cutting carbon by migrating to the cloud

            Another way that IT can help reduce CO2 emissions is through the implementation of cloud computing. By moving data and applications to the cloud, private and public sector organizations can dramatically reduce their energy consumption and carbon footprints.

            One example of a government saving energy by migrating to the cloud is the U.S. Government’s Cloud Smart initiative, which encourages federal agencies to move their IT systems to the cloud. As a result, agencies can reduce the number of physical servers they need to maintain, which in turn lowers energy consumption and greenhouse gas emissions.

            Cutting carbon through IoT 

            By connecting devices and systems, the Internet of Things (IoT) also helps organizations optimize their operations and reduce energy consumption. For instance, a smart building can automatically adjust lighting and temperature based on occupancy levels, resulting in significant energy savings.
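
            As a toy illustration of that occupancy-based control logic (the zones, sensor values, and setpoints are assumptions, not a specific building-management product):

```python
# Toy sketch of occupancy-driven building control: dim lights and relax the
# heating setpoint when a zone is empty. Zones, sensor values, and setpoints
# are assumed for illustration, not a specific building-management product.
zones = [
    {"zone": "open-office-1",  "occupancy": 42, "lights_pct": 100, "setpoint_c": 21.0},
    {"zone": "meeting-room-3", "occupancy": 0,  "lights_pct": 100, "setpoint_c": 21.0},
]

for z in zones:
    if z["occupancy"] == 0:
        z["lights_pct"] = 10      # keep minimal safety lighting only
        z["setpoint_c"] = 17.0    # relax heating in empty zones
    print(f"{z['zone']}: lights {z['lights_pct']}%, heating setpoint {z['setpoint_c']} °C")
```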

            However, reducing CO2 emissions is not the only benefit that IT can offer. It can also open new opportunities and help address wider sustainability challenges. For example, using IT to improve supply chain management can help organizations reduce their environmental impact by reducing the amount of waste and increasing the efficiency of their operations.

            Maintaining focus in difficult times

            In the public sector today, new events constantly vie for attention. Inflation, the war in Ukraine, chemical train derailments and other challenges must not distract public sector organizations from addressing the global warming challenge. IT has an important role to play in reducing CO2 emissions and helping to create a more sustainable future. By understanding our current CO2 footprint, establishing proper governance, selecting and scaling the right use cases, and using real-world examples, we can make a meaningful impact. Let’s take action and do our part to protect our planet.

            Read more about real-life use cases for carbon-cutting IT in TechnoVision for Public Sector, our yearly look at leading technological applications in the public sector space. For more information, contact me at gunnar.menzel@capgemini.com.

            Author

            Gunnar Menzel

            Chief Technology Officer North & Central Europe 
            “Technology is becoming the business of every public sector organization. It brings the potential to transform public services and meet governments’ targets, while addressing the most important challenges our societies face. To help public sector leaders navigate today’s evolving digital trends we have developed TechnoVision: a fresh and accessible guide that helps decision makers identify the right technology to apply to their challenges.”

              The metrics that matter

              Vinay Patel
              20 April 2023

              How banks are using data to monitor and manage their customer experience performance

              It’s 2023, and banks are still being asked to go in two directions at once.
              On one hand, banks must keep up with customers’ evolving expectations, while on the other, they must continuously improve business profitability. It’s a challenge that requires a high degree of coordination and strategic planning. Unfortunately, for many banks a lack of effective metrics and processes is hindering their ability to make informed, real-time decisions, and that’s holding them back. This blog will explore how effective customer experience performance management can help banks achieve their business objectives with optimal customer satisfaction.

              The intersection of business and customer experience
              Banks must focus on achieving specific business improvements such as cost reduction and revenue enhancement – without sacrificing customer experience. Preference should be given to transformation projects that meet these expectations. Projects such as omnichannel communication, self-service adoption, knowledge management, mobile-first design, and CCaaS all have the potential to improve the customer experience and drive business results.

              To improve customer service, banks must define specific, controllable customer service activities that can be performed, measured, and improved at each level of the organization. This should include coordination across the customer life cycle, technology acquisition, processes, customer interactions, and collaboration with partners. There’s a term for this process: customer service performance management. The benefits of customer service performance management are clear – streamlined decision making, faster process delivery, and lower customer service expenditures. So… where’s the snag?

              Defining metrics…
              If you can’t measure it, you can’t master it. The key to focusing attention and effort is to define the right metrics. This means leveraging data analytics and customer feedback to gain insights into customers’ behavior and preferences. It means enhancing data-sharing capabilities between departments. Do your customer service agents have access to your CRM data? Are salespeople leveraging customers’ unique histories? By defining and tracking metrics, and by sharing them securely between teams, banks can improve customer experience and drive profitability, ultimately achieving sustainable growth and long-term success in the dynamic banking industry.
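
              As a simple, assumed example of turning raw interaction data into shared metrics – the record layout, channels, and figures below are placeholders, not a recommended schema:

```python
# Minimal sketch: compute a few shared customer-service metrics from
# interaction records. The record layout, channels, and values are assumed
# placeholders, not a recommended schema or target figures.
interactions = [
    {"channel": "chat",  "handle_time_s": 240, "resolved_first_contact": True,  "csat": 5},
    {"channel": "phone", "handle_time_s": 540, "resolved_first_contact": False, "csat": 3},
    {"channel": "chat",  "handle_time_s": 180, "resolved_first_contact": True,  "csat": 4},
]

n = len(interactions)
avg_handle_time = sum(i["handle_time_s"] for i in interactions) / n
fcr_rate = sum(i["resolved_first_contact"] for i in interactions) / n
avg_csat = sum(i["csat"] for i in interactions) / n

print(f"Average handle time:      {avg_handle_time:.0f} s")
print(f"First-contact resolution: {fcr_rate:.0%}")
print(f"Average CSAT (1-5):       {avg_csat:.1f}")
```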

              …that drive improvement
              With a set of data-driven metrics, step two is using those metrics to inform decisioning at every level. Some banks find it helpful to appoint a Customer Experience Officer with the authority to determine whether the defined metrics and activities support or inhibit CX goals. They should also groom and compensate customer service managers based on their enterprise vision of customer service. Additionally, banks must build a real-time analytical framework to ensure that a customer is treated appropriately at every phase in the customer life cycle.

              Here are several more recommendations for improving service for banking customers:

              1. Appoint a Customer Experience Officer with authority to determine whether the defined metrics and activities support or inhibit collaboration between and among multiple groups.
              2. Groom and compensate customer service managers based on their enterprise vision of customer service.
              3. Evolve the concept of customer life cycle management from an often discussed but poorly administered concept to a more practical approach.
              4. Measure the specific effects technology has on decision making, organizational structures, business processes and customer expectations.
              5. Build a real-time analytical framework to ensure that a customer is treated appropriately at every phase in the customer life cycle.


              A foundation for lasting customer loyalty
              By making customer experience an integral part of the overall business strategy, banks can improve customer satisfaction, build customer loyalty, and enhance overall business performance. This process can also help break down silos within the organization, leading to better collaboration and communication across departments, for a more unified and cohesive customer experience.

              Ultimately, banks must view customer service as an enterprise-wide business objective and prioritize effective performance management to achieve their business objectives. By leveraging customer initiatives across departments, mapping touchpoints to ensure consistency, and communicating a clear customer service roadmap to employees, banks can better meet customer expectations and achieve their business objectives.

              Author

              Vinay Patel

              Senior Director, Contact Center Transformation Leader
              The Banking and Capital Markets sector is focused on delivering a customer-centric contact center, leveraging a customer experience hub to optimally engage customers across interactions.


                6G for the hyperconnected future

                Capgemini
                17 Apr 2023

                A point of view on the technology advancements in 6G platforms and ecosystems.

                Life in 2030

                In 2030, the world will look dramatically different due to technological advancements in connectivity and associated technologies. The metaverse is likely to become fundamental to everyday life. 8K virtual reality (VR) headsets and brain-interface devices will probably become mainstream. There could be widespread proliferation of level-5 autonomous vehicles and hyperloop tunnels could enable faster international travel. Hypersonic airliners could enter service. “Smart Grid” technology will become widespread in the developed world. 3D-printed organs, blood vessels, and nanorobotics may improve our quality of life. Artificial brain implants could restore lost memories.  Quantum computing may become cheap enough to be mainstream. The first version of the quantum internet is likely to emerge, with terabyte internet speeds becoming commonplace. The entire ocean floor will probably be mapped, making deep ocean mining operations feasible. Hypersonic missiles will be a plausible addition to most major militaries, as will be AI-enabled warfare. The High-Definition Space Telescope (HDST) could be operational. The first permanent lunar base could be established.

                Making this new hyper-connected world a reality will require a massive leap forward; one that provides 1,000 times faster connectivity than what is possible today, with data transfer speeds in terabytes per second and extremely low latency allowing response times of a few microseconds.

                Although 5G networks are slowly maturing, and their full potential is still to be unleashed, the limits of 5G do not allow infrastructures and networks to simultaneously guarantee a speed of terabytes/second with extremely low latency. This calls for thinking beyond 5G.

                Mobile networks: past, present, and future

                Wireless cellular communication networks have seen the rise of a so-called new-generation technology approximately every ten years and each consecutive generation has resulted from disruptive technological advancement and societal change (Figure 1). If this trend continues, 6G may be introduced in the early 2030s, or at least that’s when most smartphone manufacturers will release 6G-capable mobiles, and 6G trials will be in full swing.

                Figure 1: Mobile networks: past, present, and future

                It is too early to provide a detailed list of features that 6G will bring, but emerging research themes are shaping new technologies such as new spectrum, visible light communication, AI-native radio, cell-free networks, intelligent surfaces, holographic communication, and non-terrestrial networks (satellites, High Altitude Platforms (HAPs), drones, etc.). In addition, the lessons learned from 5G network deployments and user ecosystems will play a big part in defining 6G.

                What really is 6G and how it is shaping up?

                6G is expected to provide hyper-connectivity that will lessen the divide between humanity and the inanimate world of machines and computers.

                Considering the general trend of successive generations of communication systems introducing new services with more stringent requirements, it is reasonable to expect that 6G will build on the strengths of 5G and introduce new technologies with requirements that far exceed the capabilities of 5G.

                Regulatory bodies are considering allowing 6G networks to use higher frequencies than 5G networks. Since spectral efficiency, bandwidth, and network densification are the three main ingredients needed to achieve higher data rates, this is likely to provide substantially higher capacity and much lower latency. Terahertz (THz) bands from 100 GHz to 10 THz are currently being considered. This will allow the delivery of a peak data rate of 1,000 gigabits/second with over-the-air latency lower than 100 microseconds. The current intent is to make 6G 50 times faster than 5G, 10,000 times more reliable, and able to support ten times more devices per square kilometer while offering wider coverage.
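
                To see why the move toward (sub-)THz spectrum matters, a rough Shannon-capacity estimate is instructive; the bandwidths, SNR values, and stream counts below are assumed purely to show orders of magnitude.

```python
# Rough Shannon-capacity estimate showing why THz-range bandwidths (plus
# spatial multiplexing) are needed for Tb/s-class peak rates. The bandwidths,
# SNR values, and stream counts are assumed, purely to show orders of magnitude.
import math

def shannon_capacity_gbps(bandwidth_hz, snr_db, streams=1):
    snr_linear = 10 ** (snr_db / 10)
    return streams * bandwidth_hz * math.log2(1 + snr_linear) / 1e9

print(f"5G-like:    400 MHz, 20 dB, 4 streams -> {shannon_capacity_gbps(400e6, 20, 4):8.1f} Gb/s")
print(f"6G mid:      10 GHz, 15 dB, 8 streams -> {shannon_capacity_gbps(10e9, 15, 8):8.1f} Gb/s")
print(f"6G sub-THz: 100 GHz, 10 dB, 4 streams -> {shannon_capacity_gbps(100e9, 10, 4):8.1f} Gb/s")
```

                The point of the exercise: Tb/s-class peak rates only become plausible once tens to hundreds of gigahertz of spectrum are combined with spatial multiplexing, which is exactly what the THz bands and ultra-massive MIMO are meant to provide.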

                Though these are early days for 6G, the initial studies already suggest a rough sketch of what 6G performance will look like compared with 5G:

                • Peak data rate: 200 Gb/s in 5G vs an estimated 1 Tb/s in 6G
                • Maximum bandwidth: 1 GHz in 5G vs 100 GHz in 6G
                • Latency: 1 millisecond in 5G vs 100 microseconds in 6G
                • Reliability: 1 − 10⁻⁵ in 5G vs 1 − 10⁻⁹ in 6G
                • Peak mobility supported: 500 km/h in 5G vs 1,000 km/h in 6G
                • Energy efficiency: not specified for 5G, estimated at 1 Tb/J for 6G

                A detailed comparison is available in [1].

                This research into 6G may seem premature, but the geopolitical race for leadership on this next big thing in telecommunications technology is already gearing up. Countries across the globe are spending huge sums on 6G research. Various consortia are forming, and research projects are starting to address the new standards and vertical use cases, such as vehicle connectivity and private industrial networks. The key 6G initiatives across the globe are shown below (Figure 2).

                Figure 2: Global 6G initiatives

                6G use cases and the technologies driving them

                Expanding upon the foundation of 5G, 6G will enable a much wider set of futuristic use cases that, when deployed on a massive scale, will transform the way we live and work in remarkable ways.

                Telecom operators, technology providers, and academia are joining forces under various alliances and consortia and deliberating which use cases will emerge in the next decade and be adopted by 6G. NGMN [2], Next G Alliance [3], one6G [4] are just some of the leading alliances that have recently published 6G use cases.

                Figure 3 shows the categorization of various 6G use cases that enhance human-to-human, human-to-machine, machine-to-machine, and machine-to-human communication.

                Figure 3: Emerging 6G use cases

                Key technical areas

                These use cases are driving the technology trends and steering the requirements for future generational change. The key technical areas that will accelerate 6G introduction include technological enhancements, architectural improvements, and accelerating adoption (Figure 4).

                Figure 4: 6G technical areas
                Top technology areas include:
1. New Spectrum: The 6G era will necessitate a 20X increase in network capacity. 6G will meet this challenge through new spectrum in the 7 to 24 GHz range and in the sub-THz range (above 100 GHz), combined with ultra-massive MIMO.
                2. AI Native Networks: AI will become a native ingredient in 6G networks so that the network can become fully autonomous and hide the increased network complexity from users. A dynamic AI/ML-defined native air interface will be key for future networks. These interfaces could give radios the ability to learn from one another and from their environments.
3. Sensing and positioning: With near-THz frequencies, the potential for very accurate sensing based on radar-like technology arises. 6G networks will be able to sense their surroundings, allowing us to generate highly realistic digital representations of the physical world. This digital awareness would turn the network into our sixth sense. It will particularly improve performance in indoor communications scenarios by acquiring and sending better information about the indoor space, range, barriers, and positioning to the network. Please refer to [5] for more information.
4. Security, trust, and privacy: 6G will provide advanced network security, trustworthiness, and privacy protection, built on quantum-safe cryptography and distributed ledger technologies such as blockchain, to unlock its full value potential.
5. Ubiquitous connectivity: 6G will provide reliable network connectivity, focusing on extreme performance and coverage when and where needed through seamless integration of non-terrestrial networks such as satellites, drones, and high-altitude platforms (HAPs) with the terrestrial network. Please refer to [6] and [7] for more information.
6. Intelligent Reflecting Surface (IRS): An IRS is a thin panel integrating many independently controllable passive reflecting elements. By adjusting the amplitude and phase shift of each element to achieve fine-grained reflect beamforming, an IRS can improve the security, spectral efficiency, energy efficiency, and coverage of 6G networks (a minimal phase-alignment sketch follows this list).
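To make the reflect-beamforming idea concrete, here is a minimal Python sketch of the basic phase-alignment rule: if the direct channel to a user is h_d and the cascaded channel through reflecting element k is g_k·h_k, setting each element's phase shift to θ_k = arg(h_d) − arg(g_k·h_k) makes every reflected contribution add coherently with the direct path. All channel values below are randomly generated placeholders rather than measurements, and the element count is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N = 64  # number of passive reflecting elements (assumed for illustration)

def random_channel(shape):
    """Random complex channel gains used as placeholders for real channel estimates."""
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

h_d = random_channel(())   # direct base station -> user channel
g = random_channel((N,))   # base station -> IRS element channels
h = random_channel((N,))   # IRS element -> user channels

# Phase-only reflect beamforming: give every reflected path the phase of the direct path.
theta = np.angle(h_d) - np.angle(g * h)
effective = h_d + np.sum(np.exp(1j * theta) * g * h)

print(f"|direct channel only| : {abs(h_d):.2f}")
print(f"|with IRS beamforming|: {abs(effective):.2f}  (reflected paths add coherently)")
```

Real deployments additionally need channel estimation and discrete phase quantization, but the coherent-combining gain shown here is the core mechanism behind the coverage and efficiency improvements described above.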

                Capgemini’s 6G initiative

Capgemini has a rich history in mobile technologies and possesses end-to-end capabilities across RAN, edge, and core. In addition, Capgemini is a leading player in Open RAN, an approach that is likely to become more widespread as technology deployment continues.

Capgemini has a head start in 6G research, particularly in the areas of mesh networks, AI for network automation, sustainability, and quantum cryptography. Our first 6G research paper, “xURLLC in 6G with meshed RAN,” was published in the ITU Journal on Future and Evolving Technologies (ITU J-FET), Volume 3, Issue 3, December 2022 [9]. The objective of this research is to define a new network architecture that will make 6G networks simpler, more flexible, and able to support extremely low latency communication.

Academic collaborations are also underway: with IISc Bangalore to identify rogue base stations by their abnormally high transmit power, and with Princeton University on federated learning towards user-centric, cell-free 6G.

Capgemini is also an active member of the O-RAN Alliance and participates in its next Generation Research Group (nGRG) task force to determine how O-RAN will evolve to support 6G and beyond.

                Conclusion

Today, we are still in the early stages of the 5G rollout, and we have a long way to go before the technology matures. However, this is the ideal time to plan for the future and ask what comes next. Emerging use cases for beyond-5G and 6G are gaining a firm footing, and 5G may only open the door to them. New and more stringent requirements will continue to push the evolution of wireless well beyond 5G and 6G. Capgemini is at the forefront of 6G research, with strong partnerships across academia and industry.

                TelcoInsights is a series of posts about the latest trends and opportunities in the telecommunications industry – powered by a community of global industry experts and thought leaders.

                References
                1. White Paper on Broadband Connectivity in 6G
                2. NGMN Identifies 6G Use Cases
                3. Next G Alliance – 6G Applications and Use Cases
                4. One6G – 6G Vertical Use Cases
                5. What’s Inside Counts: How 6G Can Enable Ubiquitous, Reliable Indoor Location Services
                6. The Future of 6G is Up in the Air — Literally
                7. Non-Terrestrial Networks in 5G & Beyond: A Survey
                8. Faster, Smarter, Greener: Intelligent Reflecting Surface for 6G Communications
9. xURLLC in 6G with meshed RAN

                Meet the authors

                Subhankar Pal

                Senior Director and Global Innovation leader for the Intelligent Networks program, Capgemini Engineering 
                Subhankar has over 24 years of experience in telecommunications, specializing in advanced network automation, optimization, and sustainability using cloud-native principles and machine learning for 5G and beyond. At Capgemini, he leads technology product incubation, product strategy, roadmap development, and consulting for the telecommunications sector and related markets.

                Sandip Sarkar

                5G and 6G strategy lead for Capgemini Engineering
Dr. Sandip Sarkar holds a B.Tech from IIT Kanpur and a PhD from Princeton University. With over 30 years of experience, Dr. Sarkar holds over 100 patents and has published over 30 papers in the field of telecommunications. His research interests include wireless communications, error-control coding, information theory, and associated signal processing systems. Dr. Sarkar was an author of multiple wireless standards and is a senior member of the IEEE.


                  The model for future railway mobile communication systems

                  Vijay Anand & Manoj Kumar Meena
                  14 April 2023
                  capgemini-engineering

                  Rail operators want to replace old mechanical rail systems with modern digital alternatives, enabling the rapid deployment of innovative digital services.

These will include intelligent traffic management, automated shunting, infrastructure monitoring, and connected workers. Such systems, however, will require advanced connectivity delivered over high-bandwidth communications, with service-oriented architectures and safety-critical cloud infrastructure. Yet many railways currently use legacy communications systems such as GSM-R (the Global System for Mobile Communications for Railways), based on decades-old 2G technology.

A future intelligent railway will need to upgrade. Since such upgrades happen infrequently, many technological advances have accumulated since the last one. Many railways are therefore looking to jump straight to 5G, under the Future Railway Mobile Communication System (FRMCS) standard.

Designed by the International Union of Railways, FRMCS aims to become the worldwide standard. It is a network architecture designed with rail in mind, providing a software platform onto which new digital services can be easily built and launched, and existing services easily upgraded over the air.

FRMCS is targeted to replace GSM-R in the next 7-10 years. But this will be no easy task. Railways are complicated, and new services take time because of extensive testing, verification, and stringent safety requirements. The technological challenges are immense, including dual operation during the co-existence period, decisions on network type and technology deployment, and new security threats.

                  But these are challenges we must overcome to deliver future rail networks – and all the safety improvements and cost savings that will come with them.

                  In our new whitepaper, Future Railway Mobile Communication Systems, we discuss the benefits and challenges of deploying 5G under FRMCS, and propose a model for a migration strategy.


                  Meet our experts

                  Vijay Anand

                  Senior Director, Technology, and Chief IoT Architect, Capgemini Engineering
                  Vijay plays a strategic leadership role in building connected IoT solutions in many market segments, including consumer and industrial IoT. He has over 25 years of experience and has published 19 research papers, including IEEE award-winning articles. He is currently pursuing a Ph.D. at the Crescent Institute of Science and Technology, India.

                    Manoj Kumar Meena

                    MBA, C-CISO, CISM, CIISec
Manoj is a cyber security-focused professional with over 16 years of experience across the telecom, transport, manufacturing, pharma, healthcare, banking and financial, medical, and research industries. He has an extensive background in engineering security solutions for enterprises and has delivered complex security solutions that meet dynamic regulatory and compliance requirements. He has been involved in various R&D studies and currently manages several tactical projects for Network Rail in the UK.

                      The EU rules for high-value datasets have changed – how are European countries keeping up?

                      Eline Lincklaen Arriëns
                      13 Apr 2023

                      The European Commission is striving to make the EU a data-driven global powerhouse. To achieve this ambition, it recognizes the huge importance of high-value datasets and is mandating their publication by all EU Member States under an open license.

In January this year, the European Commission (EC) published a list of high-value datasets that EU Member States must make available free of charge by June 2024. These datasets are a specific category of open data, which is data that can be accessed, used, and freely shared for reuse with, at most, a requirement to attribute the original source. The high-value datasets have been identified by the EC as publicly owned datasets that can deliver major benefits for society, the environment, and the economy.

                      What type of data is considered high-value?

                      The EC has classified six categories of open data as high-value datasets:

                      • Statistics
                      • Earth observation and environment
                      • Meteorological
                      • Geospatial
                      • Companies and company ownership
                      • Mobility

These categories of publicly available data are considered particularly useful for the creation of value-added services and applications for our society and economy. As an example, the EC states that datasets such as meteorological observation data, radar data, air quality, soil contamination, and noise level data can support research and digital innovation, as well as enable better-informed policymaking, especially when addressing climate change and its impacts.

                      What are HVDs being used for?

The high-level Open Data Maturity (ODM) Report 2022 from data.europa.eu revealed some interesting use cases pertaining to open data on earth observation and environment, meteorological, geospatial, mobility, and statistical data. These include:

                      Environmental and economic impact

High-value datasets can help monitor forest fires across Europe. Data about forest fires comes from multiple sources, such as satellite imagery from the Copernicus program and national open data portals, and includes Earth observation and environment data, as well as meteorological and geospatial data. As climate change continues to influence forest fires, it is increasingly important to monitor and assess the situation using all useful resources. For example, datasets monitoring and assessing forest fires are used by EFFIS, the European Forest Fire Information System, a service that allows users to view the current situation on a map, read a curated list of news stories about fires, view long-term fire weather forecasts, and access a detailed statistical portal on forest fires. By making these high-value datasets available, organizations gain access to more data that can support existing services and contribute to new tools that enable proactive measures to prevent forest fires and aid relief efforts.

The drive towards a greener and more sustainable (European) economy encompasses many aspects, including transportation. Modern and efficient transportation can significantly reduce individuals' carbon footprints, and open data can help monitor and support Europe's transition towards greener mobility. One approach is to exploit high-value mobility datasets. An example of such mobility data is the progress of railway electrification in Member States. Eurostat highlights how these datasets can provide information on transportation infrastructure and give insights into the extent to which passenger and freight lines have been converted to electric lines, which play a key role in the push towards greener mobility.

                      Social and political impact

High-value datasets can help measure and address income inequality across Europe. EU institutions acknowledge that income inequality indicators, such as statistical data, are highly informative and valuable measurements that can inform efforts to reduce income inequality. An example of a statistical dataset is the ‘yearly inequality rate’. This dataset can provide insights into income inequality and its impacts on individuals, communities, and society, such as the concentration of earnings in a given population over time and across several factors, including gender, age, and region. For example, Eurostat not only produces and shares these datasets, it also assesses the distribution of income among individuals by ranking them from lowest to highest earners and then dividing the population into variously sized ‘segments’ to show how income is spread. Policymakers can use this information to implement and enforce measures that reduce inequality and support minority groups or those in the lower quartiles.
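As a toy illustration of the ranking-and-segmenting computation described above, the Python sketch below sorts a small invented income list, splits it into quartiles, and reports each quartile's share of total income. The figures are made up for illustration and are not Eurostat data.

```python
import numpy as np

# Invented toy incomes, not Eurostat data.
incomes = np.array([12_000, 15_000, 18_000, 22_000, 26_000, 31_000,
                    38_000, 47_000, 62_000, 95_000], dtype=float)

# Rank from lowest to highest earners, then split into four roughly equal-sized groups.
sorted_incomes = np.sort(incomes)
quartiles = np.array_split(sorted_incomes, 4)

total = sorted_incomes.sum()
for i, group in enumerate(quartiles, start=1):
    share = group.sum() / total * 100
    print(f"Quartile {i}: {share:.1f}% of total income")
```

Official indicators published by Eurostat apply the same ranking logic, but on much larger, survey-weighted microdata covering whole populations.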

Geospatial data can support environmental and economic activities and contribute to smart cities, because it ties information to exact locations on Earth, such as satellite imagery or census datasets linked to a specific geographic area. Among the six thematic categories of high-value datasets, this involves geospatial data (e.g., administrative units, geographical names, addresses, buildings, cadastral parcels); mobility data (e.g., transport networks, including geographical positions and links with cross-border networks); and earth observation and environment data (e.g., space-based or remotely sensed datasets and ground-based or in-situ datasets). These datasets can support services such as EVapp, which locates victims of cardiac arrest and identifies nearby first aiders in Belgium, or Digital Forest Dryads, which protects forests from illegal deforestation in Romania and other EU countries.

                      The EC deadline is set for June 2024

As stated, the Implementing Regulation governing the free availability of high-value datasets was officially published in January this year, and Member States have until June 2024 to make the datasets reusable free of charge, available in machine-readable formats, and accessible via application programming interfaces (APIs). The ODM Report 2022 from data.europa.eu highlighted the steps being taken ahead of the Implementing Regulation and reported a good level of preparedness: when the survey results for the ODM Report 2022 were gathered, 96% of the 27 EU Member States were already working on identifying high-value data domains to prioritize for publication, and 85% of the EU27 were already preparing to monitor and measure the level of reuse of high-value datasets.
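For a sense of what "free, machine-readable, and accessible via APIs" means in practice, here is a minimal Python sketch that fetches a CSV dataset from a portal API and parses it. The URL is a hypothetical placeholder, not a real endpoint; actual Member State portals publish their own API documentation.

```python
import csv
import io
import urllib.request

# Hypothetical placeholder endpoint; real portals publish their own API URLs.
DATASET_URL = "https://opendata.example.eu/api/datasets/air-quality/latest.csv"

# Retrieve the machine-readable payload over HTTP.
with urllib.request.urlopen(DATASET_URL) as response:
    text = response.read().decode("utf-8")

# Parse the CSV into a list of row dictionaries ready for analysis or reuse.
rows = list(csv.DictReader(io.StringIO(text)))
print(f"Fetched {len(rows)} records; columns: {list(rows[0].keys()) if rows else []}")
```

The point of the regulation's API requirement is precisely this kind of frictionless, programmatic reuse, without registration fees or manual download steps.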

                      With the Implementing Regulation now published and compliance required from June 2024, future Open Data Maturity assessments will keep track of the progress in applying the regulation from an organizational, technical, and legal perspective. They will also aim to look at the level of compliance.

                      What best practices are there for publishing high-value datasets?

                      The ODM Report 2022 revealed some of the preparatory steps countries have been taking. These steps offer a valuable guide for all countries seeking best practices for publishing high-value datasets, and include:

                      • Preparing in advance:  Several countries started their work on high-value datasets before the publication of the Implementing Regulation: 96% of EU27 stated in the ODM 2022 that they were already identifying high-value datasets and 93% of them confirmed that they were preparing public bodies holding high-value datasets to denote those datasets in their metadata. For example, in Poland, the Chancellery of the Prime Minister started a consultation with all Polish ministries, subordinate units, and Statistics Poland on the draft Implementing Regulation. Similarly, in Austria, a Task Force on Public Sector Information and Open Data has been set up within the Federal Ministry for Digital and Economic Affairs with regard to implementing the Open Data and Public Sector Information Directive 2019/1024 and determining high-value datasets. Allowing for timely internal preparation facilitates putting in place the expertise and resources needed to respond to the requirements of the EC´s Implementing Regulation.
• Highlighting high-value datasets: Making high-value datasets more visible on national open data portals is a key practice. This is planned, for instance, on the Bulgarian open data portal, where high-value datasets will be assigned to a dedicated category that will also be selectable through filters in the general section of available datasets. Similarly, in Finland, the national data portal team has designed a symbol to use as an icon that highlights high-value datasets and helps users differentiate them from other open data. Highlighting high-value datasets on the portals helps to keep track of identified high-value datasets and facilitates further collection within the community of data providers and data (re)users.
• Monitoring and showcasing (re)use: The practice of having a standardized way of gathering and cataloging open data reuse cases is encouraged in the ODM Report 2022, and 85% of the EU27 stated that they were also preparing to monitor the reuse of high-value datasets. The Czech Republic, for example, links individual datasets – including those labelled as high-value datasets – directly to the list of reuse examples on its national open data portal (which will also be in open data format). Hence, when reading the metadata of datasets labelled as high-value, it will be possible to see examples of their practical reuse. This allows a better understanding and communication of the (potential) impact of such datasets to a wider audience and stimulates further reuse and impact creation through open data.
• Ensuring interoperability and metadata quality: In the ODM 2022, 63% of Member States responded that they were preparing to ensure the interoperability of high-value datasets with available datasets from other countries. An example comes from Germany, where a property has been implemented in version 2.0 of DCAT-AP.de to better reference high-value datasets (a minimal metadata sketch follows this list). Another example is Sweden, which introduced an interoperability framework for frequently used high-value datasets. Data quality and interoperability are key to unlocking the full potential of data sharing, even more so when it comes to datasets with a high impact on our society and economy.
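To make the metadata-labelling practices above concrete, here is a minimal Python sketch (using the rdflib package) of how a portal might tag a dataset as high-value in DCAT-style RDF metadata. The dataset URI is a hypothetical placeholder, and the `dcatap` namespace URI and `hvdCategory` property name are assumptions used for illustration; national profiles such as DCAT-AP.de define their own properties for this purpose.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCAT, DCTERMS, RDF

# Assumed DCAT-AP extension namespace and property name (illustrative only).
DCATAP = Namespace("http://data.europa.eu/r5r/")

g = Graph()
g.bind("dcat", DCAT)
g.bind("dcterms", DCTERMS)
g.bind("dcatap", DCATAP)

# Hypothetical dataset URI used only for this sketch.
dataset = URIRef("https://opendata.example.eu/dataset/meteo-observations")
g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("Meteorological observation data", lang="en")))
# Flag the dataset as belonging to a high-value data category (assumed property).
g.add((dataset, DCATAP.hvdCategory, Literal("Meteorological")))

print(g.serialize(format="turtle"))
```

Consistent machine-readable flags of this kind are what allow portals to surface high-value datasets through filters and icons, and to aggregate them across countries.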

                      Aligning with EC priorities and sharing best practices

The high-value datasets identified by the EC closely align with its overarching priorities for 2019-2024. For example, geospatial and Earth observation and environment data have clear links to the EU Green Deal, whilst statistics and companies and company ownership data can contribute to realizing an economy that works for people. The annual Open Data Maturity assessment is helping European countries push forward with these priorities.

Its purpose is to raise awareness of the state of open data practices in Europe and help countries do more and better. This is enabled first and foremost through the sharing of information across countries, as epitomized in the ODM Report 2022. Sharing both best practices and the challenges encountered will help drive more effective implementation of the Implementing Regulation on high-value datasets and create greater impact.

                      Authors

                      Eline Lincklaen Arriëns

Senior Consultant and Expert on European data ecosystems, Capgemini Invent NL
“Digital technologies are crucial in addressing global challenges, including climate change and environmental degradation. Capgemini aims to support clients in accelerating their digital transition in a manner that is sustainable for their organization, society, and the environment, and in line with EU priorities such as the EU Green Deal.”