
Making it easier for organizations to get the partner they need to achieve the future they want

James Page
12 Dec 2022

Capgemini is one of the first Microsoft partners globally to achieve all six of Microsoft’s new Solution Partner designations

Microsoft’s introduction of its six new Solution Partner designations on October 3, 2022 – replacing Microsoft’s outgoing legacy Gold and Silver partner badges – is the latest in a series of ongoing partner-focused improvements from Microsoft.

Spanning six key Microsoft technology demand areas, these new designations have been designed to streamline and simplify partner selection, helping organizations move at speed by selecting proven partners with the breadth and depth of expertise to support their end-to-end transformation needs.

Capgemini is proud to have achieved qualification across the full complement of these new designations and has been awarded the Microsoft Solution Partner – Cloud designation given to partners demonstrating excellence in all six solution areas.

Additionally, Capgemini has a total of 14 Advanced Specializations and Azure Expert MSP status. These credentials are testament to Capgemini’s commitment to attaining the highest possible standards across the entire Microsoft ecosystem so that we can continue to be the dedicated, full-service partner our clients need.

The new Microsoft Solution Partner Program explained

References to the Gold and Silver Microsoft competencies will be phased out, replaced by the new Solution Partner Program.

Based on holistic and stringent scoring criteria measuring breadth and growth of current skills as well as demonstrable growth in Microsoft products, Microsoft Solution Partner designations allow organizations to identify the most competent partners in each of the six solution areas. These are:

  • Infrastructure (Azure)
  • Data & AI (Azure)
  • Digital & App Innovation (Azure)
  • Modern Work
  • Security
  • Business Applications

Advanced Specializations further validate a partner’s deep technical expertise once a Solution Partner designation has been achieved. Together, designations and specializations give an organization a clear and true sense of both the breadth and depth of a partner’s capabilities.

In addition to the six designations named above, Capgemini has achieved the following 14 Advanced Specializations:

  • Analytics on Microsoft Azure
  • Data Warehouse Migration to Microsoft Azure
  • AI and Machine Learning in Microsoft Azure
  • Kubernetes on Microsoft Azure
  • Modernization of Web Applications to Microsoft Azure
  • SAP on Microsoft Azure
  • Windows Server and SQL Server Migration to Microsoft Azure
  • DevOps with GitHub on Microsoft Azure
  • Calling for Microsoft Teams
  • Meetings and Meeting Rooms for Microsoft Teams
  • Cloud Security
  • Identity and Access Management
  • Threat Protection
  • Low Code Application Development (PowerApps)

Azure Expert MSP is the highest accreditation an Azure partner can achieve. Attainment of this status requires extensive auditing of a partner’s capabilities, processes, tooling, security, and sales positioning. Partners are assessed against Microsoft’s global best-practice standards and independently audited by a third party, an exercise that took many hundreds of hours of our experts’ time. This gives our clients certainty that, when working with us, their due diligence has already been independently verified against internationally defined standards.

What these accreditations mean for our clients

These accreditations help to give our clients the certainty, assurance, confidence, and trust that any technology transformation requires as a basic prerequisite, and help them quickly identify a partner’s credentials in the areas and workloads that matter most to them.

By bringing Capgemini’s breadth and depth of credentials and expertise to clients across the entire Microsoft Cloud, at every stage of their journey, we are able to help organizations to go further, faster, and achieve their important business outcomes.

Bringing the Microsoft platform to life

With over 35,000 certified professionals across all the Microsoft Cloud platforms, our people have made this achievement possible. Their dedication to achieving the highest possible technical certifications brings certainty for our clients as we partner together.

Through Capgemini’s end-to-end Microsoft commitment, we can stitch together innovative, business-focused solutions from across the entire Microsoft solution portfolio, drawing on our deep, advanced capabilities to do so. These accreditations give our clients proof that we are tried and true across every facet of the Microsoft ecosystem, and that is invaluable to us.

If you would like to know more about Capgemini’s Microsoft certifications or how Microsoft solutions can help your business, please get in touch with Sally Armstrong at sally.a.armstrong@capgemini.com

“We are excited that Capgemini has attained all six solution area designations to receive the Solutions Partner for Microsoft Cloud distinction. This highlights Capgemini’s breadth of capabilities across the Microsoft Cloud to deliver innovative solutions and accelerate cloud transformation for their clients.” 

Kelly Rogan, Corporate Vice President of Global System Integrators at Microsoft

Author

James Page

Microsoft Alliance Lead – Australia & New Zealand
James Page leads Microsoft Partner Strategy and Execution across Australia and New Zealand, working with multiple stakeholders to establish market-leading partnering impact and build co-selling motions across key focus sectors.

    Intelligent HR operations – drive amazing people experiences

    Capgemini
    Capgemini
    19 Dec 2022

    Enterprises are increasingly relying on service providers to overcome the challenge of delivering intelligent, frictionless HR operations that drive enhanced, more personalized people experiences.


    If the global pandemic has taught us anything, it is the need to provide an irresistible and amazing people experience. Indeed, investment in the data, services, analytics, and tools to boost employee empowerment and engagement has had an extremely high return during this turbulent period.

    According to a survey of senior HR leaders, 79% have seen an acceleration of digital transformation in their organizations due to the pandemic, while 96% see the role of HR shifting from being an administrative service provider to concentrating more on designing employee experiences and satisfaction, acting as change agents, and developing talent.

    The HR function must keep up with this pace of digital transformation and disruptive business practices in order to deliver on growth, but is being impacted by a number of challenges, including labor shortages and the changing expectations of employees working from home.

    Unique APAC challenges for HR

    We only need to look at the APAC market, and its unique characteristics, to understand the scale of these challenges compared to other regions. Up to 78% of the Asian workforce was working in person up until 2020 (pre-pandemic), a much higher share than in the US and EMEA, making the move towards hybrid working more challenging.

    Highly diverse work cultures, a lack of strong tech infrastructure, and a high proportion of blue-collar workers in APAC have also contributed to this slow adoption of the hybrid work model in the region. In addition, APAC’s highly fragmented and diversified market, with its myriad of languages, local regulatory requirements, and varied ways of working, is making it challenging to drive standardization.

    Moreover, a recent survey by Korn Ferry states that APAC faces an imminent labor shortage of 47 million people and $4.238 trillion in unrealized annual revenue across the region by 2030. With the war for talent becoming increasingly competitive as employees prioritize experience over pay and skillsets continuously change, HR leaders have started to look at the gig economy to provide the greatest flexibility without hitting the bottom line.

    Another important report highlights the focus on HR tech modernization in APAC, with 89% of respondents preferring to implement people analytics solutions but only 38% believing their organizations are ready. The main reasons for this sluggish adoption are the diverse nature of the APAC region, differing levels of economic maturity, multiple HCM systems, complex data transmission, a shortage of the right talent, inadequate systems and processes, budget limitations, and difficulty in securing executive buy-in.

    People-centric, frictionless HR operations

    These challenges and priorities are defining a new future state for HR shared services and outsourcing. For the first time, “focus on core business outcomes” is the most important driver for companies, with “cost” falling to second place. This shows that now, more than ever, companies view HR outsourcing and transformation as a strategic driver of business value creation through innovation and differentiation.

    Given this shift, organizations are now looking for transformation partners who can proactively respond to regulatory changes and market shifts, while bringing cutting-edge HR solutions to help with organizational and people challenges.

    Today’s CHROs and CXOs now need to focus on standardizing and automating their employee processes to create consumer-grade people experiences – all of this after designing and executing an efficient, end-to-end service delivery model driven by intelligent, data-driven, frictionless HR operations that seamlessly connect people, processes, and technology.

    To learn how Capgemini’s Intelligent People Operations can drive a personalized and frictionless people experience across your organization, contact: ajay.chhabra@capgemini.com or rashmeet.kaur@capgemini.com

    About authors

    Ajay Chhabra, Practice Leader – APAC, Intelligent People Operations, Capgemini’s Business Services

    Ajay Chhabra

    Practice Leader – APAC, Intelligent People Operations, Capgemini’s Business Services
    Ajay Chhabra leads Capgemini’s Intelligent People Operations practice for APAC with a specific focus on HR transformation and advisory. With over 17 years of professional experience, Ajay is passionate about solving clients’ HR and payroll challenges through consulting, transformation, and innovative solutions.
    Rashmeet Kaur, Team Lead, Intelligent People Operations, Capgemini’s Business Services

    Rashmeet Kaur

    Team Lead, Intelligent People Operations, Capgemini’s Business Services
    Rashmeet Kaur is a team lead with Capgemini’s Intelligent People Operations practice. She has worked on projects in different industries involving strategy, advisory, & consulting, HR transformation, and shared services setup.

      SONiC – The networking industry’s open secret

      Rajesh Kumar Sundararajan
      15 December 2022
      capgemini-engineering

      “Open” is a popular word in today’s data networking marketplace, with operators relentlessly pushing for open networking, Open RAN, Open FTTH, and Open BNG among other things.

      It challenges some of the established traditional models in the industry, and at least on the face of it, enables new players to enter with competitive products and services.

      Subsequent to SDN (Software-Defined Networking) and virtualization, two major events have changed the market’s dynamics irreversibly: the OCP (Open Compute Project) and SONiC, the open-source NOS (Network Operating System).

      Consumer-driven innovation

      Even as little as a decade and a half ago – or three generations at today’s speed – innovations in networking were driven primarily by the R&D organizations of large equipment manufacturers. Consumers, such as enterprises and network operators, could describe problems and challenges, and it was then up to the R&D houses to come up with the solutions, including defining and writing the specifications for any related standards.

      Much has changed now. The OCP, the ONF (Open Networking Foundation), and now SONiC have been conceptualized. Projects are being driven by consumers of networking products, among them data centre operators such as Microsoft and Meta (previously Facebook), and telecom network operators such as Axiata, Deutsche Telekom, Telefonica, and Verizon. The cornerstones of this evolution have been the appearance of the “white box” and open source – the former changing lengthy hardware R&D cycles and the latter addressing software R&D cycles.

      Open-source NOS in a continuum

      SONiC is not the first open-source NOS. Others appeared much earlier for different market segments or device categories, including openWRT, pfSense, and prplWRT. These addressed devices at the customer’s premises, such as a residence or enterprise. Software such as DENT, SONiC, and STRATUM, on the other hand, attempts to do the same for the operator part of the network. Granted, large parts of the network remain closed or proprietary, such as the BSS and OSS at one end and the switch ASIC with its drivers at the other – the P4 programming language attempts to address the latter, albeit partially. Still, these represent significant steps in the march towards open networking.

      Not yet a walk in the park

      Even with all these options, the use of open source such as SONiC is not yet as easy as “download, install and go”. Anyone who has tried to build the source code themselves will tell you stories of the many weeks spent finding that missing script or that incorrect environment variable. The same goes for testing. How do you really place your network in the hands of these several million lines of software code, stitched together with multiple languages such as C++, Java, and Python? How can you be confident that this has been tested sufficiently for your network’s use cases or for your device’s deployment possibilities? How do you make it work for a new platform? How do you make it work with your own management or monitoring system? These questions become all the more challenging when dealing with highly complex hardware platforms.

      Enabling your adoption

      The response to these challenges has been the appearance of support and services offerings that cater to these needs. These require significant experience not just of the software but also of the underlying hardware platforms and the ecosystem of vendors that have developed and supplied them.

      Independent organizations with the necessary domain experience – of the networking device, and the network in which it must function, including management, operations and business support systems, coupled with reliable hardware and software skills bolstered by industrialized engineering processes – can help you tackle these problems and succeed in this highly competitive marketplace of data networking.

      Author

      Rajesh Kumar Sundararajan

      Consultant, Capgemini Engineering
      Rajesh has 25 years of experience in the datacom and telecom industry spanning engineering, marketing, quality control, product management, and business development. He remains closely connected to the technology and has been involved in projects in IP, routing, MPLS, Ethernet, network access, network aggregation, transport networking, industrial networking, data-center networking, network virtualization, and SDN technologies.

        Monte Carlo: is this quantum computing’s killer app?

        Camille de Valk
        16 Dec 2022

        As the quantum computing revolution unfolds, companies, start-ups, and academia are racing to find the killer use case.

        Among the most viable candidates and strongest contenders are quantum computing Monte Carlo (QCMC) simulations. Over the past few years, the pace of development has certainly accelerated, and we have seen breakthroughs, both in hardware and software, that bring a quantum advantage for finance ever closer.

        • Roadmaps for hardware development have been defined and indicate that an estimated quantum advantage is within 2–5 years’ reach. See, for example, IBM and IonQ, which both mention 2025 as the year when we can expect the first quantum advantage.
        • End-to-end hardware requirements have been estimated for complex derivatives pricing at a T-depth of 50 million, and 8k qubits. Although this is beyond the reach of current devices, simple derivatives might be feasible with a gate depth of around 1k for one sample. These numbers indicate that initial applications could be around the corner and put a full-blown advantage on the roadmap. Do note, however, that these simple derivatives can also be efficiently priced by a classical computer.
        • Advances in algorithmic development continue to reduce the required gate depth and number of qubits. Examples include variational data loaders and iterative amplitude estimation (IAE), a simplified algorithm for amplitude estimation. For the “simple derivatives,” the IAE algorithm can run with around 10k gates, as opposed to 100k gates for 100 samples with full amplitude estimation.
        • There is an increasing focus on data orchestration, pipelines, and pre-processing, readying organizations for adoption. Also, financial institutions worldwide are setting up teams that work on QCMC implementation.

        All these developments raise the question: what is the actual potential of quantum computing Monte Carlo? And should the financial services sector be looking into it sooner rather than later? Monte Carlo simulations are used extensively in the financial services sector to simulate the behavior of stochastic processes. For certain problems, analytical models (such as the Black-Scholes equation) are available that allow you to calculate the solution at any one moment in time. For many other problems, such an analytical model is just not available. Instead, the behavior of financial products can be simulated by starting with a portfolio and then simulating the market behavior.

        Here are two important examples:

        • Derivatives pricing: Derivatives – financial products that are derived from underlying assets – include options, futures contracts, and swaps. The underlying assets are modelled as stochastic variables that behave according to some distribution function. To price derivatives, the behavior of the underlying assets has to be modelled.
        • Risk management: To evaluate the risk of a portfolio, for example interest rates or loans, simulations are performed that model the behaviour of the assets in order to discover the losses on the complete portfolio. Stress tests can be implemented to evaluate the performance of the portfolio under specified scenarios, or reverse stress tests can be carried out to discover scenarios that lead to a catastrophic portfolio performance.

        Classical Monte Carlo simulations require on the order of (1/ε)^2 samples, where ‘ε’ is the target precision of the estimate. For large cases, this easily becomes prohibitive: with a target precision of 10^(-5), billions of samples are required. Even if workloads are parallelized on large clusters, this might not be feasible within an acceptable runtime or at an acceptable cost. Take, for example, the start of the Covid-19 crisis. Some risk models looking at the impact of Covid on worldwide economies almost certainly would have taken months to build and run, and it is likely that before completion the stock market would have dropped 20%, making the modelling irrelevant.
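
        To make the scaling concrete, here is a minimal classical Monte Carlo sketch in Python (the market inputs are purely illustrative, not taken from any real portfolio): it prices a simple European call under geometric Brownian motion and reports the standard error of the estimate. Because the error shrinks only with the square root of the number of samples, each extra digit of precision costs roughly a hundred times more samples – the (1/ε)^2 behavior described above.

```python
import numpy as np

# Minimal sketch with illustrative (hypothetical) market inputs: a European call
# priced by simulating terminal prices under geometric Brownian motion.
rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 105.0, 0.01, 0.2, 1.0

def mc_call_price(n_samples: int) -> tuple[float, float]:
    """Return the Monte Carlo price estimate and its standard error."""
    z = rng.standard_normal(n_samples)
    s_t = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.exp(-r * T) * np.maximum(s_t - K, 0.0)
    return payoff.mean(), payoff.std(ddof=1) / np.sqrt(n_samples)

for n in (10_000, 1_000_000):
    price, std_err = mc_call_price(n)
    print(f"n={n:>9,}  price~{price:.4f}  std.err~{std_err:.5f}")
# 100x more samples reduces the error only ~10x, so a 10^-5 target precision
# pushes classical sample counts into the billions.
```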

        Quantum computing Monte Carlo promises, in theory, a quadratic speedup over classical systems. Instead of (1/ε)^2 iterations on a classical system, (1/ε) iterations on a quantum computer would attain the same accuracy. This means that large risk models that take months to complete may become feasible within just hours.

        Unfortunately, it’s never as easy as it seems! Although sampling on quantum computers is quadratically faster, a large overhead could completely diminish any quantum speedup. In practice, expressing a market model as quantum data seems extremely difficult. There are a few workarounds for this problem, such as the data loaders announced by QCWare, or a variational procedure published by IBM, but it remains to be seen whether these work well on real problems.

        However, if quantum hardware and software continue to develop at their current pace, we can expect some very interesting and valuable uses for quantum Monte Carlo applications. A business case can easily be made, because if QCMC improves risk management simulations, then the reserved capital required by compliance regulations could be reduced, freeing up capital that can be used in multiple other ways.

        Furthermore, the derivatives market in Europe alone accounts for a notional €244 trillion. A slightly inaccurate evaluation of this market could lead to a large offset to its actual value, which in turn could lead to instability and risks. Given the huge potential for derivative pricing and risk management, the benefit of significant and deterministic speedups, and an industry that is fully geared up to benefit from quantum, QCMC seems to be one of the killer applications.

        However, before QCMC works successfully in production, a lot of work remains to be done. Just like in any application, proper data pipelines need to be implemented first. The time series required for risk management need to be checked for stationarity and processed to a consistent frequency and time period. If policy shifts to daily risk management, data streams also have to be kept up to date. If a quantum advantage is to be benchmarked, then its classical counterpart must be benchmarked too. Additional necessary developments, such as building the required infrastructure (given the hybrid cloud nature of quantum applications), its relation to compliance regulations, and security considerations, are still in their early stages.

        Given the huge potential of quantum computing Monte Carlo, a number of pioneering financial services companies have already picked it up; Wells Fargo, Goldman Sachs, JP Morgan Chase, and HSBC are well established in their research into using QCMC or its subroutines. Certainly, these front runners will not be late to the quantum party, and they will be expecting to see benefits from these exploratory POCs and early implementations, likely in the near future.

        Deploying algorithms in productionized workflows is not easy, and it is even more difficult when a technology stack is fundamentally different. But, these challenges aside, if the sector as a whole wants to benefit from quantum technology, now is the time to get curious and start assessing this potential killer app.

        First published January 2021; updated Nov 2022
        Authors: Camille de Valk and Julian van Velzen

        Camille de Valk

        Quantum optimisation expert
        As a physicist leading research at Capgemini’s Quantum Lab, Camille specializes in applying physics to real-world problems, particularly in the realm of quantum computing. His work focuses on finding applications in optimization with neutral atoms quantum computers, aiming to accelerate the use of near-term quantum computers. Camille’s background in econophysics research at a Dutch bank has taught him the value of applying physics in various contexts. He uses metaphors and interactive demonstrations to help non-physicists understand complex scientific concepts. Camille’s ultimate goal is to make quantum computing accessible to the general public.

          Digital security and quantum computing: An essentially paradoxical relationship

          Clément Brauner
          14 Dec 2022

          All of our online actions today are governed by a set of cryptographic rules, allowing secure exchanges between different parties; but a new threat is looming.

          While we are impatiently waiting for the emergence of quantum technologies, which will bring major technological progress, the internet is preparing for the day when quantum computers will be able to decrypt our secure communications, called “Q-Day” by some.

          But, while quantum hardware is not yet mature enough to decrypt the algorithms currently used, our data is already at risk from hackers, who accumulate encrypted data in order to decrypt it in the future. So, how do we prepare for this eventuality and how has this threat, as of today, changed the way we think about this relationship between cybersecurity and quantum computing?

          Quantum computing is a scientific technological evolution. Conversely, digital security is a concept: in essence, it can be interpreted differently depending on its uses, which are sometimes competing (Read more on this subject from B. Buzan and D. Batistella). Exploring the relationship between quantum computing and digital security can therefore generate paradoxical discourse.

          The paradox of quantum progress: threat or opportunity for security?  

          Investment in quantum technologies is booming, and patent publications have grown exponentially over the last 10 years. The emergence of numerous venture capital firms in the ecosystem also demonstrates the market’s interest in these technologies. The exceptional computing power of quantum computers will enable numerous use cases, but it will also offer cybercriminals tools to deconstruct the security systems already deployed.

          For example, via a quantum implementation of Shor’s algorithm, it will be possible to break current cryptographic protections, which rely mainly on the difficulty of factoring the product of large prime numbers in a reasonable amount of time. Therefore, the arrival of the quantum computer, and its use by networks of cybercriminals, will mean a significant risk for companies. Data requiring long-term storage (biometrics, strategic building plans, strategic IP, etc.) could then be easily accessible to malicious entities using quantum computing power.
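
          As a purely illustrative sketch – with tiny, insecure numbers chosen for readability, whereas real RSA keys use moduli of 2048 bits or more – the Python below shows why factoring is the crux: anyone who can factor the public modulus can rebuild the private key. Trial division only succeeds here because the numbers are toy-sized; Shor’s algorithm is what would make the same recovery tractable at real key sizes on a sufficiently large fault-tolerant quantum computer.

```python
# Toy RSA-style example with deliberately tiny, insecure primes (hypothetical values).
p, q = 1009, 1013
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent, known only to the key owner

message = 42
ciphertext = pow(message, e, n)     # anyone can encrypt with the public key (n, e)

# An attacker who factors n can rebuild the private key. Trial division succeeds
# instantly at this size; at 2048-bit key sizes it is classically infeasible,
# which is exactly the assumption a quantum computer running Shor's algorithm breaks.
factor = next(i for i in range(2, int(n**0.5) + 1) if n % i == 0)
d_recovered = pow(e, -1, (factor - 1) * (n // factor - 1))
print(pow(ciphertext, d_recovered, n) == message)   # True: plaintext recovered
```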

          These threats call into question the creation of a digital space known as “trusted.” The risk of standing still in the face of this quantum revolution is therefore too great, and initiatives must be launched as soon as possible to identify the risks associated with the company’s various activities and to provide reassurance at scale.

          At the same time, new technologies promising quantum security are emerging. First, new mathematically based cryptography is being developed that remains “unbreakable” even by quantum computers. According to the viewpoint published by ANSSI on January 4, 2022, the National Institute of Standards and Technology (NIST) has been analyzing “post-quantum cryptography” (PQC) algorithms since 2016 to strengthen the defenses of the digital world. Its standardization recommendation is expected to be published by 2024.

          Second, physics-based cryptography, called quantum key distribution (QKD), is emerging as an alternative for ultra-secure communications. While post-quantum cryptography is based on the premise that no efficient (quantum) algorithm has been found to “break” it, QKD technology relies on physical principles to detect the interception of secure keys. In case of interception, these keys can be regenerated until the parties are certain that the communication is secure.
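
          To illustrate that detection principle, here is a deliberately simplified BB84-style simulation in Python (idealized: no channel noise, a naive intercept-and-resend eavesdropper, and made-up parameters). The only point is that measuring the quantum channel disturbs it, so interception shows up as a high error rate when the two parties compare a sample of their sifted key.

```python
import random

# Idealized BB84-style sketch (no noise, intercept-and-resend eavesdropper).
random.seed(7)
N = 2000
alice_bits  = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.choice("XZ") for _ in range(N)]
bob_bases   = [random.choice("XZ") for _ in range(N)]
EAVESDROP = True          # set to False to see a clean channel (~0% errors)

bob_results = []
for bit, sent_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
    if EAVESDROP:
        e_basis = random.choice("XZ")
        # Eve gets the right bit only if her basis matches, random otherwise;
        # she then resends the photon in her own basis, disturbing the channel.
        bit = bit if e_basis == sent_basis else random.randint(0, 1)
        sent_basis = e_basis
    # Bob: correct result if his basis matches the incoming photon's basis
    bob_results.append(bit if b_basis == sent_basis else random.randint(0, 1))

# Sifting: keep only the positions where Alice's and Bob's bases matched
sifted = [(a, b) for a, b, ab, bb in zip(alice_bits, bob_results, alice_bases, bob_bases)
          if ab == bb]
error_rate = sum(a != b for a, b in sifted) / len(sifted)
print(f"Error rate on sifted key: {error_rate:.1%}")   # ~25% with Eve present, ~0% without
```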

          The standardization of post-quantum algorithms centers on a competition organized by NIST to analyze different algorithms and keep the most efficient. In this competition, 69 candidate algorithms were submitted in 2017, of which only 7 reached the third round of qualification in July 2020. The algorithms selected in this round are based on NP-hard problems (for which the best known solutions require exponential time) that will remain complicated to solve even for quantum computers. They are concentrated around “lattice” problems (based on the calculation of the smallest vector in a set of points) as well as “code-based” algorithms (based on the decoding of a linear code). The results of this round were published on July 5, 2022.

          Finally, by drastically accelerating the learning speed of artificial intelligence and machine learning algorithms, quantum computing will also strengthen all the security systems that rely more and more on these technologies. Security operations centers (SOCs), which deploy tools to detect weak signals of attacks, and which aim to quickly identify deviant behavior in networks and uses, will be all the more effective. In all sectors, from fraud detection in banking to industrial incidents, quantum computing will increase the effectiveness of SOCs. As a result, the need for security teams to ramp up on these already visible technologies will only increase in the coming years.

          The paradox of quantum research: wait too long or act too fast?

          In all public and private organizations, the development of security teams’ expertise in quantum issues is still in its infancy. In the best of cases, the first reflections are oriented around the business benefits of quantum technologies, leaving cybersecurity issues and the concrete analysis of risks and opportunities in the background. This observation is reinforced by a lack of awareness of the subject within companies, which means limited internal training initiatives for employees, and so leaves the subject to external expert partners.

          How can we professionalize the approach to a technology that has not yet been democratized? At what point does research become real enough to trigger industrialized programs? Even as we enter the development phase of a new generation of more powerful quantum computers, the need for companies to experiment becomes essential, in order to avoid being threatened by cybercrime networks that are constantly one step ahead of the lines of defense, and more agile in absorbing new technologies and hijacking them.

          The presence of Cloud leaders (Google, Amazon, Microsoft, IBM, etc.) in this market allows a democratization of, and relatively easy access to, these technologies. The availability of quantum resources, via platforms already implemented by their customers, creates a very fertile ground for various experiments. But how can sensitive information from these projects be secured when using external, shared devices – the very basis of the cloud model? So-called “blind quantum computing” ensures that even the owner of the quantum computer is unable to know what computations users are performing on it. However, while this can have great applications in terms of privacy protection, there is, in return, a risk of losing any insight into the user’s or users’ intentions.

          Waiting too long or acting too fast? The answer is not simple and will come from a collective movement, driven by the construction of large training programs in quantum engineering (certification courses, allocation of research budgets, etc.), or the pairing between organizations and start-ups.

          The geopolitical paradox: a universal issue or a question of sovereignty?

          All scientific revolutions end up becoming universal, one way or another. They impact societal structures, means of production, and, de facto, citizens; the quantum revolution is no exception to the rule.

          However, it is clear that it is the subject of a geopolitical confrontation. Like digital technology, quantum computing is a field of economic and political competition between states. Some states invest more than others, implying two phenomena: a competitive advantage in the context of the market economy around quantum computing that will be created in the coming years, and a use for national intelligence purposes.

          China is well on its way to becoming the leader in quantum technologies, especially in the field of communications, with an estimated investment volume of 10 billion euros. It has the largest QKD quantum communication network, with satellites and a fiber optic network capable of communicating over 4600 km. This is a strategic project that aims to protect its commercial and military communications from intrusions.

          France, for its part, aims to become a leader on a European scale, with a plan to invest 1.8 billion euros over five years, announced in January 2021. With a very dynamic ecosystem of start-ups, France is investing in ways to facilitate meetings between academic experts and industry. Examples include Pasqal – which is developing its offer (a quantum computer capable of computing at room temperature, providing the world’s best ratio of computing capacity to energy consumed) and merging with Qu&Co to create a European leader – Alice and Bob, which is raising €27 million, and the academic community around the Saclay plateau.

          This dynamism has led to some initial achievements, such as the French Navy’s development of the Girafe system (interferometric gravimeters for research with embarkable cold atoms), an autonomous navigation system based on quantum detection technologies that can calculate its exact position without using the American GPS network, scheduled for 2026/2027.

          On the cyber side, the inauguration of the Cyber Campus on February 15, 2022, is another example: it demonstrates a French willingness to organize cross-sectoral, public/private cooperation around major cybersecurity challenges and future innovations. It is typically the place where the challenges of the arrival of quantum computing can be discussed.

          Quantum computing then becomes a question of digital sovereignty – the idea that in a polarized world order, it will be important for states to assert a quantum power and a capacity for self-determination over their digital space.

          Quantum computing will bring intrinsic contradictions: on the one hand, universal scientific progress and the strengthening of our anti-cybercrime capacities, and on the other, the reinforcement of a geopolitical and economic conflict and a threat to digital trust.

          First published in Les Echos, 15 September 2022 : Avec le quantique la sécurité numérique entre dans l’ère du paradoxe

          Author: Clément Brauner, Quantum Computing Lead, Capgemini Invent.
          Co-authors: Jeanne Heuré, Head of Digital Trust & Cyber, and Nicolas Gaudilliere, CTO – both of Capgemini Invent

          Clément Brauner

          Quantum Computing Lead, Capgemini Invent
          Clément is a manager at Capgemini Invent. Passionate about technology, he currently works as the SPOC for quantum activities in France and is a member of the “Capgemini Quantum Lab,” which aims to help clients build skills in quantum technologies, explore relevant use cases, and support them in their experiments and partnerships.

            How Microsoft industry clouds meet the objectives of industry cloud

            Sam Zamanian
            13 Dec 2022

            As predicted by Gartner*, industry clouds will be adopted by more than 50% of enterprises by 2027.

            In a previous article, I outlined a brief overview of industry clouds. In this article, I am going to reuse the same principles and metrics used in that post and map them to Microsoft’s industry cloud solutions.

            An overview of Microsoft Industry Clouds

            Microsoft Cloud for Industries is a range of offerings that has been widely developed and promoted over the last couple of years – in events, in newsletters, and within Microsoft’s partner ecosystem – and is also available on Microsoft’s website. Below is an overview of some of the industry cloud offerings to date:

            • Cloud for Financial Services: It is designed to deliver differentiated customer experiences, improve employee collaboration and productivity, manage risk, and modernize core systems, along with multi-layered security and compliance coverage for the financial services industry.
            • Cloud for Healthcare: It provides trusted, integrated capabilities that make it easier to improve the entire healthcare experience.
            • Cloud for Manufacturing: It is designed to deliver end-to-end manufacturing solutions that can connect people, assets, workflows, and business processes, helping organizations become more resilient in workforce management.
            • Cloud for Nonprofits: It is designed to address many of our world’s most pressing challenges, providing critical services and support to communities everywhere.
            • Cloud for Retail: It brings together different data sources across the retail value chain and uniquely connects experiences across the end-to-end shopper journey.
            • Cloud for Sustainability: It helps customers record, report, and reduce their organization’s environmental impact.

            The building blocks of Microsoft Industry Clouds

            One of the advertised benefits of Microsoft Industry Clouds is that they are powered by the same underlying cloud platforms (horizontal or vertical) that Microsoft already offers. In fact, Microsoft’s offerings have gone above and beyond Azure, creating a diverse portfolio of services across Microsoft 365, Dynamics 365, and Power Platform in addition to Azure. As previously described in this article, the partner ecosystem plays a key role in sourcing and connecting existing industry solutions.

            Microsoft Industry Clouds and industry cloud intents and objectives

            The primary intents that make an industry cloud distinct from other horizontal clouds are outlined in this paper. These intents are what industry cloud is expected to achieve at a conceptual level. Below, each of these intents is mapped against Microsoft’s offerings, along with a brief description of how Microsoft Industry Clouds are positioned in each area:

            Ready-to-use solutions: There are two main categories of solutions available on Microsoft Industry Clouds:

            • 1st party: These are Microsoft industry solutions that can be used on day one with no or very minimal configurations. There is a range of first-party offerings available in each industry, supported by a delivery roadmap.
            • 3rd party: These are usually custom solutions, integration connectors, or extensions of the first-party solutions that are provided by the ISVs and SI partner’s ecosystem.

            Pre-built security and compliance controls: Microsoft’s industry cloud is designed to support compliance requirements as much as its underlying cloud platforms are (Dynamics 365, Microsoft 365, Azure, Dataverse, etc.). In addition, the cloud for industries comes with industry-specific customer scenarios to help meet compliance requirements at the application and data levels. This enables faster cloud adoption by getting some of the time-consuming regulatory and compliance tasks out of the way. Refer to Microsoft’s roadmap to check the available country- and industry-specific compliance controls.

            Partner ecosystem and marketplace: Microsoft, often referred to as a partner-led organization, is backed by a large ecosystem of partners, and this is one of the drivers behind the industry solutions offerings. With AppSource, Microsoft partners can build new – or bring in their existing – industry-catered IP that can be integrated with the rest of the Microsoft industry solutions (1st or 3rd party). AppSource is a SaaS marketplace used by partners and Microsoft to supply Dynamics 365, Power Platform, and Microsoft 365 solutions. In addition, Azure Marketplace has been a long-standing IT store for partner-led, Azure PaaS-based offerings.

            Open standards: Microsoft Industry Clouds come with out-of-the-box data models and schemas that are reusable, scalable, and extensible. Examples include the data model representations of customers, accounts, and campaigns. With common data models, Microsoft and partners can publish solutions using a collection of predefined data models, including schemas, entities, attributes, and their relationships. API endpoints around these data models are available to enable partners to build applications on the industry data models or streamline data integration with Microsoft data platforms (Synapse, Dataverse, etc.) and make the solutions available via AppSource or Azure Marketplace.

            Seamless integration at scale: With prebuilt integration endpoints on one hand, and standard data models and the Power Platform connectors (1st or 3rd party) on the other, Microsoft Cloud becomes well equipped with an abundance of integration connectors available with little or no configuration or code. With these connectors, partners can bring data into the Microsoft data platforms at scale. In addition, ISV partners are empowered to build native connectors to their own proprietary solutions (e.g., core banking and marketing automation) and make their products interoperable with Microsoft Cloud.

            Customizations and extensions: Power Platform, as the line-of-business low-code/no-code capability, provides a family of products that can be used to customize and extend the industry cloud offerings, including the underlying platforms such as Dynamics 365. It is well positioned to stitch together Azure, Dynamics 365, and Microsoft 365 via the prebuilt connectors.

            The picture below (built from some of the Microsoft collaterals) shows various industry cloud offerings by Microsoft and the underlying building blocks.

            The position of Microsoft Industry Clouds

            Microsoft’s position is to address the unique problems or micro-challenges of each industry, as also stated at the latest Ignite event. Microsoft Industry Clouds are where the 1st party SaaS-based vertical solutions (such as the Dynamics 365 and Microsoft 365 stack) come into play to address the industry-specific needs of customers (data models, compliance controls, processes, etc.) and join the horizontal, platform-level cloud services (Azure). With Power Platform, these SaaS solutions come together with Azure, and with AppSource, partners can complement the 1st party solutions. In a nutshell, it all means that there is a ‘single cloud platform’ offered by one provider that serves both horizontal and vertical use cases.

            * https://www.gartner.com/en/newsroom/press-releases/2022-10-17-gartner-identifies-the-top-10-strategic-technology-trends-for-2023

            Summary

            Microsoft appears uniquely positioned (to date) in its alignment with the intents and objectives of industry cloud. It brings together a breadth and depth of cloud platforms across Azure, Dynamics 365, the Microsoft 365 stack, and Power Platform to support the horizontal and vertical cloud needs of customers. On the one hand, Microsoft offers Azure to provide horizontal cloud services to the market; on the other, its vertical industry capabilities help accelerate business initiatives for customers. Most of these industry offerings have been announced recently, and there are roadmaps of work to deliver more services and include more sub-industries in the future.

            Author

            Sam Zamanian

            Cloud Expert
            I am a technology leader with expertise in the cloud and over 20 years of experience in technology, architecture, and advisory roles.

              Vertical flight is not for the faint-hearted

              Gianmarco Scalabrin
              12 Dec 2022
              capgemini-engineering

              Once engineering challenges are overcome, eVTOL innovators need to prove safety and reliability to authorities and the public.

              Today’s urgent need for green transport and noise-free aircraft is poised to propel growth of the electric vertical take-off and landing (eVTOL) market.

              Over the last decade, innovators and legacy manufacturers have been quietly developing the core technologies and testing prototypes and demonstrators. These include startups such as Joby, Archer, Beta Technologies, Lilium, and Vertical Aerospace, as well as established aerospace leaders including Airbus, Boeing, and Rolls-Royce. In 2021 alone, these companies raised around $7 billion in private investment – more than double the amount raised over the preceding decade.

              Vertical flight is very capital-intensive, with significant R&D costs in the emerging technologies on which it relies: high-density batteries, distributed propulsion systems, and novel aerodynamic designs which improve aircraft performance without compromising system redundancy or safety.

              Nonetheless, some organizations have succeeded in overcoming major engineering hurdles. That is an important step forward. The next few years will now be about proving the safety and operational reliability of their aircraft to airworthiness authorities and to the public.

              How do we prove and certify novel aircraft designs?

              A major endeavor when bringing a new aircraft to market is certification. It includes ground testing, simulations, in-flight data acquisition, critical software testing, and detailed data collection and reporting (see Appendix 1 below for more details). This process demonstrates how the aircraft systems meet EASA/FAA airworthiness requirements.

              Certification for a new, low-carbon aerospace system is by far the most expensive and challenging task prior to market entry.

              Such processes follow a rigid and gated approach known as Validation and Verification (V&V). In this process, requirements are first validated and cascaded down to the component level of the aircraft. Then, during the design, build, and test phases, the product is verified with a set of analysis and flight tests agreed with regulators. These aircraft design reviews are known as Preliminary Design Review (PDR) and Critical Design Review (CDR), after which the certificate will be released.

              Here is a glimpse of how this looks:

              Today, the frontrunners in the nascent advanced air mobility space are racing to complete their aircraft design phase, which includes a concrete certification plan. In addition, most developers are also seeking to bend the cost and time curves when compared to traditional aircraft development, while following robust certification programs.

              Digitalizing Next-Gen Aircraft Certification Workflow to speed time to market

              As with so many things, digital technologies can help speed up and optimize complex processes, whilst also increasing rigor.

              At Capgemini Engineering, we have identified four significant areas of digital acceleration, where we see next-generation aviation companies achieving high standards in their design-for-certification goals with quality, time, and risk reduction:

              1. Digitalizing the certification workflow to improve data traceability, from regulatory airworthiness to evidence compliance, whilst enabling automated VTOL test case generation and systems trade-off analysis
              2. Incorporating data-driven machine learning to automate test scenario generation
              3. Automating task orchestration, and the generation of test cases combining a wide variety of sources of evidence, such as Model-in-the-Loop (MIL), Software-in-the-Loop (SIL) and Hardware-in-the-Loop (HIL) testing, computational analysis, and flight tests
              4. Building digital twins of flight scenarios. This will also support high-volume manufacturing and fleet operations that rely on flight test campaign results.

              A digital certification process will help ease the burden of aircraft certification, while automating test reviews and compliance evidence generation.

              Here is a glimpse of how a digital certification process could look:

              The technology of test automation and digital certification

              Doing all this means choosing, deploying, configuring and learning new technologies. And organizations face more technology decisions than ever. From smart cloud modernization, to AI and machine learning, to data analytics, weaving together different technology platforms can seem overwhelming.

              There is no one-size-fits-all approach. But we present here a few examples of technologies that are helping companies in delivering safe and reliable aviation products with reduced lead times.

              Constrained Software Test (Test Case Generation):

              Based on established principles of statistical testing, it enables companies to do the following (a minimal sketch appears after the list):

              • Establish early verification conditions (VCs) based on requirements and systems specifications
              • Define which software test to perform
              • Perform random tests derived from VCs and system constraints
              • Perform checks of the Systems Under Test (SUT) behaviors against reference models and VCs
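
              Below is a minimal, generic sketch of that constrained random-testing loop in Python. The system under test, the reference model, and the verification condition are hypothetical stand-ins invented for illustration; a real programme would derive them from the agreed requirements and run far richer checks.

```python
import random

# Minimal sketch of constrained random testing. The SUT, reference model, and
# verification condition (VC) below are hypothetical stand-ins for illustration.
random.seed(1)

def sut_airspeed_limiter(commanded: float) -> float:
    """Hypothetical system under test: clamp a commanded airspeed to [60, 250] kts."""
    return min(max(commanded, 60.0), 250.0)

def reference_model(commanded: float) -> float:
    """Independent reference model of the same requirement (median of the three values)."""
    return sorted((60.0, commanded, 250.0))[1]

def verification_condition(output: float) -> bool:
    """VC derived from the requirement: the output must stay inside the safe envelope."""
    return 60.0 <= output <= 250.0

# Random inputs drawn under a system constraint (commanded value within sensor range)
for _ in range(10_000):
    commanded = random.uniform(-50.0, 400.0)
    out = sut_airspeed_limiter(commanded)
    assert verification_condition(out), f"VC violated for input {commanded}"
    assert out == reference_model(commanded), f"Mismatch vs reference for {commanded}"
print("10,000 constrained random tests passed")
```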

              Validation Plan Generation and Intelligent Testing (ATLAS Test Scheduling):

              AI-based generation of a “Complete/Explicable/Smart” Validation Plan, using genetic algorithms and swarm intelligence to optimize data coverage:

              • Generation of hundreds of test scenarios in a matter of seconds
              • Automatic generation for “homogeneous covering” of test cases
              • Automatic validation of plans and updates after each change based on test results
              • Improved faulty or limited case diagnosis

              Auto-detection of Flight Test Anomalies (Improved Result Assessment and Classification):

              Detection of non-standard flight characteristics can be automated using a combination of statistical methods, Principal Component Analysis (PCA) and machine learning. Our proprietary AI-enabled anomaly detection has been used to save thousands of hours each year, whilst enabling deeper data analytics and diagnosis.
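
              As a generic illustration of the statistical idea – not the proprietary tooling mentioned above – the sketch below fits a PCA model to nominal flight-test telemetry (assumed here to be resampled into a samples-by-channels matrix, and generated synthetically for the example) and flags records whose reconstruction error falls outside the nominal envelope.

```python
import numpy as np
from sklearn.decomposition import PCA

# Generic PCA-based anomaly flagging on synthetic stand-in data (not the proprietary
# tooling described above). Assumes telemetry resampled into an (n_samples x n_channels)
# matrix, e.g. airspeed, altitude, pitch, roll, ...
rng = np.random.default_rng(42)
nominal = rng.normal(size=(1000, 8))                   # stand-in for nominal flight data
test = np.vstack([rng.normal(size=(50, 8)),
                  rng.normal(loc=4.0, size=(5, 8))])   # last rows mimic off-nominal behavior

pca = PCA(n_components=3).fit(nominal)                 # learn the dominant nominal modes

def reconstruction_error(x: np.ndarray) -> np.ndarray:
    """Distance of each sample from its projection onto the nominal subspace."""
    return np.linalg.norm(x - pca.inverse_transform(pca.transform(x)), axis=1)

threshold = np.percentile(reconstruction_error(nominal), 99.5)
flags = reconstruction_error(test) > threshold
print(f"{flags.sum()} of {len(test)} test records flagged for engineering review")
```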

              Validation & Verification of Flight Controls (Automated Simulation and Analysis):

              Automated V&V processes that accelerate the Validation & Verification of flight controls and handling qualities. Thousands of simulation results that were previously written and checked manually are now automatically generated and analyzed against the complete set of certification requirements. Our customers are now able to define their own templates and automatically populate them with test results, including for the release of certification dossiers.

              How Capgemini can help: Faster digital certification, without compromising rigor

              When it comes to speed, design, and certification, digital processes that enable test automation, process traceability, and better data capitalization are paramount. Companies looking to shape the future of aviation will need to collaborate with multiple suppliers and partners and leverage digital platforms.

              Capgemini Engineering is a long-standing engineering and R&D partner to aviation, working with suppliers and government regulators for many decades. We understand what it takes to design and certify parts and systems for the safety-critical aviation sector and the automotive and railway industries.

              The deployment of model-based systems engineering, digital twins, and the digital engineering practices we have put in place to support certification activities for our customers has been demonstrated to improve teams’ productivity and reduce recurrent costs across a product’s lifecycle by up to 40%. And we proactively invest in and develop the core technologies that will ensure a better and more compelling future for present and future generations.

              Appendix 1: The path to certification

              Currently, as part of the aircraft certification process, companies are required to:

              1. Engage as early as possible with regulators such as FAA or EASA to collaboratively define how airworthiness standards will apply to manufacturers’ specific eVTOL architectures and systems design. This includes:
                • Definition and agreement of working methods used for the development and certification of the aircraft
                • Agreement of the certification programs and level of involvement from the regulators
              2. Define a test plan that includes ground testing, simulations, in-flight data acquisition, HIL and SIL (Hardware- and Software-in-the-Loop) and critical software testing. For example, addressing:
                • How should manufacturers acquire the information required to comply with Part 23|Special conditions (SC) VTOL requirements?
                • How to optimize test plans based on data availability and flight test campaigns
              3. Collect data and engineering artifacts from real and virtual mission testing, including:
                • Production and collection of existing engineering artifacts and models, including model fidelity analysis
                • The collection of flight handling and performances data from HIL/SIL simulations, flight tests and engineering analysis
                • Automate the search and annotation of data to match the informational needs
              4. Perform compliance and systems performance checks:
                • Display the collected information in an accessible and automated form 
                • Assess and classify tests results   
              5. Evaluate compliance and submit documentation to airworthiness authorities       

              Author

              Gianmarco Scalabrin

              Solution Director
              Gian is the Solution Director for Aerospace Innovation in the US and brings seven years of industrial and leadership experience to his wide range of clients. He is an aerospace engineer with a passion for electric and supersonic aviation and leads our innovation teams in topics such as sustainable aviation, advanced air mobility and autonomous air operations.

                Persona-led platform design drives enhanced finance intelligence

                Daniel Jarzecki
                12 Dec 2022

                Customized, persona-led design drives adoption of a finance intelligence analytics platform, creates a data-driven culture, and enables a more frictionless approach to finance operations.


                According to recent Capgemini research, only 50% of organizations benefit from data-driven decision-making, while only 39% of organizations successfully turn data-driven insights into sustained competitive advantage.

                In the world of finance and accounting (F&A), this really raises the question: how can your finance function implement an analytics platform that gives its users the insights they need to unlock potential and value for your organization?

                No one size fits all

                We live in a world full of information that comes at us from almost every aspect of our lives. We’re constantly bombarded with content, most of which has no relevance to us. And we appreciate the option to personalize how we receive and store this information through adding it to our feeds and favorites.

                The F&A world works in exactly the same way. The three main personas that work with finance intelligence in your F&A function (CFO, transformation lead, and service delivery lead) need the right dashboards and metrics embedded into an easy-to-use platform that gives them the information they need to provide actionable insights at the touch of a button.

                Let’s take accounts payable invoice processing as an example and reference the same input data. While a finance intelligence platform can give visibility on the status of all open or unpaid invoices to your service delivery leads (SDL), your transformation leads (TL) will typically be more interested in the health of the overall invoice channel mix (paper, email, e-invoice, etc.), while the CFO will only want to look at the days payable outstanding (DPO) metric.

                Furthermore, fraud risk alerts will be relevant for the CFO and your SDLs, while metrics such as on-time payment will only interest your TLs and SDLs.

                The same metric – but at a different granularity level – can be relevant to different personas. For example, a single unpaid invoice vs. the systematic problem of late payment.
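
                As a minimal illustration of that idea – using a tiny, hypothetical invoice extract rather than data from any real platform – the sketch below derives two persona-specific views from the same input: the row-level queue of open invoices an SDL would act on, and a single aggregated days-to-pay figure closer to what a CFO would track.

```python
import pandas as pd

# Hypothetical invoice extract: the same input data feeds different persona views.
invoices = pd.DataFrame({
    "invoice_id": [1, 2, 3, 4],
    "amount": [1200.0, 540.0, 980.0, 2300.0],
    "received": pd.to_datetime(["2022-11-01", "2022-11-03", "2022-11-10", "2022-11-12"]),
    "paid": pd.to_datetime(["2022-11-28", None, "2022-12-01", None]),
})

# Service delivery lead: the concrete work queue of open/unpaid invoices
open_invoices = invoices[invoices["paid"].isna()]
print(open_invoices[["invoice_id", "amount"]])

# CFO: a single aggregated figure - average days taken to settle paid invoices,
# a simple proxy for the days-payable-outstanding (DPO) trend
paid = invoices.dropna(subset=["paid"])
print(f"Average days to pay: {(paid['paid'] - paid['received']).dt.days.mean():.1f}")
```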

                But how can you customize your finance intelligence platform for different personas and users?

                Persona-led analytics platform design

                Understanding the role and tasks of your users enables you to select the most relevant content for each persona and build customized dashboards that help them be more productive. Showing a limited number of meaningful and actionable KPIs also helps your users stay focused on the right areas.
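
                To make this concrete, here is a minimal sketch of persona-led dashboard design (written in Python, with entirely hypothetical invoice records, field names, and a deliberately simplified DPO calculation), showing how the same underlying data can be routed into different views for the CFO, TL, and SDL personas:

# Hypothetical invoice records; in a real platform these would come from the AP system.
invoices = [
    {"id": "INV-001", "amount": 1200.0, "channel": "e-invoice", "paid": False},
    {"id": "INV-002", "amount": 540.0, "channel": "paper", "paid": True},
    {"id": "INV-003", "amount": 2300.0, "channel": "email", "paid": False},
]

# Persona-led design: each persona sees only the KPIs relevant to their role.
PERSONA_KPIS = {
    "CFO": ["days_payable_outstanding"],
    "transformation_lead": ["channel_mix"],
    "service_delivery_lead": ["open_invoices"],
}

def open_invoices(data):
    """SDL view: every unpaid invoice, at line-item granularity."""
    return [inv["id"] for inv in data if not inv["paid"]]

def channel_mix(data):
    """TL view: share of invoices arriving through each channel."""
    counts = {}
    for inv in data:
        counts[inv["channel"]] = counts.get(inv["channel"], 0) + 1
    return {channel: round(n / len(data), 2) for channel, n in counts.items()}

def days_payable_outstanding(data, cogs=50_000.0, period_days=365):
    """CFO view: simplified DPO = (outstanding payables / cost of goods sold) x days."""
    payables = sum(inv["amount"] for inv in data if not inv["paid"])
    return round(payables / cogs * period_days, 1)

KPI_FUNCTIONS = {
    "open_invoices": open_invoices,
    "channel_mix": channel_mix,
    "days_payable_outstanding": days_payable_outstanding,
}

def dashboard(persona, data):
    """Build a persona-specific view from the same underlying data."""
    return {kpi: KPI_FUNCTIONS[kpi](data) for kpi in PERSONA_KPIS[persona]}

print(dashboard("CFO", invoices))                    # DPO only
print(dashboard("service_delivery_lead", invoices))  # list of open invoices only

                The point is not the arithmetic but the mapping: one data source, several role-specific views, each limited to a handful of actionable KPIs.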

                The more seamless the adoption of a next-generation finance intelligence platform, the closer you are to establishing a data-driven culture. And while having the business insights alone will not make your challenges go away, having the right analytics and alerts in front of you will help you make the optimal decisions you need to succeed.

                In turn, this enables you to give your customers what they want, while achieving the benefits of a truly Frictionless Enterprise.

                To learn more about how Capgemini’s Finance Intelligence can help start your journey in smart analytics and real-time, frictionless decision-making, contact: daniel.jarzecki@capgemini.com

                About the author

                Daniel Jarzecki

                Expert in Digital Transformation and Innovation
                Daniel Jarzecki is a transformation director with over 19 years of experience in managing Business Services delivery teams, building successful solutions, and running transformation programs for Capgemini clients across multiple industry sectors. Daniel’s passion is to enhance business operations with data-driven insights to help clients transform and improve.

                  Quantum’s balancing act: Exploring three common conflicts within the quantum roadmap for life sciences organizations

                  Gireesh Kumar Neelakantaiah
                  9 Dec 2022

                  To many life sciences organizations, 2030 may seem like a lifetime away. But when it comes to quantum technology, is it really?

                  There are investments to be made, teams to be assembled, use cases to be defined, partnerships to be forged, and, most importantly, a strategy to be set. Within this context, the road to 2030 suddenly looks a lot shorter.

                  In quantum, as with any nascent technology, finding the right approach is a matter of balance. The opportunity provided by quantum computing across the drug development lifecycle is undeniable, but the technology is simply not ready to be used at scale today. Companies need to make investments, but with so much uncertainty, where should they focus their efforts to generate near-term value and be well positioned for the future?

                  In this post, we outline three common conflicts within the quantum roadmap and how life sciences organizations should approach these issues to balance short-term return with long-term success.

                  Conflict 1: Where should the quantum computing team “live” within the life sciences organization?

                  As life sciences organizations build out their quantum computing capabilities, many struggle to decide where these teams and people should live within the business. Should the quantum team be part of R&D? Should these professionals be embedded within different discovery teams? Should they operate as a stand-alone function?

                  To some extent, the answer – at least in the short-term – depends on the nature of the organization. For example, some big pharma companies may find it helpful to set up specialist quantum groups within R&D or an innovation stream. Startups, on the other hand, may integrate quantum specialists directly within drug discovery teams. 

                  While there may be some variety in the early-stage strategy, companies should recognize that capturing the full value of the quantum investment over the long term will likely require the support of a centralized, formal team – as opposed to siloed groups or individual enthusiasts sprinkled throughout the organization.

                  To that end, companies should work towards establishing a quantum technology center of excellence (COE) to:

                  • Bring together early-stage teams, as well as new resources, and unite them under a common strategy;
                  • Serve as an organizing body for all quantum applications within the business and take a transversal view of all use cases; and
                  • Help the team share resources and best practices in a more efficient way.

                  At the same time, life sciences organizations should acknowledge that the structure of the quantum team will evolve over time. For example, while the COE model might be optimal over the next several years, as the company matures, it may make more sense to embed specialist teams in multiple different areas of the business. Over the long term, it’s possible that quantum will become a core competency of many groups across R&D, as well as the innovation team, making the need for a COE or specialty team obsolete.

                  Even at the earliest stages of the program, it is important to acknowledge that the structure and organization of the quantum computing team will evolve over time. This will help ensure that the company designs and manages the team in a fluid way, allowing the quantum function to adapt in step with the organization’s changing needs, level of maturity, and technology advances.

                  Conflict 2: Leading with use cases vs. being technology driven

                  From accelerating drug development through molecular design to creating manufacturing capacity at scale, quantum computing represents a strong opportunity for life sciences companies in many aspects of the business.

                  And therein lies the problem: with the world of possibility so great, it can be difficult to focus limited resources in the right place at the right time to generate the maximum return now and in the future.

                  Practically speaking, companies need to be clear at the outset of their quantum program about the problems they want to solve – and realistic about when the technology will be available to help them do so. Some use cases, such as molecular design, look promising today and are likely to demonstrate a return in the nearer term. Others, like large-scale optimization of manufacturing and supply, are more speculative in nature; investments in these areas likely won’t demonstrate a return for a decade or more.

                  It may be helpful to develop a quantum radar that tracks the application of quantum technology according to feasibility and timescale. This exercise could help the company crystallize what it wants to achieve, as well as assess whether there is technology that could help it deliver and, if so, when that technology will be available. Of course, estimating the timescale on which different applications could deliver value is not always straightforward and may itself require detailed research. Involving quantum and domain subject matter experts or a cross-disciplinary COE could help improve the accuracy of such estimates and better evaluate the use of the technology with respect to specific use cases.
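
                  As an illustration only, a first version of such a radar could be a structured list of candidate use cases scored on feasibility and expected time to value; the Python sketch below uses hypothetical use-case names, scores, and horizons that, in practice, would come from quantum and domain subject matter experts:

from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    feasibility: int    # 1 (highly speculative) to 5 (demonstrable today)
    horizon_years: int  # rough estimate of when value could be realized

# Hypothetical entries reflecting the pattern described above:
# molecular design looks nearer term, large-scale optimization is more speculative.
radar = [
    UseCase("Molecular design and simulation", feasibility=4, horizon_years=3),
    UseCase("Clinical trial optimization", feasibility=2, horizon_years=7),
    UseCase("Manufacturing and supply optimization", feasibility=1, horizon_years=10),
]

# Rank nearer-term, more feasible work first so investment can be sequenced.
for uc in sorted(radar, key=lambda u: (u.horizon_years, -u.feasibility)):
    print(f"{uc.name}: feasibility {uc.feasibility}/5, ~{uc.horizon_years} years out")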

                  At the same time, while it is essential to ground the strategy in specific use cases, especially in the short term, it’s equally important to continue to experiment with the technology. This means that even as companies focus on nearer-term applications, they should make a calibrated investment in the technology and build expertise so that the organization can execute more advanced use cases as time goes on. This helps ensure the company is ready to scale and grow as advances in quantum technology are made. This is especially important for life sciences companies that ultimately want to become leaders in this technology – and not just treat quantum as a fringe effort or tactical response.  

                  Conflict 3: Building out the quantum team without investing massive resources

                  Forging a quantum future will require new teams, new roles, and new talent. But it is unlikely that even extremely large companies need – or want – to recruit for hundreds of new positions in this area today.

                  Instead, it may be possible to reorient existing R&D teams around this capability. This will involve identifying staff with relevant backgrounds and experience, such as experts in data science or artificial intelligence.

                  For example, one common idea is to take mathematical experts with experience in the life sciences domain and integrate them within the quantum team to translate business problems into the space of quantum algorithms.

                  In addition, it’s important to develop a partner network that can help fill the gaps that exist within the current quantum team. As part of this process, it is important to select a partner with both the requisite technical skills and relevant domain expertise; this is essential for understanding how the technology can be applied to sector-specific use cases and for covering other skills specific to the life sciences domain. Given the limited number of quantum physicists and PhD-level quantum scientists available in the talent pool today, it is imperative that organizations be pragmatic about this issue.

                  That said, computing is but one aspect of the drug development process. Competitive advantage will not be found in quantum hardware but in the people who work with it, developing the algorithms and applications that run on new machines. Organizations may need to embrace a variety of approaches to make sure they have the talent they need to capitalize on this technology.

                  Remember: In the life sciences industry, real success is based on patient outcomes. Ultimately, companies need to harness quantum computing to introduce new therapies more quickly, as opposed to simply proving out the capabilities of the technology.

                  Charting the quantum future through conflicts and challenges

                  As organizations define their quantum technology strategy, they will certainly run into many challenges and conflicts along the way. With so much uncertainty within the field about the rate at which the technology will mature and when it will be ready for modern applications, it can be extremely difficult for organizations to know where, when, and how to invest resources.

                  Given this landscape, it’s important for organizations to approach this task with flexibility and adaptability in mind from the very outset of the program. As discussed above, three of the biggest program elements – structuring the team, defining use cases, and building capabilities – are likely to require a multi-phased and multi-faceted approach. This means that the organization must balance short- and long-term needs and constantly evaluate its program to generate the maximum return from this technology.

                  Author: Gireesh Kumar Neelakantaiah, with contributions from Sam Genway, James Hinchliffe, and Clément Brauner.

                  Gireesh Kumar Neelakantaiah

                  Global Strategy, Capgemini’s Quantum Lab
                  Gireesh leads go-to-market initiatives for the Quantum Lab, including solution development, strategic planning, business and commercial model innovation, and ecosystem partner and IP licensing management. He is skilled in quantum computing (IBM Qiskit), data science, AI/ML/deep learning, digital manufacturing and industrial IoT, and cloud computing.

                    How to safeguard and protect our global forest ecosystems?

                    Pierre-Adrien Hanania
                    8 Dec 2022

                    Key takeaways on how Data & AI can play a leading role in safeguarding and protecting our global forest ecosystems

                    Land use – including deforestation, which releases heat-absorbing carbon into the atmosphere – accounts for 25 percent of global greenhouse gas emissions, according to the Intergovernmental Panel on Climate Change (IPCC) Special Report on Climate Change and Land. In addition to playing a crucial role in carbon sequestration (essential in a warming environment), forests are home to Earth’s most diverse species, and they provide a natural barrier between natural disasters and urban zones – all of which contribute to the United Nations’ 2030 Agenda for Sustainable Development (specifically, Goals 3, 9, 14, and 15).

                    As part of Capgemini’s support for AI For Good, we recently gathered a range of experts from forestry research programs, startups, and business project teams to discuss how best to observe, defend, and enhance the world’s forests. These leaders shared their insights and experience with diverse technologies, all with the same goal – ensuring that the forests remain for years to come.  

                    Here are three key takeaways from that conversation:  

                    New observational technologies are increasing our capacities:

                    Using AI, the manual labor that goes into the tedious process of analyzing imagery for forestry insights can be reduced tremendously, while data precision and quality improve. In combination with pre-existing government data and in-situ data, forestry professionals now possess high-quality tree maps, which can then be leveraged to determine the effects climate change has on sustainable land practices, support or improve species habitat, and provide a more sustainable harvest. “AI and satellites give us the scale to be able to apply skill sets that people weren’t applying to the climate before,” said Kait Creamer, marketing manager of Overstory, a company specializing in vegetation intelligence through the use of geo-satellite imagery.
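
                    To give a flavor of the kind of processing behind such tree maps, here is a minimal sketch that computes the widely used normalized difference vegetation index (NDVI) with numpy; the reflectance tiles and the vegetation threshold are entirely hypothetical, and a production pipeline would of course involve far more than this:

import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / np.clip(nir + red, 1e-6, None)  # guard against division by zero

# Hypothetical 3x3 reflectance tiles standing in for a real satellite scene.
red = np.array([[0.10, 0.12, 0.30], [0.08, 0.09, 0.28], [0.11, 0.10, 0.31]])
nir = np.array([[0.45, 0.50, 0.32], [0.48, 0.47, 0.30], [0.46, 0.49, 0.33]])

index = ndvi(red, nir)
vegetation_mask = index > 0.4  # illustrative threshold for "likely vegetated" pixels
print(index.round(2))
print(vegetation_mask)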

                    New observational capabilities are promising, but must be paired with defensive action:  

                    “We need to know the past in order to predict the future,” argued Ms. Jonckheere, Forest and Climate Officer at the Food and Agriculture Organization of the UN (FAO). “And for this, machine learning and AI can really help.” Geo-satellite data can be used in combination with algorithms to predict the size, spread, and probability of a fire outbreak, protecting forests and also preventing the loss of life by inhabitants of rural areas.  

                    In addition to fire prevention insights, AI and data can readily identify which trees on the ground are affected by invasive species, such as the spruce bark beetle in Sweden. These insights allow professionals to visualize and manage an infestation.

                    Stéphane Mermoz, CEO and Research Scientist at GlobEO, a company that provides services based on Earth observation and remote sensing data, shared that another use case for predictive algorithms is illegal mining. Data show that illicit mining operations on Indigenous lands and in other areas formally protected by law have hit a record high in the past few years, so analysis through AI and machine learning can be used to build correlations for predicting deforestation.
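
                    As a hedged sketch of what “building correlations” can mean in practice (using scikit-learn on entirely synthetic tile-level features and labels, not real deforestation data), a simple classifier can relate tile characteristics to deforestation risk:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic tile-level features: [distance_to_road_km, recent_canopy_loss_pct, mining_activity_score]
rng = np.random.default_rng(0)
X = rng.random((200, 3))
# Synthetic labels: more canopy loss and mining activity, and proximity to roads, raise the odds of deforestation.
y = ((X[:, 1] - 0.3 * X[:, 0] + 0.2 * X[:, 2]) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new (hypothetical) tile for deforestation risk.
new_tile = np.array([[0.2, 0.7, 0.6]])
print(f"Predicted deforestation risk: {model.predict_proba(new_tile)[0, 1]:.2f}")

                    Real models of this kind are trained on satellite-derived features at far greater scale, but the principle is the same: learn the correlations, then flag the areas most at risk.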

                    Data analytics and AI are presenting key opportunities to defend the local ecosystems that are essential to life. “The forest is my backyard,” commented Alook Johnson, an indigenous trapper from Canada supported by the ShagowAskee Group. Whether we live near a forest or far from one, we are all concerned with its health and conservation. AI techniques can also reimagine the place of trees in our lives – in a forest far from highly populated cities or integrated directly into our urbanized areas to prevent urban heat islands.

                    Policy and public sector coordination is key:  

                    “Policy is the thing which holds us all accountable,” Ms. Creamer remarked, “in a way that maybe an individual couldn’t.” Without both policy support and economic viability, many of the small businesses and innovators exploring these technologies will not be able to scale to the level that the current environmental crisis requires. Ms. Creamer remarked, “when we’re conscious of making policy that serves our communities and businesses – that has a climate in mind – there’s this inherent motivation to follow through.”  

                    According to Ms. Jonckheere, two levels matter. The first is global data, such as the IPCC global report, which serves the needs of policymakers. The second is action at the national and local scale: globally there are a UN forestry network and global goals, but it is up to individual nations to come up with policies and measures and follow through on implementation. Linking these two is crucial, because global data products are especially useful where there is no national data that can be used by the national government or local end users.

                    Data and AI are game-changing tools for counteracting the degradation of our world’s forests, but rather than relying on the existence of new innovations, it is a commitment to action that will be decisive in this sphere.

                    Watch the full replay on YouTube.

                    Author


                    Pierre-Adrien Hanania

                    Global Public Sector Head of Strategic Business Development
                    “In my role leading the strategic business development of the Public Sector team at Capgemini, I support the digitization of public services across security and justice, public administration, healthcare, welfare, tax, and defense. I previously led the Group’s Data & AI in Public Sector offer, focusing on how to unlock the intelligent use of data to help organizations deliver augmented public services to citizens through trusted and ethical technology use. Based in Germany, I previously worked for various European think tanks and graduated in European Affairs at Sciences Po Paris.”