
Near-tech
Near future, not far-fetched

Brett Bonthron
28 Apr 2023

Capgemini has worked with high tech leaders for over 50 years. We understand the role of high tech – quite simply, it’s the engine that powers the highest levels of innovation. It’s the type of world-changing technology that transforms businesses, entire markets, and even human history. For example, when invented, the steam engine was absolutely high tech. Flip phones? High tech. When these innovations are early and have yet to cross the chasm to mass adoption, we sense the advance far away. The media starts buzzing with predictions of life-altering experiences and sudden changes in how we live, work, and play. Businesses begin to both worry and get excited. We call technologies in this critical, most exciting phase “near-technology” or “near-tech.” This stage comes with unique challenges, where even a few days’ delay can be the difference between a market leader and a historical footnote.

Near-tech describes the kinds of technology that exist not in research labs but just within reach. It represents tangible possibilities – technology that can, with the right expertise and capabilities, enable real opportunities. 

The extraordinary and the everyday 

These advancements ride in on massive waves of disruption, completely changing our global perspectives and human capabilities. The tension between extraordinary technology and everyday life drives the development of new business models and innovations. Right now, we are entering a remarkable time. Immense technological waves are cresting the horizon – generative AI, truly human robotics, individualized gene therapies, new chip manufacturing and lithography capabilities – changing the world and upending our everyday lives. But living in the tension between the extraordinary and the everyday isn’t new to Capgemini – it is our legacy.

Outrageous yet logical 

The greatest innovations are born out of big bets by entrepreneurs and companies willing to challenge the core assumptions surrounding us. Software must run on-premises… enter SaaS. It’s only a phone… enter the smartphone. The common characteristic of transformative technologies is that they first fundamentally disrupt our mindset, then disrupt our infrastructure, manufacturing, supply chains, business models, and security. They may seem like outrageous ideas at first, but eventually, something tips and the disruption becomes normalized: This is the future. And the wave begins. We believe deeply that these innovations are outrageous and, at the same time, logical, and we help bring them to the world. It is our mindset of possibility that makes us different.

We are builders 

Capgemini High Tech recognizes that success means embracing and exploiting near-tech. It’s about bringing together talent and technology to help organizations reach near-tech faster. However demanding or specific the challenge might be, an expert can help solve it. We proudly act as a comprehensive partner for High Tech clients looking to leverage near-tech to transform their business. But what makes us unique is that we don’t just define a company’s future; we help build it.

Making connections 

Perhaps the most essential tool for any business seeking new opportunities through high tech is connection – connections between knowledge, capability, and technologies. By drawing on broad networks of deep expertise, companies can use high tech to enter industries and markets that were previously unattainable. We enable our clients to connect with the right semiconductor manufacturing partner, the right business strategy, the right design and UX partner, the right production and shipping plan, and the right data and software security solution. We bring the connections to make near-tech real.

Capgemini High Tech serves the tangible possibilities that are just within reach – decisions and actions that matter now. Whether through connections, living in the gap between the extraordinary and everyday, building real solutions, or embracing the outrageous, we are the partner for near-tech.

Let’s innovate the near technology of your industry together. 

For questions, reach me here!

About the author

Brett Bonthron

Executive Vice President and Global High-tech Industry Leader
Brett has over 35 years of experience in high-tech, across technical systems design, management consulting, start-ups, and leadership roles in software. He has managed many waves of technology disruption from client-server computing to re-engineering, and web 1.0 and 2.0 through to SaaS and the cloud. He is currently focusing on defining sectors such as software, computer hardware, hyper-scalers/platforms, and semiconductors. He has been an Adjunct Faculty member at the University of San Francisco for 18 years teaching Entrepreneurship at Master’s level and is an avid basketball coach.

    Winning the war on criminal shell companies

    Manish Chopra
    28 April 2023

    Cassandra began working in a Toronto massage parlor as a teenager. She spent the next ten years in fear, kept in line by violence. A recent analysis uncovered 700 illicit parlors in Canada linked to transnational crime syndicates. There is strong evidence these criminal organizations used shell companies to launder their human-trafficking profits – dragging respectable financial institutions down into a world they would not have engaged with, had they known who they were dealing with.

    The problem of organized criminals involving financial institutions in their activities spans the globe, encompasses various types of financial crime, and has real-world effects on governments and individual victims. Fortunately, new technologies are giving organizations the tools they need to win the war on criminal shell companies.

    Hiding money in real estate
    There are so many examples of abusive shell corporations that it’s difficult to choose just a few. What follows should give a sense of the range of organized criminal activity involving financial institutions.

    In the UK, some £4.2 billion worth of properties was bought by politicians and public officials with suspicious wealth. Not only does that give criminals a place to hide their illegally acquired assets, but it also drives up property prices, and puts tenants and future buyers in tenuous situations.

    Another report identified 766 corporate vehicles alleged to have been involved in laundering approximately £80 billion. Nearly half of the companies involved were based out of just eight addresses – which would have raised suspicion, if anyone had noticed.

    Money laundering
    In January 2017, the UK’s Financial Conduct Authority (FCA) and the New York Department of Financial Services (DFS) fined a European bank for failure to identify, prevent and report $10bn of Russian money laundering.

    The DFS commented: “The selling counterparty was typically registered in an offshore territory… and none of the trades demonstrated any legitimate economic rationale.” In addition, “The bank’s Know Your Customer (KYC) processes were weak, functioning merely as a checklist… Virtually all of the KYC files for the companies involved in the scheme were insufficient.” 

    Earlier that decade, some 5,140 companies and 732 banks in 96 countries were involved in the immense so-called “Russian laundromat,” in which 21 fictitious companies (most registered at Companies House in London) laundered somewhere between $20 billion and $80 billion out of Russia.

    Sanctions evasion
    The US government has recently issued a warning to companies to be vigilant for Russia-related sanctions evasion, with regulatory expectations that businesses inside and outside the country should maintain effective compliance programs to minimize the risk of evasion. The UK government claims that Russian nationals have taken advantage of weak anti-money laundering (AML) controls to launder war profits stolen from Ukraine.

    Human trafficking
    Cassandra’s case was far from isolated. When the data on suspicious massage parlors was cross-checked with other databases, the full international scale was revealed. A spokesperson for Thomson Reuters Special Services commented, “These are not just individual massage parlors trafficking women but are globally-connected enterprises like a cartel.”

    Human trafficking is often transnational by its nature – victims are isolated far from support, in countries where they don’t know the laws or even speak the language. In Europe, over 900 potential victims were found in just one investigation last year. Another 200 victims of a Chinese “conveyer belt of sexual exploitation” were rescued in Belgium and Spain this February.

    Around the world, human trafficking and money laundering are linked. ACAMS Today reports on a White House fact sheet stating that, “approximately $150 billion in illicit proceeds are generated each year by these criminals globally; monies that will subsequently be laundered through our legitimate financial systems.”

    Red Flags
    Many of these cases share common themes:

    • weak KYC processes, where a checkbox approach was used,
    • the use of shell companies that appeared to have no employees and/or no apparent business activity,
    • company formation services in offshore locations,
    • nominee directors and shareholders,
    • common ownership and addresses,
    • the country of operations, the registered office, and the destination of payment flows being completely different and unconnected,
    • the countries where the funds ended up or flowed through often lacking effective AML regimes,
    • the involvement of politically exposed persons, directly or behind the shell companies,
    • the volume and value of funds transferred over 12 to 36 months being so substantial that it made no economic sense,
    • payments to seemingly unrelated businesses and individuals,
    • accounts used as flow-throughs, where the stated purpose of the wire payments (e.g., fees and commissions) was inconsistent with the businesses of the sender and receiver.

    We now have the tools to connect the dots – to redefine due diligence, go beyond a checklist and use data to form a clear picture. It’s possible for financial institutions to integrate their data with external data, including from corporate registries, adverse media and law enforcement agencies. A 360° view of the client and Perpetual Know Your Customer (pKYC) technology can help banks and financial institutions form a fuller picture of their clients over time, and react in real time to suspicious activity.
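    To make that concrete, here is a minimal, illustrative sketch in Python – assuming a simplified, hypothetical data model rather than any real pKYC product – that scores companies against a few of the red flags above, such as crowded registered addresses, nominee directors reused across entities, and payment volumes that make no economic sense for a company with no employees.

```python
# Hypothetical illustration: scoring a few shell-company red flags by
# cross-referencing registration data with payment volumes.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Company:
    name: str
    registered_address: str
    employee_count: int
    directors: list = field(default_factory=list)
    monthly_outflow_usd: float = 0.0

def red_flag_score(company: Company, universe: list) -> int:
    """Count simple red flags; real pKYC systems draw on far richer data."""
    score = 0
    # Many entities registered at one address is a classic warning sign.
    address_counts = Counter(c.registered_address for c in universe)
    if address_counts[company.registered_address] > 10:
        score += 1
    # Large payment flows with no employees make little economic sense.
    if company.employee_count == 0 and company.monthly_outflow_usd > 1_000_000:
        score += 1
    # Nominee directors reused across many apparently unrelated companies.
    director_counts = Counter(d for c in universe for d in c.directors)
    if any(director_counts[d] > 5 for d in company.directors):
        score += 1
    return score

companies = [
    Company("Acme Trading Ltd", "12 Example St, London", 0,
            ["J. Doe"], 4_200_000.0),
    # ... thousands more, loaded from registries, payments, and watchlists
]
for c in companies:
    if red_flag_score(c, companies) >= 2:
        print(f"Escalate for enhanced due diligence: {c.name}")
```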

    Cassandra made it out of the parlors and went on to found an organization that supports women and girls in her situation. Is your institution doing its part?

    Author

    Manish Chopra

    Global Head, Risk and Financial Crime Compliance
    Manish is the EVP and Global Head for Risk and Financial Crime Compliance for the Financial Services Business at Capgemini. A thought leader and business advisor, he partners with CXOs of financial services and Fintech/payments organizations to drive transformation in risk, regulatory and financial crime compliance.

    Karim A. Rajwani

    Senior Advisory Consultant, Regulatory and Compliance

      Quantum computing: The hype is real—how to get going?

      Capgemini
      27 Apr 2023

      We are witnessing remarkable advancements in quantum computing – not only in hardware, but also in theory and applications.

      Now is the age of exploration: how, for example, will quantum machine learning differ from classical machine learning, and will it be beneficial or malicious for cyber security? Together with Fraunhofer and the German Federal Office for Information Security (BSI), we explored that unsettled question and found something sensible to do today. There are two effective ways in which organizations can start preparing for the quantum revolution.

      The progress in quantum computing is accelerating

      The first quantum computers (with 2 and 3 qubits) were introduced 25 years ago, and the first commercially available annealing systems are now 10 years old. During the last 5 years, we have seen bigger steps forward, for example systems with more than twenty qubits. Recent developments include IBM’s Osprey chip with 433 qubits, first results in quantum error correction by Google, as well as important results in interconnecting quantum chips announced by MIT.

      From hype to realistic expectations

      Where some see steady progress and concrete steps forward, others remain skeptical and point out missing results or unkept promises – the most prominent of which is found in the field of factoring large numbers: there is still a complete lack of tangible results in breaking the RSA cryptosystem.

      However, development in quantum computing has already passed various important milestones. Dismissing it as mere hype that will pass eventually now becomes increasingly difficult. In all likelihood, this discussion can soon be laid to rest, or at least refocused towards very specific quantum computing frontiers.

      The domain of machine learning has a natural symbiosis with quantum computing. Especially from a theoretical perspective, research in this field is considered fairly advanced. Various research directions and study routes have been taken, and a multitude of results are available. While much research is done through the simulation of quantum computers, there are also various results of experiments run on actual, non-simulated quantum devices.

      As both the interest in and the potential of quantum machine learning are remarkably high, Capgemini and the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS have delved deeply into this topic. At the request of the German Federal Office for Information Security (BSI), we went as far as analyzing the potential uses for, as well as against, cyber security. One of the major results of this collaboration is the report “Quantum Machine Learning in the Context of IT Security,” published by the BSI. Current developments indicate that there is trust in quantum machine learning as a research direction and in its (perceived) future potential.

      Laggards increasingly lack opportunities

      Better and more efficient IT technologies and products become available all the time; implementing them is not always reasonable and is often difficult for an organization to mirror. Nevertheless, innovation means that a certain “technology inflation” constantly devalues existing solutions. Therefore, an important responsibility of every IT department is to keep up with this inflation by implementing upgrades and deploying new technologies.

      Let us consider a company that still delays the adoption of cloud computing. While this may have been reasonable for some in the early days, the technology has matured. Over time, companies that have shied away from adoption have missed out on various cloud computing benefits while others took the chance to gain a competitive advantage. What’s more, the longer the adoption was delayed or the slower it was conducted, the further the company has allowed itself to fall behind.

      Time to jump on the quantum computing bandwagon?

      Certainly, quantum technology is still too new, too unstable, and too limited today to adopt it in a productive environment right away. In that sense, a pressure to design and implement plans for incorporating quantum computing into the day-to-day business does not exist today.

      However, is that the whole story? Let us consider two important pre-implementation aspects. The first of these is to ensure everyone’s attention to the topic: for an eventual adoption, a widespread appreciation of what might be gained is crucial to get people on board. Without it, there is a high risk of failing – after all, every new technology comes with various challenges and demands some dedication. But developing the motivation to adopt something new and tackle the challenges takes time. So, it’s best to start early with building awareness and a basic understanding of the benefits throughout all levels and (IT) departments.

      The second aspect is even more difficult to achieve: experience. This translates to know-how, participation, and practice within the organization, so that it is prepared to adopt technologies once they are ready for productive deployment. In the case of quantum computing, gaining experience is harder than with other recent innovations: in contrast to, for example, cloud computing – which constitutes a different way of doing the same thing, and thus allows companies to get used to it slowly – quantum technologies represent a fundamentally new way of computation, as well as a completely new approach to solving problems and answering questions.

      The key to the coming quantum revolution is a quantum of agility

      Bearing in mind the scale of both pre-implementation aspects and the uncertainty of when exactly quantum is going to deliver advantage in the real world, organizations need to start getting ready now. On a technical level, and in the realm of security, the solution to the threat of quantum cryptanalysis is the deployment of post-quantum cryptography. However, on an organizational level, the solution is crypto agility: having done the necessary homework to be able to adapt quickly to the changes, whenever they come. Applying the same concept, quantum agility represents having the means to adapt quickly to the fundamental transformations that will come with quantum computing.

      Thus, building awareness and changing minds now will have a considerable pay-off in the future. But how can organizations best initiate this shift in mindset towards quantum? Building awareness is a gradual process that can be promoted by a working group even with small investments. This core group might for example look out for possible use cases specific to the respective sector. Through various paths of internal communication, they can spread the information in the proper form and depth to all functions across the organization.

      To build up knowledge and experience, the focus should not be on viable products intended to replace existing solutions within the company. Instead, this is about playing around with new possibilities, venturing down paths that might never yield tangible results, discovering the guard rails specific to each corporation, and examining fields where quantum computing might eventually be the way to substantial competitive advantages.
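      For a concrete sense of what such playing around can look like, here is a minimal sketch – assuming the open-source Qiskit library is installed (pip install qiskit) – that prepares a two-qubit entangled Bell state, often the “hello world” of quantum computing, and prints the outcome probabilities.

```python
# Minimal first experiment: prepare a Bell state and inspect it.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)      # put qubit 0 into an equal superposition
qc.cx(0, 1)  # entangle qubit 1 with qubit 0

state = Statevector.from_instruction(qc)
print(state.probabilities_dict())  # approximately {'00': 0.5, '11': 0.5}
```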

      Frontrunners are gaining experience in every sector

      For example, some financial institutions are already exploring the use of quantum computing for portfolio optimization and risk analysis, which will enable them to make better financial predictions in the future. Within the pharma sector, similar efforts are made, gauging the potential of new ways of drug discovery.

      In the space of quantum cyber security, together with the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, Capgemini has built a quantum demonstration: performing spam filtering on a quantum computer. While this might be the most overpriced – and under-engineered – spam filter ever, it is a functioning proof of concept.

      Justifying investment in quantum computing requires long-term thinking

      The gap between companies in raising organizational awareness and gaining experience with the new technology is gradually growing. Laggards run a considerable risk of experiencing the coming quantum computing revolution as a steamroller, flattening everyone who is unprepared.

      The risks and challenges associated with quantum technology certainly include the cost of adoption, the availability of expertise and knowledgeable talent, as well as the high potential of unsuccessful research approaches. However, the cost of doing nothing would be the highest. So, it’s best to start now.

      We don’t know when exactly the quantum revolution will take place, but it’s obvious that IBM, Google, and many more are betting on it – and in Capgemini’s Quantum Lab, we are exploring the future as well.

      Christian Knopf

      Senior Manager Cyber Security
      Christian Knopf is a cyber defence advisor and security architect at Capgemini and has a particular consulting focus on security strategy. Future innovations such as quantum algorithms are in his field of interest, as are the recent successes of deep neural networks and their implications for the security of the clients he works with.

        Future IT: FinOps, GreenOps and sustainable cloud strategies

        Güncel Düzgün 
        26 Apr 2023

        Sustainable IT starts with public cloud…

        Driven largely by the Paris Agreement of 2015, businesses of all sizes are increasingly committed to sustainability. From reducing carbon emissions to implementing eco-friendly practices, there is a growing push towards sustainable business operations.

        This comes at a time when rapid growth of the digital economy is significantly increasing energy consumption and carbon emissions. The rise of computing has led to concerns about its environmental impact. As companies strive to become more environmentally conscious, a green concept for future IT has become a critical component of sustainable business operations.

        The role of public cloud will be central to sustainable IT. However, when managed haphazardly, public cloud can rapidly lead to waste and inefficiency – virtually bottomless and out of sight, public cloud tempts employees to shoot first and ask questions later. On the other hand, properly managed public cloud provides consistent energy savings in comparison to traditional on-premises data center solutions. This comes down to a conscious choice that each organization faces.

        …and continues with FinOps

        Leveraging energy-efficient technologies, optimizing resource usage, promoting remote work, developing green computing practices, and enabling collaboration among users and organizations, public cloud has the potential to significantly reduce carbon emissions compared to traditional on-premises IT infrastructure (by up to 95%).

        Therefore, when I work with organizations to help them reduce their carbon footprints, one of my first recommendations is migrating their workloads to public cloud (if an organization hasn’t made that move yet). However, the cloud sustainability journey doesn’t finish with an eco-friendly public cloud environment; this is where it begins. Once the workloads are landed in the public cloud, it is important to run them with a sustainable cloud operation approach. That’s where FinOps comes in.         

        The role of FinOps in cloud sustainability

        FinOps (short for financial operations) is a cloud financial management discipline and cultural practice that focuses on tracking and optimizing cloud spending. It does so by bringing together IT, finance, engineering, and business teams, and providing a cultural mechanism for teams to manage their cloud costs, where everyone takes accountability for their cloud usage.

        FinOps – when applied correctly – ensures the optimal usage of cloud resources, which in turn promotes not only cost efficiency but also energy savings and carbon emissions reductions. In other words, without such a cost-and-usage-control framework in a public cloud environment – where resource deployment is effectively unlimited under a pay-as-you-go consumption model – suboptimal cloud usage leads to underutilization and waste in the form of excess energy consumption and carbon emissions. Therefore, FinOps is sustainable by design.

        FinOps helps organizations achieve their sustainability goals in the cloud by implementing cloud cost optimization techniques like the following (a short illustrative sketch follows the list):

        • Establishing FinOps governance and policies
        • Rightsizing and autoscaling of elastic resources
        • Shifting workloads to containers
        • Automated scheduling of compute services
        • Decommissioning of idle and unused resources
        • Reducing log ingestion
        • Optimizing storage tier and redundancy
        • Refactoring and mutualizing of applications by cloud native architectures (e.g., platform as a service)
        • Budgeting, forecasting, and overspending alerts
        • Managing anomalies
        • Establishing FinOps culture and accountability
        • Switching usage-based pricing to new consumption models (reservations, saving plans, spot instances, etc.) for more efficient resource usage 
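        To illustrate what one of these techniques – decommissioning or rightsizing idle resources – can look like in practice, here is a minimal, non-authoritative sketch. It assumes an AWS environment with the boto3 SDK and uses purely illustrative thresholds; real FinOps tooling would be far more nuanced.

```python
# Illustrative FinOps sketch: flag running EC2 instances whose average
# CPU utilization over 14 days suggests rightsizing or decommissioning.
from datetime import datetime, timedelta
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")
now = datetime.utcnow()

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance["InstanceId"]}],
            StartTime=now - timedelta(days=14),
            EndTime=now,
            Period=86400,           # one datapoint per day
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        if points:
            avg_cpu = sum(p["Average"] for p in points) / len(points)
            if avg_cpu < 5.0:       # illustrative "idle" threshold
                print(f"{instance['InstanceId']}: avg CPU {avg_cpu:.1f}% – review")
```

        Every instance decommissioned or rightsized this way saves money and, by the same token, energy and emissions – which is exactly why FinOps is sustainable by design.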

        Brothers-in-arms – how FinOps supports GreenOps

        FinOps and GreenOps are two distinct methodologies that – when used together – form a powerful strategy for organizations looking to optimize their cloud usage and reduce their environmental impact.

        While FinOps is focused on managing cloud consumption, GreenOps is focused on implementing sustainable practices within an organization’s operations. It involves developing strategies that prioritize environmental sustainability while still achieving business objectives like reducing waste, promoting renewable energy sources, using eco-friendly materials and processes, and promoting a culture of sustainability and environmental responsibility. By identifying and mitigating carbon emissions, organizations can reduce their impact on the environment and contribute to a more sustainable future.

        There is a bit more to it though. GreenOps encompasses everything an organization does to reduce the carbon footprint of their cloud program. This might include relocating resources to companies or regions where sustainable energy is used or using code that requires less energy to run. But the greatest contribution by far is usually – in my experience – made by FinOps.

        The most obvious way is through optimization of cloud resource consumption, which naturally reduces energy consumption. FinOps can also help companies choose the most environmentally friendly cloud service providers. Best of all, the gains made by FinOps allow for continuous improvement and ensure that a company remains on track to achieve its sustainability targets.

        FinOps and GreenOps can create common work areas with unified dashboards, enabling teams to make joint decisions. A great example of how FinOps and GreenOps can work together is through the optimization of supply chains. Together, teams can identify areas where supply chain inefficiencies are leading to unnecessary energy consumption so that organizations can reduce their carbon emissions and save money. FinOps can be used to identify supply chain inefficiencies, while GreenOps can be used to implement sustainable supply chain solutions.

        Future IT: Sustainable and cost-effective cloud operations

        Sustainability is a natural benefit of FinOps – one of many. In fact, FinOps can be an essential framework for organizations looking to reduce their environmental impact while saving money. Moreover, when combined, FinOps and GreenOps can create a more holistic approach for companies to manage their cloud resources in a sustainable and cost-effective way while also maximizing the business value of public cloud.

        Incorporating FinOps recommendations into cloud operations can help drive both sustainability and cost-effectiveness. Creating a vision for end-to-end cloud operations and decarbonizing procurement are essential first steps that enable teams to identify and optimize savings opportunities. Continuous improvement and automation, empowering teams to take action, and promoting a culture of green accountability are crucial for long-term success. By adopting these measures, businesses can achieve their financial goals while also contributing to a more sustainable future.

        Looking to go deeper into FinOps? Check out our FinOps services & download the whitepaper developed by IDC and commissioned by Capgemini to explore the full potential of FinOps and the value it brings to every stage of the cloud journey.

        Meet our expert

        Güncel Düzgün

        Global FinOps Offer Co-Lead

          Ensuring Aerospace & Defense supplier resilience & sustainability in a volatile world

          Gilles Bacquet
          24 Apr 2023

          Over the last few decades, big manufacturing companies created vast global networks of suppliers, perfectly set up to deliver critical parts just-in-time.

          But supply chains have proven more fragile than once thought. The global pandemic showed that big shocks can create dramatic change, which suppliers may not be ready for. And when we emerge from that change – whether a day or a year later – individual suppliers may not be the same. They may have gone bust, lost key staff, pivoted to different customers, or found they are simply not set up for the new situation before them.

          Hopefully the next pandemic is years away. But shocks are becoming more common and the dream of ‘borderless’ supply chains is fading. Whether it’s Brexit, Ukraine, lockdowns, protectionism, sanctions, fallings-out, or someone getting their boat stuck in the Suez Canal, the world is less stable for suppliers, which means those who rely on them need to be more vigilant.

          This is felt particularly strongly in Aerospace and Defence, industries where around 80% of any finished product comes from the supply chain, and which are currently expected to scale up in this time of supplier uncertainty.

          Forecasts suggest 40,000 new planes will be needed in the next 20 years, off the back of a rapid scale-down during the pandemic. Defence had been reducing spend for years and discontinued many products, but now must restock and reinvent as military threats rise and its support for Ukraine depletes supplies.

          As they build and evolve supply chains, they will face new challenges. For example, materials from aluminium to chips are in high demand from automotive, a far more material-intensive industry. And they will need to pay more attention to sustainability – new rules and pressures may mean that polluting suppliers will not be allowed to stay on the books for long.

          So, they need sustainable supply chains, with long-term certainty, to fuel a massive production increase, in the face of stiff competition for resources, in an unpredictable world.

          What should they do?

          Building resilient & sustainable supply chains from the bottom up

          At the top level, a resilient supply chain might be seen as a control tower, with visibility of stock levels and events, backed by whizzy AI making real-time optimisations and recommendations. And indeed Capgemini has insight on how to build such a system.

          But such a system is only as good as the information that goes into it. In a volatile world, that means understanding the risks facing suppliers themselves, so you can make decisions at an individual supplier level, which embeds resilience across your supply chain. 

          What are the risks to resilience in your supply chain?

          Some risks lead to problems that are highly specific. A broken machine will delay orders. If the supplier is slow to act, the solution may literally be to send your auditor back to oversee the ordering of replacement parts and repair.

          Others may be more structural. If suppliers are becoming reluctant to sell you aluminium – e.g., because of a slowdown in global supply – you may need to change your approach, such as moving your commitment from six months to five years, bidding higher to secure priority, or adding new suppliers. That needs careful analysis of your needs. If you make a commitment and the supplier goes bust or you change to a non-aluminium design, you lose out. But if you do nothing, you may not have the materials to make your product.

          Increasingly, being resilient means being sustainable. As new climate regulations emerge and consumers pile on pressure, working with polluting suppliers will cease to be viable. If those suppliers are important to your product, then that poses a threat to resilience, meaning you need to make them change, or find new ones.

          The first action is to aggregate all GHG emissions from the supply chain (from raw materials to Tier 1 suppliers) in order to calculate inbound emissions and generate Scope 3 reports – covering all indirect emissions (not included in Scope 2) that occur in the value chain of the reporting company, both upstream and downstream. Defining industry standards and tools for this reporting remains a challenge to be tackled.
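          As a simplified sketch of that aggregation step – assuming a hypothetical dataset of supplier emissions already attributed to the buying company – the example below totals emissions by tier to produce the inbound figure that would feed a Scope 3 report.

```python
# Simplified sketch: aggregate supplier GHG data (raw materials to Tier 1)
# into an inbound (upstream Scope 3) emissions total, broken down by tier.
from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    tier: int                # 1 = direct supplier, 2+ = further upstream
    emissions_tco2e: float   # tonnes CO2e attributed to our purchases

suppliers = [
    Supplier("AluCast SA", 1, 12_500.0),     # illustrative figures
    Supplier("ChipWorks Ltd", 2, 3_800.0),
    Supplier("Raw Mining Co", 3, 21_000.0),
]

inbound_total = sum(s.emissions_tco2e for s in suppliers)
by_tier: dict[int, float] = {}
for s in suppliers:
    by_tier[s.tier] = by_tier.get(s.tier, 0.0) + s.emissions_tco2e

print(f"Inbound (upstream Scope 3) total: {inbound_total:,.0f} tCO2e")
for tier in sorted(by_tier):
    print(f"  Tier {tier}: {by_tier[tier]:,.0f} tCO2e")
```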

          Understand the risks, and make changes to build resilience

          Understanding where you have these and other problems means doing the hard work of visiting suppliers. This may be part of a continuous auditing process, part of onboarding, or a specific intervention following a shock. For example, after the pandemic reopening, we found ourselves working with a client to audit their entire 200-company supply chain in just a couple of weeks, to assess readiness to scale in an uncertain time.

          Either way, it means sending experts to suppliers’ sites to understand their situation and gather information. These must be people who can spot problems or risks that would not show up in a call or data analysis, such as lower-than-reported stock, incorrect storage, lack of skills, crumbling machinery, or higher-than-claimed emissions.

          Individual supplier resilience data can be combined with Business Intelligence (BI) tools, risk models and expert analysis to build up full situational awareness across the supply chain – a moderate risk at one supplier may be manageable, but if that risk is replicated across all suppliers of that product, that may be a red flag that needs addressing.

          Having understood the problems and risks, you can identify a remediation or recovery plan. That may include upgrades, training, new processes, data collection and reporting. You must also ensure your supplier implements it, using an appropriate mix of carrots (committed orders, investment) and sticks (threats to take business elsewhere).

          Even with the best-laid plans, suppliers fail, and many companies take months or years to onboard new ones. In a volatile world, these processes need to be revisited to allow much quicker onboarding and transferring of work. This is why implementing a sustainable procurement strategy now is critical: to select only low-emission new suppliers, and to develop current ones so that they improve their GHG emissions in anticipation of future restrictions.

          Managing everyday issues

          Supply chains are complicated, and even reliable suppliers get things wrong. Another key part of supplier resilience is having quick and efficient processes for resolving day-to-day problems, which can quickly add up to a large cost of doing business.

          These are issues like rejecting defective goods. Often companies handle this at a local level, but a central team with a dedicated company-wide platform is usually more efficient. When an issue arises at any level, it is flagged in the system and resolved by dedicated experts – processing the issue, reordering, resolving payments, and closing it out.

          Additionally, the key to getting this right is having people with technical knowledge, change management skills, and the soft skills to get suppliers to listen and act. Having these skills in local teams is also important, as knowledge of language and culture are critical to getting results. Local teams are also vital to maintaining a low carbon footprint.

          The key for success here involves mixing central expertise and a network of experts.

          Conclusion

          In an ideal world, we would have something like the film Minority Report – where we predict exactly what will happen and act in advance, sending instructions straight to suppliers to change focus and adjust for the upcoming shock.

          This is not realistic. The world is too complex to predict every shock, from a machine breaking at a critical moment to a global lockdown. But we can be ready for problems, with resilient suppliers and processes that help them adapt and respond – including the integration of environmental protection (GHG) as a key pillar. That means a combination of on-the-ground experts, processes, and technology.

          Meet our expert

          Gilles Bacquet

          Senior Portfolio & Product Manager, Resilient & Sustainable Supply Chain offers owner
          Gilles is a production and supply chain engineer who joined the Capgemini group in 2001. Starting as an expert consultant in supplier quality management for the automotive and aeronautics industries, he extended his responsibilities to creating the Supply Chain offer and developing business overseas. He now leads the Resilient & Sustainable Supply Chain offers for Capgemini Engineering.

            One platform to rule them all – or not?

            Gustaf Soderlund
            24 April 2023

            Around 15 years ago, I was involved in a large ERP (Enterprise Resource Planning) system deal. It involved HR capabilities (including talent management, compensation, etc.), financial capabilities (like accounts payable and receivable), logistics capabilities (stocks and inventory), and procurement capabilities (vendor management, etc.). At the time, the obvious choice and the best solution was to have all these capabilities integrated into one platform, and the coolest at the time was the SAP system (possibly it still is). The major advantage was having it all connected: with a huge number of transactions handled by one platform for everyone, the cost per transaction went down considerably (economies of scale).

            Could this approach of amalgamation onto one platform work in the DPA (digital process automation) world as well? Does it make sense to run all the processes on one platform in order to get the economies of scale? For many years, organizations have decided to stick with one platform to see cost per transaction go down, simplify the architecture, and gain the advantage of only building skills on one platform. And, if you trust everything the polished vendor slides are saying, you should use the same platform for CRM, customer service, and sales to really reap all the benefits. The story of one platform (to rule them all) is indeed quite convincing! And this trend (especially the use of a platform for many purposes) has been going on for some time now (just look at the waves and quadrants). The big question is: how come not all organizations are ‘buying it’?

            In my view it all comes down to two other strategies, ‘best-of-breed’ and ‘fit-for-purpose’. Adopting a best-of-breed strategy means you want the best technical capabilities from the leading vendor in each specific area, be it DPA or CRM or Marketing automation. This is not new and explains why the ERP concept wasn’t as successful in the CX area. However, having a fit-for-purpose approach is newer.

            Imagine you bought a state-of-the-art DPA platform (like Pega or Appian) to manage your super complex and highly regulated payment processes, with embedded business rules for payment investigations and disputes. The platform might be quite expensive, but considering what you’re saving in quality, compliance, and automation – ‘it’s so worth it!’. For this exact purpose, the leading DPA platform would be a perfect fit, especially if it comes with a framework (like a regulatory scheme) that the vendor consistently updates twice a year. Now imagine, a few years down the line, you’re implementing GDPR. Still highly regulated, with some complexity. The leading DPA solution might still be fit-for-purpose here. But what happens if the bank wants to send out a survey about the new mobile app and is offering each respondent two movie tickets, to be sent to their home address? Probably, the leading DPA platform is a bit of overkill for this purpose, while a simple low-code tool or a basic case management tool could be fit-for-purpose here.

            In the last month alone, I’ve come across three very large banks that are using several process platforms at the same time as part of their strategy. On the other hand, medium-sized or smaller organizations may not be able to afford three parallel technologies for different use cases. There, it would make more sense to see whether there are platforms that are good enough for all (or at least most) of the prospective use cases and usage areas. The advantages of using one platform for DPA, CRM, customer service, and sales can also be substantial there – especially since you have only one technology to maintain and one technology skill to build capabilities on – which would argue the case for ‘one platform to rule them all’. Are you unsure of where your new use case fits best? Feel free to reach out to us!

            Author

            Gustaf Soderlund

            Global VP Public Sector Sweden, Nordics
            Gustaf has many years of experience selling, delivering, and leading business process and customer engagement solutions in a variety of industries, including banking and insurance. Gustaf currently leads Pega globally and is the Augmented Services leader for Financial Services.


              Truck OEMs and sustainability: Realizing the ambition

              Fredrik Almhöjd
              21 Apr 2023

              Net zero targets are a great start, but many commercial vehicle manufacturers have yet to put together a credible strategy for reaching them. A holistic approach is key, believes Fredrik Almhöjd, Director, Automotive & Manufacturing at Capgemini and the company’s go-to-market lead for commercial vehicles in the Nordics.

              Climate change is now widely recognized by commercial vehicle (CV) manufacturers as one of our generation’s biggest challenges, and most companies seem determined to tackle it. However, many find that, while it’s relatively easy to define net zero targets, creating a coherent strategy for achieving them is trickier.

              This article takes a general look at the CV industry’s sustainability ambitions and concerns and proposes a holistic response. In subsequent articles, I’ll delve deeper into some key aspects of this topic.

              A strategic imperative

              Until recently, automotive OEMs tended to view the “sustainability agenda” as a box to be ticked for PR purposes. That picture has now changed drastically. With transportation accounting for 37% of global CO2 emissions in 2021 according to the International Energy Agency, stakeholders including regulators, customers, and the public are piling on the pressure for OEMs to lower emissions in line with the Paris Agreement and similar targets.

              As a result, automotive industry boards now recognize the strategic importance of sustainability and have put it at or near the top of their agendas. One sign of this recognition is that more and more corporations are appointing Chief Sustainability Officers. The Harvard Business Review recently reported that more CSOs were appointed in 2021 than in the previous five years combined – that figure covers all industries, but we see a similar trend in automotive.

              In line with this trend, all the major truck OEMs communicate clear, ambitious goals. Many of these companies have signed up for the Science Based Targets initiative (SBTi) to help them achieve Paris Agreement objectives, for example.

              Truck OEMs’ goals include phasing out diesel in favour of fossil-free trucks within the next decade. While there’s general agreement that this needs to happen – at least in most markets – many OEMs have yet to formulate a clear strategy regarding battery electric vehicles (BEVs) and fuel cell electric vehicles (FCEVs). BEV will most probably be appropriate for regional and distribution trucks, while FCEV will be the usual choice for the long haul – and so both are likely to be in the portfolio.

              What’s still missing?

              While OEMs’ product plans for zero-emission vehicles are already well advanced, they are not yet able to realize their overall sustainability vision. That’s because most companies do not yet have a holistic, systematic approach. Such an approach needs to look beyond the product portfolio and address the whole automotive product lifecycle, and much more besides.

              The other thing that’s lacking is speed. To stand a chance of reaching the Paris Agreement and similar targets, the industry urgently needs to move from talking the talk to taking meaningful action.

              A holistic approach to Commercial Vehicle Sustainability

              In planning the approach, it’s helpful to think in terms of three building blocks:

              1. Sustainability culture

              Companies need to work toward sustainability across the end-to-end lifecycle. This requires the creation of a whole portfolio of sustainable products and services, with an emphasis on the circular economy.

              For that to happen, the whole organization – and its ecosystem – needs to move to a sustainability-aware culture. Senior management should communicate clear targets and KPIs that support sustainability ambitions. These targets and KPIs must be translated into meaningful goals and incentives for everyone involved, from the boardroom to the shop floor.

              With the right culture in place, the journey to sustainability will rapidly gather momentum, as leading OEMs are already discovering.

              2. Reliable analysis and reporting

              To navigate and manage the journey, it is critical to be able to measure progress. Sound metrics are also vital to substantiate sustainability claims and fend off accusations of greenwashing.

              OEMs, therefore, need to gather accurate, up-to-date data about all activities and projects. They also need to put in place the analytic tools to report progress against baselines and targets at any required level, as well as to deliver comprehensive ESG reports.

              The right data and connectivity architecture is critical because real-time or near-real-time data may be needed on occasion. Our report Driving the Future with Sustainable Mobility makes the case for implementing an “intelligence nerve centre” to address this requirement.

              3. Methodical innovation

              Innovation is a key enabler of sustainability, and OEMs need to have clear strategies for achieving it – whether in-house, via partnerships, or most likely through a combination of methods.

              It’s not just technical innovation that’s needed. New business models will also be required – particularly circular economy models.

              The need for collaboration

              Sustainability can’t be achieved by any one company in isolation. Let’s look at just a few examples where collaboration with other organizations is essential.

              1. Working with governments

              OEMs should lobby governments to incentivize the take-up of EVs, as well as to put in place low-emission zones and similar restrictions. Governments also have a part to play in establishing the necessary infrastructure for BEVs and FCEVs.

              2. Working with ecosystem partners

              Charging and power companies can make a big contribution to sustainable transportation. Less obviously, perhaps, the same is true of technology providers developing digital services for customers, because such services can facilitate more sustainable vehicle use.

              OEMs also need to collaborate with parts suppliers to ensure that inputs are produced and delivered as sustainably as possible.

              Don’t forget the opportunities

              This article has focused on the challenges of sustainability, but there are great opportunities as well, not least in terms of securing access to investment by demonstrating compliance with stakeholders’ ESG targets. Circular economy models, too, have the potential to generate new revenues, as well as help companies overcome sustainability challenges.

              These opportunities will be covered in more depth in future articles, as will the topic of collaboration and the need for a sustainability culture.

              Meanwhile, please contact me if you’d like to discuss any of the issues raised here or learn how Capgemini can support your sustainability journey.

              About the author

              Fredrik Almhöjd

              Director, Capgemini Invent
              Fredrik Almhöjd is Capgemini’s Go-to-Market Lead for Commercial Vehicles in the Nordics, with 25+ years of sector experience plus extensive know-how in Sales & Marketing and Customer Services transformation.

                How governments are using IT to shrink their carbon footprints

                Gunnar Menzel
                19 Apr 2023

                The public sector has a mandate to lead on sustainability, and this includes their IT footprint. How can governments achieve digital growth while cutting carbon emissions?

                In my experience working with public sector leaders, an often-overlooked source of emissions is IT. The global IT industry is responsible for approximately 3% of total CO2 emissions, and if left unchecked, this could grow to as much as 10%. Fortunately, more and more organizations, public and private, are taking the necessary steps to reduce their carbon footprints:

                1. Reducing the energy needs of an IT system and making sure that only sustainable energy sources are being used
                2. Using IT and data technologies to help drive down an organization’s total CO2 emissions, with potential reductions of as much as 20%.

                Let’s start with some simple tools that public sector leaders can use to control their IT systems’ carbon footprints, and then I’ll turn to some larger sustainability strategies made possible by new technologies.

                Sustainable IT is observable IT

                The first step to managing your IT CO2 footprint is understanding how your IT system works, and where it draws energy. This can be done with a tool like an application portfolio manager, by conducting a carbon audit, or by using a carbon calculator. For public sector leaders interested in going a step further, you can’t do better than a digital twin. A digital twin is a virtual replica of a system (in this case, an IT system), which provides a team with an x-ray view into how it functions. Digital twins make it possible to experiment with changes virtually, before implementing them in real life, and they’re an outstanding tool for general understanding. Governments are often saddled with legacy technologies, which require inordinately high costs in the form of energy and maintenance. Optimizing public sector IT systems should be step one for any department that can spare the up-front investment. The benefits are immediate and lasting.
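                To show the arithmetic a carbon calculator applies, here is a back-of-envelope sketch with purely illustrative numbers: emissions are roughly the energy drawn by the hardware, multiplied by a data-center overhead factor (PUE) and the carbon intensity of the local grid.

```python
# Back-of-envelope IT carbon estimate (all inputs are illustrative).
def annual_it_emissions_kg(server_count: int,
                           avg_power_watts: float,
                           pue: float,
                           grid_kg_co2_per_kwh: float) -> float:
    """Energy drawn by servers, scaled by facility overhead (PUE),
    converted to CO2 via the grid's carbon intensity."""
    hours_per_year = 24 * 365
    energy_kwh = server_count * avg_power_watts / 1000 * hours_per_year * pue
    return energy_kwh * grid_kg_co2_per_kwh

# e.g., 500 legacy servers at 350 W, PUE 1.8, on a 0.4 kg CO2/kWh grid:
print(f"{annual_it_emissions_kg(500, 350, 1.8, 0.4):,.0f} kg CO2 per year")
```

                Running the same numbers against a consolidated or cloud-hosted estate makes the savings potential immediately visible.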

                How data can help governments radically reduce carbon footprints

                Now, what happens if you turn those same optimization tools on your overall carbon footprint? In my work with the public sector and private sector, I’m often struck by two things: how little people know about their organization’s energy usage, and how easy some of the gains can be.

                Once an organization’s energy usage is observable, the doors are open to optimization. Among the specific tools that can lower an organization’s carbon footprint are:

                • Going digital
                • Moving to the cloud
                • IoT

                Let’s look at each in turn.

                Cutting carbon by going digital 

                While digitization can be a valuable tool in the pursuit of sustainability, it’s important to approach it with a nuanced understanding of the potential risks and benefits. On one hand, digital systems reduce the need for paper-based processes and streamline operations, leading to a more efficient use of resources and reduced labor. They can also enable governments to collect and analyze data, providing better visibility into their sustainability efforts.

                However, there are also potential downsides to digitization. For example, gathering and processing more data requires energy, which can contribute to carbon emissions. Additionally, the proliferation of digital devices can lead to a rise in e-waste if not managed responsibly. One report by the Capgemini Research Institute found that less than half of executives are aware of their companies’ IT carbon footprints, or of the steps they might take to reduce them.

                The key is planning. Digitization is a valuable tool for governments committed to sustainability when it is combined with a holistic understanding of energy usage, and when management takes steps to mitigate any negative effects. With the right strategy in place, digitization becomes a powerful enabler of decarbonization and resource efficiency.

                Cutting carbon by migrating to the cloud

                Another way that IT can help reduce CO2 emissions is through the implementation of cloud computing. By moving data and applications to the cloud, private and public sector organizations can dramatically reduce their energy consumption and carbon footprints.

                One example of a government saving energy by migrating to the cloud is the U.S. Government’s Cloud Smart initiative, which encourages federal agencies to move their IT systems to the cloud. As a result, agencies can reduce the number of physical servers they need to maintain, which in turn lowers energy consumption and greenhouse gas emissions.

                Cutting carbon through IoT 

                By connecting devices and systems, the Internet of Things (IoT) also helps organizations optimize their operations and reduce energy consumption. For instance, a smart building can automatically adjust lighting and temperature based on occupancy levels, resulting in significant energy savings.

                However, reducing CO2 emissions is not the only benefit that IT can offer. It can also open new opportunities and help address wider sustainability challenges. For example, using IT to improve supply chain management can help organizations reduce their environmental impact by reducing the amount of waste and increasing the efficiency of their operations.

                Maintaining focus in difficult times

                In the public sector today, new events constantly vie for attention. Inflation, the war in Ukraine, chemical train derailments and other challenges must not distract public sector organizations from addressing the global warming challenge. IT has an important role to play in reducing CO2 emissions and helping to create a more sustainable future. By understanding our current CO2 footprint, establishing proper governance, selecting and scaling the right use cases, and using real-world examples, we can make a meaningful impact. Let’s take action and do our part to protect our planet.

                Read more about real-life use cases for carbon-cutting IT in TechnoVision for Public Sector, our yearly look at leading technological applications in the public sector space. For more information, contact me at gunnar.menzel@capgemini.com.

                Author

                Gunnar Menzel

                Chief Technology Officer North & Central Europe 
                “Technology is becoming the business of every public sector organization. It brings the potential to transform public services and meet governments’ targets, while addressing the most important challenges our societies face. To help public sector leaders navigate today’s evolving digital trends we have developed TechnoVision: a fresh and accessible guide that helps decision makers identify the right technology to apply to their challenges.”

                  The metrics that matter

                  Vinay Patel
                  20 April 2023

                  How banks are using data to monitor and manage their customer experience performance

                  It’s 2023, and banks are still being asked to go in two directions at once.
                  On one hand, banks must keep up with customers’ evolving expectations, while on the other, they must continuously improve business profitability. It’s a challenge that requires a high degree of coordination and strategic planning. Unfortunately, for many banks a lack of effective metrics and processes is hindering their ability to make informed, real-time decisions, and that’s holding them back. This blog will explore how effective customer experience performance management can help banks achieve their business objectives with optimal customer satisfaction.

                  The intersection of business and customer experience
Banks must focus on achieving specific business improvements such as cost reduction and revenue enhancement, without sacrificing customer experience. Preference should be given to transformation projects that meet these expectations. Projects such as omnichannel communication, self-service adoption, knowledge management, mobile-first design, and CCaaS all have the potential to improve the customer experience and drive business results.

                  To improve customer service, banks must define specific, controllable customer service activities that can be performed, measured, and improved at each level of the organization. This should include coordination across the customer life cycle, technology acquisition, processes, customer interactions, and collaboration with partners. There’s a term for this process: customer service performance management. The benefits of customer service performance management are clear – streamlined decision making, faster process delivery, and lower customer service expenditures. So… where’s the snag?

                  Defining metrics…
If you can’t measure it, you can’t master it. The key to focusing attention and effort is to define the right metrics. This means leveraging data analytics and customer feedback to gain insights into customers’ behavior and preferences. It means enhancing data-sharing capabilities between departments. Do your customer service agents have access to your CRM data? Are salespeople leveraging customers’ unique histories? By defining and tracking metrics, and by sharing them securely between teams, banks can improve customer experience and drive profitability, ultimately achieving sustainable growth and long-term success in the dynamic banking industry.
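As a concrete illustration, two metrics banks commonly track are first-contact resolution (FCR) and customer satisfaction (CSAT). The short Python sketch below computes both from a handful of interaction records; the record fields and the satisfaction threshold are hypothetical, not any particular bank’s schema:

```python
# Minimal sketch of customer-service metric computation
# (field names and thresholds are illustrative assumptions).
interactions = [
    {"customer": "A", "resolved_first_contact": True,  "csat": 5},
    {"customer": "B", "resolved_first_contact": False, "csat": 3},
    {"customer": "C", "resolved_first_contact": True,  "csat": 4},
]

def first_contact_resolution(records) -> float:
    """Share of interactions resolved without a follow-up contact."""
    return sum(r["resolved_first_contact"] for r in records) / len(records)

def csat_score(records, satisfied_threshold: int = 4) -> float:
    """Share of customers rating the interaction 4 or 5 on a 5-point scale."""
    return sum(r["csat"] >= satisfied_threshold for r in records) / len(records)

print(f"FCR:  {first_contact_resolution(interactions):.0%}")
print(f"CSAT: {csat_score(interactions):.0%}")
```

In practice, figures like these would be computed continuously from the contact-center data stream and shared securely across the teams described above.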

                  …that drive improvement
With a set of data-driven metrics in place, step two is using them to inform decisions at every level of the organization, from frontline agents to the executive team.

Here are several recommendations for improving service for banking customers:

                  1. Appoint a Customer Experience Officer with authority to determine whether the defined metrics and activities support or inhibit collaboration between and among multiple groups.
                  2. Groom and compensate customer service managers based on their enterprise vision of customer service.
3. Evolve customer life cycle management from an often-discussed but poorly administered concept into a practical, day-to-day approach.
                  4. Measure the specific effects technology has on decision making, organizational structures, business processes and customer expectations.
                  5. Build a real-time analytical framework to ensure that a customer is treated appropriately at every phase in the customer life cycle.


                  A foundation for lasting customer loyalty
                  By making customer experience an integral part of the overall business strategy, banks can improve customer satisfaction, build customer loyalty, and enhance overall business performance. This process can also help break down silos within the organization, leading to better collaboration and communication across departments, for a more unified and cohesive customer experience.

                  Ultimately, banks must view customer service as an enterprise-wide business objective and prioritize effective performance management to achieve their business objectives. By leveraging customer initiatives across departments, mapping touchpoints to ensure consistency, and communicating a clear customer service roadmap to employees, banks can better meet customer expectations and achieve their business objectives.

                  Author

Vinay Patel

                  Senior Director, Contact Center Transformation Leader
Vinay leads contact center transformation for the Banking and Capital Markets sector, focused on delivering customer-centric contact centers that leverage a customer experience hub to optimally engage customers across interactions.

                    6G for the hyperconnected future

Capgemini
                    17 Apr 2023

                    A point of view on the technology advancements in 6G platforms and ecosystems.

                    Life in 2030

In 2030, the world will look dramatically different due to advancements in connectivity and associated technologies. The metaverse is likely to become fundamental to everyday life. 8K virtual reality (VR) headsets and brain-interface devices will probably become mainstream. There could be widespread proliferation of level-5 autonomous vehicles, and hyperloop tunnels could enable faster international travel. Hypersonic airliners could enter service. “Smart Grid” technology will become widespread in the developed world. 3D-printed organs, blood vessels, and nanorobotics may improve our quality of life. Artificial brain implants could restore lost memories. Quantum computing may become cheap enough to be mainstream. The first version of the quantum internet is likely to emerge, with terabyte internet speeds becoming commonplace. The entire ocean floor will probably be mapped, making deep-ocean mining operations feasible. Hypersonic missiles will be a plausible addition to most major militaries, as will AI-enabled warfare. The High-Definition Space Telescope (HDST) could be operational. The first permanent lunar base could be established.

Making this new hyper-connected world a reality will require a massive leap forward: connectivity 1,000 times faster than what is possible today, with data transfer speeds measured in terabytes per second and latency so low that response times shrink to a few microseconds.

                    Although 5G networks are slowly maturing, and their full potential is still to be unleashed, the limits of 5G do not allow infrastructures and networks to simultaneously guarantee a speed of terabytes/second with extremely low latency. This calls for thinking beyond 5G.

                    Mobile networks: past, present, and future

Wireless cellular communication networks have seen the rise of a new-generation technology approximately every ten years, and each successive generation has resulted from disruptive technological advancement and societal change (Figure 1). If this trend continues, 6G may be introduced in the early 2030s; at the least, that is when most smartphone manufacturers are expected to release 6G-capable handsets and 6G trials will be in full swing.

                    Figure 1: Mobile networks: past, present, and future

It is too early to provide a detailed list of the features 6G will bring, but emerging research themes are shaping new technologies such as new spectrum, visible light communication, AI-native radio, cell-free networks, intelligent surfaces, holographic communication, and non-terrestrial networks (satellites, High-Altitude Platforms (HAPs), drones, etc.). In addition, the lessons learned from 5G network deployments and user ecosystems will play a big part in defining 6G.

What really is 6G, and how is it shaping up?

                    6G is expected to provide hyper-connectivity that will lessen the divide between humanity and the inanimate world of machines and computers.

                    Considering the general trend of successive generations of communication systems introducing new services with more stringent requirements, it is reasonable to expect that 6G will build on the strengths of 5G and introduce new technologies with requirements that far exceed the capabilities of 5G.

Regulatory bodies are considering allowing 6G networks to use higher frequencies than 5G networks. Since spectral efficiency, bandwidth, and network densification are the three main ingredients needed to achieve higher data rates, this is likely to provide substantially higher capacity and much lower latency. Terahertz (THz) bands from 100 GHz to 10 THz are currently being considered. This would allow the delivery of a peak data rate of 1,000 gigabits/second with over-the-air latency lower than 100 microseconds. The current intent is to make 6G 50 times faster than 5G, 10,000 times more reliable, and able to support ten times more devices per square kilometer while offering wider coverage.
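A back-of-the-envelope Shannon-capacity calculation shows why the wider bandwidth is the decisive ingredient here. The sketch below compares a 1 GHz channel with a 100 GHz sub-THz channel at an assumed 30 dB signal-to-noise ratio (the SNR value is our illustrative assumption, not a 6G specification):

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon channel capacity C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# 5G-like channel: 1 GHz of bandwidth at an assumed 30 dB SNR.
print(f"1 GHz   -> {shannon_capacity_bps(1e9, 30) / 1e9:6.1f} Gb/s")
# 6G-like channel: 100 GHz of sub-THz bandwidth at the same SNR.
print(f"100 GHz -> {shannon_capacity_bps(100e9, 30) / 1e12:6.2f} Tb/s")
```

Holding spectral efficiency constant, widening the channel from 1 GHz to 100 GHz moves capacity from roughly 10 Gb/s to roughly 1 Tb/s, which is why the sub-THz bands matter so much.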

Though these are early days for 6G, initial studies already sketch how its performance targets compare with 5G:

Metric                 | 5G             | 6G (estimated)
Peak data rate         | 20 Gb/s        | 1 Tb/s
Maximum bandwidth      | 1 GHz          | 100 GHz
Over-the-air latency   | 1 millisecond  | 100 microseconds
Reliability            | 1 − 10⁻⁵       | 1 − 10⁻⁹
Peak mobility          | 500 km/h       | 1,000 km/h
Energy efficiency      | not specified  | 1 Tb/J

A detailed comparison is available in [1].

                    This research into 6G may seem premature, but the geopolitical race for leadership on this next big thing in telecommunications technology is already gearing up. Countries across the globe are spending huge sums on 6G research. Various consortia are forming, and research projects are starting to address the new standards and vertical use cases, such as vehicle connectivity and private industrial networks. The key 6G initiatives across the globe are shown below (Figure 2).

                    Figure 2: Global 6G initiatives

                    6G use cases and the technologies driving them

                    Expanding upon the foundation of 5G, 6G will enable a much wider set of futuristic use cases that, when deployed on a massive scale, will transform the way we live and work in remarkable ways.

Telecom operators, technology providers, and academia are joining forces under various alliances and consortia to deliberate which use cases will emerge in the next decade and be adopted by 6G. NGMN [2], the Next G Alliance [3], and one6G [4] are just some of the leading alliances that have recently published 6G use cases.

                    Figure 3 shows the categorization of various 6G use cases that enhance human-to-human, human-to-machine, machine-to-machine, and machine-to-human communication.

                    Figure 3: Emerging 6G use cases

                    Key technical areas

                    These use cases are driving the technology trends and steering the requirements for future generational change. The key technical areas that will accelerate 6G introduction include technological enhancements, architectural improvements, and accelerating adoption (Figure 4).

                    Figure 4: 6G technical areas
                    Top technology areas include:
1. New Spectrum: The 6G era will necessitate a 20X increase in network capacity. 6G will meet this challenge through new spectrum in the 7 to 24 GHz range, the sub-THz range (above 100 GHz), and ultra-massive MIMO.
                    2. AI Native Networks: AI will become a native ingredient in 6G networks so that the network can become fully autonomous and hide the increased network complexity from users. A dynamic AI/ML-defined native air interface will be key for future networks. These interfaces could give radios the ability to learn from one another and from their environments.
3. Sensing and positioning: With near-THz frequencies comes the potential for very accurate, radar-like sensing. 6G networks will be able to sense their surroundings, allowing us to generate highly realistic digital versions of the physical world. This digital awareness would turn the network into our sixth sense. It will particularly improve performance in indoor communication scenarios by acquiring and sending better information about the indoor space, range, barriers, and positioning to the network. Please refer to [5] for more information.
4. Security, trust, and privacy: 6G will provide advanced network security, trustworthiness, and privacy protections to unlock its full value potential, using quantum-safe cryptography and distributed ledger technologies such as blockchain.
5. Ubiquitous connectivity: 6G will provide reliable network connectivity, focusing on extreme performance and coverage when and where needed through seamless integration of non-terrestrial networks, such as satellites, drones, and HAPs, with the terrestrial network. Please refer to [6] and [7] for more information.
6. Intelligent Reflecting Surface (IRS): An IRS is a thin panel integrated with many independently controllable passive reflection elements. By adjusting the amplitude and phase shifts of these elements to achieve fine-grained reflect beamforming, an IRS can improve the security, spectrum and energy efficiency, and coverage of 6G networks [8] (see the sketch after this list).
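The phase-alignment idea behind reflect beamforming can be illustrated in a few lines. In the hypothetical sketch below, each IRS element applies a phase shift that cancels the phase of its cascaded base-station-to-IRS-to-user channel, so all reflections add coherently at the receiver; the random complex channel gains are an illustrative assumption, not a physical propagation model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64  # number of passive reflecting elements

# Assumed random complex channel gains: base station -> IRS (h)
# and IRS -> user (g). A real model would include path loss and geometry.
h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
g = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# Each element applies a phase shift that cancels the phase of its
# cascaded channel h_i * g_i, so every reflection arrives in phase.
phase_shifts = -np.angle(h * g)
aligned = np.abs(np.sum(h * g * np.exp(1j * phase_shifts)))
unoptimized = np.abs(np.sum(h * g))  # baseline: no phase optimization

print(f"coherent gain: {aligned:.1f}, unoptimized: {unoptimized:.1f}")
```

With the phases aligned, the received amplitudes add up rather than partially canceling, and that coherent gain is exactly what fine-grained reflect beamforming exploits.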

                    Capgemini’s 6G initiative

Capgemini has a rich history with leading mobile technologies and possesses end-to-end capabilities across RAN, edge, and core. In addition, Capgemini is a leading player in Open RAN, which is likely to become more widespread as technology deployment continues.

Capgemini has a head start in 6G research, particularly in the areas of mesh networks, AI for network automation, sustainability, and quantum cryptography. Our first 6G research paper, “xURLLC in 6G with meshed RAN,” was published in the ITU Journal on Future and Evolving Technologies (ITU J-FET), Volume 3, Issue 3, December 2022 [9]. The objective of this research is to define a new network architecture that will make 6G networks simpler, more flexible, and able to support extremely low-latency communication.

Academic collaborations are also underway: with IISc Bangalore, to identify rogue base stations by their abnormally high power, and with Princeton University, to enable federated learning towards the goal of a user-centric, cell-free 6G.

Capgemini is also an active member of the O-RAN Alliance and participates in its next Generation Research Group (nGRG) task force to determine how O-RAN will evolve to support 6G and beyond.

                    Conclusion

Today, we are in the early stages of the 5G rollout, and this technology still has a long way to go to mature. However, this is the ideal time to plan for the future and ask what’s next. Emerging use cases for beyond-5G and 6G are gaining a firm footing and suggest that 5G may only open the door to them. New and more stringent requirements will continue to push the evolution of wireless well beyond 5G and 6G. Capgemini is at the forefront of 6G research, with strong partnerships across academia and industry.

                    TelcoInsights is a series of posts about the latest trends and opportunities in the telecommunications industry – powered by a community of global industry experts and thought leaders.

                    References
                    1. White Paper on Broadband Connectivity in 6G
                    2. NGMN Identifies 6G Use Cases
                    3. Next G Alliance – 6G Applications and Use Cases
                    4. One6G – 6G Vertical Use Cases
                    5. What’s Inside Counts: How 6G Can Enable Ubiquitous, Reliable Indoor Location Services
                    6. The Future of 6G is Up in the Air — Literally
                    7. Non-Terrestrial Networks in 5G & Beyond: A Survey
                    8. Faster, Smarter, Greener: Intelligent Reflecting Surface for 6G Communications
9. xURLLC in 6G with meshed RAN

                    Meet the authors

Subhankar Pal

                    Senior Director and Global Innovation leader for the Intelligent Networks program, Capgemini Engineering 
                    Subhankar has over 24 years of experience in telecommunications, specializing in advanced network automation, optimization, and sustainability using cloud-native principles and machine learning for 5G and beyond. At Capgemini, he leads technology product incubation, product strategy, roadmap development, and consulting for the telecommunications sector and related markets.
Sandip Sarkar

                    5G and 6G strategy lead for Capgemini Engineering
Dr. Sandip Sarkar holds a B.Tech from IIT Kanpur and a PhD from Princeton University. With over 30 years of experience, Dr. Sarkar holds over 100 patents and has published over 30 papers in the field of telecommunications. His research interests include wireless communications, error-control coding, information theory, and associated signal processing systems. Dr. Sarkar has authored multiple wireless standards and is a senior member of the IEEE.