The augmented architect – unleashing the power of ChatGPT

Pascal Espinouse
9 Mar 2023

I am co-authoring this article with ChatGPT to showcase how ChatGPT can be an accelerator for architects. Not only does this co-authorship demonstrate the capabilities of ChatGPT, but the suggestions and examples we provide serve as a blueprint for how ChatGPT can help architects work more efficiently and effectively.

Consider ChatGPT as a virtual assistant, dedicated to streamlining your workflow and taking your architecture to the next level.

An Augmented Architect?

Gone are the days when IT architects had to rely solely on their own expertise and intuition to make critical design decisions. Today, we live in an era where technology is constantly evolving, and it is crucial for IT professionals to stay ahead of the curve through technology and sector monitoring, networking, and engagement in Communities of Practice. ChatGPT represents another valuable opportunity for IT architects to become “Augmented Architects.”

But what exactly is ChatGPT, and how can it help IT architects do their jobs better? Put simply, ChatGPT is a state-of-the-art AI language model that can generate human-like text based on a given prompt. For IT architects, this means that they can leverage ChatGPT’s advanced capabilities to streamline their work, improve their accuracy, and ultimately enhance the outcomes of their projects.

How does ChatGPT enhance the work of IT architects? Let’s examine three examples to understand its impact.

1. Architecture design assistance

IT architects constantly face complex design problems that require swift and efficient solutions.

ChatGPT can assist in the design process by providing suggestions for architecture patterns, best practices, and trade-off analysis:

  • Pattern suggestions: ChatGPT can provide suggestions for commonly used architecture patterns that are relevant to the specific problem the architect is trying to solve. This can save architects time and effort in researching and identifying suitable patterns.
  • Best practice recommendations: ChatGPT can recommend best practices based on previous experience and industry standards. This can help ensure that the design is scalable, secure, and efficient.
  • Trade-off analysis: ChatGPT can provide a trade-off analysis to help architects weigh the pros and cons of different design options. This can help architects make informed decisions and choose the best course of action.

Here is a concrete example of a trade-off analysis, conducted with ChatGPT:

“Imagine an IT architect is tasked with designing a new e-commerce platform for a company. What are the possible architecture scenarios, considering factors such as cost, complexity, scalability, and performance?”

After several deep dives and interactions with ChatGPT, here was its answer:

“When it comes to designing a new e-commerce platform, IT architects have to weigh several factors such as cost, complexity, scalability, and performance. Two common architecture scenarios to consider are composable commerce architecture based on microservices and monolithic architecture. Let’s explore the trade-off analysis between these two options:

Option 1: Composable Commerce Architecture based on Microservices

In this approach, the e-commerce website is built using microservices, each responsible for a specific function such as product catalog, shopping cart, payment processing, etc. This approach offers greater flexibility and scalability, as new microservices can be added or removed as needed. However, it also involves a higher level of complexity, as each microservice must be managed and maintained separately.

Option 2: Monolithic Architecture

In this approach, the e-commerce website is built as a single, integrated application. This approach is simpler to manage and maintain but may be less flexible and scalable than the composable commerce architecture.”
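To make the scalability trade-off concrete, here is a minimal sketch of our own (not part of ChatGPT’s quoted answer). Assuming the microservices run as Kubernetes deployments named after the functions above – the names are purely hypothetical – each function can be scaled independently, whereas a monolith can only be replicated as a whole:

kubectl scale deployment shopping-cart --replicas=10     # scale only the cart service for a sales peak
kubectl scale deployment payment-processing --replicas=3 # other services keep their existing capacity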

IT architects can benefit from ChatGPT’s fast and effective support in the design process, freeing up time and resources to delve into scenarios and concentrate on strategic matters.

2. Component selection

The selection of the right components for an IT architecture can often be a time-consuming and challenging task. With ChatGPT’s ability to analyze large quantities of data and provide informed recommendations, IT architects can more easily select the most suitable components for their projects, saving time and effort in the process.

I asked ChatGPT:

“Let’s consider a scenario where an IT architect is tasked with searching for the right front-end framework for a company website. Can you recommend a set of solutions to consider, with their related advantages and disadvantages?”

ChatGPT responded with:

  1. “React has a relatively low learning curve, but it may require more resources compared to the other two options. Additionally, it benefits from a thriving community that offers extensive support and resources.
  2. Vue.js is known for its simplicity and flexibility, making it a popular choice for building user interfaces. However, it may come with a slightly steeper learning curve compared to React.
  3. Angular offers a comprehensive solution for building web applications and has many features and tools out of the box. It also has a steeper learning curve compared to React and Vue.js, but once you understand the framework, it can be a powerful tool for building complex web applications.”
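To take the comparison further in practice, an architect could scaffold a quick prototype with each framework. As a hedged illustration (these commands are ours, not ChatGPT’s, and the project name my-app is a placeholder), the standard getting-started commands are:

npx create-react-app my-app                    # React: scaffold a new project
npm create vue@latest                          # Vue.js: interactive project scaffolding
npm install -g @angular/cli && ng new my-app   # Angular: install the CLI, then generate a project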

Such answers from ChatGPT can serve as a starting point for architects who are unfamiliar with the topic, or as a way to accelerate the work of those who are already knowledgeable in the field.

3. Technical documentation

Technical documentation is a critical but often time-consuming aspect of an IT architect’s work.

ChatGPT can offer valuable support by streamlining the documentation process through its ability to process large amounts of information and understand technical language.

With ChatGPT’s help, architects can more easily summarize and structure technical data, allowing them to allocate more time to higher-value activities.

As an illustration, you will find a summary, generated by ChatGPT, at the very end of this article.

Here’s another concrete example of how ChatGPT can assist in technical documentation:

Let’s say we need to document the architecture of a new cloud-based application. The architect can provide ChatGPT with the necessary technical specifications, design documents, and any other relevant information. ChatGPT can then assist in generating a high-level summary of the architecture, outlining the main components, relationships, and dependencies.

This summary can serve as a useful starting point for the architect, who can then use it to create diagrams (something ChatGPT itself cannot generate) or to develop a more detailed technical document. In this way, ChatGPT can help IT architects save time and effort in writing technical documentation while ensuring that the information is accurate and well-structured.

This allows architects to focus on more value-added tasks, such as identifying and resolving technical issues or exploring new technologies to drive innovation.

Limitations

While the benefits of ChatGPT for IT architects are clear, it is important to recognize that the technology is not flawless. ChatGPT’s performance is limited by the data it has been trained on. In my testing, I also encountered errors in the conversation context that required restarting the interaction with ChatGPT.

ChatGPT now officially warns that your messages may be read by others (on top of the context issues mentioned above), so we need to be careful not to disclose any confidential information.

Additionally, as with any AI technology, it is crucial to critically assess the validity and relevance of the information produced by ChatGPT. There may be instances where its suggestions are partially, or even entirely, inaccurate. The way you formulate your questions can mitigate this risk. In a nutshell, ChatGPT is not a substitute for experts, but it can support them if used effectively!

What about sustainability?

The use of large language models such as ChatGPT poses a challenge in terms of sustainability, because training and deploying these models requires a significant amount of computational resources and energy, resulting in a large carbon footprint.

Not only does the training process consume vast amounts of energy, but the deployment and ongoing use of these models in production also consume significant resources. Given the environmental impact of using large language models, it is imperative to explore and adopt more sustainable approaches for AI development and deployment.

Practices such as using renewable energy sources and optimizing resource use during training and deployment are necessary, and there is still much to be done in this area. It is crucial for the AI industry to focus on sustainability and reducing the carbon footprint of AI systems.

My conclusions

ChatGPT can become a valuable asset for IT architects in their daily work, offering numerous benefits.

Whether it be assisting with architecture design, component selection, or technical documentation generation, ChatGPT has the potential to transform the way IT architects work.

It can ultimately increase their productivity, efficiency, and effectiveness.

I’m convinced that technologies such as ChatGPT have the potential to serve as virtual assistants for not only architects but also many other professionals in various industries.

So why not try ChatGPT, and experience the benefits for yourself?

Summary (generated by ChatGPT)

This article, co-authored by Pascal Espinouse and ChatGPT, explores the potential of using advanced AI language technology, ChatGPT, to boost the productivity and success of IT architects. ChatGPT can act as a virtual assistant and provide support in areas such as architecture design, component selection, and technical documentation.

Throughout the article, real-life examples are presented to illustrate the impact ChatGPT can have in these areas. While the benefits are clear, it is important to acknowledge the limitations of using ChatGPT and to emphasize the need for critical evaluation and decision-making by IT architects.

Ultimately, the article concludes by stating that ChatGPT is a valuable tool for IT architects to become “Augmented Architects”, but also for many other professionals looking to work more efficiently and effectively.

About Author

Pascal Espinouse

Architecture Director at Capgemini
Pascal is a Chief Architect at Capgemini specializing in digital transformation and innovation. He is a passionate trainer and mentor within Capgemini’s global architects community, and he plays a key role in the core team responsible for leading the 1,500+ architects of Capgemini France. Beyond architecture, Pascal’s deepest involvement is in sustainability.

    Why we should turn the tide on open data skills in Europe’s public sector

    Giulia Carsaniga
    9 Mar 2023

    Literacy used to refer simply to reading, writing and, occasionally, numeracy. Nowadays, however, digital literacy is viewed by many to be equally – if not more – important for economic growth and social inclusivity.

    With digital literacy comes a need for the data skills that will ensure the EU’s Digital Decade goal of ‘a digitally skilled population and highly skilled digital professionals’ comes to fruition. Among the EU’s objectives through to 2030 are giving small businesses and industry access to data, as well as the ability for all parties to compete in the digital world on fair terms. This suggests open data – but what is open data and why are skills in it so important?

    The open data skills gap

    Open data is data that anyone can access, use and share, free of charge – subject, at most, to the requirement to attribute. Governments, businesses and individuals can use open data to bring about social, governmental, economic and environmental benefits. For example, open data can help to track the waiting time in the ER of an Italian hospital, monitor the air quality in a French city, support the construction of a public procurement dashboard for Greek small- and medium-sized enterprises (SMEs), and build an application that provides migrants in Germany with key information about the country. These and other cases can be found in data.europa.eu’s high-level Open Data Maturity (ODM) Report 2022, which gathers insights into the state of open data in 35 European countries.

    Given the great potential of open data – reflected yearly in the ODM publication – it is no surprise that more and more national governments and municipalities are publishing parts of their datasets on their national and local (open data) portals and encouraging their (re)use by citizens and businesses. Yet the open data value chain – from the identification of datasets to their publication and reuse for creating impact – is long, and it requires both skilled data providers and skilled (re)users.

    2023: The European Year of (open data) Skills?

    Will 2023 be a turning point for open data skills in Europe? After all, it is the European Year of Skills. Yet, according to the ODM Report 2022, all EU27 Member States could do more when it comes to open data skills and training across the public sector.

    So, what are EU countries doing to equip public sector bodies and citizens with the skills they need to collect, share, process, publish, and reuse open data? The state of play is described in the ODM Report 2022, which was commissioned by the Publications Office of the European Union and the European Commission (EC) within the framework of data.europa.eu, the official portal for European open data. Coordinated by Capgemini Invent, the report discusses the findings of the annual open data benchmarking exercise conducted across European countries (EU27+). In it, we discover that open data is a cornerstone of the EC’s policy for shaping Europe’s digital future.

    However, despite data proliferating, governments and public agencies still lack the full complement of skills needed to translate open data into government policy and better citizen outcomes. In fact, the ODM Report reveals an absence of adequate data skills and literacy among civil servants. In particular, we observe a need for upskilling in aspects such as data literacy in general, understanding different licenses, knowledge of the different metadata standards (e.g., DCAT-AP), and quantitative research on the economic impact. data.europa.eu also points to a need for greater skills in the technical, quality, legal (e.g., CC licenses) and governance aspects of making data openly available.

    This is not the first time that limited skillsets have been cited as a barrier to digital and data-enabled advancement. For example, in a recent Capgemini Research Institute report, 54% of public sector organizations cite cultural challenges that include a lack of talent and skillsets as one of the barriers holding back the deployment of public sector data ecosystems.

    How public sector data providers are addressing open data skills

    Positively, moves are already underway to address this situation across the public sector. Almost all (26 of the EU27) Member States report that they have set up training activities to refine the data skills of civil servants working with open data.

    The ODM Report identifies several best practices being deployed:

    • Establish formal qualifications: 19 of the EU27 indicate that their training activities offer a formally recognized certification. For example, as part of a national digital competencies program, the Portuguese government has launched an initiative for the digital qualification of public servants. This aims to promote the coherent execution of digital and open data policies with short-term courses on several ICT domains, including open data, provided by the national institute for competencies in public administration.
    • Collaborate with academic institutions: A program made available only for Public Sector Liaison Officers in Cyprus is delivered by the country’s Open Data Team in collaboration with the Open University of Cyprus. Participants receive a formal certification, which can then be submitted to the Public Service Commission. The program includes modules on data audits and selection of data for publication, preparing data for publication (formats), licensing, data and metadata quality, publication processes, and releasing data APIs.
    • Invest in training: In Luxembourg, regular training sessions designed to inform and support data providers cover topics such as transparency, interoperability, open data, and data management. Elsewhere, the Polish National Open Data Portal (dane.gov.pl) includes multimedia training on open data and preparation of data in open formats. This features tutorials on how to make data available on the portal, and how to add data(sets) and resources.
    • Set national policy: Some countries have incorporated skills into their national data policies. In France, for example, every ministry is expected to draw up a roadmap for its data, algorithms, and source code strategy, including a list of the skills needed.

    Increasing open data literacy among data (re)users

    Beyond public sector data providers, there is, of course, a need to ensure that the citizens, businesses and institutions who use and reuse the data benefit from it. Indeed, the future envisioned by the EC for Europe will have data as a foundational building block. The EC’s policies (and the rationale of the annual Open Data Maturity assessment) aim to prepare EU Member States for this future, with a focus on generating value for the economy and society through the reuse of public sector information – in other words: open data.

    Some countries are already advancing their open data maturity by stimulating data literacy among (potential) users. The Open Data Maturity (ODM) Report 2022 points to events, such as hackathons, conferences or open data days, designed to raise awareness about open data and increase data literacy beyond public sector bodies. Focused on open data, these events are held annually at national, regional, and local level across the EU. According to the ODM 2022 assessment, 52% of EU Member States organize more than nine such events every year.

    As well as upskilling today’s workforce, the employees of tomorrow will also need open data skills. The Czechitas organization in the Czech Republic, for example, runs the Digital Academy project, which, among other things, focuses on data analysis using open data. Czechitas is a non-profit organization that fosters the IT skilling of women and their subsequent integration into the IT labor market. This tackles two barriers to open data maturity, that of a lack of STEM education and the limited opportunities for women in the world of IT.

    Accelerating open data learning – why now?

    It is hard to envisage a world in which books, articles, and magazines aren’t accessible to everyone. And in today’s digital economy, the EC is determined that a lack of data literacy should be equally hard to envisage. Why? Because it wants the EU to be a leader in a data-driven society.

    This ambition demands open data maturity. Without it, the EU risks trailing behind in the wake of those countries and regions already advancing their open data maturity, with the potential to impact economic prosperity, competitiveness, innovation, digital inclusion, healthcare outcomes, and evidence-based policy making.

    Currently, however, it is clear from the ODM 2022 Report that more needs to be done to strengthen the skills of participants in the open data value chain. Even the most mature EU countries in open data (categorized as trend-setters) need to up their game when it comes to accelerating skills and training, with a recommendation for them to: ‘Work with training institutions on providing advanced open data courses and training’.

    Rethinking how we build skills

    Giulia Carsaniga, lead author for the ODM Report 2022, comments: “A Europe fit for the digital age is one of the European Commission’s overarching priorities. Skills will form the bedrock of this. The European Year of Skills is surely the time to rethink our approach to open data maturity and how we build the skills to accelerate it.”

    The ODM Report encourages countries and data providers to do more for their national teams and for individuals, with a set of extremely useful recommendations for countries at varying levels of maturity. The data.europa.eu academy is also a good starting point for public sector open data users as they accelerate their open data learning.

    Author

    Giulia Carsaniga

    Senior Consultant and Lead Author Open Data Maturity 2022
    “Digital technologies can help us address today’s greatest challenges, from climate change to global pandemics. At Capgemini, we support public sector clients to make the best out of data-driven tools, promoting a citizen-centred, sustainable, and inclusive digital transformation, in line with the EU’s priorities.”

    Eline Lincklaen Arriëns

    Senior Consultant and Expert on European data ecosystems Capgemini Invent NL
    “Digital technologies are crucial in addressing global challenges, including climate change and environmental degradation. Capgemini aims to support clients accelerate their digital transition in a manner that is sustainable to their organization, society, and the environment, and in line with EU priorities such as the EU Green Deal.”

      Innovation games – change doesn’t always come with a bang

      Manuel Sevilla
      1 Mar 2023

      Innovation isn’t something that can always be precisely defined. More often than not, it’s a matter of small but game-changing steps, and of having the courage to take them.

      One of my roles at Capgemini has been the development of our Innovation Games over the last four years.

      Innovation Games is an internal Capgemini competition in which our people nominate a project that has really made a difference through leveraging innovation. Its main purpose is to provide a showcase for the great work we do in devising creative solutions for our clients across the globe, so that we as an organization can learn from one another’s successes.

      They also enable us to demonstrate that the Frictionless Enterprise is not just a digital principle, but something that delivers tangible financial and efficiency benefits. There are other benefits too, as you’ll see.

      First, though, let’s look at the three awards categories. In the 2022 edition of the Games, we asked for the best projects having sales/revenue impact for our clients; for the best intelligent process automation (IPA) projects using data, AI, and analytics or robotics; and for the best projects delivering client outcomes.

      Of course, the metrics for success were specific to each of these categories. For sales/revenue impact, success is quantifiable in numbers. For intelligent automation, the criteria were more qualitative – for instance, judging the extent to which productivity was boosted, and to which the customer or employee experience was improved. For client outcomes, key factors could be either quantitative (e.g., time-to-market, cashflow, or client retention improvements), or qualitative (e.g., service improvements).

      Teamwork, communication, and networking

      What are those other benefits I mentioned? I’ll highlight just three of them here.

      The first is teamwork. Whereas for client projects, our people work together to achieve desired outcomes, for the Innovation Games, they need to work together in a different way. They aim to achieve a different purpose, which is principally one of communication. Our contenders find that the Games bring a new dimension to their notions of teamwork.

      Which brings me to the second benefit – and that’s presentation skills. Delivering great results is one thing, but getting the message across about how it was done is a different proposition. That’s why we gave our contending teams coaching sessions in storytelling. The result has been that people emerge from the Games not just with a reward, but with a new skill, and I hope greater general confidence.

      The third benefit I want to mention is networking. At the Innovation Games, it’s not just that we learn from one another’s successes. It’s that our contenders get to meet one another and make new connections, which help them either personally or professionally – or both. They also have the opportunity to meet senior people from Capgemini’s Business Services global business line, and in our experience, these encounters are rewarding for everyone involved.

      Live! In person!

      What made these three benefits so much more tangible in 2022 was that, with lockdown now behind us, we were able to host the Games as a live, in-person event at Les Fontaines, France, with only four colleagues obliged to join us remotely due to local restrictions or visa challenges. The production quality was of the highest standard, and our contenders – three finalist teams per category, and two people per team – were given a training day in advance of the full-day event, and were our guests at a gala dinner afterwards. Everyone who joined us was keen to participate again next year.

      Winning entries

      Among the 2022 submissions, we saw digital humans – an incremental but significant step beyond chatbots. We also saw intelligent fraud detection – the ability for systems not just to spot anomalies in invoice submissions, but to distinguish between honest mistakes and genuine attempts to defraud.

      In both cases, the difference may seem small; but in the first, the improvement in personal connection that can be made is significant; and in the second, the tone is completely altered. What may have been accusatory and guilt-inducing by default – “did you try to cheat us?” – becomes collegiate and conciliatory – “it looks like you slipped up here. Let’s fix this.”

      Case study #1: agri-tech
      Company: specialist in agricultural science and technology
      Challenge: this organization had diverse processes, with multiple business units operating in siloes. HR platforms, in particular, were high-maintenance, and gave low levels of self-service and workforce analytics
      Solution: a new, unified HR system covering the full employee lifecycle and payroll processes for all business units
      Outcome: digital, agile, and unified HR ecosystem, harmonized and efficient HR processes, enhanced employee experience, reduced HR operations costs, and workforce analytics for planning purposes.

      Case study #2: logistics
      Company: multinational provider of delivery and other business services
      Challenge: this business had a siloed approach with many manual transactions
      Solution: a digital transformation and automation initiative on a global scale: one team and a single way of working across the organization
      Outcome: total savings of $40 million, across cash apps, disputes, billing, collections, and master data.

      Case study #3: healthcare
      Company: multinational healthcare company
      Challenge: this organization needed to provide fulfilling customer experiences on a scale beyond the scope of conventional human help
      Solution: the digital nurse, Elina, performs the check-in of patients. She gives them the information and support they need, with lifelike conversations, facial expressions, body language, empathy, and tone of voice
      Outcome: patients benefit from sympathetic and responsive access, anytime, anywhere, with immediate access to crucial information regarding their therapy. Healthcare professionals benefit from better understanding of patient conditions, physical and mental requirements, and have more time for interactions with patients in need.

      Case study #4: FMCG
      Company: multinational consumer goods company
      Challenge: the business had complicated legacy processes. Silos between sales, finance, and supply chain organizations were leading to longer order processing times and slower time-to-market deliveries. Too much time was being spent on operations, and too little focusing on growth or on customers.
      Solution: a new integrated approach to operations enables information to flow seamlessly between people and processes, intelligently, and as and when it is needed.
      Outcome: total savings of over €20 million in overdue claims, accounts receivable, and operational costs, as well as better experiences for customers and employees.

      Key takeaway?

      What did we learn about the nature of innovation from the 2022 contest? To my mind, the main lesson has been that it’s not a word that can be precisely defined.

      Too often, I suppose, we think of innovation as a major leap forward – from handwritten transcriptions to the printing press, from horse-drawn vehicles to the combustion engine. But sometimes, that’s not the case. Sometimes, it’s a small but game-changing step.

      As I say, these are small but significant steps – and what distinguishes the cases I’ve summarized above is that we and our clients had the courage to take them.

      I’m really excited about the 2023 edition of Innovation Games we’re currently planning, and even more excited to learn how our teams have been driving innovation for our clients.

      To learn how Capgemini delivers innovation for its clients, across finance and accounting, supply chain management, HR, customer operations, and financial services, contact: manuel.sevilla@capgemini.com

      Author


      Manuel Sevilla

      Vice President, Chief Technology and Innovation Officer and Head of Alliances at Capgemini's Business Services
      Manuel advises customers to move to a new world with radically faster time-to-market, new business models, new ecosystems and new customer expectations. He assists clients in embracing fields such as cloud computing, cloud-native technologies, artificial intelligence, blockchain, and DevOps to facilitate this transformation.

        The need for wealth-as-a-service

        Shreya Jain
        06 March 2023

        Recent times have witnessed the rise of white-label banking, more widely known as Banking-as-a-Service (BaaS).

        BaaS allows banks to expand their reach by catering to wider and newer segments of customers – made possible by the integration of their APIs with non-bank services. Both incumbents and new-age banks are leveraging the BaaS model extensively, as is made apparent by the projected global market size of BaaS – on target to reach USD 74.55 billion by 2030.

        Exploring the Potential of Wealth-as-a-Service (WaaS)

        The banking business today expects agility with quick, tangible results. Banks have thus been increasingly reluctant to commit heavy IT spending to programs that are costly, complex, and come with a high risk of failure. In this climate, the BaaS model has proved to be an asset to the Financial Services (FS) industry by providing customers with the financial services they require, delivered at the time of need and through the appropriate means. Some in the FS industry are now wondering: could the same as-a-service model be applied to wealth management? A Wealth-as-a-Service (WaaS) model could allow wealth managers to expand their reach to hitherto-inaccessible markets, for instance by offering services modularly to clients without spending a fortune or sacrificing valuable time building the capabilities in-house.

        Traditional Challenges in Wealth Management

        Apart from this, there are many traditional challenges that a WaaS offering could help tackle in the WM industry:

        Costly and inflexible servicing

        Every evolution of a service or product requires a vast amount of effort across siloed applications. Moreover, IT relies on legacy platforms with barriers between the front, middle, and back office, and between data and production. This status quo leads to ballooning back-office compliance and risk costs.

        Limited digital maturity

        Traditional wealth solutions rely on ageing platforms and are complex to maintain and upgrade. Even as banks embrace digitalization, their efforts are often either customer-centric or bank-centric, but rarely both. Also, information that could drive personalization in wealth offerings by building on commonalities in advisory and investment is rarely used to its utmost benefit.

        One product, one price

        Pricing of WM products has historically been complex and tightly coupled to the product. Since the client base for wealth managers varies from Mass Affluent to Ultra High Net Worth Individuals (UHNWIs), the pricing of products should ideally be customized and varied across customer segments and profiles.

        Scattered wealth players

        The complexity of wealth investment requires expertise and technology from very different areas to be pooled together, with no common ground to build on. This results in each player having its own tools, limiting their ability to interact without extensive (and costly) customization.

        Technological Enablers of Wealth Management as a Service

        BaaS was made possible by technology. Today, WealthTechs perform that role, providing services across the Wealth Management value chain and serving as a conducive ecosystem for implementing the WaaS model. Firms such as Temenos and InvestCloud already offer platforms that can be modularly deployed across the entire WM value chain. These extended bank services channel the energies of institutions and third parties to collaborate better through a “Wealth Marketplace” that adds value for end clients.

        Key Features of an Ideal WaaS Offering

        As with any other SaaS model, an ideal WaaS offering should leverage new technological paradigms to enable modularity and be adaptable to customer needs and ambitions. It should thus have the option to be offered either as a turnkey solution on a shared platform with low customization, or as a personalized platform that is custom-made for advanced client needs with extended bank capabilities. Offers could range from a full WaaS on a shared platform, through a hybrid WaaS that can be deployed modularly, to a private WaaS on a personalized platform. In any form, WaaS must have the necessary features to extend the capabilities of an FS provider:

        • Open: It should allow omni-channel front ends to be easily plugged into third-party APIs (a hypothetical API sketch follows this list)
        • Modular: It should be built on new architectures that offer a modular approach – to deploy progressively, only the required components
        • Multi-tenant: It should be able to serve multiple entities in a vast array of geographical, legal, and financial combinations
        • Cloud-native: It should be designed to reside in the cloud, across any cloud service provider, offering the benefits of microservices and auto-scaling
        • Pay-as-you-use model: Pricing models should be evolutive – usage-based, packages, subscriptions – to serve the needs of every client firm
        • Continuously enriching: To remain competitive, the WaaS ecosystem should continuously enrich its service catalogue with best-of-breed solutions
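        To make the “open” and “modular” features concrete, here is a purely illustrative sketch in which the provider URL, endpoint, and API key are entirely hypothetical: a partner front end consuming a single module (portfolio data) from a WaaS platform over a REST API:

        curl -H "Authorization: Bearer $WAAS_API_KEY" \
             "https://api.example-waas.com/v1/clients/42/portfolio"   # hypothetical endpoint: fetch one client's portfolio module

        Because each capability sits behind its own API, a client firm could start with this one module and plug in others (advisory, order management, reporting) as its needs grow.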

        As many providers of WaaS models emerge in the market, banks have already started to join the trend and grab the early mover’s advantage. With the Wealth Management industry ripe for harnessing the benefits of a WaaS model, the success of BaaS models has already paved the way for its adoption. With the technology and a marketplace already established, it will be interesting to witness the trend unfold.

        Author

        Shreya Jain

        Manager, Global Banking Industry

          Data sharing – unleashing a new generation of public services

          Pierre-Adrien Hanania
          28 February 2023

          Governments and public agencies are facing an expanding range of complex crises in areas such as climate change, immigration, and healthcare. To respond, these organizations need to find ways to leverage the increasingly rich public data at their fingertips to develop multi-layered programs and services.

          This is why data sharing is becoming an important part of the conversation as governments accelerate their digital transformations to meet the evolving needs of their citizens. The benefits of digital ecosystems and data spaces – secure infrastructure where sensitive information can be pooled and shared while preserving privacy – are coming into focus for leaders with a strong digital vision.

          And yet, serious obstacles are slowing progress toward realizing the full potential of these new paradigms. In a new Capgemini Research Institute report, “Connecting the dots: Data sharing in the public sector,” more than half of public organizations surveyed cite trust, technology, and culture as the main factors holding them back from large-scale deployment of data ecosystems. Indeed, technology tops that list of concerns, with 39% of surveyed organizations saying they lack the infrastructure to collect and identify data, 50% stating they can’t store and process it, and 52% struggling to embrace modern tools like cloud and AI.

          It’s easy to be frustrated by the slow pace of change. Only 10% of the public sector organizations we surveyed have deployed data ecosystems, while another 17% are in the early deployment phase. The remaining 73% are still planning or piloting initiatives.

          While these hurdles may seem high, they are not insurmountable. In speaking to these public digital officers, we came away encouraged, even inspired. Our survey found that public sector leaders believe data ecosystems could enable close to a 10% improvement, on average, in the use of government funds and resources. These public leaders are rising to the moment by finding creative and innovative ways to build data platforms that one day could revolutionize their roles in the daily lives of their people.

          They understand that building mature data-sharing ecosystems is a long-term journey. Breaking that journey down into definable steps can make it less intimidating, and the goals more achievable.

          In this report, we recommend a roadmap based on four key actions that public sector organizations should take to begin building support for their own data sharing strategies:

          1. Identify the use cases, data sources, and participants necessary for an ecosystem: By clarifying the reason for developing a data ecosystem, use cases will emerge that fit these goals. That, in turn, will help identify and prioritize the internal and external data sources required, including citizen-generated data. Finally, determine who the participants in the data ecosystem should be, the value the ecosystem can deliver for each of them, and potential incentives to encourage data sharing.
          2. Develop the infrastructure for interoperability and collaborative work with data: The digital infrastructure should include data exchange platforms, decentralized data management architectures such as data mesh, and a cloud-based foundation. There is a large choice of data exchange platforms that can enable sharing. Couple this with decentralized data that allows for federated ownership and governance policies that are easy to define, and the elements are in place to ensure security and compliance in a cloud environment.
          3. Establish trust across the full spectrum of data-sharing practices, from governance and organizational set-ups to technology: Trust is one of the biggest barriers to data sharing and must be a priority. To build governance mechanisms that drive transparency and accountability, public organizations should leverage advancements in privacy-enhancing technologies (PETs). These include technologies such as differential privacy, federated learning, and homomorphic encryption that safeguard data privacy.
          4. Develop skills and capabilities for a data-driven culture: This starts with holistic training to provide employees with skills in AI, data, and privacy management. Our survey found that only 55% of organizations have trained employees on the ethical use of citizen data. All employees must recognize their obligation and opportunity to participate in shaping data services, rather than seeing data as a niche issue for the IT or data teams. Clear communication of the mission-driven goals can be an important motivator. There is also a robust playbook for building data culture, including sandboxes, innovation labs, hackathons, data academies, and academic collaboration; appointing a Chief Data Officer can further accelerate the data strategy and coordinate initiatives with other stakeholders.

          The data road ahead

          At Capgemini, we see four key benefits of collaborative data ecosystems: citizen engagement, cost reduction, insight multiplication, and interoperability that leads to process efficiency. But we are not there yet. Governments have not yet resolved the technological, human, and governance challenges involved in laying the data-sharing groundwork.

          Maria Bäcklund-Hassel, a Senior Advisor and International Coordinator at Sweden’s eHealth Agency, explained the multitude of reasons that her country is struggling to embrace data sharing. Even so, she is optimistic that the rise of healthtech startups, the lessons from COVID-19, and the European Health Data Space (EHDS) proposal are going to break the logjam and create consensus for overcoming those issues. “The COVID-19 pandemic has been a great initiator of momentum around data sharing,” she said. “How great would it have been if we could have shared information then? To see the level of vaccination, and how the virus was spreading throughout Europe – this type of data is critical for us, it’s not just nice to have.”

          It’s easy to see how such leaders can get sidetracked and demoralized. Public data leaders must confront data fragmentation, antiquated policies, complex rulemaking, conservative practices, and turf wars between different levels of governments. And yet, in the interviews for this report, we found a growing range of examples of leaders who were undaunted by these problems and were finding creative ways to blaze data sharing trails.

          Consider, for instance, Europe. The European Commission has launched “DS4Skills”, a project focused on developing a data space for education and skills as part of a broader effort to create common European data spaces to accelerate the EU’s digital transformation. The Gaia-X initiative in Europe was created to develop a European cloud infrastructure based on the principles of cloud sovereignty and has spawned such experiments as Agdatahub, a data ecosystem to facilitate agricultural data sharing to promote sustainability and innovation. And the EU-funded project “MyHealth-MyData (MHMD)” is demonstrating the power of homomorphic encryption to facilitate the sharing of sensitive health data.

          In Spain, Chief Data Officer Alberto Palomo-Lozano said his agency has a mandate to explore “potential ecosystems which can facilitate greater insights by combining this data with cloud technology.” He has been collaborating with the nation’s CIO to develop the necessary governance and technology systems. As an early step, they have released a MyCitizen tool that lets citizens browse the information government agencies have collected on them. “One hope with this project is to facilitate a greater sense of data ownership by citizens and be able to engage them even more so in these processes,” he said.

          Meanwhile, three Spanish hospitals launched a collaborative initiative using federated learning to increase the speed and accuracy of COVID-19 screening. Dr. Javier Blázquez, Head of the Radiology Department at Hospital Ramón y Cajal – one of the participating hospitals – said: “Federated learning allows us to improve our diagnostic reliability without disrupting data privacy, and since the experience of a hospital is shared among several others, the results improve a lot with respect to those obtained separately.”

          These efforts clearly show the benefits of collaborating with data. The report’s four action items chart a path for public data leaders ready to steer their organizations toward a data-sharing future. The time to start is now. It’s a matter of having the right vision and planning those first steps.

          Author:


          Pierre-Adrien Hanania

          Global Public Sector Head of Strategic Business Development
          “In my role leading the strategic business development of the Public Sector team at Capgemini, I support the digitization of public services across security and justice, public administration, healthcare, welfare, tax, and defense. I previously led the Group’s Data & AI in Public Sector offer, focusing on how to unlock the intelligent use of data to help organizations deliver augmented public services to citizens through trusted and ethical technology use. Based in Germany, I previously worked for various European think tanks and graduated in European Affairs at Sciences Po Paris.”

          Wilde Thyholt

          Lead Data Scientist and Data driven Government SME in global public sector
          Wilde is an experienced leader helping Nordic organizations adapt to and drive innovation in large-scale data platform transformations. She is also a skilled data analyst with experience from several cross-border data sharing initiatives.

            Data set file system: A major turning point for IBM mainframes

            Capgemini Engineering
            27 Jun 2023

            Overview

            The Multiple Virtual Storage (MVS) file system is the most commonly used file system on the IBM Z family of mainframes. However, MVS is not a hierarchical file system like those found on today’s typical platforms such as Windows, Unix, and macOS, and most next-generation IT recruits are not used to working in the MVS environment. Besides MVS, IBM Z also offers a Unix file system; it is hardly used compared to MVS, but next-generation IT professionals are very familiar with Unix.

            With z/OS 2.5, IBM introduces a new file system type that renders traditional MVS data sets accessible through the z/OS UNIX namespace. As a result, z/OS 2.5 gives z/OS UNIX applications, tools, and utilities transparent access to the data in these data sets, securely and consistently.

            IBM calls this file system type the Data Set File System (DSFS). It maps MVS data sets onto a Unix namespace. Accessing MVS data sets from the Unix address space makes them visible and easily accessible to and from Unix commands, applications, tools, and so on.

            A turning point for IBM mainframes

            Since the 1970s, businesses running on IBM Z mainframes have generally stored their data and application code on an MVS file system. As a result, it is one of the oldest and most widely used file systems for large and mid-sized businesses worldwide.

            However, training recruits on MVS is a significant challenge for businesses running on IBM Z. Today, most universities and colleges don’t educate students on MVS file systems, and it would take years of training for the current generation to become familiar with MVS internals. Thus, there is a significant gap between the supply of and demand for MVS-trained professionals in the market.

            Suppose your new software recruits are not required to understand the MVS file system. Instead, they just need to work on the familiar Unix Bash shell environment to access your existing databases and codebases. That would be amazing, right?!

            With z/OS 2.5, businesses don’t need to train recruits on MVS. They can even do lateral hiring based just on Unix skills. Most importantly, companies will not have to migrate their databases and codebases off mainframes because recruits lack the MVS skillset. Also, there is no need to write separate logic for accessing data from MVS and Unix file systems, which reduces program size and complexity. These are the kinds of benefits that DSFS can bring to mainframe-based businesses.

            Figure 1 captures the technical details describing the use of DSFS to access MVS datasets on a Unix namespace.

            Figure 1: How to use DSFS to access MVS datasets on a Unix namespace
            Source: Capgemini Engineering

            Figure 2 below shows how MVS datasets, residing on a direct-access storage device (DASD), can be accessed on the Unix namespace using DSFS. MVS dataset AAAA.BBBB.CCCC can be accessed as /dsfs/txt/aaaa/bbbb.cccc on Unix. Similarly, PPP.QQQ.RRR can be accessed as /dsfs/txt/ppp/qqq.rrr.

            Figure 2: Data Set File System mapping
            Source: Capgemini Engineering
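            As a hedged illustration of what this mapping enables (assuming DSFS is mounted at the /dsfs mount point shown above), a Unix-trained recruit could work with these MVS data sets using nothing but standard z/OS UNIX shell commands:

            ls /dsfs/txt/aaaa                    # list data sets under the AAAA high-level qualifier
            cat /dsfs/txt/aaaa/bbbb.cccc         # display MVS data set AAAA.BBBB.CCCC as a text file
            grep "ERROR" /dsfs/txt/ppp/qqq.rrr   # search MVS data set PPP.QQQ.RRR with a familiar Unix tool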

            DSFS brings value to the mainframe world in five important ways:

            • A mapping mechanism to access MVS-cataloged (DASD) datasets from Unix namespace
            • Reduction in the gap between demand and supply for MVS resources
            • Reduction in cost and time in training resources on MVS
            • Reduction in program code as single logic will be required to access both MVS datasets and Unix files
            • Increased business confidence in the IBM mainframe: yes, mainframes are adapting to your business needs, and they can modernize and change to help solve your problems.

            Is Zowe the perfect IBM Z companion?

            Sandeep Kumar, Capgemini Engineering
            27 Jun 2023

            The introduction of Zowe is a turning point for the IBM Z mainframe, allowing engineers to work with z/OS in a way that is similar to working with cloud platforms.

            The first IBM mainframes were born in the 1950s and dominated the corporate computing center for five decades. Since the turn of the century, however, many companies have decided that IBM Z platforms are outdated technology. Negative comments about mainframes by software developers are common: “the mainframe is dying” and “new programmers are not interested in mainframes” are familiar refrains. The average age of a US-based mainframe programmer is 46.4 years.

            Several financial institutions have even tried to replace their IBM Z mainframes with more modern systems, citing their bulky size and expensive hardware and software. But despite the talk of ditching mainframes, the fact remains that today about 90% of all credit card transactions and 68% of the world’s production IT workloads are handled by mainframes.

            However, today, young developers have little interest in working on the IBM Z platform for four reasons:

            • IBM Z platforms are too complex, and the green screen is not attractive
            • There is a lack of IBM Z open developer communities and forums
            • Unlike Linux VMs, the IBM Z is not accessible to many users
            • Developers are moving on to shinier things like cloud and Kubernetes

            Modernizing the user/developer interface could be the key to improving the IBM Z platform’s popularity. In 2015, the Linux Foundation and IBM came together with sixteen other independent founding members to form the Open Mainframe Project (OMP) alliance. In mid-2018, the OMP launched Zowe, the first open-source integrated and extensible framework for z/OS.

            Overview

            Zowe was designed as a set of interfaces to modernize the IBM Z platform. The new interfaces interact with z/OS and allow engineers to work with z/OS in a similar way to the experience of working on today’s cloud platforms.

            Zowe provides out-of-the-box applications to access major z/OS utilities like Job Entry Subsystems (JES), z/OS File System and Data Sets, z/OS Spool/Job Logs, etc. One of OMP’s goals is to create a massive community of developers who have some experience with IBM Z platforms and are capable of developing modern applications on a mainframe.

            The Zowe architecture

            Zowe is a framework comprising four major components:

            1. Zowe Application Framework (Zowe Desktop or Web User Interface)
            2. Zowe Command Line Interface (CLI)
            3. Zowe Explorer (VS Code plugin extension)
            4. Zowe API Mediation Layer (API ML)

            The first three components constitute the Zowe client category. The fourth resides on the z/OS server and acts as a gateway to interact with z/OS services, and includes a catalog of REST APIs and discovery functionality. (See Figure 1.)

            Figure 1. An overview of the Zowe architecture

            Source: Open Mainframe Project

            When using Zowe clients, users can decide how to interact with z/OS. The most popular option is the IBM z/OS Management Facility (z/OSMF), a REST API-based framework that supports OS management functions in a task-oriented way. The other options are:

            1. z/OS FTP, which is directly compatible with Zowe CLI and Zowe Explorer
            2. Remote System Explorer (RSE) APIs, for which users will need the IBM RSE API plugin at the Zowe CLI end.

            Some of the z/OS products, such as CICS and DB2, provide API support for Zowe CLI. Also, users can develop their own REST APIs to interact using Zowe.

            Many more open-source projects are available and under development for the Zowe framework. Examples include Zowe Python SDK, Zowe Client SDKs, Zowe Mobile, and Zowe Embedded Browser for RMF/SMF and APIs (ZEBRA).

            Zowe CLI

            Zowe CLI enables the user to access IBM Z platforms from a command-line interface. Before installing and using Zowe CLI, it is necessary to install Node.js and npm on the client machine. Once npm is available, Zowe CLI itself is installed with a single command (shown here for the v1 LTS release):

            npm install -g @zowe/cli@zowe-v1-lts

            Optional plugins, such as the Secure Credential Store, are then installed through the zowe command:

            zowe plugins install @zowe/secure-credential-store-for-zowe-cli@zowe-v1-lts

            Additional CLI plugins, published as npm packages, can be installed the same way:

            zowe plugins install @zowe/cics-for-zowe-cli@zowe-v1-lts @zowe/db2-for-zowe-cli@zowe-v1-lts @zowe/ims-for-zowe-cli@zowe-v1-lts @zowe/mq-for-zowe-cli@zowe-v1-lts @zowe/zos-ftp-for-zowe-cli@zowe-v1-lts

            After the installations have been completed, Zowe CLI is ready to use. The user now simply enters a zowe command to perform various functions, such as listing data sets:

            zowe zos-files list data-set "MY.DATASET.*" --host my.company.com --port 123 --user myusername123 --pass mypassword123

            Another function is to download a PDS member:

            zowe zos-files download data-set "MY.DATA.SET(member)" -f "mylocalfile.txt" --host my.company.com --port 123 --user myusername123 --pass mypassword123

            You can also create a Zowe CLI profile so that you do not have to supply connection details and passwords with every command; subsequent commands then simply reference the profile:

            zowe profiles create zosmf-profile myprofile123 --host my.company.com --port 123 --user myusername123 --password mypassword123

            zowe zos-files download data-set "MY.DATA.SET(member)" -f "mylocalfile.txt" --zosmf-profile myprofile123

            Zowe CLI is an ideal feature for DevOps engineers, as it can be used directly to automate IBM Z tasks such as submitting JCL jobs and fetching data sets and spool logs for analysis. Cloud-based implementations and CI/CD pipelines can be set up on top of it.
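            As a minimal sketch of such automation (reusing the myprofile123 profile created above; the MY.JCL library and its BUILD member are hypothetical), a CI/CD step might submit a job and capture its spool output in one command:

            zowe zos-jobs submit data-set "MY.JCL(BUILD)" --view-all-spool-content --zosmf-profile myprofile123

            The spool output returned by the command can then be parsed by the pipeline to decide whether the step passed or failed.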

            Zowe desktop

            The Zowe Desktop is a web-based UI that can be accessed through any browser. It contains several applications, such as a File Explorer for browsing data sets and PDS members, and a JES Explorer for editing and submitting JCL. It also contains a TN3270 terminal application for accessing IBM Z platforms the traditional way. (See Figure 2.)

            Figure 2. The Zowe Desktop

            Source: Zowe

            Zowe Explorer for VSCode

VSCode is a very popular code editor among developers, and plugins are available for every type of software development in VSCode. Similarly, the Zowe Explorer plugin can be downloaded and configured to work with the IBM Z platform. It modernizes how developers and systems administrators interact with IBM Z platforms, with simple context menus to fetch data sets and PDS members, submit JCLs, and fetch spool logs. For example, users can perform the following tasks using the Zowe Explorer plugin:

• Create, modify, rename, copy, and upload data sets to a z/OS platform
• Access USS files directly, in a similar way to data sets
• Create, edit, and delete Zowe CLI z/OS Management Facility (z/OSMF) compatible profiles
• Use the Secure Credential Store plugin to store credentials securely in the settings
• Leverage the API Mediation Layer token-based authentication to access z/OSMF


            Summary:

Zowe could be a boon to IBM Z platforms, as it simplifies work for developers, administrators, and users. IT professionals will once again find mainframes attractive, giving the IBM Z platform the large direct user base it deserves. Zowe makes it easy to create, modify, rename, copy, and upload data sets directly to z/OS, and to create, modify, rename, and upload USS files directly on an IBM Z platform. In short, Zowe provides a secure, streamlined way to access data sets, USS files, and jobs.

            Most importantly, Zowe has introduced IBM Z platform engineers to the open-source world and is on its way to becoming a large and growing ecosystem, with user support communities that will help developers in the decades ahead.

            References:

Zowe.org | Zowe Docs webpage

Recent articles about Zowe, Medium, https://medium.com/zowe

“Mainframe Programmer: Demographics and Statistics in the US,” Zippia

Christopher Tozzi, “9 Mainframe Statistics That May Surprise You,” Precisely, Apr 19, 2021

Barry Baker, “Open source project Zowe: Fast, simple, familiar z/OS development,” IBM, Aug 15, 2018

Javier Perez, “The Zowe project: Build open source with open source,” IBM Developer Blog, Mar 15, 2021

Nan Nan Li, “A new Zowe trial experience for YOU!,” IBM Z and LinuxONE Community

Capgemini and IBM – Product Partnership

The partnership introduces product capabilities into new geographies, platform ecosystems, and deployment models.

            Capgemini Engineering

            Sandeep Kumar

            Senior Manager, IBM Sterling Connect:Direct for Z Platform, Capgemini Engineering

Three data-foundation strategies for Scope 3

              Roosa Säntti
              28 February 2023

              Scope 3 emissions pose one of the biggest dilemmas within corporate sustainability agendas. Our view is that a data-powered approach is a key to solving it. The good news is that getting a better grip on emissions data not only helps cope with mandatory regulations, it also provides additional business potential and builds data mastery. Here are three foundational strategies to get started.

              Long gone are the days when an organization’s sustainability ranking was only judged by the impact of its own actions. Organizations that cannot provide relevant Scope 3 data in public reporting face the likelihood of stakeholders criticizing them for providing an incomplete picture of their total environmental impact. A company’s Scope 3 emissions account for 65 to 95 percent of the total CO2 footprint, but the findings from Capgemini Research Institute’s latest Data for Net Zero research show that only 22 percent of companies are measuring Scope 3 data today. This means that a majority of companies overlook most of their carbon footprints and simultaneously ignore the business value potential from emissions data.

              The complexity of Scope 3

              A company’s carbon footprint consists of Scope 1, 2, and 3 greenhouse gas emissions. Scope 3 emissions result from activities that are not directly owned or controlled by the company, including those coming from suppliers, distributors, partners, and customers. They are divided into 15 categories.

Measuring and reducing Scope 3 emissions is where the focus now shifts, and some valid Scope 3 measures are:

              • Avoid unnecessary business travel and inefficient modes of transport

              • Switch to lower-emission fuels and vehicles

              • Manufacture more energy-efficient products

              • Design durable, recyclable products.

              Although sharing Scope 3 emission information has been optional for most businesses so far, both US and EU regulators are now pushing for increased transparency, and companies will increasingly face new regulatory requirements. Furthermore, companies are also realizing the business potential that sustainable value chains, operations, and products can provide. Findings from the Capgemini Research Institute show that 85 percent of organizations recognize the business value of emissions data, but it remains inadequately used: 45 percent of the companies use the data only for reporting.

              “Without accurate data and the skills to use it, you don’t only struggle to report the right numbers, but it also becomes impossible to make predictions for the future and steer your business towards net-zero.”

              The Scope 3 dilemma is all about solving a big data challenge

              Scope 3 emissions come from external parties, so collecting accurate and reliable data can be particularly challenging. Without accurate data and the skills to use it, organizations not only struggle to report the right numbers but also to make predictions for the future and steer the business toward net-zero targets. So, Scope 3 is a big data challenge: reliable emissions data is needed for both asking the right questions and providing the right answers. But how to get started? Here are three data-foundation strategies that get things moving in the Scope 3 space.

              Emissions data collection: start with identifying hotspots

When a company has defined net-zero targets, it usually has already created an understanding of the most carbon-intensive parts of the value chain. For a manufacturing company, for example, such a hotspot might be logistics. A data-collection strategy for Scope 3 is a plan aligned with the sustainability strategy: it describes what data is needed, where to collect it from, and what to measure. With Scope 3, organizations should always aim to get the source data if available. In practice, this means reaching out to suppliers and partners and asking for emissions data allocated to the products and services purchased. In cases where the actual data cannot be accessed, another way to estimate the activity data should be agreed upon.

              As with all data these days, a smart approach should be used. Hence, instead of collecting it all, focus on what data is really needed to understand where the organization is, and make predictions for the future.
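To make “identify hotspots” concrete, here is a minimal Python sketch that ranks value-chain activities by estimated emissions, so collection effort goes to the largest categories first. The category names and tonnage figures are invented for illustration only.

# Invented estimates of annual emissions per value-chain activity, in t CO2e
estimated_co2e_tonnes = {
    "Purchased goods and services": 41_000,
    "Upstream logistics": 18_500,
    "Business travel": 1_200,
    "Use of sold products": 63_000,
}

total = sum(estimated_co2e_tonnes.values())
# Rank activities from largest to smallest share of the footprint
for activity, tonnes in sorted(estimated_co2e_tonnes.items(), key=lambda kv: -kv[1]):
    share = 100 * tonnes / total
    print(f"{activity:32s} {tonnes:>7,} t CO2e ({share:4.1f}%)")
# Collect primary (supplier-specific) data first for the top categories;
# spend less effort on the long tail.

Even a rough ranking like this tells the organization which suppliers to approach first for primary data.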

              Establish a shared single source of emissions data

              A hub to ingest and store data in one place is next. The data will come in different formats and structures, and a common emissions data model will help make sense of it. Major cloud-platform providers such as Microsoft, Snowflake, and AWS have already developed accelerators in their stacks to support Scope 3 emissions data collection, modelling, and analytics. One advantage of these cloud platforms is that they tend to have great data collection and sharing capabilities for data outside the organization. In the end, Scope 3 is all about sharing across the data ecosystem. Once the data is in one place, it is possible to use this single source of truth for multiple purposes, such as carbon accounting, reporting, or analytics.
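As an illustration of what a common emissions data model might look like, here is a minimal sketch. The field names are assumptions for this example, not a standard schema and not the accelerators mentioned above; a real model would follow the GHG Protocol’s category definitions in full.

from dataclasses import dataclass
from datetime import date

@dataclass
class EmissionRecord:
    supplier: str           # external party the data comes from
    category: str           # one of the 15 GHG Protocol Scope 3 categories
    period: date            # reporting period the record covers
    activity_amount: float  # e.g., tonne-km shipped, kWh consumed
    activity_unit: str
    emission_factor: float  # kg CO2e per activity unit (supplier- or average-based)

    @property
    def co2e_kg(self) -> float:
        # Emissions = activity data x emission factor
        return self.activity_amount * self.emission_factor

# Ingest records from different sources into the shared hub (figures invented)
records = [
    EmissionRecord("FreightCo", "4. Upstream transportation", date(2023, 1, 31),
                   120_000, "tonne-km", 0.11),
    EmissionRecord("PowerRe", "3. Fuel- and energy-related", date(2023, 1, 31),
                   50_000, "kWh", 0.23),
]
print(sum(r.co2e_kg for r in records))  # total kg CO2e across ingested records

Once every source is normalized into one such model, the same records can feed carbon accounting, reporting, and analytics alike.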

              Let data lead the people

Calculating and reporting Scope 3 emissions can still be considered a quite straightforward part of the puzzle. When an organization starts building analytical models, what-if scenarios, or utilizing advanced AI – for example, with digital twins – it gets tougher. The tricky part is how to make people trust the data and make decisions based on it. The data-powered sustainability game is, in the end, about people, culture, and the right skills. In the manufacturing industry, for example, R&D and procurement teams are at the heart of change. These teams decide what kind of products to make, which materials and processes to use, and where to buy from. When these teams know the net-zero targets, understand their role in reaching them, and have access to data, tools, and insights that lead the way, change can happen.

Emissions data literacy is a quality that involves people, tools, and processes. Similarly to financial data literacy, emissions data and the related skills are essential tools for tomorrow’s leaders. It is evident that one of the toughest elements in the race to net zero lies in Scope 3. Despite this big data challenge, here’s the good news: by adopting a data-powered approach – starting with the three foundational strategies above – a business can take a leap towards a net-zero future, building data-management capabilities and data mastery along the way.

              INNOVATION TAKEAWAYS

              BE SMART WITH DATA COLLECTION

              For Scope 3, identify hotspots, collect what is really needed, and aim always for the critical data.

              ESTABLISH A SINGLE SOURCE OF TRUTH

              Collect and store this data in a central place and manage it on top of a common sustainability data model.

              SUPPORT AND EDUCATE

              Equip teams with the skills and tools to understand and use emissions data for business decisions, investing in a net-zero future.

              Interesting read?

Capgemini’s innovation publication, Data-powered Innovation Review | Wave 5, features 19 such articles crafted by leading Capgemini and partner experts, about looking beyond the usual surroundings and being inspired by new ways to elevate data & AI. Explore the articles on serendipity, data like poker, circular economy, or data mesh. In addition, several articles are written in collaboration with key technology partners such as AWS, Denodo, Databricks, and Dataiku. Find all previous Waves here.

              Author:

              Roosa Säntti

              Head of Insights & Data Finland
Roosa’s ambition is to help people, organizations, and society understand the power that lies in their data and how they can harness it to build a smarter and more sustainable environment for all. She helps businesses innovate their strategy with the help of digitalization, technology, and data. In her current role, she leads a business unit serving customers within data, analytics, and AI. Her special interests lie at the intersection of innovation, technology, and people.

                Hit the ground running with HR Cloud to boost your new workforce capabilities

                Dr. Sandra Duesing
                27 Feb 2023

                “Employees who have a positive Onboarding experience are almost three times as likely to feel prepared and supported in their role, boosting their confidence and improving their ability to perform their role well.”

                Source: Harvard Business Review: Onboarding Can Make or Break a New Hire’s Experience (April 2022)

As mentioned in our first blog article, the optimal employee-centric Onboarding journey consists of three phases: pre-boarding, acclimation, and belonging and performance excellence. In this article, we take a closer look at the second phase of the employee-centric Onboarding journey – the acclimation phase. Acclimation begins on day one at the new company and extends over the first month. The aim of this phase is the professional and organizational enablement of new joiners by helping them understand their tasks and get familiar with relevant people, procedures, and the organization in general. The employee starts to become productive.

                Admin and technological layer

An important part of the acclimation phase is ensuring the technical competence of employees. This includes ensuring that access to all necessary applications is guaranteed, that registration can be carried out without complications, and that appropriate authorizations are stored. Additionally, it is essential to provide employees with information on where central documents (e.g., payroll) can be viewed and where work resources (e.g., PowerPoint templates in line with corporate identity) can be found. A corresponding checklist in the Onboarding portal can provide valuable guidance here. Besides the checklist from ServiceNow, which we described in the previous blog article on pre-boarding, other cloud tools can serve the same function, such as SAP SuccessFactors, Oracle, and Workday.

The focus in the acclimation phase is on establishing the employee’s ability to work. Mandatory training courses are a necessary part of this process and must usually be completed by the new employee during the first weeks of employment. HR Cloud tools should be used both to make people aware of the training courses in a timely manner and to conduct them. For example, Workday Learning offers quick and user-friendly access to the training courses (see Illustrations 1 and 2). In addition to these mandatory courses, structured on-the-job training is a vital component of Onboarding. For this to be achieved, the relevant contact persons need to be available to support new employees whenever necessary, helping them get familiar with their new tasks and topics. The new employee must be given sufficient time to do this.

                Organizational and cultural layer

                To continue to communicate the culture of the company and ensure optimal Onboarding at the organizational level, communicative measures are necessary. On the one hand, this means information events that provide new employees with insights into central company topics. On the other hand, discussions about expectations and goals must be held with new employees on an individual level. Another aspect is that new employees should always be given sufficient time to get acclimated to regular administrative tasks and the professional environment.

                Social and network layer

There are many ways to ensure new employees feel welcome within the organization. By joining (virtual) social groups, new employees can get to know the team and colleagues, who become trusted companions during their first weeks in the new company. These groups also make it possible to build up one’s own network in the new company. Another key to rapid acclimation is regular interaction with dedicated buddies, who play a particularly significant role for new employees. Buddies are always available to answer any questions that may arise and actively support and facilitate a successful start in the new company.

                Our next Blog #4 takes a closer look at the third and final phase of the Onboarding journey – the belonging and performance excellence phase.

                Our authors

                Dr. Sandra Duesing

                Vice President in Workforce & Organization and the Global Head of Reinventing HR | Capgemini Invent
As Capability Lead Workforce & Organization at Capgemini Invent, with a dedicated focus on Experience Excellence in HR & HR IT, I am passionate about re-imagining work and unlocking underlying human potential to successfully drive digital transformation journeys for business and society.

                Svenja Stegemann

                Senior Consultant in Workforce & Organization | Capgemini Invent

                Anne Geiter

                Consultant in Workforce & Organization | Capgemini Invent

                  Immersive experiences – the new frontier of commerce

                  Xaviere Tallent
                  27 Feb 2023

                  Immersive experiences, the metaverse, NFTs, blockchain – the new frontier of commerce. These terms are floating around and catching everyone’s attention.

We all know the transience of hype and how it can sometimes lead organizations to take the wrong turn or over-commit to new technology that doesn’t ultimately pan out. But for all the futuristic ideas surrounding these buzzwords, the tech is already here, and it’s developing exponentially toward hitherto unseen horizons – think virtual smelling (more on that later).

The technology to simulate anything and everything a person can imagine in real life is now emerging, and the potential applications are inexhaustible. But innovation is a means, not an end. The challenge is relevancy: how can this be applied now to benefit organizations?

In this blog, we’ll see how immersive experiences and NFTs can be utilized in commerce to gently – but boldly – transition companies to higher consumer engagement.

Esteemed colleagues and expert speakers at NRF 2023 have drawn an electrifying picture of what the future holds, revealing some of the secrets behind these technologies and how they can be implemented to drive commerce into an exciting future.

                  What are immersive experiences?

Immersive experiences place the user in an environment that diverges from daily life. The applications are stunning: they could be anything from the more distinct kind, like VR or exploring the map of an otherworldly video game, to something like an exclusive event at a remote location in real life.

                  The term itself refers to the psychological involvement of the experience rather than a specific set of external catalysts. (Immersive experience does not equal wearing virtual reality headsets.)

This is a broad definition, but it is essential to understanding the scope of what immersive can do – and the crux lies in the psychological aspect, which is what the consumer wants and values. As has been known for a while, consumers – the millennial demographic, and Gen Alpha even more so – are showing more interest in experiences than in things, which provides a solid foundation for building a new strategy around product development and especially CX.

Where are we now with the tech? Some startups have managed to simulate touch and smell in a virtual environment – the latest and most encompassing simulation technology for the metaverse and virtual experiences, slated for release later this year. This opens up a new world of marketing opportunities in retail to extend e-commerce further than it has ever reached.

NFTs – below the tip of the iceberg

                  We are in the age of Web 3.0, the new era of the internet with evolving decentralized blockchain technologies, including cryptocurrencies and non-fungible tokens, or NFTs. The popularity of NFTs surged and reached media outlets with stories of people paying outrageous amounts of money to own unique artwork – perhaps the first thought that comes to people’s minds when hearing the term. But what are they, really?

Blockchain technology enables a “transparent” and non-reproducible chain of blocks of information, with cryptographic hashes linking each block to the previous one, along with a timestamp and transaction data.

This distributed ledger, as opposed to a centralized database, removes the risk of a single point of failure: if part of the system suffers damage, the data still exists in identical form in another part of the system. This robust method of guaranteeing that blocks cannot be replaced or tampered with is the basis of cryptocurrencies and NFTs.
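To make the hash-linking idea concrete, here is a minimal Python sketch of a chain of blocks in which each block stores its predecessor’s hash, so any tampering with recorded transaction data invalidates the chain. This is an illustration of the principle only, not how production blockchains are implemented.

import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash the block's contents (excluding its own hash) deterministically."""
    payload = json.dumps({k: v for k, v in block.items() if k != "hash"}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def new_block(prev: dict | None, data: str) -> dict:
    block = {
        "timestamp": time.time(),
        "data": data,                                # transaction data
        "prev_hash": prev["hash"] if prev else "0",  # link to the previous block
    }
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain: list[dict]) -> bool:
    """Tampering breaks either a block's own hash or its link to the predecessor."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = new_block(None, "genesis")
chain = [genesis, new_block(genesis, "Alice buys NFT #42")]
print(chain_is_valid(chain))   # True
chain[1]["data"] = "Mallory buys NFT #42"
print(chain_is_valid(chain))   # False: the tampered block no longer matches its hash

Because each block commits to the hash of the one before it, rewriting any record would require rewriting every subsequent block – the property that makes ownership records trustworthy.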

                  Once someone is recorded as being the owner of an NFT, this ownership is authenticated and certified by the blockchain. This authenticity can lend itself to the incorporation of exclusivity and uniqueness in products and rewards from brands.

How can NFTs be used? Exclusivity and a demo

                  Consumers like exclusivity, especially with luxury items and experiences. For example, there is an exclusive dining club in New York that is the first one of its kind – you can only enter it if you own an NFT for it (and further applications could be made to have tiered experiences within the restaurant, like additional NFTs allowing patrons to have premium dining experiences). This sort of use of NFTs can elevate luxury products, experiences, and rewards to new heights that surpass the mass market – a digital environment, tied to real life, that enables people to have the most exclusive possible access to what companies have to offer.

To showcase how this can be used, Capgemini has created a demo: someone buys a luxury bag. The journey for the customer starts with a QR code attached to the bag; when they scan this code, their ownership of the bag gets registered on the blockchain, and the bag’s digital twin is now associated with an NFT. Then, the customer receives an email inviting them to an exclusive event, like Fashion Week. The customer can attend the event while others can’t – creating exclusivity – and they become part of that community. This is the power of combining online-offline experiences.
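Purely as an illustration of the demo’s logic – with an in-memory dictionary standing in for the blockchain, and invented serial numbers and email addresses – the ownership-based exclusivity check might look like this:

owners: dict[str, str] = {}  # bag serial number -> registered owner

def scan_qr_code(bag_serial: str, customer: str) -> None:
    """Register the customer as owner of the bag's digital twin (the 'NFT')."""
    owners[bag_serial] = customer
    print(f"Invitation sent to {customer}: Fashion Week, reserved for owners.")

def may_attend_event(bag_serial: str, customer: str) -> bool:
    """Exclusivity check: only the registered owner may attend."""
    return owners.get(bag_serial) == customer

scan_qr_code("LUX-0042", "alice@example.com")
print(may_attend_event("LUX-0042", "alice@example.com"))  # True
print(may_attend_event("LUX-0042", "bob@example.com"))    # False

In a real deployment, the registry would live on a blockchain so the ownership record is authenticated independently of any single company’s database.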

                  Other online-offline applications that already exist include:

                  • Real-world shoes that, in the digital world, enable you to fly in a game
                  • A real-world car that can go faster in a racing game
• A ticket minted as an NFT to attend a conference

                  The same principle can be applied in countless ways.

                  Consumer naysayers versus the new generation of commerce

Sixty-five percent of consumers have said they would prefer real-life, physical experiences over immersive ones, and many people consider the metaverse a bad thing for society. But many of us remember a time when the same was said about Web 2.0 – the internet as it exists today, which we’ve had for nearly two decades. Socializing as a kid used to involve going to the mall or maybe chatting online with friends. Kids today might instead immerse themselves collectively in Minecraft or Roblox, entire digital worlds where they can create anything they want.

Just as many in the industry were reticent about the idea of online commerce back in the infancy of Web 2.0, similar doubts are to be expected about where we are now – but the applications of Web 3.0 in the new age of digital commerce are already here. The younger generations especially are primed for the new experiences this technology has to offer.

Moreover, according to the newest 2023 Capgemini Research Institute report – which surveyed several thousand consumers across 11 countries and a thousand organizations – 58% of consumers believe that immersive experiences will influence their next purchase. 77% believe that immersive experiences will influence how they engage with a brand after purchasing, using, and servicing their product. And in customer service, 77% also believe that immersive experiences will create loyalty. The key will be designing an omnichannel strategy with outstanding CX, based on robust research into customers’ expectations and usage.

                  They don’t need to know

This sounds insidious, but the idea is that the consumer doesn’t need to be familiar with or understand the technology itself to find value in it. This is a simple concept implicit in the marketing of most products – most people don’t expect to know about the chips and processors in their electronics, for example. A mere 20% of US online adults said they were familiar with the metaverse; similarly, only 20% have heard of NFTs, and 31% (including Gen Z) do not understand what they are. But the psychological aspect of immersive experiences is the actual intrinsic value that the consumer will find through the exclusivity of the event or experience tied to the product.

                  The novelty of Web 3.0 technologies might tempt companies to label their immersive experiences with the associated buzzwords. Companies that use NFTs and immersive experiences in their products and services do not necessarily need to have the term “NFT” attached to anything. In essence, the idea is to combine the technology with good CX research to produce the best experiences possible for the consumer and market them in the correct way for a frictionless experience. Nothing is gained by alienating customers with unnecessary jargon.

                  What are the limitations?

The hardware is not yet as fast as the software. We now have 5G, WiFi 6 is rolling out, and 6G and beyond are being explored, but these are not quite where they need to be yet. The technology and infrastructure are there and improving, but the current state of the hardware is a limiting factor.

                  For example, virtual try-on – the ability for consumers to virtually try on products using their devices via augmented reality and layering visuals – is still not quite right and doesn’t sufficiently simulate the look of an item of clothing or shoes on the person. But it will be soon. (In the case of some clothing, such as sunglasses, it’s already there.)

                  The hardware limitations can be a thorn in our side, preventing us from moving faster, but the existing technology has reached the stage that, if used in the right way, it can launch organizations further into the new wave of commerce.

                  How can I get in with the new frontier of commerce?

                  Immersive experiences are undoubtedly the new frontier and future of commerce. At Capgemini, we urge clients to start their journey toward these new vistas – and it’s better to begin now than later. With so much on the horizon, you need a good place to start and to select the right tools and strategies for your organization.

                  To see what else happened at NRF 2023, check out our event page. You can also get in touch with our team to find out more about immersive experiences and book a demo.

                  If you’re interested in discovering what Capgemini can do to kickstart your journey into the future, visit our Retail and Consumer Products pages.

                  WHAT MATTERS TO TODAY’S CONSUMER?

                  2023 consumer behavior tracker for the consumer products and retail industries

                  About Author


                  Xaviere Tallent

                  Global Commerce Realized Offer Lead
With over 30 years of experience in data-driven digital commerce transformation, Xavière has helped organizations successfully reinvent their business and leverage new technologies to deliver better online and offline customer experiences throughout the value chain. As a commerce trend expert with deep industry expertise, she assists her clients in putting their customers first to drive business value at scale.
