
How to leverage AI and data to boost insights-driven orchestration of automation and human escalations

2021-08-23

The need for IT service delivery and support to become better, faster, and cheaper has never been more pressing. AI and automation have a lot to offer in this space. They’re disrupting almost every facet of business and can deliver marked value improvements when it comes to service management. With AI and automation, high-volume, low-value tasks can be automated, enabling ITSM and service desk teams to focus on higher-value work.

Orchestrating AI to better predict outages, catch performance degradation, and improve root-cause analysis

In my previous blog posts, we saw how AI can be applied to the monitoring space to help organizations achieve IT systems observability. While most IT teams are currently using automation, it is often deployed in silos or one-off projects, with no real control over how these efforts integrate or interact with other systems.

Orchestrating automation efforts into a meaningful line-up of activities with human-in-the-loop (HITL) automation, chatbots, improved knowledge management, and machine learning can enable you to better predict outages, catch performance degradation, and improve root-cause analysis.

Human-in-the-loop AI for ITSM

It’s not hard to imagine that IT Service Management (ITSM) serves as the hub of IT operations processes. Two of the most important processes are incident and problem management. The traditional goals of these two processes are to standardize and optimize in order to improve efficiency. NLP-based AI solutions can provide a deep view of frequent and repeat incidents that may benefit from problem management rigor.

AI can also enable intelligent routing of remediation workflows, either to a human or to a prescribed bot, to resolve issues. IT organizations can also continuously assess seasonality using time-series AI algorithms to predict outcomes for critical business events. These much-needed capabilities provide insights for better planning of operations and support, helping to avoid major incidents that could have huge negative business impacts.
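
As an illustration of the seasonality idea, here is a minimal sketch assuming Python, pandas, and statsmodels, with a hypothetical file of daily ticket volumes (the file name, column names, and threshold are illustrative only):

```python
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Hypothetical daily incident-ticket volumes indexed by date.
tickets = pd.read_csv("ticket_volumes.csv", index_col="date", parse_dates=True)["count"]

# Split the series into trend, weekly seasonality, and residual components.
decomposition = seasonal_decompose(tickets, model="additive", period=7)

# Unusually large residuals flag days that deviate from the learned seasonal
# pattern -- candidates for extra support capacity or closer investigation.
residuals = decomposition.resid.dropna()
anomalies = residuals[residuals.abs() > 3 * residuals.std()]
print(anomalies)
```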

Chatbots for service request automation

Password resets and account unlocks are common requests that service desk teams have typically handled manually. AI-based chatbot solutions integrated with automated fulfillment can be a viable alternative for many IT organizations. These chatbot solutions can eliminate the manual handling of service requests and free up IT resources to take on more strategic work.

Knowledge management in service orchestration

To accomplish better outcomes with automation and chatbots, knowledge bases need to be rich and useful. To ensure that knowledge bases are dynamic and up to date, we need AI-based solutions that continuously validate knowledge articles against real-world scenarios. With AI-based analytics and orchestration, knowledge articles can be validated against historical data, end-user acknowledgement, and peer reviews before an article is committed to the system. User input is important here – and driving adoption through knowledge management gamification with AI and analytics will provide a wealth of benefits.

AI and ML for better CMDB

AI and ML can also be leveraged here to monitor and check the performance of critical assets, and to orchestrate an approved escalation mechanism that ensures system-wide IT asset performance.

In summary, orchestration that brings intelligence and human-in-the-loop automation to digital workflows is essential for proactive issue management and overall performance improvement.  ADMnext can help you formulate a successful orchestration and comprehensive AIOps strategy that applies data mining and analysis with advancements in Big Data, AI, and ML algorithms and visualization techniques.

In the next part of this series, we’ll look at incident resolution automation through user reporting and the proactive monitoring of system performance. In the meantime, please contact me here to get started on building your AIOps strategy and visit us at ADMnext here.

Innovation and you – don’t wait for the apple to drop

2021-08-23

In 1666, Sir Isaac Newton was engaging in the 17th-century form of social distancing. He left Cambridge to stay at his mother’s estate, in order to avoid the plague that was sweeping the country at that time. It was during this stay that he saw the fabled apple fall from a tree, which would eventually lead him to his Law of Universal Gravitation. However, there is no evidence of the apple actually hitting him on the head.

If your approach to generating ideas consists of only waiting for Newton-like flashes of inspiration, then your innovation pace will be slow, with few ideas materializing. The increasing pace of technology change, along with startups emerging to disrupt more and more markets, dictates that companies must see innovation as a standard part of doing business if they are to survive and thrive.

Companies will quickly learn that this is not necessarily easy, as they find out that very few innovative ideas make it all the way to implementation. Upon closer inspection, ideas that initially look good could turn out to be based on erroneous assumptions or may not offer much of a benefit. Additionally, they could introduce too much risk, or the implementation costs and disruption may outweigh the benefits.

Looking to the past to carve out an innovative future – feeding the beast by scavenging, foraging, fishing, hunting, and cultivating your way to groundbreaking ideas

A steady pace of implemented innovations requires a high number of ideas to be generated for evaluation. So, how do you find the best fresh ideas to feed the innovation beast? There are a number of approaches that we can take from our ancestors.

Scavenging

You can feed off what others have killed. This means looking at what your competitors and other companies are doing to see what you can adopt. This approach can reduce effort and risk, as it will be looking at proven solutions and implementations. However, on its own, this will only allow you to catch up or keep pace with your competitors. To gain an edge, you must look beyond this approach.

Foraging

You can also look at emerging technologies and offerings, and review market trends to see what you can find. This requires more effort, as the wider you search, the more likely it is you will find something. Your success will be based in part upon knowing where to look – and your ability to connect the dots between the technology and how you can apply it. While there is no guarantee of finding anything, if you can be one of the first to work out how to effectively apply what you find, you may gain a clear competitive advantage.

Fishing

You can bait your hook (issue an RFI or RFP) and see if you can get a bite (review the responses). This will enable you to benefit from outside ideas and creativity. But as all fishermen know, success requires using the right bait (thoughtful and targeted RFIs) and knowing in which waters to cast your lines (which service providers to engage).

Hunting

If you have identified a need or opportunity, then you can actively pursue your prey (technology or offering) to meet it. With a narrower focus and a known target implementation, this approach can provide a quick innovation win if successful.

Cultivating

You can take the long view and cultivate an innovation culture within your company. In the spirit of as you sow, so you reap, the more you put into it, the more you are likely to benefit in the long run. A company’s own employees are a potentially valuable source of innovation – and you can get them engaged by issuing challenges, running gamification initiatives, offering rewards, and giving recognition. While this may turn up major ideas, employees’ internal knowledge and experience can also yield a lot of smaller ones. Other actions you can take include establishing an innovation lab, along with processes to manage, evaluate, measure, and report on innovation.

These approaches are not mutually exclusive, and companies should ideally utilize a variety of them at different times. When done correctly, the cultivation approach will be the most reliable source of innovation – even if it is more of the evolutionary rather than revolutionary type. You will also increase your chances of success by including as many viewpoints as possible – both internally and externally.

Whichever approaches you end up taking, it is important that you embrace the need for innovation and establish a governance process to manage, track, and report on innovation. Sorting the good ideas from the bad also requires establishing processes to refine and test the ideas, assess any risk, and build and validate the business case. I will take up this subject in a coming blog post.

ADMnext – a source of innovation

Core managed service offerings and tools from Capgemini’s ADMnext are continuously evolved and rolled out to clients – and the identification and introduction of innovative ideas are built into deals with assigned senior architects. Meanwhile, Capgemini’s Design Office offering targets innovation governance and helps you create, manage, and report on your innovation journey.

Other innovation-focused components include Accelerated Solution Environment (ASE) workshops for rapid solutioning and consensus building, and Applied Innovation Exchanges (AIEs), which are dedicated, partner-network facilities that focus on solving client issues or developing new solutions.

To learn more about Capgemini’s ADMnext and all the bright ideas we can bring to your business on your future innovation journey, drop me a line here.

Going beyond No-Ops with Touchless-IT-Ops

Randy Potter
2021-08-23

No, really – what is “No-Ops”?

In 2011, Forrester released a report entitled Augment DevOps With No-Ops and noted that “DevOps is good, but cloud computing will usher in No-Ops.” But if we fast-forward to today, this definition is a bit too narrow. It relies on the cloud and does not address enough of the typical enterprise IT estate, which is more on-premises legacy than cloud-based.

So, what does “touchless IT operations” entail – and why does it matter? I define “touchless IT operations” as the automation of infrastructure and application management. Not everything can be automated or resolved without human intervention – but a lot can. And the list of what can be automated gets longer all the time, while the list of required human interventions gets shorter, enabling IT staff to be deployed to more valuable activities that are directly related to the business. The potential benefits here include significantly faster speed-to-market, higher quality, and cost savings.

No, really – that all sounds great – but how do we make this happen?

Biz/Dev/Sec/Ops – move to a product mindset and your operating model will follow

Firstly, move from a project mindset to a product one and shift to an Agile methodology. Next, rather than finishing an application development (AD) project and throwing it over the wall to applications management/support (AM), treat the application(s) as a product with a lifecycle plan and roadmap. This should include “marketing” and “adoption” strategies, as well as “end-of-life” considerations.

Ideally, the product owner should come from the business and should drive decisions on the backlog in terms of features, non-functional requirements (NFRs), and incident/ticket resolution. By combining AD and AM into ADM, you create a new paradigm of “you build it, you run it.” In other words, the more resilient the application is out of development, the fewer L2 and L3 incidents the team has to support. And that’s the kicker – the team supports L2 and L3 incidents, not a dedicated support organization. The ADM team is intrinsically motivated to create higher-quality applications, including better monitoring and alerting.

But what about testing and security? Both are part of the team: test automation and security testing are included in what the team does, along with automated coding standards and the prevention of technical debt.

Harness better intelligence and insight into IT operations and automate as much as practical

Artificial intelligence applied to IT operations, or AIOps, is a dual approach: use AI to mine IT operational data for opportunities to get to the root cause and remove the issue(s) causing incidents – and where this is not practical, automate the resolution.

Sounds easy, right? No, not really. This requires several integral operations to be successful:

Logging – monitors can only pick up what gets logged or passes through the network. When developers build an application, they need to include “telemetry” – information about what’s going on inside the application – so it can be logged and evaluated. If developers don’t do this, you may not have enough information to determine the root cause should an issue or opportunity arise.
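
As a minimal sketch of what such telemetry can look like, here is a hypothetical Python example emitting structured, machine-parsable log lines (the service, event, and field names are invented for illustration):

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("payments-service")

def log_event(event: str, **fields) -> None:
    """Emit one structured log line that monitoring tools can parse."""
    record = {"ts": time.time(), "service": "payments-service", "event": event}
    record.update(fields)
    logger.info(json.dumps(record))

# Telemetry a monitoring tool can later aggregate, contextualize, and alert on:
log_event("db_query", table="orders", duration_ms=182, rows=40)
log_event("request_failed", endpoint="/charge", status=503, retry=2)
```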

Monitoring – there are several good third-party tools on the market for this, but in general, you need something to monitor the logs, aggregate and contextualize the data, and provide alerts – along with additional data mining for insights and predictive metrics.

Actionable insights – from monitoring, you need actionable insights that identify root causes or significantly accelerate root-cause discovery. The most beneficial incident resolution is one that ensures the incident does not happen again. The second most beneficial is one so fast that users don’t even notice it.

Automation – as previously mentioned, what cannot be prevented needs to be automated. And a robust library of automation routines and software bots is a great accelerator for automating as many resolutions as possible.
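
One simple way to structure such a library is as a registry that maps known alert signatures to remediation routines, escalating to a human when no routine matches. A hypothetical sketch in Python (the alert types and handlers are invented for illustration):

```python
from typing import Callable

# Registry of known alert signatures -> automated remediation routines.
RUNBOOKS: dict[str, Callable[[dict], None]] = {}

def runbook(alert_type: str):
    """Decorator that registers a remediation routine for an alert type."""
    def register(fn: Callable[[dict], None]) -> Callable[[dict], None]:
        RUNBOOKS[alert_type] = fn
        return fn
    return register

@runbook("disk_full")
def clear_temp_files(alert: dict) -> None:
    print(f"Purging temp files on {alert['host']}")

@runbook("service_down")
def restart_service(alert: dict) -> None:
    print(f"Restarting service on {alert['host']}")

def handle(alert: dict) -> None:
    """Route to a bot if a runbook exists; otherwise escalate to a human."""
    action = RUNBOOKS.get(alert["type"])
    if action:
        action(alert)
    else:
        print(f"No runbook for {alert['type']}; escalating to on-call engineer")

handle({"type": "disk_full", "host": "app-01"})
```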

Site/Service Reliability Engineering (SRE)

Site/Service Reliability Engineering, or SRE, is exactly what it sounds like – engineering reliability. That being said, there is some unpacking to do with this statement. For the most part, SRE is about non-functional requirements (NFRs such as security, reliability, and scalability). SRE is a profoundly serious focus on the reliability of all the systems required to keep your application(s) up and running as expected.

Think about that – network, storage, logging, compute, scalability, security, etc. It’s a lot. SRE has a lot of similarities to AIOps. This includes leveraging analytics to better understand root causes and resolve issues before they become incidents, automating everything that cannot be prevented, and tearing into technical debt. Some companies consider this a “role,” but at Capgemini we consider it a capability that’s led by an SRE lead.

Tangible results with Touchless-IT-Operations – accelerated time-to-market, improved quality, higher cost savings

Implementing these changes to IT operations and utilizing capabilities like AIOps and SRE will yield an application development and management capability with “touchless IT operations.” This means an organization without dedicated application management, and accelerated time-to-market through working more closely with the business via a product-based Agile approach. Additionally, you can expect higher quality for the same reason (including the SRE capability) and cost savings through a smaller and more Agile ADM capability.

“Touchless-IT-Operations” or TIO is much more than “No-Ops” or automated cloud management – it’s a profound move towards automated IT operations. To get a feel for Touchless-IT-Operations and the potential it has for your business, get in touch with me here.

Empowering HR with word recognition technology based on intelligent automation

Capgemini
2021-08-12

How can organizations overcome typical challenges in HR such as categorizing and answering email queries? Capgemini’s Answer Generator tool has the answer – forgive the pun.

Manually categorizing queries is an arduous task for customer service agents at the best of times – and hardly time-efficient. Capgemini’s Answer Generator tool leverages intelligent automation to standardize processes and streamline operations across business departments and languages, delivering rapid, frictionless, and user-friendly email support to its Global L&D Service Desk team.

Implementing this tool has increased cost-effectiveness, boosted employee engagement, and decreased the time it takes to respond to emails. This has helped the team create a more people-focused HR culture with frictionless business processes that better respond to the needs of employees.

An innovative tool that leverages intelligent process automation

Based on Visual Basic for Applications (VBA), a programming language developed by Microsoft, Answer Generator scans client mailboxes, segregates queries, and stores them in specific folders.

For each email, one keyword is extracted that suggests the right choice of email template, which is automatically sent in response to the query. The tool also creates a folder for uncategorized emails, where queries that matched more than one keyword are placed. This is where the user seamlessly steps in to pick the right keyword from the list before an automatic response is sent.
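
The tool itself is built in VBA, but the routing idea is easy to sketch. Here is a minimal, hypothetical Python version (the keywords and template texts are invented for illustration):

```python
# Hypothetical keyword -> answer-template mapping for one team.
TEMPLATES = {
    "certificate": "Your course certificate can be downloaded from ...",
    "enrollment": "To enroll in a course, please ...",
    "deadline": "Assignment deadlines can be extended by ...",
}

def route(email_body: str) -> str | None:
    """Return the reply template if exactly one keyword matches.

    With zero or multiple matches, the email goes to the 'uncategorized'
    folder for a human to pick the right keyword -- mirroring the
    human-in-the-loop step described above.
    """
    body = email_body.lower()
    matches = [keyword for keyword in TEMPLATES if keyword in body]
    if len(matches) == 1:
        return TEMPLATES[matches[0]]
    return None  # uncategorized: a human picks the keyword

reply = route("Hi, where can I find my certificate?")
print(reply or "Moved to the uncategorized folder")
```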

The tool also simplifies team management by storing keyword lists assigned to different teams in separate files. This automates email responses separately across multiple teams and enables different files to be stored under the same keyword. This way a single admin can deliver effortless cross-tower support by making it easy to select a different team from the list and maintain several mailboxes simultaneously.

The tool also enables a super-user to add or remove keywords and maintain answer templates. This eliminates the risk of unauthorized changes to the database.

Intelligent automation drives impressive business outcomes

Answer Generator has delivered a range of impressive business outcomes for our Global L&D Service Desk team.

It has reduced the time and effort of our employees by 80%, cutting down the 20 hours per day previously required to just four hours – an outstanding testament to the effectiveness of the tool.

The tool combines AI with human effort, automating the essentials, while retaining the need for a human touch. On top of this, it doesn’t require specific training or installation of additional software, making it very accessible to any team or organization working with repetitive daily customer queries.

Recognition for innovation in intelligent automation

Answer Generator was designed by Capgemini’s HRInnHUB team – an internal function dedicated to continuous innovation and automation in HR – and we’re really proud that it was recognized as the world’s “Best Intelligent Word Recognition Solution” in the 2021 AI Breakthrough Awards.

And as for the future? The HRInnHUB team plans to add a reporting function that enables automated reporting of volumes and SLAs, expanding the scope of the tool at a global level.

To learn how Capgemini’s Answer Generator tool can help streamline your organization’s email categorization and response process through leveraging Intelligent Process Automation, contact: malgorzata.praczynska@capgemini.com

Małgorzata Praczyńska – HR Automation Manager

Małgorzata Praczyńska is the founder and manager of Capgemini’s HRInnHUB team. She specializes in finding automated solutions for a range of HR processes, including project and people management that provide significant cost benefits to clients.


Decoding trust and ethics in AI for business outcomes

Capgemini
2021-07-23

Do you trust Artificial Intelligence (AI) to do what you intend it to do, and do your customers trust you to use AI responsibly? These two perspectives, one internal, one external, are central to your ability to succeed with AI.

So how do we succeed?

The answer is far from simple, but I do think that it has simple foundations, and that from those foundations strong solutions can be built.

To ensure AI can be trusted both internally and externally, organizations must demonstrate that ethics and accountability are embedded across the entire lifecycle, from design to operations.

Trust is a must, Ethics are forever

The European Commission’s digital chief, Margrethe Vestager, agrees with this perspective, saying, “On artificial intelligence, trust is a must, not a nice-to-have.”

The Commission’s Regulation on a European approach for Artificial Intelligence, released in April this year, also underlines the ethical foundations of trust. It emphasizes the need to base innovation on rules that ensure people’s safety and fundamental rights.

In our Data & AI community at Capgemini, we call this “human-centered AI” – AI solutions that ensure human ethical values are never undermined.

The responsibility of organizations

Waiting for regulations to tell you what to do isn’t enough: as the pace of technical advancement increases, so does the potential for AI to become an existential threat to your organization. If you can’t trust your machine learning models to do what they should, how do you know they won’t disrupt your business and decision-making process in the near future? If your customers don’t trust you to use AI responsibly, why would they continue to do business with you?

To help guide organizations, and help them demonstrate to their customers that they are responsible users of AI, Capgemini has developed its Code of Ethics for AI, which includes seven key principles:

1.   Have carefully delimited impact

2.   Be sustainable

3.   Be fair

4.   Be transparent and explainable

5.   Be controllable, with clear accountability

6.   Be robust and safe

7.   Be respectful of privacy and data protection

A business case for trusted AI

From our AI and the Ethical Conundrum report, we know that 70% of customers expect organizations to provide AI interactions that are transparent and fair, and 45% of customers say they would share a negative AI experience with family and friends and urge them not to engage with the organization. There is a real risk of reputational damage – and an associated revenue impact – from not being able to demonstrate the ethical use of AI to consumers. Beyond these external risks, trusted AI provides the robust foundations to ensure AI delivers the expected, positive impact for your business, your customers and employees, and society as a whole.

An example of the sort of active trust that organizations can develop is shown by SAIA (Sustainable Artificial Intelligence Assistant), a tool developed by teams from Capgemini Invent. SAIA recognizes, analyzes, and corrects bias in different AI models, and helps ensure that organizations are not unfairly discriminating against people due to their gender, race, or socioeconomic background when assessing credit risk.

AI techniques such as Generative Adversarial Networks can also help us respect the privacy of individuals while accelerating innovation. For instance, our Sogeti teams have enabled a European health agency to accelerate its research by using their ADA (Artificial Data Amplifier) tool to produce synthetic data that accurately reflects real-world information.

Trusted AI is accelerating business outcomes

From reputational risk and regulatory obligations to moral duty, it’s clear that organizations need to be able to trust AI and to demonstrate they have mastered ethical AI – applying AI technologies in the right way and for the right purpose to build and nurture trust with their customers, citizens, and partners.

Trust is about accelerating and ensuring outcomes: it’s about being able to have confidence that your AI solutions will do what you need, and only that. For your customers and employees, their trust in you to use AI responsibly will be based on their confidence in your ethical foundations for AI. Trusted AI is about accelerating toward an assured outcome, and about being able to deploy AI in a way that doesn’t risk reputational damage with your customers.

So, how do you build this trust in “AI”?

On July 12, I had the honor of joining incredible panelists: Sally Eaves, known as the “torchbearer for ethical tech”; Capgemini’s Chief Innovation Officer, Pascal Brier; Francesca Rossi, AI scientist, IBM Fellow, and IBM AI Ethics Global Leader; and Sandrine Murcia, CEO and co-founder of a fantastic company called Cosmian. You can watch our discussion again here.

For those of you who don’t have time to watch, I will just share my closing statement on how I think you can progress toward a trusted use of AI:

  1. Build your own code of conduct that is in line with your values. Speak about it at board level, with your Data & AI teams, and with the teams that will use AI solutions – ethics and AI is not only an expert discussion! Once you have “your” code, build a simple but efficient governance to apply it; otherwise it will only be nice principles that never get applied.
  2. Train your teams on your code – why it’s important and how to apply it. Provide tools for them to ask the right questions right at the design phase, and equip them with best practices for the delivery of projects, so they can build AI solutions within the right framework.
  3. Set up instances for people to reach out to when they need help – it’s not an easy topic! On our side, we built what we call “flying squads” specifically on ethics and AI – a group of experts that every Data & AI project can reach out to when there is a question to be addressed.
  4. Be very intentional about building diverse and inclusive teams, ensuring your Data & AI teams are representative of society as a whole, to avoid “perspective blindness” – the risk that arises when everyone on your team thinks and looks the same. (To understand perspective blindness, I recommend the excellent book Rebel Ideas by Matthew Syed.)

Share with us your thoughts and challenges – and your progress! It’s not an easy topic, so it’s worth exchanging best practices to move the industry forward.

Seven key lessons from data-sharing masters

Capgemini
2021-07-23

By Zhiwei Jiang, CEO, Insights & Data, Capgemini and Ron Tolido, CTO, Insights & Data, Capgemini

Rather like the saying “it takes a village to raise a child,” it certainly takes an ecosystem to realize the true value of data.

Pardon the metaphor, but as enterprises across the world positively drown in data, deriving new value from it lies in how they source, select, and use only the most appropriate assets. In fact, it takes a flexible and open ecosystem – a data village, if you will – to achieve that. One that thrives on the art of shared data, a collaborative culture, and group initiative. The enterprise’s stakeholders, staff, customers, and bottom line all depend on how it is implemented.

Indeed, data sharing masters – organizations that continuously create new value in shared ownership and accessibility of both internal and external data – have the brightest future. They create superior customer experiences and highly optimized operations and drive innovation faster than their market peers. They may even completely reimagine their business model, claiming their rightful place in a new data economy.

They are the data savants who critically understand that data cannot be gleaned or analyzed in a silo. They share aggregated sources and track efficiencies and customer behaviors across businesses and industries, providing high-value insights to whoever might be looking to use them.

And the difference is often made by data from external sources: the data out there that complements the enterprise’s own data, creating unique, surprising insights and superior, killer algorithms. This is where the notion of data ecosystems comes into play – organizations pooling their data resources in cross-industry partnerships, getting more value out of data for all parties involved.

The Capgemini Research Institute’s brand-new ‘Data sharing masters’ report outlines exactly how important data ecosystems are for future business health, growth, and reimagination. To whet your appetite for it, here are the seven lessons that struck us most:

1. Data ecosystems are taking shape

Admittedly, the notion of data ecosystems is not entirely new. But only now are organizations starting to make a significant impact with them across their business. And it sure pays off, looking at the numbers. Those already engaged in data ecosystems today see customer satisfaction improve by 15%, operational productivity increase by 14%, and costs fall by 11%, year-on-year. This makes the majority of organizations much more bullish about data ecosystems than ever before, expecting to see the same level of benefits in the next three years. Also, 54% state a renewed push to monetize their data as the main reason to get busy with data ecosystems.

2. Data sharing is platform caring

Alongside this obvious enthusiasm, new forms of data sharing are emerging – designed to allow organizations to share data in less intrusive, more anonymous, and rock-secure ways. A next generation of data-sharing platforms is evolving that enables data collaboration with even the toughest competitors, without ever giving up even a fraction of data privacy, security, or ownership. Yet 56% of organizations cite a lack of suitable data-sharing platforms as one of their top challenges. A carefully crafted platform strategy – and accompanying architecture – is hence needed to fully reap the phenomenal benefits of data ecosystems.

3. Data monetization is unexplored

It’s ostensibly on the bucket list of many executives, with chief data and digital officers in the front row. Data monetization – creating new, organic value with data as the key asset – has tantalizing potential, especially within the realm of data ecosystems. Yet it turns out that only 43% of organizations are successfully monetizing their data. First things first: if data monetization is indeed the aim, organizations must ensure they can properly identify the value of their data assets. Only then can they start to think about their data monetization market strategy, data ecosystem choices, and pricing options.

4. Data sharing delivers on investment

Did we already point out that sharing data can bring enormous financial benefits? Just to make sure: we are talking about an estimated $940 million over the next five years with data ecosystems realized on a supranational level. Something to get your mind set on. But it’s going to take money. A whole lot of spending money. Over the next two to three years, most organizations are expected to invest between $10 million and $50 million in data ecosystems (averaging about $40 million). Clearly, this may vary considerably per sector and region, with telecoms and banking set to be the biggest net spenders, and the UK and US the regions most committed to investing in data ecosystems.

5. Data ecosystems thrive on rules

Data powers growth. That’s why, for example, the European Strategy for Data aims to create a leading data-powered society – in a data market worth €550 billion by 2025. As a key part of this strategy, a single market for data is being established that will shape large-scale data gathering, sharing, and use by governments, private companies, smart cities, regional aggregators, and anybody looking to disrupt with data. Rules and regulations thus facilitate trustworthy data sharing between all parties involved. This also spurs innovation. As the pandemic has shown, the more and better-quality data is shared, the stronger the results achieved for the economy and society in general.

6. Data sharing needs active collaboration

The complexity of collaboration is still a barrier: three in five organizations only participate in low-collaboration data exchanges. But shifting the onus onto more advanced collaboration definitely has its benefits. The 14% of organizations currently involved in more intense collaborative data-sharing models are set for a financial advantage to the tune of $378 million over their peers. A good place to kick off more data collaboration can be internal: sharing data with other departments can itself serve as a use case, and it’s useful practice. Alternatively, organizations that struggle to share data internally may learn and benefit from high-exposure collaboration externally.

7. Data ecosystems for positive futures

Data ecosystems can bring much more than ‘just’ financial benefits to the parties involved. Many organizations have already boosted their progress on sustainability goals by sharing data. For example, data aggregated from vehicles can not only bring new revenue streams to automotive firms but can also be used to battle pollution. 60% of organizations mention progress on sustainable development goals or climate change as a top driver to take part in data ecosystems. Organizations that still struggle to put together their financial business case for data sharing may find that collaborating on data for positive futures provides a much more compelling way forward.

For more information on how to become a data-sharing master, download the report.

The missing part: including shadow-IT in efficiency programs

Capgemini
2021-07-23

The cost allocation and cost identification problem

The cost structures of automotive OEMs and manufacturing companies deviate heavily from those of other industries, such as banking or pharmaceuticals. Usually, a central IT department exists in the form of group IT, or centralized IT departments per product group or product line. In our experience, the central IT covers all applications, network services, etc. – down to the production line, excluding the production facilities and the production line itself.

Analyzing the cost distribution of these companies shows that most costs are production-related. Overhead costs – where general IT costs as well as the central IT department’s costs are usually booked – are just a small proportion compared to production costs. A cost-efficiency program within the IT department therefore only covers the central IT costs occurring within group IT, as there are officially no other IT costs within the company. But is this realistic?

Analyzing the existence of shadow-IT costs within the shop floor

We examine the question of whether shadow IT exists using an automotive example:

New production lines within an automotive assembly plant consist of (but not exclusively) a body shop, paint shop, assembly line, and finishing area. Each part of the factory is planned separately and constructed by suppliers specializing in, for example, paint shops, conveyor systems, or robotics. Each of the main production steps works autonomously and is connected to the rest of production through a central steering system that shares production-related information. This steering system is standardized in terms of virtual and physical technology. For example, the paint shop works as an isolated application running the paint shop equipment manufacturer’s software, and the same goes for the assembly-line software solutions (screw guns, etc.). The data from this island is then shared within the central steering system. The problem – or “supplier bias” – is the emergence of massive amounts of shadow IT equipment, which is delivered, installed, and implemented with the production equipment in these isolated networks.

Taking the paint shop example, a paint shop runs:

  • Its own network devices (switches, etc.)
  • Its own physical and virtual servers
  • Its own workstations
  • Its own backup solutions
  • Its own vendor-based software solutions for monitoring, control, and steering
  • Additional customized software solutions for reporting.

From the outside, it’s a black box, as license costs, repair and maintenance fees, and all other IT-related costs do not figure within the official IT costs. We call this the “supplier bias,” as maintenance contracts often include services such as application programming, hardware replacement, and backup services – clearly IT-related costs that are kept in the books as “paint shop maintenance.” From a controlling perspective, this gives a biased view of the costs occurring within these production areas.

Looking at the bigger picture, we can identify various isolated solution landscapes with more-or-less their own data centers, consisting of varying numbers of physical and virtual servers and hundreds of autonomous applications.

To manage and run this environment in critical production facilities, each of the isolated solutions negotiated their own maintenance contracts with the OEM supplier. The contracts mainly include software license agreements, software development services, hardware maintenance, or hardware backup. We call this construct and the IT-related costs within the production “shadow IT” or “hidden IT.”

There are two ways of proving the existence of shadow IT costs:

  1. The simple way: While conducting a walk-through, watch out for servers, network devices, or large screens displaying the production status. In the next step, ask the responsible person who they would call in case of failure – IT, or someone within the maintenance department. Then check the cost allocation with controlling.
  2. The hard way (not recommended): Analyze the maintenance contracts with the OEM suppliers and search for software and hardware solutions. There is usually a variety of contracts per production area, which makes this a time-consuming task.

Operational benefits through shadow IT optimization

After investigating and proving the existence of shadow IT solutions, we recommend including shadow IT within central cost-cutting approaches. This process should be headed with the support of change management, as production fears a loss of control and operational risks from cost measures. The cost responsibility should stay within production, but the methodological support and guidance through the process must come from the central efficiency program.

We can support our clients methodically in three ways:

  1. Cost exploration on shop-floor level with a structured approach
  2. Measure identification for savings and value optimization
  3. Operationalization of the program and implementation of measures.

We do this by applying the Capgemini IT Cost Efficiency Framework to the shadow IT. The framework enables our clients to gain transparency into their shadow IT services. We then implement measures to increase operational efficiency through lower costs and higher operational output.

Figure 1: Capgemini Invent’s Cost Optimization Framework

Here are some examples of levers and measures that could be implemented within production facilities to cut costs while maintaining or improving the performance of IT support:

  • Sourcing: Reduce the cost-per-part through common sourcing, and improve response times through common storage of all production IT hardware (e.g., switches).
  • Partner: Centralize the sourcing for shop-floor software development and introduce standard developer rates per technology among all production areas.
  • Staff: Build a capability tower for IT shop-floor maintenance, containing all software experts currently distributed among several maintenance departments.
  • Infrastructure: Centralize all isolated server solutions within one production data center, implement a central backup solution.

Conclusion:

It’s crucial for the automotive and manufacturing industries to include shop-floor shadow IT in IT’s central efficiency program, in order to achieve cost and operational benefits within their core business.

By bringing in our automotive and manufacturing expertise as well as our IT Cost Efficiency Framework, we help our clients:

  • Uncover shadow IT costs and the cost drivers.
  • Implement cost levers on shadow IT costs.
  • Boost the central efficiency program through additional savings.

IT cost efficiency toolkit for quick potential identification

Capgemini
2021-07-23

The following blog post gives an explanatory example of the implementation of our Capgemini IT Cost Efficiency Toolkit in the context of IT costs.

General obstacles in restructuring programs: missing toolkits and long ramp-up phases

Restructuring demand is often driven by external factors, such as shrinking margins due to general economic conditions or greater competition. Restructuring programs are usually conducted with two major purposes: Reduce cost and/or increase revenues.

Figure 1: Common restructuring approaches

The revenue-optimization side can be linked precisely to products. The cost side, especially general spending such as IT spending, is often not controlled on a product basis and is therefore not transparent in terms of the value added for the company.

In traditional efficiency approaches, the major task is to derive and analyze, from a financial perspective, the cause-and-effect chain of various types of IT spending, in order to identify the right levers. This results in long-running IT efficiency programs with low financial benefits reported in the books. Furthermore, the setup of these programs is costly – we have experienced strong demand for toolkits that identify and track the potential of measures during the program lifecycle, as well as the effect of the program itself. The ramp-up phase is usually conducted using Microsoft Excel as the basic tracking instrument; after 12 months, a software selection process starts that blocks capacity within the restructuring team and causes additional costs.

Let’s take a new path:

The Capgemini IT Cost Efficiency Toolkit for measure identification, tracking, and effect-reporting

To tackle these challenges, we’ve created the Capgemini IT Cost Efficiency Framework, together with a toolkit for tracking and effect-reporting on the program status. Our Capgemini IT Cost Efficiency Toolkit consists of three parts:

  1. The IT Cost Efficiency Framework: quick measure identification based on best-practice levers

Restructuring and efficiency programs are best conducted with clear guidance and structure. Therefore, we’ve created a framework consisting of six focus areas within IT:

  • Application: Enterprise architecture, domain architecture, and the current application landscape
  • Infrastructure: Covers infrastructure topics from physical to virtual provisioning, from a technology and implementation perspective
  • Staff: All staff-related and organizational topics, including organizational layers, internal/external task split, and span of control
  • Partner: Full partner management, contract management of external partners, delivery models, and license management
  • Facilities: Focus on IT facilities such as physical data centers or IT office environments
  • Operating model: Includes capability management, process management, and working methods (e.g., agile).

Figure 2: Cost dimensions within the framework

We’ve divided the framework into five spheres of action and matched a total of over 200 levers and over 400 best-practice measures across the framework and the spheres of action. For each crossing point of a sphere of action (e.g., value optimization) with the framework (e.g., infrastructure), we can provide various best-in-class measures and levers.

Figure 3: Dimensions and optimizers of the toolkit

With this tool, we are able to match existing or already-implemented measures from previous cost initiatives against our database and identify efficiency opportunities in the form of current white spots within our IT Cost Efficiency Framework. The initial analysis and lever-identification phase of cost-efficiency programs is sped up by the framework and the best-practice and lever-matching tool from our toolkit.

  2. Tracking of measures and program progress

Once a new measure for implementation is identified from existing levers or best-in-class measures from previous projects, it is created in the database with its estimated savings potential and a sustainability estimate. The measures can be aggregated in the database to generate a quick overview of the identified savings potential, and a comparison with the total or annualized savings target can be conducted. The status of each measure can be updated independently, using a multi-user interface with built-in rights management. Simple reports for the technical project management team can be generated from the dataset.
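
A minimal sketch of the underlying idea – a measure record plus aggregation against a savings target – assuming Python; the field names and figures are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    annual_savings_eur: float   # estimated yearly savings
    sustainability: float       # 0..1: how durable the saving is expected to be
    status: str = "identified"  # identified / in progress / implemented

# Illustrative measures from a cost-efficiency program.
measures = [
    Measure("Consolidate test servers", 120_000, 0.9, "in progress"),
    Measure("Renegotiate license contract", 250_000, 0.8),
    Measure("Centralize backup solution", 80_000, 0.7, "implemented"),
]

savings_target = 600_000
identified = sum(m.annual_savings_eur for m in measures)
weighted = sum(m.annual_savings_eur * m.sustainability for m in measures)

print(f"Identified: {identified:,.0f} EUR ({identified / savings_target:.0%} of target)")
print(f"Sustainability-weighted: {weighted:,.0f} EUR")
```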

  3. Management dashboard in Microsoft PowerBI

In several cost-cutting programs at our clients, we’ve experienced a separation between data-gathering and data-displaying tools. Data gathering and consolidation is mainly done in Microsoft Excel; manually generated reports are then exported from Excel into Microsoft PowerPoint, QlikView, or Microsoft PowerBI. The main shortcoming we’ve experienced is data consistency between reporting dates, as the data source is often a consolidated Excel sheet with various sources and no automatic versioning.

To tackle these challenges, we’ve built a web-based dashboard on top of our database to display real-time data and graphs. Currently, we are using Microsoft PowerBI for the dashboard, but this can be replaced with any other BI or dashboard solution according to our customers’ needs. Our current dashboard can be fully customized according to our clients’ needs, with individual logos, corporate identity, or individualized graphs and heatmaps.

Figure 4: Schematic illustration of the dashboard

Conclusion:

Our Capgemini IT Cost Efficiency Toolkit including the Capgemini IT Cost Efficiency Framework can be established quickly at the client site to speed up and boost efficiency programs.

We support our clients in:

  • Quickly identifying cost levers and cost measures with our 200+ best-practice levers framework
  • Tracking the effects on current measures and the program status
  • Generating board reports and displaying them live in a PowerBI frontend.

For more information on our toolkit contact:

Lars Stommer

Who needs high-code developers? Citizen development is here for Financial Services

Capgemini
2021-07-23

If it is properly controlled and governed, citizen development can solve the desperate shortage of dev resources. Picture this: your people have a great idea to accelerate how you deliver on new business requirements. All you need to make it happen is some professional developers. Easy, right? Unfortunately not. Finding highly skilled developers in the current marketplace is a challenge for every enterprise in every vertical. Sadly, you aren’t the only business in the world with an IT need … and for low-stakes, non-critical business processes, the waiting list is especially long. This is where citizen development comes to the rescue.

“a ready-to-go army – people who don’t need business analysts to translate what the business requirements are, because they are the business requirement!”

Of course, we need to be careful not to over-romanticize the solution. There are legitimate concerns about how you keep control when you let the business into the sacred realms of IT. However, they need to be put into context: with strong governance, financial services organizations have a ready-to-go army of dev resources waiting to be unleashed – people who don’t need business analysts to translate the business requirements, because they are the business requirement!

How it can work

Done well, this is about empowering your employees, safely, to drive business forward. Citizen development enables all layers of your organization to create their own apps – without needing in-depth IT knowledge. They are given access to the right tool sets to create individual application experiences within a team that’s governed by experienced IT oversight. A shared IT backlog burden between business and IT can be shrunk at speed by using each other’s knowledge to close the gap. Business users no longer need to rely on their “standard” office tools, but can create the tools they need without waiting for IT to help them:

  • To keep control, we provide a low-code center of excellence (CoE) enablement team to market your low-code environment within the organization in a federated model: a centralized team handles value enablement, while on the platform multiple business and IT teams can focus on simply delivering value.
  • The solution also unlocks any platform’s potential for rapid development by promoting reuse. Keeping the use case for rapid application development in view is key to determining the architectural agility of your organization.

What you’re going to see happening

The gap between business and IT closing

By sharing the IT backlog burden between business and IT, you can start using each other’s knowledge to speed up the application creation process and improve business productivity – enabling IT to really focus on the core business-critical systems for financial services firms.

An increase in company agility

The future is here. Big tech players are jumping on the bandwagon, so it’s easy to integrate with your existing IT landscape. By removing the dependency on IT, businesses can respond to the situation with agility and adopt quick changes to shorten the time to value.

Your workforce empowered

Today’s workforce is tired of constraint. They love freedom and agility, and they have untapped skills to deploy that will improve their own job satisfaction. By giving them the freedom to realize their great ideas, you empower them to empower your organization.

Be in control

With a center of excellence (CoE) structure, you are in full control. A CoE guides your citizen developers to follow the platform guardrails, and coaches them to reuse existing components to ensure quality and integrity.

Be bold: adopt citizen development

At Capgemini, we have consultants who are masters on various low-code platforms. We do a free assessment and advise you on the right tool set based on your comfort level and preferences.

Instead of looking at citizen development as just a productivity enhancer, we think it’s about unlocking your people and your business to enjoy the full potential of rapid application development.

It’s time for a (r)evolution

By implementing a culture of citizen development, you can get the future you want and throw off the constraints imposed by scarcity of development resources. By giving your users, and therefore your business, tools to create their own agile workspace, you do much more. In short, it’s time for a (r)evolution.

Authors


Vincent Fokke 
Chief Technology Officer (CTO) Capgemini FS Benelux at Capgemini

Guido Bosch
Solution Architect at Capgemini

Lean IT: Data driven steering of IT organizations

Capgemini
2021-07-23

In new (agile) as well as hybrid (bimodal) IT organizations, we face a lot of common misconceptions about organizational steering: “We are agile, we don’t need steering metrics” is just one of them, but by far the most common.

Agile collaboration methods in particular – such as scrum – are designed to constantly measure the performance of a team, in order to adjust the team structure or the workload if performance changes unfavorably. Other methods, such as lean startup, go one step further – they critically measure each facet of the target product; if performance does not follow the initial hypothesis, the product is altered or pivoted.

“So why should we work less accurately in organizational steering than at the base, where the actual work is carried out?”

In short: There is no rational reason.

With the broad emergence and availability of easy-to-implement AI technologies and comprehensive data collections on all our activities in work and private life, it has never been easier to implement intelligent, data-driven systems. The main problem is image – KPI systems carry negative connotations for managers and employees, as they tend to “measure everything, stating nothing.” Examining the KPI systems implemented in IT companies in the 90s and later shows a strong focus on the measuring process itself, or on input factors, rather than on organizational output factors. Furthermore, everything identified as measurable was highlighted as a “key performance indicator,” without any judgment of its statistical significance or its steering impact on the organization.

Today, we don’t need 50+ key performance indicators. We need a few that tell us where we need to adjust the organization, and how, in order to modify the organizational output. IT organizations today are highly leveraged through external suppliers, various levels of on- and offshore sub-suppliers, non-comparable levels of system customization, and business-adjusted individual system landscapes. Focusing on popular, allegedly comparable input factors – such as total IT costs, or total headcount split into service, development, etc. – gives zero indication of the effectiveness and efficiency of the IT organization, and results mainly in perplexity about the next actions. Just imagine: your Java developer costs 20% more than the “industry benchmark” – but is the benchmark developer also a full-stack developer who saves you six figures in license costs through efficient use of cloud infrastructure?

“Ok, same product, different packaging?”

In short: Same ingredients, different concoction, superior outcome.

Output-oriented KPIs are generally not as easily comparable as input KPIs, as they are individually fitted to the organization with the aim of increasing the efficiency and steerability of the analyzed organization.

THE DATA DRIVEN STEERING COOKBOOK

Step one – determine the main functional, personnel, and technical issues driving the organization.

To do this, we search for high-level assumptions such as “We have a problem with the time-to-market of our projects,” then identify the relevant factors for time-to-market, such as labor availability, technological expertise, and infrastructure provisioning. We cluster the main assumptions in the form of measurable metrics, such as “internal resource utilization” and “external resource utilization,” and break them down to the factor level where needed – e.g., “sickness absence” or “employee fluctuation.” Statistical models, especially time-series models, are then applied to the existing factor data to identify correlations, statistical significance, and, where needed, other statistical metrics. One outcome might be that an increase in absence leave leads, with a time shift of 3-6 months, to an increase in employee fluctuation. This is usually followed by projects running completely over budget and, another 3-6 months later, by rapidly declining customer satisfaction.
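
A minimal sketch of such a time-shifted correlation check, assuming pandas and monthly HR series (the file, column names, and lag range are illustrative):

```python
import pandas as pd

# Hypothetical monthly HR metrics.
df = pd.read_csv("hr_metrics.csv", index_col="month", parse_dates=True)
absence = df["sickness_absence"]
fluctuation = df["employee_fluctuation"]

# Correlate absence with fluctuation k months later, for k = 0..12.
for lag in range(13):
    corr = absence.corr(fluctuation.shift(-lag))
    print(f"lag {lag:2d} months: correlation {corr:+.2f}")

# A peak around lag 3-6 would support the hypothesis that rising absence
# precedes rising employee fluctuation by roughly half a year.
```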

Step two – implement intelligence.

As we see in the example, the input factors are closely entangled with each other – the complication is to build a model with a small set of key metrics that considers the input factors, measures the organizational output, and generates indications for management interventions. Conducting a dependency analysis on more than 50 factors usually lets us identify around 10-25 factors per organization that are crucial for its efficiency and output. As an organization has several, often contradictory, goals, we categorize the key factors according to their relevance to the achievement of each individual goal.

For each goal, we use these factors to build forecasting metrics based on the gathered data and past correlations. Depending on the dataset, we can use statistical models (e.g., ordinary least squares, ARIMA) or machine-learning models such as k-nearest neighbors or random forests. Additional data pre-processing steps might be needed in order to reach the expected level of data consistency.
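
As an illustration, a hedged sketch of one such forecasting metric using lag features and a random forest, assuming pandas and scikit-learn (the feature construction and file are illustrative, not a fixed methodology):

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical monthly series of a key factor, e.g., employee fluctuation.
y = pd.read_csv("fluctuation.csv", index_col="month", parse_dates=True)["rate"]

# Build lag features: predict this month from the previous six months.
lag_cols = [f"lag_{k}" for k in range(1, 7)]
frame = pd.DataFrame({f"lag_{k}": y.shift(k) for k in range(1, 7)})
frame["target"] = y
frame = frame.dropna()

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(frame[lag_cols], frame["target"])

# One-step-ahead forecast from the six most recent observations (newest first).
latest = pd.DataFrame([y.iloc[-6:][::-1].to_numpy()], columns=lag_cols)
print("Next-month forecast:", model.predict(latest)[0])
```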

Step three – set up the data driven management cockpit.

After defining the measuring metrics and calculating the as-is KPIs and their forecasts, the gathered information should be displayed to management. We recommend building a management dashboard with 5-10 high-level KPIs and a drill-down option wherever necessary – though an economical use of drill-downs is advised. The recipient at CxO level should immediately see the status and where their action is required – we suggest an appropriate use of graphics with clear, color-based indications.

The underlying technology stack depends on the IT infrastructure and the existence of in-house tools and competencies. From our previous experience, we can recommend Python (with pandas, TensorFlow, scikit-learn, etc.) as open-source software for the data science and predictive part; full data pipelines, from ingestion to processing of large amounts of data, can be implemented with KNIME. Deployment is recommended as containers in a cloud environment. Due to its easy configuration and implementation, we usually use Microsoft PowerBI as an out-of-the-box frontend solution.

Organizational and customer benefits

The major benefit is not only understanding the cause-effect relationships between several organizational factors, but also the gain in advanced steering competence at the organization’s CxO level. Data-driven steering enables:

  • the CIO to allocate more resources to software systems before they crash – based on predictions of incidents and usage
  • the CFO to adjust the financial planning before projects run completely over budget
  • the board to prevent employee fluctuation.

Together, these factors lead to a higher and more professional level of customer-centric service provisioning by the IT organization.