
Mapping Attributes of an Ethical AI Journey: From Concept to Full Deployment

Aatish Thakerar, Sergi Capape, Danial Khan, Joel Brocklehurst
1 Sep 2021

Thinking about how AI can become less of a one-time experiment and more of a long-term deployment is a worthy endeavour, and one that is likely to become ever more commonplace in the years ahead.

In past weeks, as we have sought to outline the journey from an AI Proof of Concept (PoC) to a scaled-up, industrial-style deployment, we have focused on very specific pillars: an introduction to our series, followed by the value, quality, operational feasibility, and security aspects of facilitating such a transition.

Operational Feasibility of AI Scaling

Growing any AI-backed deployment requires a closer look at operational feasibility – and because AI applications are rooted in the core technologies they leverage, the journey will always start there.

In concert with technology, it is vital that any widescale application of AI has a corresponding Live Support apparatus that can effectively cope with employee demand. This extends conceptually to Disaster Recovery, with the end goal of minimising operational loss balanced against cost. Lastly, the Operating Model, coupled with the Change Management framework, must be considered to give PoCs the opportunity to mature and release value back into the business.

Fig. 1: Overview of Operating Model and Change Management framework

Source: Capgemini

Securing your Deployment

Much of the operational feasibility is dependent on the security of the technology involved. However, such a simple assumption does not consider the comprehensive task of actively securing the deployment, a key component to building trust in any public technology of the future.

The first task of any effort to secure industrialised AI deployments is to verify the authenticity of the model in question, namely the continuity of the model’s design/architecture. Model security is especially important where AI is to be used in critical and/or real-time systems, such as medical condition auto-classification or autonomous driving.

The second task is to protect your AI’s overall architecture. Most Public Cloud environments used for such deployments today allow users to undertake these activities via a simple dashboard, making the security of large-scale AI far easier for less technical users.

The final task of securing AI is perhaps the most contentious: the ethics and explainability (ability to transparently justify the AI outputs) of Machine Learning. Without a proactive, inclusive approach to building an ethical AI, and the gradual compilation of trust that comes with such a process, the widespread adoption of AI in business, industry, and society is likely to be inhibited often and contested endlessly.

Assessing the Value of Applied AI

With a proper understanding of ethical artificial intelligence, and the role that widespread trust in AI technology plays in its adoption, it is far easier to develop methodologies reliable enough to assess the value of AI applications; without this, an organisation will struggle to become an AI-powered enterprise.

AI applications move through three distinct phases: pilot, optimisation, and production. Rarely will we see tangible business benefit delivered outside of the production phase, but it is critical to be able to justify the value of an application in order to move it from pilot to optimisation to production, where it can begin to provide business benefit.

Utilising measures at all three stages of the development lifecycle enables organisations not only to measure AI effectiveness and justify additional spend to grow AI capability, but also to assess and select use cases. The ability to select the correct use cases ensures AI spend is focused on driving tangible business value, and on proving its worth.
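As an illustration only (the article does not prescribe a specific mechanism), this phase-gating idea can be sketched as a value check at each lifecycle stage. The phase names follow the article; the function, value scores, and thresholds are hypothetical:

```python
# Hypothetical sketch: gating an AI use case through pilot -> optimisation -> production,
# advancing only when its measured value justifies the move.

PHASES = ["pilot", "optimisation", "production"]

# Minimum value score (e.g. projected benefit relative to cost) required to
# advance out of each phase. The numbers are purely illustrative.
ADVANCE_THRESHOLDS = {"pilot": 1.0, "optimisation": 2.0}

def next_phase(current_phase: str, value_score: float) -> str:
    """Return the phase the use case should be in after a value review."""
    if current_phase == "production":
        return "production"  # already delivering business benefit
    if value_score >= ADVANCE_THRESHOLDS[current_phase]:
        return PHASES[PHASES.index(current_phase) + 1]
    return current_phase  # value not yet justified: stay and iterate

print(next_phase("pilot", 1.5))         # advances to optimisation
print(next_phase("optimisation", 1.5))  # stays in optimisation
```

A gate like this makes the justification step explicit: spend only grows when the measured value clears the bar for the next stage.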

Data Quality Fuels AI Growth

Much of the worth of AI, then, ultimately depends on the quality of the data that fuels it. Poor data quality can lead to bias, spurious correlations, misidentification of trends, and many other negative effects in an AI model's results. The AI hierarchy of needs represents the stages that successful data-driven organisations navigate when developing AI initiatives with prospects of industrialisation. One should not progress to a higher level if the requirements of lower, foundational levels are not satisfied.
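To make the data-quality point concrete, here is a minimal, hypothetical sketch of the kind of basic checks an organisation might run before training; the function name and report fields are illustrative, not part of any specific toolset:

```python
# Hypothetical sketch: minimal data-quality checks (missing values, duplicate
# records) of the kind that, left unchecked, degrade an AI model's results.

def quality_report(rows: list[dict]) -> dict:
    """Summarise basic quality problems in tabular data held as a list of dicts."""
    missing = sum(1 for row in rows for value in row.values() if value is None)
    seen, duplicates = set(), 0
    for row in rows:
        key = tuple(sorted(row.items()))  # canonical form of the record
        if key in seen:
            duplicates += 1
        seen.add(key)
    return {"rows": len(rows), "missing_values": missing, "duplicate_rows": duplicates}

data = [
    {"id": 1, "spend": 100.0},
    {"id": 2, "spend": None},   # missing value
    {"id": 1, "spend": 100.0},  # exact duplicate
]
print(quality_report(data))  # {'rows': 3, 'missing_values': 1, 'duplicate_rows': 1}
```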

Aim for a proof of concept that fulfils all stages (data collection, data flow, analytics and AI) and iterate; build the pyramid, then grow it.

Fig. 2: The AI hierarchy of needs

Source: Capgemini

A Long-Term View of AI

Bringing together these four aspects of industrialising AI – value, quality, operational feasibility, and security – it's clear that there is no single consensus on how AI can be industrialised. That said, we hope that this series has brought clarity to those stakeholders – technical and non-technical – looking for a good place to start. Thinking of AI as a long-term deployment rather than a one-time experiment remains a worthy endeavour, and one likely to become ever more commonplace in the years ahead.