Industrialising Artificial Intelligence: Scaling AI through an optimal operational setup to enable agility, speed, and business value.

2 Jun 2021

In a world that is rapidly evolving into an “as-a-service” economy, firms must look inward to ensure that their operational frameworks complement the need to rapidly scale Proofs of Concept (PoCs) into widely applied AI solutions.

In what follows, we provide a commentary on the five key components to consider when reviewing your own operation.

The five key components

Source: Capgemini

As applications of AI are rooted in the technologies they leverage, let us start our journey with core technology. With global IT services spend approaching $1,000bn, an array of vendors can provide the end-to-end functionality a business needs to advance its AI ventures. However, such a wide array of choices often makes it hard to see the forest for the trees. Businesses must ensure that PoCs are built on a limited (ideally single) set of core technologies. Supporting multi-vendor stacks at the PoC stage often incurs significant integration costs and delays with few of the corresponding benefits. Only when scaling do we advise incorporating additional technologies, and even then we recommend that businesses remain aware of which of each vendor's services the successful operation of the AI depends on.

In concert with the core technology, it is vital that any widescale application of AI has a corresponding live support apparatus that can effectively cope with employee demand. This can remain ad hoc at the PoC stage, where the limited technology coupled with informed users allows better self-service, but it becomes mandatory for any widescale application of AI. For businesses looking to sell that AI as a product or service, it is also important that the live support model incorporates multiple channels of communication and several monetisation frameworks suitable for the product or service being provided.

Last in our trio of technology-focused commentaries is disaster recovery (DR). The end goal of any DR apparatus is to minimise operational loss to a safe state, balanced against the cost required to achieve that state. The vital factor for AI applications is an understanding of the sensitivity of the data used, and of the differing responsibilities associated with recovering the key components of the application versus the core data itself.

Our final stop on this journey is the operating model itself, which we couple with change management to highlight how the two must be considered together. To ensure that PoCs are given the opportunity to mature, we advise a structured transition between the micro-operating models fostered within individual teams and incubators, and the macro-operating model driving the relentless pace observed in production environments. As this transition occurs, it is crucial that the change management process provides a clear view of KPIs, people, and sentiment to those accountable for success.