Let’s look at how the logic of the argument outlined above plays out in practice.
At Capgemini, we’ve been working with a large insurance company based in France. Each year, around 50,000 dentists from across the country produce in the region of 250,000 dental care quotes, which the company must assess against each customer’s health cover before responding to the customer. To date, dentists have had no pre-defined template for their quotes. To handle the workload, the insurer has employed 25 full-time employees in five regional centers across France.
The previous manual process (see Figure 7) worked as follows: the insurer’s employees would read the quotes, input them into the legacy system, validate the reimbursement, and then communicate the outcome to the customer making the submission. On average, each quote took nine minutes to process.
Our intermediate proof of concept (PoC) has introduced an RPA solution in which data from the majority of quotes is read and extracted via optical character recognition (OCR), followed by a data check performed by human operators. The rest of the process – input, validation, and feedback – is now handled via RPA. As a result, individual task time now stands at three minutes – one-third of the original time – and the process requires only 12 full-time employees, just under half the original 25 (see Figure 7).
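The shape of that intermediate pipeline – OCR extraction, a human data check, then RPA for input, validation, and feedback – can be sketched as below. This is a minimal illustration, not the client’s actual implementation: the `Quote` structure, the "field=value" scan format, and the validation rule are all invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Quote:
    raw_scan: str
    extracted: Optional[dict] = None
    validated: bool = False

def ocr_extract(quote: Quote) -> Quote:
    # Stand-in for the OCR step: parse "field=value" pairs from the scanned text.
    quote.extracted = dict(pair.split("=") for pair in quote.raw_scan.split(";"))
    return quote

def human_check(quote: Quote) -> Quote:
    # In production, human operators would review and correct the extracted
    # fields here; this sketch passes the quote through unchanged.
    return quote

def rpa_input_validate_feedback(quote: Quote) -> Quote:
    # Stand-in for the RPA bot: legacy-system input, cover validation,
    # and customer feedback, collapsed into one toy validation rule.
    quote.validated = "treatment" in quote.extracted
    return quote

# The three stages run in sequence for each incoming quote.
q = rpa_input_validate_feedback(human_check(ocr_extract(Quote("treatment=crown;price=450"))))
```

The point of the sketch is the hand-off order: only the data check remains a human task, while the surrounding steps are mechanical.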
We’re now helping our client to go a stage further. An artificially intelligent supervisor, which we call our “AI orchestrator,” is being introduced to manage the hand-off between the machine – the RPA system – and human intervention. This orchestrator uses a machine learning algorithm to classify operations into those that can be managed by the machine (around 85% of the total) and those that need to be processed by people (the remaining 15%). The learning process of such an orchestrator can be unsupervised, supervised, or reinforcement-based, depending on the nature of the data and the task to be automated. Here, it is supervised, using the results of quality controls as labels.
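In miniature, a supervised router of this kind learns a decision rule from labeled quality-control outcomes and then assigns each new operation to the machine or to a person. The sketch below is a deliberately simple stand-in for the real orchestrator: it learns a single OCR-confidence threshold from invented historical records, where the label says whether the machine handled that quote correctly.

```python
def learn_threshold(records):
    """Pick the confidence threshold that best separates machine-safe quotes.

    records: list of (ocr_confidence, machine_ok) pairs, where machine_ok
    comes from quality-control results on past quotes (the supervision signal).
    """
    candidates = sorted({conf for conf, _ in records})
    best_t, best_acc = 0.0, -1.0
    for t in candidates:
        # Predict "machine" when confidence >= t; score against the labels.
        correct = sum((conf >= t) == ok for conf, ok in records)
        acc = correct / len(records)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def route(ocr_confidence, threshold):
    """Orchestrator decision: who handles this operation?"""
    return "machine" if ocr_confidence >= threshold else "human"

# Invented quality-control history: high-confidence extractions were
# processed correctly by the machine, low-confidence ones were not.
history = [(0.95, True), (0.90, True), (0.85, True), (0.60, False), (0.50, False)]
t = learn_threshold(history)
```

A production orchestrator would use many features and a richer model, but the structure is the same: quality-control outcomes supervise a classifier whose output is a routing decision.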
The AI orchestrator won’t always make correct decisions about individual operations, and may allocate to a person an operation that could have been handled by the machine. In such instances, process quality is likely to rise, but so will the cost and time associated with that task. It’s a trade-off, and over time it can be precisely quantified with our framework.
In the case of our insurance client, the framework we have established to handle machine-driven and human tasks has enabled us to anticipate that our AI orchestrator model will increase time savings to around 80%. In other words, a task that previously took nine minutes on average could now be completed in under two minutes. Meanwhile, staffing can be reduced still further, to just six full-time employees. What’s more, overall quality is expected to rise by more than 87%.
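The arithmetic behind a time-savings estimate of this kind is straightforward: the expected task time is a weighted average of the machine path and the human path. The figures below are illustrative assumptions, not numbers from the study – we assume the automated path takes about 0.1 minutes per quote and the human path keeps the original nine-minute average, with 85% of quotes routed to the machine.

```python
def expected_task_time(p_machine: float, t_machine: float, t_human: float) -> float:
    """Expected minutes per quote when a share p_machine is fully automated."""
    return p_machine * t_machine + (1 - p_machine) * t_human

def time_saving(baseline: float, new: float) -> float:
    """Fraction of the baseline processing time saved."""
    return 1 - new / baseline

# Illustrative assumptions (not the client's measured figures):
# 85% of quotes automated at ~0.1 min each, 15% handled by people at ~9 min.
avg = expected_task_time(0.85, 0.1, 9.0)
saving = time_saving(9.0, avg)
```

Under these assumptions the expected time per quote is about 1.4 minutes, a saving in the region of 84% – which is how a routing share of 85% can translate into overall savings near the 80% mark.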
Implementation of these PoC exercises has been rapid. Once the model is established, the main task is to coach the AI orchestrator using data sets as described earlier in this paper – and the more data, the better.
A detailed technical paper on this application by Taoufik Amri and Antoine Grappin discusses this topic in more detail, and is due for publication in 2019.
Figure 7. AI orchestration for a French insurance company