
Monitor data proactively to increase model resilience

Padmashree Shagrithaya
July 22, 2020

During the pandemic, some companies have questioned the value of their predictive models: the predictions can seem to bear no relation to reality. They are also unsure how to use those models during the recovery period, because the data collected during the crisis seems too disordered to generate accurate predictions.

Look at the data, not the model

The term “model drift” is often used to describe this problem, but I prefer “concept drift” or, especially, “data drift”, because the issue arises from the data rather than from the model itself. The disordered data collected during a crisis bears little resemblance to the data on which the model was trained.
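To make the idea of data drift concrete, here is a minimal sketch in Python. It assumes tabular numeric features and uses a two-sample Kolmogorov-Smirnov test, which is one common way to compare distributions, not necessarily the method Capgemini applies; the names (training_df, live_df) and the 0.05 significance level are illustrative.

```python
# A minimal data-drift check: compare each numeric feature's recent
# distribution against the distribution the model was trained on.
import pandas as pd
from scipy.stats import ks_2samp

def detect_drift(training_df: pd.DataFrame, live_df: pd.DataFrame, alpha: float = 0.05) -> dict:
    """Flag numeric features whose live distribution differs significantly
    from the training distribution (two-sample Kolmogorov-Smirnov test)."""
    drifted = {}
    for column in training_df.select_dtypes("number").columns:
        statistic, p_value = ks_2samp(training_df[column], live_df[column])
        if p_value < alpha:  # distributions differ beyond chance at level alpha
            drifted[column] = round(statistic, 3)
    return drifted  # e.g. {"units_sold": 0.41} when drift is detected
```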

To ensure that models can continue to deliver value during a crisis, Capgemini has developed an approach that separates the disordered data into two sets. One set can be analyzed to understand what has been going on during the crisis. The other set can be used by the model to predict what will happen during the recovery period.

Example: Predicting consumer demand during and after a crisis

A retailer selling healthcare products had built a model to forecast demand. During the pandemic, demand for certain products, such as hand sanitizer and painkillers, suddenly soared, while demand for travel-related products, such as insect repellent and sun lotion, plummeted. The model’s earlier predictions had little validity, and the retailer could not see how to predict demand in the recovery period. With our approach, the retailer was able to separate the COVID-related “noise” in the data from the data that could be used to predict the future. This meant the model could continue to provide useful predictions both during and after the crisis.
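One simple way to picture this kind of separation is sketched below: rows whose demand deviates sharply from a product's pre-crisis baseline are set aside as crisis "noise" to be analyzed on their own, while the remaining rows stay available for forecasting. The z-score rule, the 3-sigma threshold, and the column names are assumptions for illustration, not the article's actual method.

```python
# Hypothetical split of a demand series into crisis "noise" and usable data.
import pandas as pd

def split_crisis_noise(sales: pd.DataFrame, baseline_end: str, threshold: float = 3.0):
    """sales has columns ['date', 'product', 'units'], with 'date' as datetime.
    Rows after baseline_end whose demand sits more than `threshold` standard
    deviations from that product's pre-crisis mean are treated as crisis
    noise; everything else stays usable for forecasting."""
    baseline = sales[sales["date"] <= baseline_end]
    stats = (baseline.groupby("product")["units"]
                     .agg(["mean", "std"])
                     .reset_index())

    merged = sales.merge(stats, on="product", how="left")
    z_score = (merged["units"] - merged["mean"]) / merged["std"]
    is_noise = (merged["date"] > baseline_end) & (z_score.abs() > threshold)

    crisis_noise = merged[is_noise].drop(columns=["mean", "std"])   # analyze the crisis itself
    usable = merged[~is_noise].drop(columns=["mean", "std"])        # feed the forecasting model
    return crisis_noise, usable
```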

As well as keeping the model working during a crisis, this approach has another major benefit: it can be used proactively to make the model resilient. Monitoring the incoming data continuously with automated processes gives better results than the usual approach of checking retrospectively for divergence between predictions and actual outcomes. It is also more cost-effective than retuning the model at fixed intervals regardless of need.
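As a rough sketch of what such proactive monitoring could look like, the loop below runs a drift check on each new batch of data and triggers retraining only when drift is found. It reuses the hypothetical detect_drift helper from earlier; retrain stands in for whatever retraining pipeline a team actually runs.

```python
# Proactive monitoring: check the input data itself on a schedule and
# retrain only when drift is detected, not at fixed intervals.
def monitor_and_retrain(training_df, latest_batch_df, retrain):
    drifted = detect_drift(training_df, latest_batch_df)
    if drifted:
        print(f"Drift detected in {sorted(drifted)}; triggering retraining.")
        retrain()
    else:
        print("No significant drift; model left unchanged.")
```

Run on a daily or weekly schedule, a check like this catches divergence in the data before predictions and actual outcomes have visibly drifted apart.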

To find out how your model can always deliver value, even in a crisis, contact the authors:
Padmashree Shagrithaya, VP & Head – Analytics, AI and Visualization
Chandrasekhar Balasubramanyam, VP – Insights and Data