Machine learning is rarely sufficient, in isolation, to segment your target population into (for example) low-risk and high-risk groups, especially in highly imbalanced, fraud-like problems. This is exacerbated when the emphasis is on identifying the negligible-risk population rather than the potential frauds. Following on from Lee Brown’s blog, in this post I want to talk about how, in an assurance scoring framework, you can combine multiple data science techniques to identify the negligible-risk cohort that can be fast-tracked through processing, allowing investigators to focus their resources elsewhere.
The figure below shows how these techniques can be chained together into an assurance scoring framework.
Firstly, business rules can be used to solve a variety of problems and to incorporate business users’ intuition. I’ve found that a successful approach is to think of the outcome of a machine learning classification algorithm as providing a high-risk bucket and a low-risk bucket. With the assurance scoring focus being on identifying low-risk applications, business rules can then be used to push cases out of the low-risk bucket into the high-risk bucket. A good example: if an application has key information missing that you would normally expect to be provided, you might always want to investigate it further. Another example: when new data sources add features that the good/bad labels in your training set have no knowledge of, and which therefore no algorithm can profile, business rules can begin to account for them. Incorporating business rules also has the advantage that it can dispel some of the myths around the ‘black-box’ machine. No longer are you relying purely on a complex algorithm to determine risk; you are also able to instil business knowledge explicitly. This can really help from a business change perspective when you want to convince users of the credibility of the results.
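To make the bucket-escalation idea concrete, here is a minimal sketch. The field names, rules, and the `new_source_flag` feature are all illustrative assumptions, not part of any specific framework:

```python
def apply_business_rules(application, risk_bucket):
    """Escalate a low-risk application to high risk if any rule fires.

    `application` is a dict of field name -> value; `risk_bucket` is the
    bucket assigned by the machine learning classifier ("low" or "high").
    """
    if risk_bucket == "high":
        return "high"  # rules only ever push cases out of the low-risk bucket
    # Rule 1: key information is missing that we would expect to be provided.
    required_fields = ["name", "address", "date_of_birth"]
    if any(application.get(field) in (None, "") for field in required_fields):
        return "high"
    # Rule 2: a feature from a new data source that the model was never
    # trained on, so only a rule can account for it.
    if application.get("new_source_flag"):
        return "high"
    return "low"

app = {"name": "A. Smith", "address": "", "date_of_birth": "1980-01-01"}
print(apply_business_rules(app, "low"))  # missing address -> "high"
```

Because the rules only move cases from low to high risk, they can never weaken the classifier's high-risk decisions, only tighten the low-risk bucket.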
But that’s only the first step in going beyond machine learning. You can also use historical data for anomaly detection. By comparing the number of applications received for a particular postcode (say) against the average expected, the volume of applications can be monitored; if the number spikes, all of those applications can be pushed out of the low-risk bucket for further investigation. Anomalies come in all shapes and sizes, and it’s the business context that will really determine what counts as an anomaly.
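A simple way to sketch this kind of volume monitoring is a threshold against the historical mean. The data and the three-sigma threshold below are illustrative assumptions; a real deployment would tune the threshold to the business context:

```python
import statistics

def volume_anomalies(history, current, n_sigma=3.0):
    """Flag postcodes whose current application count spikes well above
    the historical average.

    `history` maps postcode -> list of past per-period counts;
    `current` maps postcode -> this period's count.
    """
    flagged = []
    for postcode, counts in history.items():
        mean = statistics.mean(counts)
        stdev = statistics.pstdev(counts) or 1.0  # guard against zero spread
        if current.get(postcode, 0) > mean + n_sigma * stdev:
            flagged.append(postcode)
    return flagged

history = {"AB1": [10, 12, 11, 9, 10], "CD2": [5, 6, 5, 4, 6]}
current = {"AB1": 11, "CD2": 40}
print(volume_anomalies(history, current))  # ['CD2']
```

Any postcode that is flagged would then have all of its current applications pushed out of the low-risk bucket for investigation.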
You can then go a step further by using graph analytics to identify any low-risk applications that have hidden links to high-risk applications, and flag those cases for further investigation. These final graph traversals work best when probabilistic matching methods are used to link people and locations together. And by taking a cautious approach, where you don’t need to be 100% certain of a particular match (because flagged cases will be investigated further, separately), you can boost the confidence that the low-risk bucket really is low risk.
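The traversal itself can be as simple as a bounded breadth-first search over a graph of applications and the entities they share. In the sketch below the edges are hard-coded; in practice they would come from probabilistic matching of people and locations. All identifiers and the two-hop limit are illustrative assumptions:

```python
from collections import deque

def linked_to_high_risk(edges, high_risk, app_id, max_hops=2):
    """Breadth-first search: is app_id within max_hops of a high-risk app?

    `edges` is a list of (node, node) pairs; nodes can be applications or
    shared entities such as addresses. `high_risk` is a set of application
    ids classified as high risk.
    """
    adjacency = {}
    for a, b in edges:
        adjacency.setdefault(a, set()).add(b)
        adjacency.setdefault(b, set()).add(a)
    seen, queue = {app_id}, deque([(app_id, 0)])
    while queue:
        node, hops = queue.popleft()
        if node in high_risk and node != app_id:
            return True
        if hops < max_hops:
            for neighbour in adjacency.get(node, ()):
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append((neighbour, hops + 1))
    return False

# app1 shares an address with app9, which has been classified high risk.
edges = [("app1", "addr:12 High St"), ("app9", "addr:12 High St")]
print(linked_to_high_risk(edges, {"app9"}, "app1"))  # True
```

The cautious approach described above corresponds to including even uncertain matches as edges: a false link only sends a case for investigation, never into the fast track.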
In this way you can build an assurance scoring framework that works on the basis of survival. If an application is classified as low risk by a machine learning algorithm, passes all business rules, is not anomalous, and is not linked to any application classified as high risk, you can be pretty confident that a light-touch process is appropriate for that application.
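The survival logic reduces to a conjunction of checks; an application is fast-tracked only if it survives every one. The check functions and flag names below are illustrative placeholders for the stages described above:

```python
def fast_track(application, checks):
    """Return True only if the application survives every check in turn."""
    return all(check(application) for check in checks)

checks = [
    lambda app: app["ml_bucket"] == "low",       # classified low risk
    lambda app: not app["rule_flagged"],         # passes all business rules
    lambda app: not app["anomalous"],            # not part of a volume spike
    lambda app: not app["linked_to_high_risk"],  # no hidden graph links
]

app = {"ml_bucket": "low", "rule_flagged": False,
       "anomalous": False, "linked_to_high_risk": False}
print(fast_track(app, checks))  # True -> light-touch processing
```

Ordering the checks from cheapest to most expensive also lets the pipeline short-circuit: a case flagged early never incurs the cost of the graph traversal.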
The Capgemini Assurance Scoring framework does all this and has the capability to do much more. At its core is machine learning, and rightly so – using the best algorithms to classify risk. But it goes that step further to make sure that the low-risk cases really are low risk. This blog forms part of a series that will explore many aspects of assurance scoring. In the meantime you can find more details about the framework in our recently published brochure or at our talk at the PyData London conference next week.