Capping IT Off


How to Evaluate Bank Credit Risk Prediction Accuracy based on SVM and Decision Tree Models

Categories: Digital Strategies, Data

Introduction:

Credit risk analysis and credit risk management are imperative to financial organizations because they expose the creditworthiness of borrowers and help lower the risk of default on debt. To understand the risk involved, credit providers normally collect vast amounts of information on borrowers and apply various predictive, statistical, and analytic techniques to determine risk levels. In this post we evaluate credit risk prediction accuracy using two binary classification models, a Support Vector Machine (SVM) and a boosted decision tree, built with machine learning algorithms.

Description:

The large number of decisions involved in the lending business, the volume of data behind each decision, and the sensitivity of the resulting predictions make it necessary to rely on models and algorithms rather than human discretion alone.

Machine learning models are typically used to generate numerical “scores” that summarize the creditworthiness of consumers. Supervised learning, a type of machine learning, uses algorithms trained on known (labeled) data sets to predict values for new data. Supervised learning includes two categories of algorithms:

  • Classification, which is based on categorical response values where the data can be separated into specific classes or categories

  • Regression, which is based on continuous response values
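
To make the distinction concrete, the minimal sketch below (written in Python with scikit-learn purely for illustration; none of this code comes from the Azure ML experiment itself) fits a classifier to a categorical response and a regressor to a continuous one:

    # Minimal sketch: a classifier predicts a category, a regressor predicts a number.
    # scikit-learn is used for illustration only; the experiment itself runs in Azure ML Studio.
    from sklearn.linear_model import LinearRegression, LogisticRegression

    X = [[1.0], [2.0], [3.0], [4.0]]        # one numeric predictor
    y_class = [0, 0, 1, 1]                  # categorical response -> classification
    y_value = [1.1, 2.1, 2.9, 4.2]          # continuous response  -> regression

    classifier = LogisticRegression().fit(X, y_class)
    regressor = LinearRegression().fit(X, y_value)

    print(classifier.predict([[2.4]]))      # a class label, e.g. [0]
    print(regressor.predict([[2.4]]))       # a numeric value, e.g. [2.4...]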

Of the many classification algorithms available, we build the credit risk prediction models using the Two-Class Support Vector Machine (SVM) and the Two-Class Boosted Decision Tree, and then evaluate and compare the final results.

Support Vector Machine:

A support vector machine (SVM) is a supervised learning technique that analyzes data and identifies patterns, and it can be used for both classification and regression. The classifier is useful for choosing between two or more possible outcomes that depend on continuous or categorical predictor variables. Based on labeled training data, the SVM algorithm assigns new data points to one of the given categories. The data is represented as points in space, and the categories can be separated in either a linear or a non-linear way.
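
As a rough illustration of these ideas, the toy sketch below (plain scikit-learn, not the Azure Two-Class SVM module itself) represents the data as points in feature space, fits linear and non-linear variants, and exposes the support vectors that pin down the separating boundary:

    # Toy sketch of an SVM: data points in space, two categories, and the support
    # vectors that define the separating boundary. Illustration only; the experiment
    # uses the Azure ML Two-Class Support Vector Machine module.
    from sklearn.svm import SVC

    X = [[0.0, 0.0], [0.3, 0.1], [0.2, 0.3], [1.0, 1.0], [0.9, 1.2], [1.1, 0.9]]
    y = [1, 1, 1, 2, 2, 2]                          # 1 = low risk, 2 = high risk

    linear_svm = SVC(kernel="linear").fit(X, y)     # linear separation
    nonlinear_svm = SVC(kernel="rbf").fit(X, y)     # non-linear (kernel) separation

    print(linear_svm.support_vectors_)              # the points that determine the boundary
    print(linear_svm.predict([[0.1, 0.2], [1.0, 1.1]]))   # -> [1 2]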

Boosted Decision Tree Model:

A boosted decision tree is an ensemble learning method in which the second tree corrects for the errors of the first tree, the third tree corrects for the errors of the first and second trees, and so forth. Predictions are based on the entire ensemble of trees together, rather than on any single tree.
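
This error-correcting behaviour can be sketched in a few lines. The toy example below (plain scikit-learn decision trees on made-up numbers, not the Azure Two-Class Boosted Decision Tree module) fits each new tree to the residual errors left by the trees before it and sums their outputs:

    # Minimal sketch of boosting: each new tree is fit to the errors (residuals) left by
    # the trees before it, and the ensemble prediction is the sum over all trees.
    # Toy illustration only, not the Azure module's implementation.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
    y = np.array([1.2, 1.9, 3.1, 3.8, 5.2])

    prediction = np.zeros_like(y)
    trees = []
    for _ in range(3):                              # three small trees, each correcting the last
        residuals = y - prediction                  # what the ensemble still gets wrong
        tree = DecisionTreeRegressor(max_depth=1).fit(X, residuals)
        trees.append(tree)
        prediction += tree.predict(X)               # ensemble output = sum of all trees so far

    print(prediction)                               # moves closer to y with every added tree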

Cost-Sensitive Credit Risk Prediction Experiment:

The objective of this experiment is to predict the cost-sensitive credit risk of a credit application using binary classification. We use Microsoft Azure Machine Learning Studio to create the experiment. The classification problem in this experiment is cost-sensitive because the cost of misclassifying the positive samples is five times the cost of misclassifying the negative samples.

High-level steps of the experiment:

Data:

  • We use the German Credit Card data set from the UC Irvine repository.

  • This data set contains 1000 samples with 20 features and 1 label. Each sample represents a person. The 20 features include both numerical and categorical values. The last column is the label, which denotes the credit risk and has only two possible values: high credit risk = 2, and low credit risk = 1.

  • The cost of misclassifying a low-risk example as high is 1, whereas the cost of misclassifying a high-risk example as low is 5.          
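
Outside Azure ML Studio, the same data set can be pulled directly from the UCI repository. The Python sketch below is illustrative only, and the file URL and generic column names are assumptions rather than part of the original experiment (Studio ships the German Credit Card UCI data set as a built-in sample):

    # Illustrative sketch of loading the German credit data with pandas. The URL and the
    # placeholder column names are assumptions; in the experiment the data set comes
    # bundled with Azure ML Studio.
    import pandas as pd

    url = "https://archive.ics.uci.edu/ml/machine-learning-databases/statlog/german/german.data"
    columns = [f"feature_{i}" for i in range(1, 21)] + ["credit_risk"]   # 20 features + 1 label

    df = pd.read_csv(url, sep=r"\s+", header=None, names=columns)
    print(df.shape)                       # (1000, 21): 1000 samples, 20 features, 1 label
    print(df["credit_risk"].unique())     # [1 2]: 1 = low credit risk, 2 = high credit risk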

Data Processing:

  • We split the original data set into training and test sets of the same size using the Azure ML Split module.

  • To handle the cost sensitivity involved, we generate a new training data set from the original one by replicating each high-risk example five times.
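
Continuing the loading sketch above, the split and the cost-driven replication could look roughly like this in Python (train_test_split stands in for the Azure Split module, and `df` is the DataFrame from the previous sketch):

    # Sketch of the data-processing step: a 50/50 split, then four extra copies of every
    # high-risk row so that each one appears five times in the training set, mirroring
    # the 5x misclassification cost. `df` comes from the loading sketch above.
    import pandas as pd
    from sklearn.model_selection import train_test_split

    train, test = train_test_split(df, train_size=0.5, random_state=42,
                                   stratify=df["credit_risk"])

    high_risk = train[train["credit_risk"] == 2]
    train_replicated = pd.concat([train] + [high_risk] * 4, ignore_index=True)

    print(len(train), len(train_replicated))      # the replicated set is larger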

Feature Engineering:

  • To normalize the ranges of all numeric features, we use the Normalize Data module along with “tanh” transformation. A “tanh” transformation converts all numeric features to values within a range of 0-1, while preserving the overall distribution of values.

  • The Two-Class Support Vector Machine module handles string features, converting them first to categorical features and then to binary features having a value of 0 or 1.
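
A rough Python equivalent of these two steps is sketched below; np.tanh stands in for the Normalize Data module's tanh transformation and pandas get_dummies for the string-to-binary conversion, with `train_replicated` and `test` coming from the previous sketch:

    # Sketch of the feature-engineering step: tanh-normalize the numeric columns and turn
    # string/categorical columns into 0/1 indicator columns. Illustration only; in Studio
    # this is done by the Normalize Data module and the SVM module itself.
    import numpy as np
    import pandas as pd

    def make_features(frame):
        numeric = frame.select_dtypes(include="number").columns.drop("credit_risk")
        out = frame.copy()
        out[numeric] = np.tanh(out[numeric])            # squash numeric ranges
        return pd.get_dummies(out)                      # string columns -> 0/1 indicators

    train_feats = make_features(train_replicated)
    test_feats = make_features(test).reindex(columns=train_feats.columns, fill_value=0)

    X_train, y_train = train_feats.drop(columns="credit_risk"), train_feats["credit_risk"]
    X_test, y_test = test_feats.drop(columns="credit_risk"), test_feats["credit_risk"]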

Model:

  • We use the Two-Class Support Vector Machine and the Two-Class Boosted Decision Tree algorithms.

  • The Train Model module applies each algorithm to the data and creates the actual model.

  • We use the Score Model module to produce scores for the test examples.
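
In plain Python, the Train Model and Score Model steps correspond roughly to fitting each classifier on the replicated training data and predicting on the test set (scikit-learn classifiers stand in for the Azure modules; X_train, y_train and X_test come from the feature-engineering sketch above):

    # Sketch of the Model step: train both classifiers and score the test examples.
    # SVC and GradientBoostingClassifier are only stand-ins for the Azure Two-Class SVM
    # and Two-Class Boosted Decision Tree modules.
    from sklearn.svm import SVC
    from sklearn.ensemble import GradientBoostingClassifier

    svm_model = SVC().fit(X_train, y_train)
    tree_model = GradientBoostingClassifier().fit(X_train, y_train)

    svm_pred = svm_model.predict(X_test)       # "Score Model" equivalent: predicted risk labels
    tree_pred = tree_model.predict(X_test)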

Evaluate:

  • We use the MS Azure Evaluate Model module to compare scored models that use the same misclassification cost.

  • We use two separate instances of the Evaluate Model module, one for the SVM models and one for the Decision Tree models, because a single Evaluate Model module can compute performance metrics for at most two scored models.
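
The cost-sensitive accuracy reported in the results can be approximated outside Studio by weighting each test example by its misclassification cost. The helper below is one reasonable way to do this (the exact formula used by the sample experiment's evaluation script may differ), using y_test, svm_pred and tree_pred from the sketches above:

    # One way to compute a cost-sensitive accuracy: weight every test example by its
    # misclassification cost (5 for a high-risk example, 1 for a low-risk example).
    # An approximation of the metric reported by the experiment, not its exact script.
    import numpy as np

    def cost_sensitive_accuracy(y_true, y_pred):
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        costs = np.where(y_true == 2, 5, 1)            # label 2 = high risk, label 1 = low risk
        return (costs * (y_true == y_pred)).sum() / costs.sum()

    print("SVM :", cost_sensitive_accuracy(y_test, svm_pred))
    print("Tree:", cost_sensitive_accuracy(y_test, tree_pred))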

Results:

The final results of the experiment (the visualized output of the Select Columns module) contain three columns:

  • The first column lists the machine learning algorithm used to generate a model.

  • The second column indicates the type of training set.

  • The third column contains the cost-sensitive accuracy value.

Conclusion:

The above experiment results indicate that the model created using the Two-Class Support Vector Machine and trained on the replicated (cost-sensitive) training data set provided the best accuracy.

In general, the SVM algorithm works well on simpler data sets when the goal is speed rather than the highest possible accuracy. Boosted decision tree models, on the other hand, are memory-intensive learners and are therefore less suitable for very large data sets.

Detailed information on choosing between different algorithms can be found in the MS Azure ML documentation (see References below).

References:

https://azure.microsoft.com/en-us/documentation/articles/machine-learning-algorithm-choice/

https://azure.microsoft.com/en-us/documentation/articles/machine-learning-algorithm-cheat-sheet/

https://gallery.cortanaintelligence.com/Experiment/Binary-Classfication-Credit-risk-prediction-5

About the author

Malleswara Rao