Telco Customer Churn Deep Dive

In my previous post, I discussed why it's important to drill down into a model's classification report rather than relying on the overall accuracy score when evaluating performance. Accuracy alone is not fully representative and can be misleading under class imbalance, particularly when there is a large bottom-line difference between the costs of Type I and Type II errors.
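To make this concrete, here is a small hypothetical illustration (the numbers are invented, not from the Telco dataset): with 90% of customers loyal, a model that misses most churners still posts a high accuracy, and only the classification report exposes the problem.

```python
# Hypothetical illustration: accuracy looks fine on an imbalanced set
# even when the minority (churn) class is barely detected.
from sklearn.metrics import accuracy_score, classification_report

# 90 loyal customers (0) and 10 churners (1); the model catches only 2 churners.
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 90 + [1] * 2 + [0] * 8

print(f"accuracy: {accuracy_score(y_true, y_pred):.2f}")  # 0.92, despite churn recall of 0.20
print(classification_report(y_true, y_pred, target_names=["no churn", "churn"]))
```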

We’ll start off with a couple of models and go over the results to see what recommendations we can make for the business functions in charge of managing customer turnover.

XGBoost

The XGBoost model, with some simple preprocessing and hyperparameter optimization, achieves a not-too-shabby accuracy of 81.5%. However, notice the low precision and recall scores for churned customers. Better performance on this class is what matters, since the cost of losing a customer is 50x the cost of retaining an existing one.

Random Forest (w/ undersampling)

Another model to try is a random forest classifier. Here I used the same preprocessing, followed by undersampling of the majority class, which proved more effective than oversampling with SMOTE. Accuracy on the test set improved to almost 88.5%.
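The exact undersampling code isn't shown in the post; one straightforward way to do it, sketched here with sklearn's `resample` on synthetic stand-in data, is to downsample the majority class to the minority class size before fitting the forest.

```python
# Random undersampling of the majority class, then a random forest.
# A sketch only; the post's actual preprocessing may differ.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

X, y = make_classification(n_samples=2000, weights=[0.73], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Downsample the majority (non-churn) class to the minority class size.
maj, mino = X_train[y_train == 0], X_train[y_train == 1]
maj_down = resample(maj, n_samples=len(mino), replace=False, random_state=0)
X_bal = np.vstack([maj_down, mino])
y_bal = np.array([0] * len(maj_down) + [1] * len(mino))

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print("test accuracy:", rf.score(X_test, y_test))  # evaluated on the untouched test set
```

Note that only the training set is undersampled; the test set keeps its natural class distribution so the evaluation stays honest.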

Notice that average precision and recall both jumped to 0.88.

Random Forest model accuracy and evaluation

The model's accuracy improves further if the majority class is shrunk even more. However, this also increases the frequency of Type I and Type II errors on the majority class.

Feature Importance

So we’ve improved the model’s accuracy and can predict which customers will churn. So what?

Let’s use explainable AI to unveil what’s happening under the hood and translate it into communicable terms.

‘Total Charges’ is the most important feature, followed by ‘Contract’, ‘tenure’, and so on. The standard deviations of the feature importances are quite high (shown by the black vertical lines on each red bar), meaning a given feature’s importance varies considerably between trees; ‘Total Charges’, for example, plays a much more significant role in some trees than in others.
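This kind of chart can be reproduced from the fitted forest itself: the mean impurity-based importance per feature, plus its standard deviation across the individual trees. A sketch on synthetic data (generic feature indices stand in for the real column names):

```python
# Mean feature importance (the red bars) and its standard deviation
# across the forest's trees (the black error lines).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

importances = forest.feature_importances_
# Spread of each feature's importance between individual trees.
std = np.std([tree.feature_importances_ for tree in forest.estimators_], axis=0)

for i in np.argsort(importances)[::-1]:
    print(f"feature {i}: {importances[i]:.3f} +/- {std[i]:.3f}")
```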

SHAP

Using SHAP, we can see how individual features drove the model’s prediction in specific cases. Below is a SHAP plot for a correct churn prediction. In this example, ‘Total Charges’ was quite low and pushed the prediction towards 1 (churn), while high tenure pulled the prediction slightly downwards.

Accurate Churn Prediction

The next plot shows a correct non-churn prediction. Total charges again played a significant role, though in the opposite direction: customers with higher total charges typically churn less often.

Accurate non-Churn Prediction

How can we prevent churn?
