Reducing False Positives for AML: Part I – Framing the Question
08/12/2019 by Chris St. Jeor | Financial Crimes
Every financial institution is its own living and breathing organism working to overcome a unique set of challenges. However, all financial organizations share one monster of a problem: keeping up with the number of alerts generated through their anti-money laundering solutions. The problem they face is figuring out how to deal with the high rate of false positives generated by their solutions.
Those familiar with the financial sector know the responsibility placed on financial institutions to participate in the Bank Secrecy Act and anti-money laundering regulations. AML transaction monitoring primarily consists of complicated business logic that tracks demographics and financial transactions in an attempt to identify potential money laundering practices. If any combination of financial transactions falls within the complicated business logic, then the transaction, or group of transactions, is flagged as an alert and must be considered for further investigation.
The problem? The sheer volume of transactions happening daily leads to thousands of routine financial transactions being flagged each month that ultimately result in unproductive alerts. Billions of dollars are spent each year to determine which alerts require investigation – and the problem is only getting worse. Research by McKinsey & Company shows that resources dedicated to AML compliance at major US banks have increased tenfold over the last five years.
While rule-based monitoring is the current industry best practice, it does not need to remain the only approach to AML monitoring. Financial institutions can use the data created through the alert generation process to build predictive models that identify which alerts can be safely ignored and which should be investigated.
This article is the first of a three-part series that outlines how financial organizations can use predictive analytics, machine learning, or artificial intelligence (whatever you like to call it) to sift through the thousands of alerts generated and to identify – with a high level of confidence – which alerts need to be investigated.
The first step in any analytics problem is to determine the question you are trying to answer. Or, better put: determine the type of answer you are going to provide. Here, the question is whether an alert generated by your AML solution will be productive – that is, whether it warrants an investigation. While the problem may seem complicated on the surface, with several contributing factors, the question itself is quite simple: will this alert be productive? Yes or no?
This type of analysis is called binary supervised learning – the outcome has exactly two discrete possible values. For these binary problems, you can use a variety of modeling approaches, each with a different balance of predictability and interpretability. The goal is to identify known variables, or features, that you can use to predict an unknown outcome, answering, “Will this alert be productive?”
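A minimal sketch of what binary supervised learning on alert data looks like in practice. The features (transaction amount, recent alert count, account age) and the simulated data are hypothetical placeholders, not real AML fields:

```python
# Hedged sketch: a binary classifier trained on simulated alert features.
# Feature names and data-generating process are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per alert: scaled transaction amount,
# alerts on the account in the last 90 days, account age (standardized).
X = rng.normal(size=(1000, 3))

# Known historical label: 1 = productive alert, 0 = unproductive.
# Simulated here so the example is self-contained.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

model = LogisticRegression().fit(X, y)

# For a new alert, the model answers yes/no and gives the probability
# behind that answer.
print("prediction:", model.predict(X[:1])[0])
print("probability productive:", model.predict_proba(X[:1])[0, 1])
```

In a real deployment, `X` would come from your alert history and `y` from investigators' past dispositions; the mechanics of fit/predict stay the same.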
Other times you may be dealing with continuous or ordinal supervised learning. An example of continuous supervised learning would be predicting the price of a house, where the prediction can be any numeric value. An example of ordinal supervised learning could be predicting customer satisfaction reviews, each prediction taking a discrete value from a scale of 0 to 5.
Once you have determined the type of question you are answering, the next step is to decide what you want to get out of your model. While you have a host of binary prediction models available, it is not enough to make a prediction – what you will do with the prediction is critical in determining the type of model to use.
When selecting the type of model to use, you need to start with a fundamental question: What is more important? Predictability (the accuracy with which you can make your predictions) or interpretability (the ability to explain why you made the prediction you did)?
In the world of analytics, there is much discussion about the differences in predictive power between types of models. Far too many projects die on the vine because their creators forget to account for the differences in interpretability across models. While it would be ideal to choose a model with both high predictability and high interpretability, you unfortunately cannot always have your cake and eat it too. So you must decide what matters more for the end use of your model – the accuracy of the prediction or the interpretability?
A simple way to view this relationship between model predictability and interpretability is on the analytic spectrum chart in figure 1. While it would be great to have a model that fits in the top right quadrant of the chart, in reality, models usually fit somewhere in the top left or bottom right quadrants of the graph. Models like a decision tree and logistic regression exist in the top left, and gradient boosting and neural network models lie in the bottom right of the spectrum.
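The two ends of that spectrum can be contrasted directly. This sketch, on simulated data, fits a logistic regression (interpretable end) and a gradient boosting model (predictive end) side by side; the dataset and scores are illustrative, not a claim about real AML data:

```python
# Hedged sketch: comparing an interpretable model with a typically more
# predictive one on simulated classification data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
gbm = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# The logistic model exposes one coefficient per feature that you can
# explain; the boosting model offers no comparably direct reading.
print("logistic accuracy:", accuracy_score(y_te, logit.predict(X_te)))
print("boosting accuracy:", accuracy_score(y_te, gbm.predict(X_te)))
```

On messy, non-linear real-world data the boosting model often pulls ahead on accuracy, which is exactly the trade the spectrum describes.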
To help create some context around this idea, let’s play a little game I like to call storytime.
Let’s say you are heading to Vegas for the weekend and you want to lay some money down on a Dallas Cowboys game. You are a huge Cowboys fan (Who knows why? Go Chargers!), but let’s be honest, this is your money on the line, and you want to make sure you are betting on the correct outcome. All you care about is whether the Cowboys are going to win. It doesn’t matter if Dak throws six picks or if Zeke runs for three touchdowns. All that matters is that your money comes back to you with interest. In this situation, you would want to use a highly predictive model, like a gradient boosting model, and ignore the interpretability of the model entirely.
For this scenario, let’s pretend you are the manager of the data science team at a large hospital. You are trying to build a model to identify patients who are at risk of developing high blood pressure so you can better care for them. Just telling a patient they have a 75% chance of developing high blood pressure is not enough. You need to be able to tell them why they are at risk and what they can do to avoid it. For this type of problem, you want a model that can be easily interpreted and used to create an actionable plan that helps your patients avoid the problem altogether. Logistic regression would be a great candidate for this scenario.
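Here is why logistic regression fits that scenario: each coefficient converts to an odds ratio you can state in plain language. The feature names (age, BMI, sodium intake) and the simulated data are hypothetical:

```python
# Hedged sketch: reading a logistic regression as odds ratios.
# Features and data are illustrative, not clinical findings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = ["age", "bmi", "sodium_intake"]
X = rng.normal(size=(500, 3))

# Simulated label: age and BMI drive risk; sodium is noise here.
y = (0.8 * X[:, 0] + 1.2 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    # exp(coef) is the multiplicative change in the odds of high blood
    # pressure per one-unit increase in that (standardized) feature.
    print(f"{name}: odds ratio {np.exp(coef):.2f}")
```

An odds ratio of, say, 2 for BMI lets you tell the patient that each unit increase roughly doubles their odds – the kind of actionable explanation a gradient boosting model cannot hand you directly.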
In summary, when selecting the model to use, you first need to decide what is most important: the ability to understand the prediction or the accuracy of the prediction? Answering this question upfront helps set the project up for success. Always remember, a model is only as good as the end user’s ability to turn it into action.
Because financial services are highly regulated, the interpretability of the model in this exercise is essential. AML business logic, known as scenarios, has been well thought out and has been the industry standard for a long time. So when regulators come in to audit your solution, you need to be able to explain why you decided not to investigate an alert generated by a specific scenario. Having a clear and transparent model will be critical when demonstrating to regulators the value of incorporating predictive models into your AML solution. The ability to clearly explain why your model made the prediction it did will help build trust, support, and buy-in.
Now that you have framed the question and identified the general type of model you will want to use, you are ready to explore your data and begin the modeling process in SAS Viya.