Tobias Bumm / December 14, 2020 / Uncategorized

Financial service providers and companies are obligated to protect the financial system from criminal abuse. Yet the processes for detecting money laundering and fraud are often characterized by a high level of manual, repetitive, and data-intensive work. Artificial Intelligence (AI) gives the financial sector cost-effective opportunities to counter financial crime more effectively.

Banks are treading a fine line

While AI can contribute greatly to the fight against financial crime, one of its biggest flaws at present is false positives: transactions that trigger an alert but, after investigation, prove not to be suspicious. As frustrating as it is to exhaust resources on an investigation that could have been resolved at the moment of detection, worse still are the unexposed false negatives: transactions that do not trigger an alert but should have. Banks are treading a fine line here in making the right decision.
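To make this trade-off concrete, here is a small, purely illustrative sketch. The "suspiciousness" scores are synthetic and not drawn from any real monitoring system; the point is simply that moving the alert threshold shifts the balance between false positives and false negatives.

```python
# Illustrative sketch only: synthetic "suspiciousness" scores for
# legitimate and truly suspicious transactions, and how the alert
# threshold trades false positives against false negatives.
import numpy as np

rng = np.random.default_rng(0)
legit_scores = rng.normal(loc=0.3, scale=0.15, size=10_000)    # not suspicious
suspicious_scores = rng.normal(loc=0.7, scale=0.15, size=100)  # truly suspicious

for threshold in (0.4, 0.5, 0.6, 0.7):
    false_positives = int((legit_scores >= threshold).sum())     # alerted, but clean
    false_negatives = int((suspicious_scores < threshold).sum())  # missed cases
    print(f"threshold {threshold:.1f}: "
          f"{false_positives:5d} false positives, "
          f"{false_negatives:3d} false negatives")
```

Lowering the threshold catches more truly suspicious transactions but floods investigators with alerts; raising it reduces the alert volume at the cost of missed cases.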

With AI solutions, most compliance alerts can be processed quickly and even (pre-)decided, significantly reducing the workload that rests on compliance employees. One critical problem with using AI in decision-making processes, however, is that machine learning algorithms, though they achieve high precision, do not make it easy to understand how a recommendation was reached. Customers might, reasonably, want to know why their funds were frozen or their credit application was denied. It is therefore very important that the models can be explained and do not act as a black box.

Opening the Black Box?

Explainable AI (XAI) is an important development that aims to make models explainable, transparent, and comprehensible. It enables humans to understand the models well enough to manage the benefits that AI systems provide, while maintaining a high level of prediction accuracy. To provide model transparency, many institutions are experimenting with machine learning approaches combined with transparency techniques such as LIME or Shapley values. The latter, popularized as SHAP, is a unified approach to explaining the output of any machine learning model: every feature used in the model is given a relative importance score, a SHAP value. These SHAP values break down a prediction to show the impact of each factor and can, for example, explain why a customer’s credit application was denied.
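A minimal sketch of that idea, assuming the `shap` and scikit-learn packages are available: the feature names, synthetic data, and "risk score" model below are invented purely for illustration and do not represent any real credit model.

```python
# Minimal sketch: explaining one applicant's predicted risk score with SHAP.
# All features and data are synthetic, for illustration only.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
feature_names = ["income", "existing_debt", "num_late_payments", "account_age_years"]

# Synthetic applicants and a synthetic risk score to train on.
X = rng.normal(size=(500, len(feature_names)))
risk = X[:, 1] + 0.8 * X[:, 2] - 0.5 * X[:, 0] + rng.normal(scale=0.3, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, risk)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
applicant = X[:1]                                # one applicant to explain
shap_values = explainer.shap_values(applicant)   # shape: (1, n_features)

# Each SHAP value is the feature's contribution to this prediction,
# relative to the model's average output over the training data.
for name, value in zip(feature_names, shap_values[0]):
    print(f"{name:>20}: {value:+.3f}")
```

The per-feature contributions sum (together with the model's base value) to the individual prediction, which is what allows an investigator or customer to see which factors pushed a decision one way or the other.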

These models and applications are constantly being developed and improved. Nevertheless, compliance officers and employees will continue to play a major role in these processes for the foreseeable future.

