Financial Crime Management's Broken System

25 October 2018
Joan McGowan

Is AI the Answer?

I had the opportunity to present the AML keynote session at NICE Actimize’s Engage Client Conference in New York last week. My session looked at the status of financial crime management, and what to do about it. All in the room agreed that the traditional siloed approach to transaction monitoring and compliance is broken.

A byproduct of this approach is the persistence of high false-positive rates and, more critically, false negatives: genuinely suspicious activity that slips through undetected. The stats are alarming.

  • Between 85% and 99% of AML and fraud alerts are false positives.
  • Several large banks employ between 4,000 and 8,000 staff dedicated to AML compliance and, with an average employee making 10 to 30 errors per 100 opportunities, error rates and problems are compounded.
  • Time wasted on investigating a false positive is usually between 5 and 30 minutes and, if scored for further scrutiny, the investigation can take hours and sometimes days. Analysts, investigators, and compliance staff are not the only ones involved in the process; account managers are often pulled into an investigation and can spend hours each week helping to resolve false alarms.
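Taken together, these figures imply a substantial hidden cost. A rough back-of-the-envelope calculation makes the point; the daily alert volume below is a hypothetical assumption, while the false-positive rate and minutes-per-alert figures are drawn from within the ranges cited above:

```python
# Back-of-the-envelope estimate of analyst time lost to false positives.
# The alert volume is an illustrative assumption; the other inputs fall
# within the ranges cited in the bullets above.

alerts_per_day = 1_000          # hypothetical daily alert volume at a large bank
false_positive_rate = 0.95      # within the 85%-99% range cited
minutes_per_false_positive = 15 # within the 5-30 minute range cited

false_positives = alerts_per_day * false_positive_rate
wasted_minutes = false_positives * minutes_per_false_positive
wasted_analyst_days = wasted_minutes / (8 * 60)  # 8-hour working day

print(f"{false_positives:.0f} false positives per day")
print(f"~{wasted_analyst_days:.0f} analyst-days of effort wasted every day")
```

Even at these conservative mid-range numbers, a single bank can burn through roughly thirty analyst-days of effort every working day on alerts that lead nowhere.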

The main cause of out-of-control false positive numbers is banks' failure to monitor and accurately analyze vast amounts of data. Although transaction monitoring systems are among the biggest consumers of data across the bank, they were not designed to mine or compute big data. Systems are siloed and struggle to handle data quality, data complexity, and extensive data flows, and they lack the capabilities to crawl the internet to link hidden connections.

So, is artificial intelligence the answer?

There was a palpable nervousness in the room when I posed this question. The promise of AI is recognized by all, but which techniques to use and how to implement them are concerns. A handful of banks in the room have already begun robotic process automation (RPA) initiatives to help triage false positive alerts and to assist in KYC checks, where automation is used to bring in data, run compliance checks, and capture and record information.

However, AI is much more than RPA. It is a combination of advanced technologies that imbue computer systems with some of the cognitive intuition of an investigator. AI can bring opportunities to rationalize, streamline, and increase efficiencies across resourcing, processes, and operations. The expected benefits are lower costs, increased efficiency, enhanced quality, sustainability, and perhaps better morale when processes shift away from repetitive tasks to higher-value activities.

Computing power and machine learning are particularly well-suited to managing large volumes of structured and unstructured data for more precise identification of patterns of suspicious behavior. RPA of low-level alerts will free up analysts to focus on evidence gathering, forensics, and quicker resolution of high-risk activities. Natural language processing and generation can add efficiency by automating cognitive tasks such as name parsing and multi-lingual analysis, and by turning raw data and investigator narrative into higher quality suspicious activity reports.
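To make the pattern-detection idea concrete, here is a deliberately minimal sketch, not any vendor's method: a z-score anomaly check over a customer's own transaction history, using synthetic amounts. Production systems score many more features (counterparties, geography, velocity), but the thresholding trade-off is the same one that drives false-positive rates.

```python
from statistics import mean, stdev

# Toy sketch: flag transactions whose amount deviates sharply from a
# customer's own history. The history below is synthetic; real monitoring
# combines many more behavioral features than amount alone.

history = [120.0, 95.0, 130.0, 110.0, 105.0, 125.0]  # synthetic past amounts
mu, sigma = mean(history), stdev(history)

def anomaly_score(amount: float) -> float:
    """How many standard deviations this amount sits from the customer's norm."""
    return abs(amount - mu) / sigma

def is_suspicious(amount: float, threshold: float = 3.0) -> bool:
    # Raising the threshold cuts false positives but risks more false
    # negatives; tuning this trade-off is exactly where ML earns its keep.
    return anomaly_score(amount) > threshold

print(is_suspicious(115.0))   # an amount in line with past behavior
print(is_suspicious(5000.0))  # an amount far outside it
```

The point of the sketch is the threshold line: a static rule fixes it once for all customers, whereas a learning system can adapt it per customer and per behavior pattern, which is where the false-positive reduction comes from.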

But there are important ethical and regulatory concerns to take into consideration. Specifically, how far can we trust AI? The transparency of AI techniques and underlying algorithms is critical and will be achieved through an effective model management policy and a strategy that fosters strong governance over the design, development, and delivery of AI initiatives.

I guess the real question to ask is, how can you not afford to embrace AI?


  • Hello Joan,
    I agree with your article. To complement it, a suggestion:
    "One of the best ways for banks to fix the system and embrace AI technology is to actively engage with their local regulator (e.g., the UK and Singapore regulators are actively promoting FinTech and allow proofs of concept) and to partner with leading FinTech firms such as Data Robotics, Ayasdi, or Digital Reasoning. The bank brings its use case and collaborates with the FinTech firm; this is how banks can embrace this new culture and build out their own AI governance strategy team."

    Do you agree with my thoughts?

    Venkatesh Balasubramaniam

Insight details

Geographic Focus: Asia-Pacific, EMEA, LATAM, North America