

How AI & machine learning help prevent and detect fraud


Artificial intelligence and machine learning: Are they interchangeable?

Artificial intelligence (AI) is an umbrella term for the methods by which machines imitate human cognition – for instance, decision-making, data analysis, and problem-solving. One of the most common methods is machine learning (ML), which is frequently used as a synonym for AI. But while the two are related, they are distinct: AI is the parent category, and ML is a subtype. According to the Massachusetts Institute of Technology (MIT), machine learning is an alternative to traditional programming, “letting computers learn to program themselves through experience.”
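To make MIT’s distinction concrete, here is a minimal sketch contrasting a hand-coded rule with one learned from labeled examples. It uses scikit-learn – our choice of library, as the article names none – and invented figures:

```python
# A minimal sketch of MIT's point: a hand-coded rule vs. a rule learned
# from experience. scikit-learn is our choice of library (the article
# names none), and the figures are invented.
from sklearn.linear_model import LogisticRegression

# Traditional programming: a human writes the rule explicitly.
def rule_based_flag(amount_thousands: float) -> bool:
    return amount_thousands > 10  # fixed threshold, chosen by a person

# Machine learning: the rule is inferred from labeled past examples.
X = [[0.5], [1.2], [9.8], [15.0], [22.0], [40.0]]  # amounts, in thousands
y = [0, 0, 0, 1, 1, 1]                             # 1 = previously flagged
model = LogisticRegression().fit(X, y)

print(rule_based_flag(15.0))         # True  (hard-coded logic)
print(model.predict([[15.0]])[0])    # 1     (learned from the examples)
```

Here no one told the model where the cutoff lies; it inferred a boundary from the labeled history.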

Types of machine learning

There are three main kinds of machine learning:

  • Supervised learning – the system learns to interpret data based on past examples humans have provided. After humans supply it with labeled examples – for instance, pictures of adult and child faces – it attempts to classify new data correctly. If it’s incorrect (identifying a child as an adult, for instance), humans provide feedback to improve its accuracy the next time.
  • Unsupervised learning – the system uses an algorithm to discover data points with similar characteristics. After a series of calibrations, it identifies groups based on these similarities, sometimes called clusters. For instance, it might identify a large group of new customers between the ages of 97 and 100. While humans can, of course, find a specific group using pre-defined search parameters, machine learning can discover multiple new groups by identifying unexpected similarities humans overlook. (A short sketch of both supervised and unsupervised learning follows this list.)
  • Reinforcement learning – the system, sometimes called an agent, learns to solve problems by trial and error. The agent receives feedback – rewards or penalties – based on whether its actions succeed, and reviewing its trial steps reinforces the most efficient actions over time. This process can help humans discover new or more effective solutions to problems they are struggling to solve themselves.
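The following minimal sketch illustrates the first two learning types using scikit-learn; all features, labels, and figures are synthetic:

```python
# A minimal sketch of supervised vs. unsupervised learning using
# scikit-learn. All features, labels, and figures are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Supervised: learn from labeled examples (features -> fraud / not fraud).
X_labeled = rng.normal(size=(200, 3))          # e.g. amount, velocity, age
y_labeled = (X_labeled[:, 0] > 1).astype(int)  # stand-in fraud labels
clf = RandomForestClassifier(random_state=0).fit(X_labeled, y_labeled)
print("supervised prediction:", clf.predict([[2.0, 0.1, -0.3]])[0])

# Unsupervised: no labels; the algorithm groups similar customers itself.
X_customers = rng.normal(size=(200, 2))        # e.g. age, average spend
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_customers)
print("cluster sizes:", np.bincount(clusters))  # groups found, not given
```

The supervised model needed the labels to learn; the clustering step was given none and still produced groupings for humans to inspect.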

The benefits of AI and machine learning for fraud detection 

AI and machine learning can help human fraud teams maximize their efficiency in a cost-effective manner. In a 2021 publication, the FATF examined AI’s power to help firms analyze and respond to criminal threats by providing automated speed and accuracy and helping firms categorize and organize relevant risk data. 

It emphasized how machine learning can detect “anomalies and outliers” and “improve data quality and analysis”. For example, deep learning algorithms within machine learning-enabled tools could perform a task repeatedly, learning from the results to make accurate decisions about future inputs. The FATF suggested several ways to implement AI and machine learning tools, including transaction monitoring and automated data reporting.

For instance, firms may be able to use AI to:

  • Dynamically set fraud transaction monitoring thresholds based on an analysis of risk data. When a customer approaches or breaches an established threshold, machine learning tools may be able to decide whether to trigger a fraud alert based on what is known about the customer’s profile or financial situation.
  • Detect groups of customers with characteristics indicating they’re at a higher risk of being the victims or perpetrators of fraud. 
  • Uncover instances of fraud in adverse media searches using natural language processing (NLP).
  • Provide alert prioritization, allowing higher-risk alerts to rise to the top for review and reducing time wasted on false positives.
  • Detect anomalies efficiently, going beyond individual rules to comprehensive data analysis. AI-enhanced anomaly detection pinpoints atypical behaviors by combining multiple weak signals that together indicate a higher risk than any would alone (illustrated in the sketch below).

From there, human analysts can perform deeper investigations and decide whether to take further action on the customer’s activity.
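To make the anomaly detection point concrete, here is a minimal sketch that combines several weak signals using an IsolationForest – our choice of algorithm, as none is prescribed here – with invented features and figures:

```python
# A minimal sketch of combining weak signals for anomaly detection, using
# an IsolationForest (our choice of algorithm; the article does not
# prescribe one). Features and figures are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Each row is a transaction: [amount, hour_of_day, txns_in_last_24h].
normal = np.column_stack([
    rng.normal(100, 40, 500),   # typical amounts
    rng.normal(14, 4, 500),     # mostly daytime activity
    rng.poisson(2, 500),        # low transaction velocity
])

# Each field here is only moderately unusual on its own, but the
# combination stands out: a largish amount, at an odd hour, at speed.
suspect = np.array([[190.0, 4.0, 6.0]])

detector = IsolationForest(random_state=42).fit(normal)
print(detector.predict(suspect))        # -1 would indicate an anomaly
print(detector.score_samples(suspect))  # lower = more anomalous
```

A single-rule system checking each field against its own threshold could miss this transaction; the multi-signal view is what surfaces it.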

Using AI and ML in fraud management: Best practices 

In 2022, the Wolfsberg Group highlighted five best practices to ensure AI and ML are used responsibly in managing financial crime risk. Each system relying on artificial intelligence should demonstrate:

  1. Legitimate purpose – Firms must clearly define AI tools’ scope and make a governance plan that accounts for the risk of their misuse. This includes risk assessments that account for the possibility of data misappropriation and algorithmic bias. Indeed, model governance is not new, so organizations need not start from scratch. Existing governance and model risk management frameworks can be adapted to cover AI models. They should help organizations to understand and effectively manage the specific risks relating to the use of AI.
  2. Proportionate use – AI and machine learning are powerful tools, but their benefit depends on the humans using them. Firms are responsible for using artificial intelligence’s power appropriately. This includes managing and regularly assessing any risks to keep them proportional to financial crime-fighting benefits like risk-based alert prioritization and detecting hidden relationships or fraud risks.
  3. Design and technical expertise – Because of the complexity and potential tied to AI/ML, it’s vital that teams using these tools – and those that oversee them – have an adequate grasp of their functions. Experts designing the technology should be able to reasonably explain and validate its outputs. This should make it possible to clearly define its objectives, grasp any limitations, and control for drawbacks such as algorithmic bias. As per the FATF’s definition, explainability should “provide adequate understanding of how solutions work and produce their results” and is imperative for investigators’ decision-making and adequate process documentation.
  4. Accountability and oversight – Governance frameworks should cover the entire lifecycle of AI and provide evidence of effective oversight and accountability. Even for vendor- or partner-provided AI, firms retain responsibility for how the tool is used. Aside from adequately training teams in the appropriate use of artificial intelligence, firms should put ethical checks and balances in place to ensure the technology and its use align with their values and regulatory expectations. Teams should also understand that ultimate accountability for effective and compliant AML/CFT programs remains with compliance officers and the firm.
  5. Openness and transparency – It’s important for firms to balance regulators’ transparency expectations surrounding AI with their confidentiality requirements, especially regarding consumer data or information that could tip off potential subjects of an investigation. An ongoing dialogue with regulators and clear communication with customers can help. In keeping with this best practice, firms should also ensure the AI they use provides clear, documented reasons for its risk detection decisions. This explainability will give analysts a confident basis for continued research, ensuring a clear audit trail is in place. 

Mitigating risk with AI explainability

To meet the Wolfsberg Group’s five best practices, firms must make explainability part of their chosen AI risk management solution. This helps avoid the risky “black box” phenomenon: relying on an AI system’s decisions without understanding why it made them. Explainability is a basic condition for trust and the responsible use of these technologies. As per the FATF’s definition, explainability means that technology-based solutions or systems are “capable of being explained, understood, and accounted for.”

The goal of explainability goes beyond meeting regulators’ expectations. Investigators who understand the AI tools at their disposal can make informed decisions quickly, responsibly, and efficiently. Clear explanations also allow firms’ processes to be continually assessed, improving both their effectiveness and fairness and mitigating unforeseen problems like algorithmic bias.

One of the most useful and practical ways to explain AI decisions is to use a model called an “ensemble approach.” This approach layers many smaller AI functionalities together, each of which can be pinpointed and explained as part of the overall decision. This granularity keeps decisions understandable to humans – rather than relying on a “black box” system performing complex functions without clear segmentation.
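As an illustration, the following minimal sketch layers several small, individually explainable scorers into an overall decision. Every signal name, weight, and threshold is hypothetical, and in practice each component could be a small model rather than a simple rule:

```python
# A minimal sketch of the ensemble idea: small, self-contained scorers each
# contribute a labeled, explainable piece of the overall decision. Every
# signal name, weight, and threshold here is hypothetical; in practice a
# component could be a small model rather than a simple rule.
from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    score: float  # contribution to overall risk, 0.0 to 1.0
    reason: str   # human-readable explanation for the audit trail

def score_transaction(txn: dict) -> list[Signal]:
    signals = []
    if txn["amount"] > 10_000:
        signals.append(Signal("large_amount", 0.4, "amount exceeds 10,000"))
    if txn["country"] not in txn["usual_countries"]:
        signals.append(Signal("unusual_geography", 0.3, "new country for customer"))
    if txn["txns_last_hour"] > 5:
        signals.append(Signal("high_velocity", 0.3, "over 5 txns in an hour"))
    return signals

txn = {"amount": 12_000, "country": "XY",
       "usual_countries": {"GB"}, "txns_last_hour": 7}
signals = score_transaction(txn)
print(f"risk score: {sum(s.score for s in signals):.1f}")
for s in signals:  # each component of the decision can be pinpointed
    print(f"  {s.name}: +{s.score} ({s.reason})")
```

Because each signal carries its own reason, an analyst can trace exactly why the overall score crossed a threshold – the opposite of a black box.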

Outsourced vs in-house anti-fraud solutions

When settling on a fraud solution, there can be some debate over whether to build a program in-house or outsource it to a vendor. Some firms may feel more comfortable building their own, whether because of perceived cost-effectiveness, internal control, the ability to fine-tune, or sheer familiarity. But for many firms, the time, energy, and resources spent building fraud solutions would be better spent elsewhere. Given the rise of efficient, specialized, and cost-effective solutions using AI and ML, firms may want to look externally for risk management tools: dedicated solutions can serve their needs and even be tailored to their unique risk profiles and business practices.

Firms should look for solutions that help automate their fraud compliance processes, including onboarding and identity verification, screening and monitoring, and transaction monitoring. For firms that already have established in-house systems and want to upgrade with minimal upheaval, hybrid systems can be effective. For example, purpose-built AI (PBAI) can overlay an existing transaction monitoring system, enhancing it without requiring a total overhaul. Because PBAI uses an ensemble model, it is explainable. It can thus be a cost- and risk-effective way for firms to upgrade legacy tools, resulting in improved efficiency, quicker decision-making, and a more comprehensive risk management infrastructure.
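As a rough sketch of the overlay pattern – not ComplyAdvantage’s actual PBAI implementation – the following shows a model trained on past alert outcomes re-scoring alerts that a legacy rules engine already produces, leaving the underlying system in place:

```python
# A minimal sketch of the overlay pattern (not ComplyAdvantage's actual
# PBAI): a legacy rules engine keeps emitting alerts, and a model trained
# on past alert outcomes re-scores them for prioritization. All features,
# figures, and names are invented.
from sklearn.ensemble import GradientBoostingClassifier

# History: features of past alerts and whether each proved to be fraud.
# Columns: [amount, crossed_threshold, new_beneficiary]
past_alerts = [[12_000, 1, 0], [300, 0, 1], [9_500, 1, 1],
               [150, 0, 0], [20_000, 1, 1], [700, 0, 0]]
outcomes = [1, 0, 1, 0, 1, 0]  # 1 = confirmed fraud after review

overlay = GradientBoostingClassifier(random_state=0).fit(past_alerts, outcomes)

# The legacy system raises alerts exactly as before; the overlay ranks
# them so analysts review the likeliest fraud first.
new_alerts = [[11_000, 1, 0], [250, 0, 1]]
priorities = overlay.predict_proba(new_alerts)[:, 1]
for alert, p in sorted(zip(new_alerts, priorities), key=lambda x: -x[1]):
    print(f"alert {alert}: priority {p:.2f}")
```

Because the overlay only re-ranks the legacy system’s output, it can be adopted or removed without disturbing the existing monitoring rules.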

Key takeaways

Artificial intelligence and ML can perform certain tasks at a higher level of efficiency than even the most experienced analyst. Though some worry this might mean humans aren’t needed at all, many researchers and regulators emphasize that technology is a tool to enable and supplement human expertise rather than a replacement. Humans can still be held legally responsible for AI-informed decisions and should take steps to correct any errors that conflict with human rights. Beyond this, AI/ML can free human teams for higher-value work that’s out of reach for technology – like performing a complex investigation of high-risk activities the system has presented to them or confirming how best to action a risky alert. 

As with any powerful tool, artificial intelligence and machine learning should be implemented responsibly, with best practices ingrained in the process from day one. With these in place, firms can be better equipped than ever to face the rapidly changing AML/CFT risk landscape.


Originally published 30 May 2023, updated 20 February 2024

Disclaimer: This is for general information only. The information presented does not constitute legal advice. ComplyAdvantage accepts no responsibility for any information contained herein and disclaims and excludes any liability in respect of the contents or for action taken based on this information.
