
How to leverage agentic AI for scalable AML compliance

As AI becomes increasingly prevalent in both financial services and daily life, regulators must contend with a technology that can be used both to protect against financial crime and to perpetrate it at scale.

AI is also lowering the cost of committing highly personalized fraud and scams, making low-value targets across borders increasingly attractive to criminals. According to UK Finance, in the UK alone, investment scam losses increased by 55% in the first half of 2025 (with an average loss of £15,000 per victim), and romance fraud losses increased by 35% (totaling ~£20.5 million). 

Financial institutions are increasingly recognizing that, as long as they are hindered by legacy compliance systems, criminals will win the AI arms race. In this context, global attitudes towards AI regulation make for interesting reading. 

On a global level, governments have called for the responsible, secure, and explainable use of AI. And while AI regulation remains in its infancy, clear focuses are emerging, such as striking a balance between mitigating the potential harms of AI and not stifling innovation. As of November 2025, only a handful of jurisdictions, including the European Union, China, and Japan, have adopted laws to formally regulate AI. As a counterpoint, numerous countries – including Australia and Canada – have only voluntary codes in place to manage AI risks or are still developing rules related to AI. In the US, a proposed ten-year moratorium on AI regulation was eventually cut from the federal budget bill, but it remains indicative of the country’s current deregulatory impulse. 

To gauge industry sentiment, our State of Financial Crime 2026 survey asked respondents: “Which approach to AI regulation do you personally perceive to be the most effective?” Globally, 59% of respondents favor an “innovation-focused” approach, versus just 22% who prefer a “precautionary/regulation-first” approach.

However, breaking down the numbers regionally reveals a divergence. Despite operating in a less centralized, less prescriptive AI regulatory environment, respondents in the US and Canada were the least inclined to favor an innovation-led approach, at 56%, compared with 62% across Europe and 60% in Asia Pacific. Conversely, North America showed the highest preference for a balanced approach (21%) and for a regulation-first approach (24%).

This suggests that firms operating in environments with less prescriptive regulation may actually feel more exposed to the risks of new AI models. Lacking clear governmental guardrails, North American firms appear to be signaling a greater need for clarity and structured guidance on AI explainability, governance, and safeguards.

With AI regulation in flux and the stakes of misfiring systems continuing to rise, compliance programs that are informed by deep regulatory expertise, built on high-quality proprietary data, and designed for AI explainability are giving firms a competitive and regulatory edge.

The State of Financial Crime 2026

Get insights on financial crime trends from our global survey of 600 senior decision-makers and expert guidance from our Financial Crime Compliance Strategy team. Coming February 2026.

Pre-register now

Originally published 17 December 2025

Disclaimer: This is for general information only. The information presented does not constitute legal advice. ComplyAdvantage accepts no responsibility for any information contained herein and disclaims and excludes any liability in respect of the contents or for action taken based on this information.

Copyright © 2026 IVXS UK Limited (trading as ComplyAdvantage).