
The financial crime risk management industry is at an inflection point. Criminal networks have increased their use of artificial intelligence (AI) tools by an estimated 800% over the last two years, operating without the regulatory obligations or budgetary constraints that bind compliance teams. At the same time, the US regulatory landscape is shifting toward an effectiveness-first model, with proposed anti-money laundering (AML) reforms explicitly encouraging innovation.

For compliance leaders, this combination of pressure and opportunity raises a clear question: how do you adopt AI fast enough to keep pace while still meeting the standards of explainability, governance, and risk management that regulators demand?

In a keynote opening our North American Future of Compliance summit, Todd Raque, a financial crime expert with experience across regulators, financial institutions, vendors, and advisory firms, set out a practical playbook. This article captures his core argument: that effective AI adoption depends on embedding risk management from day one, planning small but iterating often, and treating people as the linchpin of transformation.

The opportunity behind the AI challenge

The instinctive framing is that AI is something compliance teams must defend against. That is partly true. Criminal use of generative AI, deepfakes, and synthetic identities is now a measurable threat. But the more important framing for compliance leaders is the opportunity on the other side of the same technology shift.

Technology has finally caught up with where compliance programs need to go. Legacy systems were built for a world of static rules and siloed workflows, where most analyst time was spent aggregating data, clearing low-quality alerts, and reconciling handoffs between teams. AI now enables connecting those silos, surfacing higher-confidence risk signals, and operating closer to real time.

The regulatory direction of travel reinforces the case for action. US AML reform is moving toward judging programs on their effectiveness, not just their existence, and the notice of proposed rulemaking openly encourages innovation. The unanswered question is not whether to act, but how exam teams will evaluate what “effective” looks like in practice. Compliance leaders who engage now have an opportunity to help shape that definition for their institutions.

Embed risk management from design, not as a tollgate

The most common pitfall Todd has observed in AI proof-of-concept work is treating risk management as a final approval step rather than a co-design partner. When compliance and risk teams are brought in at the end, two things tend to happen: results fall short of regulatory expectations, and the time required to course-correct stretches the project well past its original timeline.

“I always talk about full contact governance. It’s not just what has to be done, but being an active participant in owning how it’s done.”

Todd Raque, Senior Vice President, Deputy BSA Officer, Citizens Financial Group, Inc.

The alternative is what Todd calls “full contact governance,” where risk management is an active participant in designing the solution, not just a gatekeeper for its release. This applies whether the AI capability is built in-house or sourced from a vendor, and it produces three concrete benefits:

  1. Faster and more defensible deployment: When risk management helps define the use case, the data inputs, and the testing criteria up front, the result is fewer late-stage rewrites and a clearer audit trail.
  2. Better calibrated outcomes: Embedding risk expertise early ensures that detection logic, thresholds, and human-in-the-loop checkpoints reflect the institution’s actual risk appetite, not a generic vendor default.
  3. A more agile program: Once risk management is part of the design conversation, future calibrations, model updates, and new use cases become incremental adjustments rather than full re-reviews.

The principle applies across the maturity spectrum, from traditional machine learning (ML) models all the way to agentic AI. The use case will vary; the governance discipline should not.


Plan a little, iterate a lot

For compliance leaders trained to specify requirements in detail before any code is written, the idea of planning lightly and iterating frequently can feel uncomfortable. But it is increasingly the operating model of the most successful technology adopters, and it can be applied to compliance without sacrificing rigor.

The reasoning is straightforward. No vendor, regulator, or institution currently has a complete answer to what AI-first compliance should look like. Maturity varies widely, use cases are still being defined, and the regulatory framework is itself evolving. In that environment, large multi-year programs that try to specify the entire solution up front are more likely to drift, overrun, or deliver something already out of date.

A more productive approach for compliance leaders includes:

  • Starting from a specific pain point: Reducing false positives in one screening workflow, automating a single remediation pattern, or improving alert triage are all valid entry points. Begin where the business case is clearest.
  • Building short feedback loops: Define what success looks like at the start of each iteration, measure against it, and use those results to inform the next cycle rather than waiting for a year-end review.
  • Documenting as you go: Every iteration should produce updated model documentation, validation evidence, and a clear record of what changed and why. This is the audit trail that turns iteration speed from a regulatory risk into a regulatory strength.

This is not a license to skip controls. It is a way of running disciplined experiments inside a defined governance perimeter, so that the institution learns faster than its risk landscape evolves.

People as the linchpin of transformation

The third strand of Todd’s keynote, and arguably the most important, is that technology adoption is fundamentally a people challenge. Compliance teams that succeed with AI tend to share three characteristics: they lead with purpose, they invest in AI literacy across the function, and they redesign roles around human judgment rather than removing it.

“It’s all about human judgment, supporting human judgment, not replacing roles. Most programs are lean, and it’s about building capacity.”

Todd Raque, Senior Vice President, Deputy BSA Officer, Citizens Financial Group, Inc.

Adoption fails when analysts and investigators feel their roles are under threat, when training is treated as an afterthought, or when AI outputs land on a desk without explanation. It succeeds when teams understand how decisions were reached, when their expertise is positioned as the senior partner in the human-AI relationship, and when the work itself becomes more interesting as routine tasks fall away.

The goal is not to replace human judgment but to support and elevate it. Most compliance programs are already lean. AI is the route to capacity, not headcount reduction, and the productivity gained should be reinvested in the higher-value work that only experienced analysts can do.


Originally published 15 May 2026, updated 15 May 2026

Disclaimer: This is for general information only. The information presented does not constitute legal advice. ComplyAdvantage accepts no responsibility for any information contained herein and disclaims and excludes any liability in respect of the contents or for action taken based on this information.

Copyright © 2026 IVXS UK Limited (trading as ComplyAdvantage).