
In our State of Financial Crime 2026 survey, 100% of compliance leaders said they want to move their programs further up the AI maturity curve, from manual processes toward agentic excellence. Only 33% have actually made the move. The biggest sticking point our research identified is the leap from contained pilots to genuinely AI-driven operations: the point where compliance functions start running differently rather than simply running with new tools.

That gap has a cost. AI-enabled financial crime climbed by close to 900% in 2025, according to industry research, with threat actors using generative AI for deepfake fraud, synthetic identity creation, and faster iteration on attack vectors. Compliance teams that delay are not holding still. They are falling behind.

In the third panel of our North American Future of Compliance summit, two ComplyAdvantage customers who have made the leap shared what worked. Frederick Reynolds, Deputy General Counsel for Risk and Compliance at Marqeta and a former federal prosecutor and FinCEN Deputy Director, joined the conversation as a self-described AI evangelist. Cassandra Schwedfeger, BSA/OFAC Officer and Director of Regulatory Enablement at Bilt, joined as the former skeptic. Both are now operating at the levels of the maturity curve where most peers want to be.

This article captures their playbook: why waiting is the costliest option, how to build organizational buy-in before deploying technology, how to evaluate vendors past the shiny-object phase, and what tangible outcomes look like once AI is embedded.

The case for moving now

The defensive case for AI in compliance often gets framed last but lands hardest. Threat actors are already deploying generative AI at scale: deepfaked customer onboarding, synthetic identities pieced together from leaked data, social engineering scripts iterated in seconds. Compliance functions running on manual review cycles or static rule sets are not competing on a level playing field.

The capability case has also shifted. Where AI was a novelty 18 months ago, mostly useful for drafting emails and generating images, the depth of analysis available now puts it closer to a mid-level employee than a junior analyst. It can perform sophisticated investigative analysis, produce policy documents, and structure decisions in ways that previously required senior expertise.

The combination of those two shifts is what closes the wait-and-see option. Institutions that move now build the operational muscle, the documentation history, and the regulator dialogue that will be expected within two exam cycles. Institutions that wait will be doing it under regulatory pressure on someone else’s timeline.

Building the culture before deploying the technology

Both panelists were clear that the early failures of AI adoption are rarely technology failures. They are change-management failures. Compliance teams need to know that AI is an addition to their roles, not a threat to them, and that experimentation will be supported rather than punished.

At Marqeta, that started with the senior leader publicly trying and failing. Frederick set himself a target of finding ten manual tasks per day to attempt with AI, then sharing the wins and the misfires openly with the team. The cultural signal was unmistakable: this is not a side project, it is how we work now.

At Bilt, Cassandra framed the shift as a collective experiment: the team would troubleshoot together, learn together, sometimes mess up together, and share the wins together. The instinct among compliance professionals trained on hands-on diligence is often to be sheepish about using AI, as if it were a shortcut rather than a tool. Both teams had to actively flip that default.

Frederick took the flip one step further at Marqeta:

“I actually want people to apologize if you don’t use AI. If you spend hours and hours doing something you could have done in thirty minutes with AI assistance, that’s when I want you to apologize.”

Frederick Reynolds, Deputy General Counsel for Risk and Compliance, Marqeta

That inversion of the default expectation is what allows AI to move from a side experiment to the core operating model of the team.

Avoiding the shiny-object trap

The other dominant failure mode is buying AI capability that turns out not to exist. Almost every compliance vendor now markets itself as AI-driven. Cassandra’s team has been through enough vendor evaluations to recognize the pattern.

“It is so incredibly easy to fall victim to shiny object syndrome. You meet with a vendor, they have all these wonderful promises and colorful slide decks, and it feels like a total dream come true. But what I found in some cases is that the product isn’t even actually AI-driven.”

Cassandra Schwedfeger, BSA/OFAC Officer and Director of Regulatory Enablement, Bilt

The countermeasure is structured due diligence. Test with your own sample data, not the vendor’s prepackaged demo data. Get the actual operators in the room, not just the buyer. Ask what is underneath the AI label: static rules, basic automation, or genuinely model-driven inference. Trust, but verify.

Frederick added a related filter: the difference between AI-native products and AI bolted onto an existing system. AI-native solutions are designed from the ground up around model capabilities. AI bolt-ons are legacy products with a generative layer on top, and they typically inherit the constraints of the underlying architecture. For first-mover compliance teams, that distinction matters more than the marketing tier.

Watch The Future of Compliance North America on-demand

Access every session from our North American compliance summit, covering this year’s theme: Unlocking opportunity through intelligent design.

Watch now

From pilot to operating model

Once the cultural foundations are in place and the right tools are selected, the operating outcomes can move quickly. Two examples from the panel show what that looks like in practice.

At Bilt, the team applied AI to sanctions screening, a workflow every compliance program runs but few celebrate. By tuning the screening process to reduce noise, then introducing an AI agent to perform the first review with a human in the loop for verification, the team turned a high-volume, low-reward workload into a fast, accurate process. The analysts’ experience of the work changed first. The metrics caught up shortly after.
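The panel did not describe Bilt's implementation in technical detail, but the pattern they outline, an agent performing the first review with every proposal queued for human verification, can be sketched in a few lines. Everything here (the `Alert` shape, the score threshold, the disposition labels) is a hypothetical illustration, not Bilt's actual system:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    match_score: float  # name-similarity score from the screening engine

def agent_first_review(alert: Alert, threshold: float = 0.85) -> dict:
    """Hypothetical first-pass disposition: the agent proposes a decision,
    and every proposal is queued for human verification (human in the loop)."""
    proposed = "escalate" if alert.match_score >= threshold else "discount"
    return {
        "alert_id": alert.alert_id,
        "proposed_disposition": proposed,
        "needs_human_verification": True,  # no alert is closed without review
    }

alerts = [Alert("A-1", 0.92), Alert("A-2", 0.40)]
reviewed = [agent_first_review(a) for a in alerts]
```

The design point is the last field: the agent accelerates the first pass, but the human verification step is structural, not optional, which is what keeps the workflow defensible in an exam.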

At Marqeta, AI was deployed in monitoring and testing. The team had been using statistical sampling because full-population testing was operationally impossible at scale. With AI in the workflow, they moved to 100% population testing, producing stronger quality assurance results across the board. The financial outcome was striking: the team had been allocated two additional headcount for the year to manage growth, and they were able to give them back. Existing staff absorbed the additional volume and increased throughput in parallel.
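The difference between the two testing approaches is easy to show in miniature. The sketch below is illustrative only (toy records, an invented `sar_filed` field): sampling estimates a failure rate from a subset, while full-population testing returns the exact rate because every record is checked:

```python
import random

def sample_test(population, test_fn, sample_size=50, seed=0):
    """Statistical sampling: test a random subset and extrapolate the rate."""
    rng = random.Random(seed)
    sample = rng.sample(population, min(sample_size, len(population)))
    failures = sum(1 for record in sample if not test_fn(record))
    return failures / len(sample)  # estimated failure rate

def full_population_test(population, test_fn):
    """Full-population testing: every record is checked, so the
    failure rate is exact rather than estimated."""
    failures = sum(1 for record in population if not test_fn(record))
    return failures / len(population)

# Toy control: a record passes if the (hypothetical) required flag is set.
population = [{"id": i, "sar_filed": i % 100 != 0} for i in range(10_000)]
check = lambda record: record["sar_filed"]

estimated_rate = sample_test(population, check)
exact_rate = full_population_test(population, check)  # exactly 0.01 here
```

Sampling was never the preferred method, only the feasible one; once AI absorbs the per-record review cost, the exact measurement becomes operationally affordable.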

The same logic extended to Frederick’s annual policy review, which had previously been a multi-week project of rereading statutes, regulations, guidance, and policy documents. With AI doing the heavy parsing, the review compressed roughly tenfold, freeing senior compliance capacity for higher-value work.

The unifying pattern: AI does not just speed up existing processes. It changes what the team has the capacity to do. Compliance functions move from clearing alerts to monitoring patterns, from reviewing transactions to advising product teams, from reacting to regulators to engaging them proactively.

What good looks like

The panel agreed that the next 12 to 24 months will reward compliance leaders who pair technical investment with three less obvious disciplines.

A long-term, sustainable-growth mindset. AI is a layer on a compliance program, not a replacement for it. Move quickly, but only deploy capabilities that have been tested, governed, and signed off internally. The point is to grow without giving the growth back through a future enforcement action.

A team selected for creativity and curiosity. The compliance professionals who thrive with AI are the ones asking how to do things better, not the ones defending how things have always been done. Hire and develop for that mindset.

A regulator dialogue built on data, not promises. The most useful framing is to avoid what Frederick calls the human perfection trap. Regulators sometimes assume manual review is the correct baseline. The honest comparison is the actual error rate of the manual process against the actual error rate of AI with a human in the loop. Show the data, and the conversation moves from skepticism to encouragement.
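Escaping the human perfection trap is ultimately a measurement exercise: compute the observed error rate of each process against verified outcomes and compare like with like. The sketch below uses invented dispositions purely to show the shape of that comparison:

```python
def error_rate(decisions, ground_truth):
    """Share of dispositions that disagree with the verified outcome."""
    mismatches = sum(1 for d, t in zip(decisions, ground_truth) if d != t)
    return mismatches / len(decisions)

# Hypothetical dispositions for five alerts, checked against verified outcomes.
truth   = ["hit", "clear", "clear", "hit", "clear"]
manual  = ["hit", "clear", "hit",   "hit", "hit"]    # manual-only review
ai_hitl = ["hit", "clear", "clear", "hit", "hit"]    # AI + human in the loop

manual_rate = error_rate(manual, truth)
ai_hitl_rate = error_rate(ai_hitl, truth)
```

With both rates measured the same way, the regulator conversation shifts from "is the AI perfect?" to "which process is measurably more accurate?", which is the comparison Frederick argues actually matters.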

The technology is here, and the regulators are warming to it. The question is whether compliance leaders will move now, on their own terms, or be forced to move later, on someone else’s.

Transform your AML compliance with AI-powered solutions

A cloud-based compliance platform, ComplyAdvantage Mesh combines industry-leading AML risk intelligence with actionable risk signals to screen customers and monitor their behavior in near real time.

Get a demo

Originally published 15 May 2026, updated 15 May 2026

Disclaimer: This is for general information only. The information presented does not constitute legal advice. ComplyAdvantage accepts no responsibility for any information contained herein and disclaims and excludes any liability in respect of the contents or for action taken based on this information.

Copyright © 2026 IVXS UK Limited (trading as ComplyAdvantage).