AI in AML compliance: Navigating US regulations

Artificial intelligence (AI) and machine learning (ML) systems can have a transformative effect on anti-money laundering and countering the financing of terrorism (AML/CFT) programs. Analyzing customer risks at onboarding, monitoring transactions for indicators of suspicious activity, and maintaining up-to-date customer risk profiles are cornerstones of effective AML compliance – and each can be significantly enhanced with the use of AI. 

Financial institutions (FIs) across the US are increasingly capitalizing on the efficiency and accuracy gains offered by AI and ML technologies. However, as they do so, these systems are coming under growing regulatory scrutiny. This article explains the current state of US AI regulation, upcoming regulatory developments, and how firms can maintain effective AML compliance as the landscape evolves.

AI regulations in the US and their impact on AML

As of early 2025, there is no unified national approach to AI regulation in the US. Instead, its AI governance framework is a patchwork of individual state laws and federal initiatives. These tend to outline principles for the use of AI and encourage firms to participate in voluntary agreements, but stop short of enforcing comprehensive rules. However, they offer a guide to how regulators view the use of AI and indicate the direction future AI legislation may take. 

Federal AI regulations

At a federal level, several pieces of legislation and executive orders have been introduced that tackle some aspects of AI governance. In general, these have recognized the benefits of AI in financial services, including its ability to enhance AML compliance, while emphasizing the need for fairness and transparency in its use. Key examples are: 

  • The National Artificial Intelligence Initiative Act: Introduced in 2020, this legislation is not explicitly devoted to regulating the use of AI, but it does include measures around risk management, privacy, and security. 
  • A Blueprint for an AI Bill of Rights: Issued by the White House in 2022, this non-binding framework offered guidance on using AI ethically, covering subjects like testing, algorithmic discrimination protections, data privacy, explainability, and opt-out measures. 
  • A Roadmap for Artificial Intelligence Policy: Written by a bipartisan Senate AI working group, this noted the importance of protecting workforce rights, privacy, and transparency while encouraging AI innovation. 
  • National AI R&D Strategic Plan: This urged legislators to capitalize on the opportunity to adopt emerging technologies, establish a comprehensive data privacy framework, and mitigate long-term risks. 
  • Bipartisan Task Force Report on AI: Published in 2024, this was a comprehensive report and series of recommendations on the use of AI across multiple sectors, including financial services. Recommendations included safeguarding data quality and security, improving regulatory expertise with AI, ensuring AI adoption conforms to existing consumer protections, and making sure regulators did not impede small firms from using AI tools. 
  • Executive Order on Advancing US Leadership in AI Infrastructure: In one of his final acts as President, Joe Biden signed an executive order requiring the development of AI infrastructure to adhere to five principles: US national security and AI leadership, economic competitiveness, clean energy, community support and cost-effectiveness, and labor standards and safeguards. 
  • Executive Order on Removing Barriers to American Leadership in AI: Introduced in the early days of the second Donald Trump Presidency, this promotes the development of AI systems “free from ideological bias” and revokes certain previous initiatives, including one Biden-era executive order intended to address AI risks. 

Some federal agencies have also guided firms on how to use AI. In a 2021 speech, a Federal Reserve governor stressed the importance of explainability when using AI, warning against a “black box” approach in which firms rely on decisions made by an AI model without understanding how they were reached. 

Likewise, in 2024, the acting head of the Office of the Comptroller of the Currency (OCC) stated that AI tools were essential to combat new fraud typologies centered on the use of deepfakes. He also called for FIs to maintain strong AI governance and oversight frameworks, including regular testing, to prevent outcome bias. 

Focusing more explicitly on financial crime, the US Treasury Department’s 2024 National Strategy for Combating Terrorist and Other Illicit Financing highlighted the transformative potential of AI-based technologies for FIs’ AML compliance, particularly AI’s ability to analyze vast amounts of data and uncover patterns related to illicit financing. 

State AI regulations

In the absence of overarching federal AI regulations, many individual states have passed or started to consider their own legislation. According to the National Conference of State Legislatures (NCSL), as of September 2024, 48 US states and jurisdictions had introduced or begun work on AI-related bills. Some of the most consequential laws passed include: 

  • The Utah Artificial Intelligence Policy Act: This requires all firms to disclose whether they use generative AI (GenAI). It also makes them liable for any violations of consumer protection law committed through the use of GenAI. 
  • The Colorado AI Act: This covers issues related to algorithmic discrimination across financial services, insurance, health, welfare, and employment. The Act is due to come into effect in February 2026. 
  • The California Generative AI: Training Data Transparency Act mandates firms that use GenAI to publish explanations of the data used to train their models. Governor Gavin Newsom vetoed a broader bill focused on the safety testing of AI models and legal liability for AI developers even after it was passed by state legislators. 

Upcoming AI regulation updates in the US: 2025 and beyond

The US is likely to continue to take a lighter-touch approach to AI regulation than other jurisdictions. At a February 2025 summit in Paris, the US (along with the UK) notably did not sign an international agreement on an “open, inclusive, and ethical” approach to AI because of the government’s concerns it could stifle American competitiveness. As signaled by the appointment of the country’s first “AI and crypto czar,” firms can expect a business-friendly approach to AI oversight in the short and medium term. 

Several pieces of in-progress AI legislation have been introduced in either the US Senate or the House of Representatives. While most are unlikely to pass, two bills promoting AI research and development – the AI Advancement and Reliability Act and the CREATE AI Act – have gained bipartisan support, indicating that AI progress will remain high on the agenda for the US administration. 

In June 2024, the US Treasury issued a Request for Information (RFI) to understand how FIs use AI, especially for AML compliance purposes. While not a definitive commitment to any regulatory agenda, this indicates a willingness to shape future regulations around firms’ needs and challenges. 

The State of Financial Crime 2025

Get ahead on current compliance trends and upcoming regulatory priorities with our fifth annual state-of-the-industry report.

Download your copy

Tips for effective AML compliance in an evolving AI landscape

Although your use of AI is not yet regulated in the same way as other elements of your compliance setup, future regulatory developments could impact your business. Existing government measures and regulatory trends point toward a clear set of best practices to guide your adoption of AI, future-proofing your business and avoiding costly changes to your tech infrastructure. To do this, you can: 

  • Adopt explainable models: In our State of Financial Crime 2025 survey, 91 percent of firms said they were comfortable compromising explainability for greater automation. However, regulators will expect your firm to demonstrate why and how it made decisions on compliance cases, which makes the use of explainable AI (XAI) a priority. Aside from helping to avoid regulatory action, XAI can also build customer trust, drive continuous improvement in compliance processes, and enhance operational efficiency. 
  • Assess where AI can add the greatest value: Following a risk-based approach should be an essential part of all your compliance procedures, including your use of AI. You should use AI and ML to automate repetitive, low-risk work while retaining human expertise for more complex and higher-risk decision-making, which requires compliance expertise and contextual analysis. In practice, these tasks will often be linked, meaning you should aim to combine targeted AI adoption with effective hiring and staff training. 
  • Prioritize integrated AI adoption: According to our survey, between 40 and 50 percent of FIs use AI in an ad-hoc, rather than fully integrated, capacity for various screening and monitoring processes. However, with siloed data and platforms ranked as the number one limitation on firms’ compliance capabilities, it’s clear that FIs should prioritize a carefully considered, integrated approach to AI. This will improve the efficiency and consistency of compliance outcomes and, in many cases, save time and money. 
  • Carry out comprehensive testing: Given regulatory expectations around frequent testing of your compliance program, you should follow similar best practices when it comes to AI by establishing a schedule of regular audits, ideally by a third party or independent team within your organization. 
  • Adopt agentic AI: As autonomous systems capable of learning from previous experiences and making decisions within a defined scope of work, agentic AI tools can enhance operational efficiency by taking on some first-line compliance tasks. For example, you can use agentic AI to analyze and prioritize alerts generated by your screening solutions, ensuring high-risk cases are escalated to human experts while reducing the burden on analysts. 
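To make the explainability and alert-triage practices above concrete, here is a minimal sketch of an explainable risk scorer that routes high-risk alerts to human review. The feature names, weights, and escalation threshold are all hypothetical illustrations, not values from any real compliance system:

```python
# Illustrative sketch only: feature names, weights, and the escalation
# threshold below are hypothetical, not taken from any real AML system.

RISK_WEIGHTS = {
    "pep_match": 0.40,           # match against a politically exposed person list
    "sanctions_proximity": 0.35, # links to sanctioned entities
    "adverse_media": 0.15,       # negative news coverage
    "high_risk_geography": 0.10, # exposure to high-risk jurisdictions
}

ESCALATION_THRESHOLD = 0.5  # scores at or above this go to a human analyst


def score_alert(features: dict) -> tuple:
    """Return a risk score in [0, 1] plus per-feature contributions,
    so each decision can be explained rather than treated as a black box."""
    contributions = {
        name: RISK_WEIGHTS[name] * min(max(features.get(name, 0.0), 0.0), 1.0)
        for name in RISK_WEIGHTS
    }
    return sum(contributions.values()), contributions


def triage(features: dict) -> dict:
    """Score an alert and decide whether it needs human review."""
    score, contributions = score_alert(features)
    return {
        "score": round(score, 3),
        "route": "human_review" if score >= ESCALATION_THRESHOLD else "auto_clear",
        # Sort contributions descending so the top risk drivers are visible first
        "explanation": sorted(contributions.items(), key=lambda kv: -kv[1]),
    }
```

For example, `triage({"pep_match": 1.0, "adverse_media": 0.6, "high_risk_geography": 0.2})` produces a score above the threshold and routes the alert to human review, with `pep_match` listed as the top driver. The per-feature breakdown is the explainability piece: an analyst or regulator can see exactly which factors pushed the score over the line.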

Boost AML compliance with AI-based solutions

ComplyAdvantage provides FIs of all sizes with AI-powered AML screening and monitoring tools designed to protect them from exposure to financial crime, satisfy regulatory requirements, and support business growth. As part of this, our solutions have been developed with explainability and model risk management in mind. 

“ComplyAdvantage believes that responsibly developing and managing AI is not only the right thing to do but also leads to better products that engage AI. Responsible AI is best when viewed as part of a best practice and thereby improves outcomes for our clients and their customers. In this way, it is aligned with business needs and not an external force acting on existing processes and competing with priorities.”

Chris Elliot, Director of Data Governance, ComplyAdvantage

ComplyAdvantage uses AI to: 

  • Supply accurate risk data in real time: Our market-leading global risk intelligence empowers firms with data sourced straight from regulators, refreshed automatically, and verified by experts. Compliance teams use this information for effective sanctions, adverse media, and politically exposed person (PEP) screening. 
  • Monitor transactions for enhanced risk detection: Our transaction monitoring solution can detect hidden patterns of suspicious financial activity, allowing you to investigate promptly and understand new crime typologies. Where existing rulesets cannot detect suspicious behavior, our AI capabilities fill the gap. 
  • Generate insights to improve compliance performance: With alert prioritization, comprehensive data dashboards, and real-time performance insights, you can better understand and optimize your team’s compliance workload and performance. 
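The idea of statistical detection complementing static rules can be illustrated with a toy sketch. This is emphatically not ComplyAdvantage's implementation; the threshold and z-score cutoff are made-up values chosen only to show how a customer-history check can flag activity a fixed rule would miss:

```python
# Illustrative sketch only: NOT a real monitoring implementation.
# A static amount rule is combined with a simple z-score test against
# the customer's own transaction history.
from statistics import mean, stdev

RULE_THRESHOLD = 10_000.0  # hypothetical: flag any single transaction above this


def flag_transactions(amounts, z_cutoff=2.5):
    """Return indices of transactions flagged by either the static rule
    or a z-score test relative to the customer's history."""
    # Static rule: catches large transactions regardless of history
    flagged = {i for i, amt in enumerate(amounts) if amt > RULE_THRESHOLD}

    # Statistical fallback: catches amounts unusual *for this customer*,
    # even when they fall below the fixed threshold
    if len(amounts) >= 3:
        mu, sigma = mean(amounts), stdev(amounts)
        if sigma > 0:
            flagged |= {
                i for i, amt in enumerate(amounts)
                if (amt - mu) / sigma > z_cutoff
            }
    return sorted(flagged)
```

For instance, a $5,000 transfer sits well under the $10,000 rule, but if the customer's nine previous transactions were all around $100, the z-score test flags it anyway. Production systems use far richer behavioral models, but the division of labor is the same: rules encode known typologies, while statistical or ML components surface deviations the rules were never written for.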

Optimize your compliance tech stack with AI

ComplyAdvantage’s automated screening and monitoring tools help firms protect their customers, build regulatory trust, and make compliance a business advantage.

Get a demo

Originally published 26 February 2025, updated 26 February 2025

Disclaimer: This is for general information only. The information presented does not constitute legal advice. ComplyAdvantage accepts no responsibility for any information contained herein and disclaims and excludes any liability in respect of the contents or for action taken based on this information.

Copyright © 2025 IVXS UK Limited (trading as ComplyAdvantage).