
Adverse Media Screening with AI and Why Keywords Aren’t Enough


Adverse media, also known as negative news, is no longer an issue financial institutions (FIs) can ignore. AI-powered adverse media screening can shield an FI from facilitating financial crime, with investigatory power far beyond that of an analyst armed only with a limited list of keywords.

Why Adverse Media Screening With AI?

Machine learning, a rapidly improving subset of artificial intelligence (AI), can scan news articles at a rate, depth, breadth, and accuracy unmatched by consumer search engines, or even by more specialized media search tools.

The major regulatory advisory bodies and regulators (FATF, FCA, FinCEN, et al.) have long recommended that adverse media screening be part of an FI's regulatory compliance toolkit. The challenge for FIs is handling the volume of information that negative news searches can surface. State-of-the-art machine learning, however, can markedly reduce investigation time for compliance officers.

Regulatory scrutiny is much easier to bear with an AI-powered adverse media tool in place. Demonstrating a strong commitment to pre-empting financial crime reduces an FI's liability exposure. It also signals to regulators that the FI takes its obligations seriously and aims to act above and beyond minimum regulatory requirements.

What Came Before?

Before AI became available to adverse media screening tools, compliance officers relied on a manual research process built around keywords. It was inefficient, a poor use of resources, and resulted in unacceptable misses, but there was little alternative.

Machine learning changed all of that. Now it’s possible to analyze news and isolate the actual entities that have been identified as having adverse information without manually checking each article.

Automating the process with AI also makes duplicate profiles less frequent. Human analysts often err on the side of caution and create duplicate entries in case the entities are different, meaning that when customers are screened, multiple profiles must be reviewed so that no information is missed. All of this manual handling of data consumes a great deal of time and resources, slows down the customer experience, and costs the business money through inefficiency.

While a database produced by manual researchers can be relatively high quality, its size, reach, and recency are minimal compared to what machine learning can deliver. What's more, human bias can creep in, resulting in missed or misunderstood issues.

Additionally, because it is labor-intensive manual work with little thanks attached, experienced analysts are rarely the ones doing it. And regardless of experience, no single researcher is likely to speak the dozens of languages needed to properly cover all major risk regions. That is before even considering the limitations of keywords themselves.

Why Keywords Aren’t Enough

Keywords are exactly what they sound like. Words of note that, in this scenario, indicate adverse information about an entity.

Unfortunately, keywords are context-sensitive. If you search for ‘fraud’ but the word used in the article is ‘fraudulently’, then depending on how narrowly the search parameters are set, the match could be missed.

To be effective, you need to search for every variation of ‘fraud’, so ‘defraud’, ‘fraudulent’, ‘defrauded’, and the rest all need to be included.

But journalists often have preferred phrasing which you won’t always be aware of, so you need to include every synonym for ‘fraud’ too. It ends up creating an absurdly long search string. And in Google, you can only search for up to 32 words.

Adverse information also doesn’t require keywords at all, which makes keyword searches hard to defend compared with AI-based adverse media screening. A good adverse media tool should be able to detect that an entity is implicated in a crime any human reader would notice – but a keyword search often cannot. If an article describes the crime without naming it, it is unlikely to be picked up by a Google keyword search, or by any other tool that relies on keywords.
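Both failure modes above can be sketched in a few lines of Python. This is a hypothetical illustration of a naive keyword screen, not any vendor's implementation: a whole-word, case-insensitive match catches the exact terms on the list, but misses morphological variants that aren't enumerated and misses crimes that are described rather than named.

```python
import re

def keyword_hits(text, keywords):
    # Naive keyword screen (illustrative only): whole-word,
    # case-insensitive matching against a fixed keyword list.
    pattern = r"\b(" + "|".join(re.escape(k) for k in keywords) + r")\b"
    return re.findall(pattern, text, re.IGNORECASE)

keywords = ["fraud", "defraud", "fraudulent", "defrauded"]

# Exact term on the list: caught.
print(keyword_hits("He was convicted of fraud.", keywords))            # ['fraud']
# Variant not enumerated in the list: missed.
print(keyword_hits("She acted fraudulently.", keywords))               # []
# Crime described but never named: missed entirely.
print(keyword_hits("He took clients' money and vanished.", keywords))  # []
```

The last case is the one no amount of list expansion can fix: there is no keyword to find, so only a model that understands the meaning of the sentence can flag it.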

Keywords had their time as an adverse media screening solution. But they could never have been more than a stop-gap measure. Adverse media screening using true machine learning is the only way forward to fully, and efficiently, know the risk associated with your customers.


Originally published 07 May 2020, updated 17 November 2021

Disclaimer: This is for general information only. The information presented does not constitute legal advice. ComplyAdvantage accepts no responsibility for any information contained herein and disclaims and excludes any liability in respect of the contents or for action taken based on this information.

Copyright © 2024 IVXS UK Limited (trading as ComplyAdvantage).