An Agentic LLM Framework for Adverse Media Screening in AML Compliance
#AdverseMediaScreening #AMLCompliance #LargeLanguageModels #RetrievalAugmentedGeneration #FinancialInstitutions #RiskAssessment #PoliticallyExposedPersons
📌 Key Takeaways
Researchers developed an agentic LLM framework to automate adverse media screening in AML compliance
The system uses LLMs with Retrieval-Augmented Generation to reduce false positives
The framework implements a multi-step approach including web search, document retrieval, and risk scoring
Testing showed the system can effectively distinguish between high-risk and low-risk individuals
📖 Full Retelling
Researchers Pavel Chernakov, Sasan Jafarnejad, and Raphaël Frank introduced an agentic LLM framework for adverse media screening in AML compliance, posted to arXiv on December 29, 2025. The work targets a well-known weakness of traditional keyword-based screening in financial institutions: it either generates high false-positive rates or demands extensive manual review.

The system leverages Large Language Models (LLMs) with Retrieval-Augmented Generation (RAG) to automate the screening process. It follows a multi-step approach in which an LLM agent searches the web, retrieves and processes relevant documents, and computes an Adverse Media Index score for each subject. This goes well beyond simple keyword matching and addresses a persistent pain point in financial compliance operations.

The researchers evaluated the framework using multiple LLM backends on a diverse dataset comprising Politically Exposed Persons (PEPs), individuals from regulatory watchlists, and sanctioned persons from OpenSanctions, alongside clean names drawn from academic sources. The results showed that the system can effectively distinguish high-risk from low-risk individuals, suggesting it could improve compliance processes while reducing the burden on human reviewers.
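The multi-step pipeline described above — search the web for a subject, retrieve relevant documents, score them, and aggregate into an Adverse Media Index — can be sketched as follows. This is a minimal illustration only: the function names, the keyword-based severity heuristic standing in for the LLM judgment, and the averaging rule are all assumptions for demonstration, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Article:
    url: str
    text: str

# Hypothetical adverse terms; in the paper an LLM agent, not a keyword list,
# judges relevance and severity.
ADVERSE_TERMS = ("fraud", "laundering", "sanctions", "bribery", "embezzlement")

def search_web(subject: str, corpus: list[Article]) -> list[Article]:
    """Stand-in for the agent's web-search step: keep articles naming the subject."""
    return [a for a in corpus if subject.lower() in a.text.lower()]

def score_article(article: Article) -> float:
    """Stand-in for the LLM's per-document judgment (crude severity in [0, 1])."""
    hits = sum(term in article.text.lower() for term in ADVERSE_TERMS)
    return min(1.0, hits / 2)

def adverse_media_index(subject: str, corpus: list[Article]) -> float:
    """Aggregate per-article scores into a single index for the subject."""
    docs = search_web(subject, corpus)
    if not docs:
        return 0.0
    return sum(score_article(d) for d in docs) / len(docs)

corpus = [
    Article("news/1", "Jane Roe charged with money laundering and fraud."),
    Article("news/2", "Jane Roe opens a community bakery."),
    Article("news/3", "John Doe wins a local chess tournament."),
]

print(adverse_media_index("Jane Roe", corpus))  # adverse coverage -> 0.5
print(adverse_media_index("John Doe", corpus))  # clean coverage -> 0.0
```

In the actual framework, an LLM with RAG replaces both the retrieval filter and the keyword heuristic, which is what lets it suppress the false positives that plain keyword matching produces.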
📄 Source
arXiv:2602.23373 [cs.AI] — "An Agentic LLM Framework for Adverse Media Screening in AML Compliance" by Pavel Chernakov, Sasan Jafarnejad, and Raphaël Frank. Submitted 29 Dec 2025. Subjects: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Information Retrieval (cs.IR). DOI: https://doi.org/10.48550/arXiv.2602.23373

Abstract: Adverse media screening is a critical component of anti-money laundering and know-your-customer compliance processes in financial institutions. Traditional approaches rely on keyword-based searches that generate high false-positive rates or require extensive manual review. We present an agentic system that leverages Large Language Models with Retrieval-Augmented Generation to automate adverse media screening. Our system implements a multi-step approach where an LLM agent searches the web, retrieves and processes relevant documents, and computes an Adverse Media Index score for each subject. We evaluate our approach using multiple LLM backends on a dataset comprising Politically Exposed Persons, persons from regulatory watchlists, and sanctioned persons from OpenSanctions, alongside clean names from academic sources, demonstrating the system's ability to distinguish between high-risk and low-risk individuals.
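To show how an Adverse Media Index could feed a compliance workflow that "distinguishes between high-risk and low-risk individuals," here is a hypothetical triage step. The threshold value and decision labels are illustrative assumptions, not taken from the paper.

```python
def triage(index: float, threshold: float = 0.5) -> str:
    """Map an Adverse Media Index in [0, 1] to a review decision.

    The 0.5 cutoff is an illustrative assumption; a real deployment would
    calibrate it against labeled screening outcomes.
    """
    if index >= threshold:
        return "escalate"  # high-risk: route to a human analyst
    return "clear"         # low-risk: auto-clear, reducing reviewer burden

print(triage(0.8))  # escalate
print(triage(0.1))  # clear
```

Auto-clearing low-scoring subjects is where such a system would cut the manual-review burden that keyword-based screening imposes.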