Earlier this month, I attended the ACAMS Assembly, the annual conference of financial crime professionals and technology providers. Alongside the conversations about the shifting regulatory landscape, emerging fraud typologies, new sanctions regimes, and the growing impact of cryptoassets, a dominant theme was the role of artificial intelligence in countering financial crime. Criminals’ use of AI is supercharging fraud and money laundering, and the growth in financial crime is outpacing our ability to defend against it. Today, AI remains the only cost-effective tool capable of closing the gap against these evolving threats; however, the deployment of advanced technologies to combat financial crime lags far behind the speed at which bad actors exploit them.
As a result, much of the conversation centered on the newest development in artificial intelligence—Agentic AI. Risk and IT executives are asking whether agentic architectures are meaningfully different from machine learning and GenAI, what real impact they might have over current solutions, and whether they are mature enough to serve as a practical deterrent. At Celent, we view Agentic AI as the most promising path for banks to harness the power of GenAI to cost-effectively handle the spike in fraud and the deluge of new regulations, as well as reap a return on their many ongoing GenAI experiments.
The Promise and Reality of Agentic AI
Agentic AI is demonstrating early promise in developing scalable, autonomous business solutions that can effectively handle the complexity and nuance of modern financial workflows. These systems are designed not just to process high volumes but also to make contextual decisions—adapting to real-time data and evolving risk factors across diverse operations such as risk assessment, fraud detection, and regulatory compliance. Agentic architectures offer the potential to unify investigative operations, embed compliance throughout workflows, and support both efficiency and regulatory standards.
While GenAI copilots have found use in investigative case management (drafting narratives, summarizing documents, and handling rote investigative tasks), agentic systems now layer specialized agents—each trained for distinct activities like transaction analysis, entity research, quality review, and workflow audit. These agents not only collaborate and share insights but also adapt their approach as new data and decisions influence investigative outcomes.
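To make that architecture more concrete, the sketch below shows, in plain Python, how specialized agents might share a single case context and contribute findings in sequence. It is a minimal illustration only: the agent roles come from the description above, while the class and field names are hypothetical and stand in for the LLM calls, data integrations, and governance controls a production system would require.

```python
# Illustrative only: a toy multi-agent investigation pipeline. The agent roles mirror
# those named above; all class and field names are hypothetical, and a real deployment
# would wrap model calls, case-management APIs, and governance controls.
from dataclasses import dataclass, field

@dataclass
class CaseContext:
    """Shared state that agents read from and append their findings to."""
    case_id: str
    transactions: list
    findings: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

class TransactionAnalysisAgent:
    def run(self, ctx: CaseContext) -> None:
        # Flag transactions above a toy threshold; a real agent would call a model.
        flagged = [t for t in ctx.transactions if t["amount"] > 10_000]
        ctx.findings["transaction_analysis"] = flagged
        ctx.audit_log.append(f"TransactionAnalysisAgent flagged {len(flagged)} transactions")

class EntityResearchAgent:
    def run(self, ctx: CaseContext) -> None:
        # Collect counterparties from flagged transactions for further research.
        parties = {t["counterparty"] for t in ctx.findings.get("transaction_analysis", [])}
        ctx.findings["entity_research"] = sorted(parties)
        ctx.audit_log.append(f"EntityResearchAgent researched {len(parties)} entities")

class QualityReviewAgent:
    def run(self, ctx: CaseContext) -> None:
        # Check that upstream agents produced output before the case is adjudicated.
        complete = all(k in ctx.findings for k in ("transaction_analysis", "entity_research"))
        ctx.findings["quality_review"] = "pass" if complete else "fail"
        ctx.audit_log.append(f"QualityReviewAgent result: {ctx.findings['quality_review']}")

def run_investigation(ctx: CaseContext) -> CaseContext:
    # Agents run in sequence, each building on the shared context.
    for agent in (TransactionAnalysisAgent(), EntityResearchAgent(), QualityReviewAgent()):
        agent.run(ctx)
    return ctx

if __name__ == "__main__":
    case = CaseContext(
        case_id="CASE-001",
        transactions=[{"amount": 12_500, "counterparty": "Acme Ltd"},
                      {"amount": 800, "counterparty": "Jane Doe"}],
    )
    print(run_investigation(case).audit_log)
```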
It is still early days for both GenAI and Agentic AI, and banks are rightly cautious about deploying them in risk and compliance operations. Agentic AI carries the concerns of both GenAI (hallucinations, explainability, governance) and traditional AI (model performance and drift, embedded bias). However, by integrating coding skills, database access, and retrieval-augmented generation (RAG), agentic systems overcome many of the transparency and reasoning challenges associated with earlier AI solutions in compliance.
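As a simple illustration of how retrieval grounding supports that transparency, the sketch below has an agent answer a question only from retrieved policy snippets and return the document IDs it relied on, so the basis for its output can be audited. The keyword-overlap retriever and the generate_answer placeholder are assumptions made for the example, standing in for a real vector store and model call.

```python
# Illustrative only: a toy retrieval-augmented step showing how an agent can ground
# its output in cited source documents. The keyword-overlap "retriever" stands in for
# a real vector store, and generate_answer() is a placeholder for an LLM call.
POLICY_SNIPPETS = {
    "sanctions-screening-v3": "Alerts matching OFAC list entries require level-2 review.",
    "ctr-threshold": "Cash transactions over $10,000 must be reported within 15 days.",
    "edd-policy": "Enhanced due diligence applies to politically exposed persons.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k snippets sharing the most words with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        POLICY_SNIPPETS.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate_answer(query: str) -> dict:
    """Answer a question and return the document IDs used, so the reasoning is auditable."""
    sources = retrieve(query)
    answer = " ".join(text for _, text in sources)  # placeholder for model-generated text
    return {"query": query, "answer": answer, "cited_sources": [doc_id for doc_id, _ in sources]}

if __name__ == "__main__":
    print(generate_answer("When must a cash transaction be reported?"))
```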
Agentic AI in Action
Celent recently profiled SymphonyAI’s Sensa Agent Flow, one of the first agentic AI systems in production within the financial crime investigations space. Sensa Agent Flow exemplifies the agentic architecture, enabling banks to automate investigative tasks using multiple specialized agents, each designed to reflect the expertise and workflow of a domain specialist. The system’s low-code environment allows compliance teams to configure workflows, insert human-in-the-loop breakpoints (a generic version of this pattern is sketched after the list below), and adapt agent behavior to the institution’s own policies and procedures. Benefits highlighted by pilot deployments include:
- Accelerated investigation timelines (e.g., full case adjudication in hours versus days or weeks).
- Marked reduction in false positives and improved true positive retention in sanctions screening.
- Transparent, fully auditable records of agent reasoning and workflow steps, supporting regulatory review and ongoing improvement.
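The human-in-the-loop breakpoint mentioned above is worth pausing on. The sketch below is a generic, assumed illustration of that pattern, not SymphonyAI’s actual API: a workflow step checks whether a case needs analyst sign-off, records the decision in an audit trail, and only then lets downstream agents proceed.

```python
# Illustrative only: a generic human-in-the-loop breakpoint, not SymphonyAI's API.
# A workflow step pauses for analyst sign-off on higher-risk cases and records the
# decision in an audit trail before any downstream agent acts on the case.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Breakpoint:
    name: str
    requires_approval: Callable[[dict], bool]  # policy deciding which cases need a human

def run_step(case: dict, gate: Breakpoint, ask_analyst: Callable[[dict], bool], audit: list) -> bool:
    """Return True if the workflow may continue past this breakpoint."""
    if not gate.requires_approval(case):
        audit.append(f"{gate.name}: auto-approved case {case['id']}")
        return True
    approved = ask_analyst(case)  # in practice, a review queue or case-management task
    audit.append(f"{gate.name}: analyst {'approved' if approved else 'rejected'} case {case['id']}")
    return approved

if __name__ == "__main__":
    audit_trail: list[str] = []
    gate = Breakpoint(name="sanctions-escalation", requires_approval=lambda c: c["risk_score"] >= 80)
    case = {"id": "CASE-001", "risk_score": 92}
    proceed = run_step(case, gate, ask_analyst=lambda c: True, audit=audit_trail)
    print(proceed, audit_trail)
```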
As adoption grows, banks are taking a cautious, governance-driven approach, implementing agentic systems with robust RAG architectures, audit trails, and continuous human oversight where appropriate. Regulators are increasingly engaging with banks and vendors to define acceptable guardrails and validate model integrity.
Upcoming Research
Agentic AI is on our research agenda for Q4. The agenda includes case studies on how banks are applying the technology, views on regulatory readiness, and an evaluation of new agent-based capabilities from vendors. It will also be a major theme in our annual Risk Previsory, where we look at the most topical issues in risk and compliance technology for 2026.
On the heels of our Solutionscape on fraud prevention software providers, we will be publishing our first report on digital assets in Q4, specifically on Blockchain Analysis Tools that banks are using to track money moved through cryptocurrencies. We will also publish our inaugural Technology Capabilities Matrix (TCM) for Enterprise Risk Management.
Finally, we are very excited about this quarter’s series of KYC Vendor Assessments. Rather than treating this complex set of processes and software providers as a single entity, Neil Katkov will break down the KYC space into five areas, each with its own set of software providers and assessments.
