This article was first published by Law360 on 12 September 2025.
Monzo Bank's onboarding, risk assessment and transaction monitoring were unable to keep up with its rapid growth from 600,000 users in 2018 to over 5.8 million by 2022.
The result was systemic failings in its anti-financial crime systems, which led the U.K.'s Financial Conduct Authority to impose a £21.1 million fine ($28.6 million) on July 8, the largest ever issued to a challenger bank for anti-money laundering failures.[1]
The Monzo fine has intensified the spotlight on how firms manage financial crime risks. This article explores how artificial intelligence can assist with compliance, where current systems fall short, where AI's pitfalls may lie, and what lawyers and businesses can do to adapt and use AI to their advantage.
Background
Between 2020 and 2022, Monzo onboarded over 34,000 high-risk customers without proper checks. This allowed obviously false addresses, such as "10 Downing Street" and "Buckingham Palace," to go unchallenged.
The FCA highlighted that the bank failed to adapt controls as its customer base expanded, leaving it vulnerable to modern fraud methods such as synthetic identity fraud, where bad actors blend real and fake details to create plausible personas.
The Monzo case raises the question of whether banks and other businesses are too reliant on manual anti-money laundering review — a process that may be prone to inconsistency and error. Could automated systems — specifically, those that are AI-driven — have assisted in preventing such mistakes?
AI's Helping Hand
The rapid advancement and widespread integration of AI over the past few years have moved its role in financial crime prevention from theory to reality. Financial institutions have begun to utilize AI across AML and counterterrorism financing functions to manage customer and transactional reviews; reduce errors, including false positives; and improve accuracy.
There are various ways in which AI may assist in preventing situations such as those faced by Monzo and in improving firms' compliance capabilities more generally.
These include the following:
Adverse Media and Sanctions Screening
One of AI's most powerful features is its capacity to rapidly process vast volumes of data in real time, drawing insights from sources such as news outlets, regulatory updates, social media and online content.
A previously labor-intensive process of scanning for negative news about clients, related parties or beneficial owners can be reduced to a task that may take AI seconds. This can also be adapted to specific tasks such as sanctions screening, with AI systems able to cross-reference names and entities against global sanctions lists.[2]
The result is an onboarding system that can be incredibly efficient and dynamic, constantly
updating a client's risk profile when new information emerges.
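To illustrate the screening step in concrete terms, the sketch below shows a minimal fuzzy name match of the kind such systems build on, here using Python's standard-library string comparison. The sanctions list, names and threshold are purely illustrative; production systems screen against official consolidated lists and use far more sophisticated matching.

```python
from difflib import SequenceMatcher

# Hypothetical sanctions list for illustration only; real screening
# runs against official consolidated lists (e.g., OFSI, OFAC, UN, EU).
SANCTIONS_LIST = ["Ivan Petrov", "Acme Trading Ltd", "Maria Gonzalez"]

def screen_name(name: str, threshold: float = 0.85) -> list[str]:
    """Return sanctions-list entries whose similarity to `name` meets
    the threshold, catching misspellings and transliteration variants
    that an exact-match check would miss."""
    hits = []
    for entry in SANCTIONS_LIST:
        score = SequenceMatcher(None, name.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append(entry)
    return hits
```

The same comparison can be rerun automatically whenever the list is updated, which is what makes continuous, rather than point-in-time, screening feasible.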
Transaction Monitoring
Standard rule-based monitoring systems, which have been in place for decades, are generally calibrated to flag transactions over specific risk-based thresholds. Each institution has its own set of rules, keyed off its AML and broader financial crime risk assessments.
These thresholds are, however, vulnerable to bad actors simply breaking transactions into small chunks — known as smurfing — so as to avoid reporting.
While the U.K.'s Money Laundering, Terrorist Financing and Transfer of Funds (Information on the Payer) Regulations 2017 — to which banks are subject — require firms to consider whether transactions are related when carrying out their due diligence checks, the volume and rapidity of transactional activity can pose operational challenges.
AI's ability to analyze billions of transactions allows transaction screening models to learn, detect and even predict suspicious patterns, and to identify activity that does not look normal for a particular client.
For example, this may be detecting an unusual number of small deposits totaling just under a firm's transaction flagging threshold, or identifying an increase in payments from a foreign bank not commonly associated with a particular client.[3]
AI, if calibrated appropriately, can detect patterns that manual checks simply cannot, at a
staggeringly faster rate, helping to mitigate compliance breaches.
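A simplified version of the structuring check described above can be sketched as follows. The threshold, window and counts here are hypothetical, and real deployments would layer learned behavioral models on top of rules like this one.

```python
from collections import defaultdict
from datetime import datetime, timedelta

REPORTING_THRESHOLD = 10_000       # hypothetical per-transaction flag limit
WINDOW = timedelta(days=7)         # hypothetical rolling review window

def flag_structuring(transactions, near_ratio=0.9, min_count=3):
    """Flag accounts making several deposits each just under the
    reporting threshold within a rolling window (possible smurfing).
    `transactions` is a list of (account, timestamp, amount) tuples."""
    by_account = defaultdict(list)
    for account, ts, amount in transactions:
        # Only deposits close to, but below, the threshold are suspicious.
        if near_ratio * REPORTING_THRESHOLD <= amount < REPORTING_THRESHOLD:
            by_account[account].append(ts)
    flagged = set()
    for account, times in by_account.items():
        times.sort()
        for i in range(len(times) - min_count + 1):
            if times[i + min_count - 1] - times[i] <= WINDOW:
                flagged.add(account)
                break
    return flagged
```

Because each individual deposit stays under the threshold, a per-transaction rule would never fire; it is the aggregation across time that surfaces the pattern.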
Network Analysis
Much of financial crime involves interlinked accounts, companies and individuals who have specifically set up schemes to avoid detection. AI systems can be deployed to map so-called hidden relationships by analyzing a broad range of data points, such as accounts sharing the same addresses, transactions flowing through similar companies, or individuals with the same names or telephone numbers.
Such analysis can determine whether seemingly unlinked entities or individuals are in fact connected, or indeed one and the same.
In turn, this enables compliance teams to spot potentially suspicious links and to score or flag networks that may need surveillance.[4]
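The linking step can be sketched with a simple union-find clustering over shared attributes. The customer records and attribute values below are invented for illustration; real graph analytics operate over far richer data points than addresses and phone numbers.

```python
from collections import defaultdict

def cluster_entities(entities):
    """Group customer records that share an attribute such as an
    address or phone number. `entities` maps an entity id to a set of
    attribute values; returns clusters of ids that are directly or
    transitively linked."""
    parent = {e: e for e in entities}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Index entities by attribute value, then link all sharers.
    by_attr = defaultdict(list)
    for eid, attrs in entities.items():
        for value in attrs:
            by_attr[value].append(eid)
    for ids in by_attr.values():
        for other in ids[1:]:
            union(ids[0], other)

    clusters = defaultdict(set)
    for eid in entities:
        clusters[find(eid)].add(eid)
    return [c for c in clusters.values() if len(c) > 1]
```

The transitive step matters: two accounts that share nothing directly may still end up in one cluster through a common intermediary, which is exactly the "hidden relationship" the article describes.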
Risk Scoring
There is always a danger that a firm's risk scoring of a customer remains static and does
not reflect evolving risks unless a specific trigger event occurs.
AI systems allow client profiling on a real-time basis, enabling risk scores to be updated and
adjusted automatically if, for example, a transaction pattern changes dramatically; links to
high-risk jurisdictions or industries emerge; or external intelligence, such as sanctions
updates or adverse media, arises.
AI can therefore permit automatic and continuous updates, improving efficiency by reducing
the need for manual monitoring, freeing humans to focus on higher-risk cases and other
tasks.[5]
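A minimal event-driven risk profile of the kind described above might look as follows. The event weights and score bands are entirely hypothetical; a real model would be calibrated against the firm's own risk assessment, and the history list stands in for the audit trail regulators expect.

```python
# Hypothetical weights for illustration; real scoring models are
# calibrated against a firm's own risk assessment and data.
EVENT_WEIGHTS = {
    "adverse_media": 25,
    "high_risk_jurisdiction": 30,
    "transaction_pattern_change": 20,
    "sanctions_update": 40,
}

class CustomerRiskProfile:
    """A risk score that updates as trigger events arrive, rather
    than waiting for a periodic manual review."""

    def __init__(self, base_score: int = 10):
        self.score = base_score
        self.history = []  # audit trail of every adjustment

    def record_event(self, event: str) -> str:
        self.score = min(100, self.score + EVENT_WEIGHTS.get(event, 0))
        self.history.append((event, self.score))
        return self.band()

    def band(self) -> str:
        if self.score >= 70:
            return "high"
        if self.score >= 40:
            return "medium"
        return "low"
```

A single sanctions update or combination of lesser events can push a client from low to high risk the moment the information arrives, which is the dynamism static annual reviews lack.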
Where AI Could Have Helped
As well as highlighting how weak or outdated compliance controls can cause significant
issues for businesses, recent enforcement cases also highlight how AI could have helped
prevent them.
While Monzo's breaches were not caused by a lack of AI, they nevertheless show areas
where AI could have made a difference, such as automated identity and address verification
when onboarding clients and continuous risk scoring when a client's behavior changed.
Another example is the situation NatWest faced in 2021 when it was fined £264 million
($350 million at the time) after a small jewelry business was able to deposit £365 million in
laundered cash between Nov. 8, 2012, and June 23, 2016, at various NatWest branches
across the country.[6]
The deposits did not correlate to the stated business activities or expected revenues of such
a small company, yet the inconsistency was not escalated at any level. Although the failures
were partly due to relationship managers exercising too much discretion in how customer
activities were handled, the FCA's final notice against the bank also highlighted weaknesses
in some of its automated systems.
AI may have been able to provide an additional layer of protection, such as comparing the
client's deposits against comparable industry clients or by connecting cash deposits at
multiple branches to one single account, revealing the scale of the scheme much earlier.
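The two checks just described, rolling up branch-level deposits to the owning account and comparing the total against what a business of that stated size could plausibly generate, can be sketched together. The figures, peer ratio and account names below are invented for illustration.

```python
from collections import defaultdict

def aggregate_and_escalate(branch_deposits, stated_revenue, ceiling=1.5):
    """Sum cash deposits made at any branch against the owning
    account, then escalate accounts whose total far exceeds stated
    annual revenue. `branch_deposits` is a list of
    (branch, account, amount) tuples; `stated_revenue` maps an
    account to its declared annual revenue."""
    totals = defaultdict(float)
    for _branch, account, amount in branch_deposits:
        totals[account] += amount  # branch-level deposits roll up here
    return {acct for acct, total in totals.items()
            if total > ceiling * stated_revenue[acct]}
```

Viewed branch by branch, no single figure need look alarming; viewed at account level against expected revenue, the mismatch becomes obvious.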
Beyond Banking
AI may also assist with compliance in the digital asset space, with cryptocurrency platforms
facing scrutiny from regulators and law enforcement for weak AML and "know your
customer" controls. Cryptocurrency platforms such as Binance Holdings Ltd. have faced
enforcement action for facilitating transactions linked to money laundering and sanctions
evasion.[7]
The nature of cryptocurrency adds extra compliance challenges: users can hide their identities behind anonymous wallet addresses and move money instantaneously across the world. AI may have a role in analyzing related crypto wallets to spot money laundering methods and even follow funds where bad actors try to cover their tracks.[8]
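At its core, following funds across wallets is a graph-traversal problem. The sketch below is a deliberately simplified stand-in for on-chain tracing: it follows outgoing transfers from a flagged wallet through a limited number of intermediaries, with wallet names and hop limits invented for illustration.

```python
from collections import defaultdict, deque

def trace_funds(transfers, flagged_wallet, max_hops=3):
    """Follow outgoing transfers from a flagged wallet through up to
    `max_hops` intermediaries. `transfers` is a list of
    (from_wallet, to_wallet) pairs; returns the wallets reached."""
    graph = defaultdict(set)
    for src, dst in transfers:
        graph[src].add(dst)
    reached = set()
    seen = {flagged_wallet}
    frontier = deque([(flagged_wallet, 0)])
    while frontier:
        wallet, hops = frontier.popleft()
        if hops == max_hops:
            continue  # stop expanding beyond the hop limit
        for nxt in graph[wallet]:
            if nxt not in seen:
                seen.add(nxt)
                reached.add(nxt)
                frontier.append((nxt, hops + 1))
    return reached
```

Layering chains of short transfers, the crypto analogue of smurfing, defeats a one-hop check but not a traversal like this one.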
Legal Perspective
While AI has potentially significant benefits, it also brings new challenges in relation to
firms' decisions about clients and transactions.
Under Principle 11, the FCA requires firms to deal with regulators openly and cooperatively, while under Principle 3 firms must take reasonable care to organize and control their affairs responsibly and effectively, with adequate risk management systems.[9] In light of these principles, it is critical that firms are able to articulate how their risk systems operate in practice.
If AI tools have been deployed, these must be explainable to the regulator, including why
the firm decided to use a particular tool, how it was programmed, how relevant risk
parameters were applied, what information was fed into the tool and what it produced.
Therefore, in situations where AI makes a particular decision about a client or transaction, a
firm must be able to show its work, as well as the outcome of particular decisions.
Furthermore, to ensure compliance with Article 22 of the General Data Protection
Regulation, humans will still need to provide input and oversight into compliance decisions.
Article 22 provides that the "data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."
Therefore, potentially significant decisions — such as whether to onboard a client, whether a
bank account can be opened on their behalf, whether funds should be frozen or whether a
suspicious activity report should be submitted in response to any suspicious activity — will
require human oversight and the ability to override any decision made by AI.
What Lawyers Should Do Next
Due to recent advances in technology and AI implementation, lawyers, whether in-house or
in private practice, need to be aware of how AI can assist and shape compliance. There are
a few suggested ways lawyers can prepare for the advances, including the following:
- Ensure you and your team are part of any AI implementation from the start.
- Understand how the AI is trained and what sort of data it draws on.
- Ask simple questions such as "Can we explain its decision to a regulator?"
- Make sure clear audit trails are kept to maintain GDPR compliance.
- Draft oversight policies and methodologies that can clearly explain when, how and why humans should review any decisions made by AI.
- Keep full records of any AI system changes, how the changes are explained to relevant staff and how the changes may affect results.
For businesses, the real challenge is to ensure that AI's role in compliance is an enabler, not a liability. This will involve integrating AI into existing controls while retaining the human judgment and governance that regulators will expect. This can be facilitated by:
- Ensuring quality data is available, as even the best AI systems will be undermined and unable to produce excellent results with poor data;
- Enabling human skills and AI skills to combine, such as by allowing AI to complete initial reviews and triage issues, but leaving investigations and final judgment calls to trained and experienced staff;
- Consistently piloting and testing new AI systems and updates to ensure weaknesses are eradicated and the best AI systems are implemented;
- Maintaining an audit trail of every significant decision and system change to comply with regulatory requirements; and
- Keeping an open dialogue with regulators: Businesses should engage regulators when introducing a new AI system and explain its purpose, safeguards and oversight mechanisms. Clear communication will build trust and prevent misunderstanding, especially when some regulators are themselves slow to adapt to technological changes.
Conclusion
Neither AI nor any other technological advancement will eliminate financial crime entirely.
However, AI can equip firms with the scale, speed and analytical capabilities needed to keep
pace with increasingly sophisticated threats.
The real challenge lies in ensuring these tools are used transparently and ethically by both
lawyers and businesses. On a practical level, this means engaging with regulators early and
staying informed about technological developments, rather than resisting them or being
intimidated by their complexity.
For firms, success depends on combining AI's strengths with human oversight. This balance
is essential to avoid becoming the next headline for a compliance failure.
Ultimately, AI is here to stay. The focus now must be on how we use it responsibly,
intelligently and collaboratively.
[1] FCA fines Monzo £21m for failings in financial crime controls: https://www.fca.org.uk/news/press-releases/fca-fines-monzo-21m-failings-financial-crime-controls.

[2] https://www.bai.org/banking-strategies/ai-can-take-your-aml-sanctions-and-adverse-media-screening-program-back-to-the-future/.

[3] https://www.pwc.com/mt/en/publications/financial-crime-news/ai-and-transaction-monitoring-the-future-of-financial-surveillance.html.
