According to UK Finance's 2025 Annual Fraud Report, over £1.2 billion was lost to fraud in 2024 alone. Part of this increase is attributable to fraudsters adopting emergent technologies, including artificial intelligence (AI), to impersonate individuals, bypass controls and exploit weaknesses in anti-money laundering (AML) and know your customer (KYC) verification systems.
For regulated firms in the financial services sector, AI-driven fraud is fast becoming a critical compliance and reputational risk that requires urgent attention.
Monzo's £21 Million Fine
In July 2025, the Financial Conduct Authority (FCA) fined Monzo Bank £21 million for serious AML failings between 2018 and 2022, a record financial penalty for a neobank. The FCA found that Monzo had onboarded over 34,000 high-risk customers in breach of a voluntary requirement (VREQ) and had allowed account openings using implausible addresses such as "10 Downing Street" and "Buckingham Palace".
Whilst AI was not explicitly cited in Monzo's case, the FCA did highlight that the bank had failed to adapt its controls as its customer base expanded, leaving it vulnerable to modern fraud methods, including those powered by AI. This underscores a broader expectation that firms must evolve their systems and controls in response to new technology-driven fraud.
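By way of illustration only, even a screen as simple as the sketch below (a hypothetical example, not a description of Monzo's actual systems) would have flagged the landmark addresses the FCA cited. The regulatory point is that controls of this kind must exist, and must keep pace as the customer base grows.

```python
# Illustrative only: a minimal address plausibility screen. Real onboarding
# controls would rely on address verification services and broader risk
# scoring; this list and function are invented for this example.
KNOWN_IMPLAUSIBLE = {
    "10 downing street",
    "buckingham palace",
}


def is_implausible_address(address: str) -> bool:
    """Return True if the address matches a known-implausible landmark."""
    normalised = " ".join(address.lower().split())
    return any(entry in normalised for entry in KNOWN_IMPLAUSIBLE)


assert is_implausible_address("10 Downing Street, London")
assert not is_implausible_address("12 Example Road, Manchester")
```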
Real AI Fraud
These risks are truly cross-sector. In 2024, HSBC reported a 40% rise in synthetic identity fraud using AI-generated images and documents, and the UK's National Cyber Crime Unit warned of increasing deepfake scams targeting executives. Revolut, for its part, revealed that its AI-powered systems prevented over £15 million in fraud attempts in early 2025.
Recently, an energy executive at a UK firm was tricked into transferring £200,000 into a fraudulent bank account by a fraudster using an AI-generated voice deepfake. Banks such as Santander, HSBC, NatWest and Barclays have all expressed concern about the rise of AI-facilitated fraud, which is becoming both more accessible and more sophisticated.
Another high-profile incident in early 2024 saw a deepfake video call trick an employee at Arup's Hong Kong office into transferring £20 million to criminals impersonating senior executives. The incident underlines how convincing AI-generated voices and videos can be, bypassing trust-based controls and exposing firms to financial and reputational losses.
Regulators React
Jessica Rusu, the FCA's Chief Data, Information and Intelligence Officer, recently noted in a speech to the AI and Digital Innovation Summit that "we do not need new regulatory rules to give us oversight of AI in financial services". The regulator is, of course, mindful of the risk of over-regulation and the chilling effect it can have on innovation and, consistent with the Government's growth agenda, on attracting investment into the UK. However, Ms Rusu's comments equally indicate the regulator's expectation that firms will adapt their existing compliance frameworks to ever-evolving risks, including the misuse of AI by bad actors.
It is clear, therefore, that firms cannot treat AI as a tech novelty; they must instead integrate AI risk management into their core compliance and fraud prevention strategies. Organisations must govern their own use of AI and implement robust controls to defend against external threats.
The Economic Crime and Corporate Transparency Act 2023 (ECCTA)
In parallel with the financial services regulatory regime, the ECCTA introduces a new "failure to prevent fraud" offence, coming into effect on 1 September 2025. Large companies can be held liable if an "associated person" (including employees, agents or subsidiaries) commits fraud for their benefit, unless they can show they had "reasonable fraud prevention procedures" in place.
As fraudsters exploit AI to facilitate fraud and avoid detection, the introduction of the ECCTA gives firms all the more reason to strengthen their fraud controls, ensuring detailed, documented and demonstrably effective frameworks are in place.
What should firms do?
To stay ahead of AI-driven fraud, firms should:
- Establish a fraud prevention framework: put in place a framework that evidences ongoing risk monitoring and staff training.
- Implement AI-focused fraud risk assessments: identify where AI-enabled tactics could exploit onboarding, payments or communications.
- Upgrade verification and controls: adopt layered authentication and third-party fraud detection software (a simplified sketch of the layered approach follows this list).
- Train employees: educate staff, particularly those in high-trust roles, to recognise the signs of AI impersonation.
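To make the third point concrete, the sketch below shows, in simplified form, how layered authentication combines independent verification signals so that no single spoofed factor (for example, a deepfaked selfie) is enough on its own. It is a hypothetical illustration only: the check names, scores and threshold are invented for this example, not any vendor's or regulator's prescribed model.

```python
# Hypothetical sketch of layered verification at onboarding. Check names,
# risk scores and the 0.5 threshold are invented for illustration.
from dataclasses import dataclass


@dataclass
class CheckResult:
    name: str
    passed: bool
    risk_score: float  # 0.0 (low residual risk) to 1.0 (high residual risk)


def layered_decision(checks: list[CheckResult], threshold: float = 0.5) -> str:
    """Combine independent signals so no single spoofed factor succeeds alone."""
    # A hard failure on any layer routes the applicant to manual review.
    if any(not c.passed for c in checks):
        return "refer"
    # Otherwise aggregate the residual risk across all layers.
    aggregate = sum(c.risk_score for c in checks) / len(checks)
    return "refer" if aggregate >= threshold else "approve"


# A deepfaked selfie might narrowly pass each individual layer, yet the
# combined residual risk still triggers a manual review.
print(layered_decision([
    CheckResult("document", passed=True, risk_score=0.2),
    CheckResult("liveness", passed=True, risk_score=0.8),
    CheckResult("device", passed=True, risk_score=0.6),
]))  # -> "refer"
```

The design point is that each layer is cheap for a genuine customer to pass but costly for a fraudster to defeat simultaneously; relying on any single check leaves exactly the kind of gap the cases above exploit.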
How we can help
Our Business Crime team advises a wide range of clients, from startups to asset managers, and can assist with:
- Conducting AI-specific fraud risk reviews.
- Designing and implementing fraud prevention procedures.
- Managing FCA enquiries, enforcement action and regulator communications.
If you would like to assess your firm's readiness for AI-enabled fraud or strengthen your compliance framework, please contact us.
