At the International Bar Association conference in Toronto this week, UK Justice Minister Sarah Sackman raised an interesting question: not just can AI make judicial decisions, but should it? Her call is for a public debate on the ethical boundaries of AI in law.
It's clear that AI tools have significant potential uses within the English court system. Applications such as complex case file summarisation, real-time transcription and intelligent listing scheduling could deliver tangible improvements—reducing delays and increasing access to justice.
However, while law firms jostle to buy (or build) tools and develop use cases, the English court system itself has been slower to harness the potential of AI. In some jurisdictions, real-time transcription with translation into several languages is already a reality; in others, tools that use a party's pleading to predict case outcomes and recommend settlement ranges are already in use.
Talking to IBA delegates, the Minister seemed confident that change fuelled by AI is coming to the justice system. However, she advised caution around the extension of AI’s remit into actual adjudication:
"There is no reason why AI should not assist with the drafting of contracts or researching legal questions, but looking beyond that could AI one day calculate damages more accurately than a judge does now? Possibly. Could it adjudicate? Technically yes, but the question is not whether machines can decide or will be able to decide because I am pretty sure they will be, but whether they should and what we might lose if they do."
She urged the legal sector to reflect on the ethical implications of the change that AI could bring.
For many lawyers with experience of the work required to reach trial and an appreciation of the considerable time and care that goes into producing a reasoned judgment, the prospect of an AI tool replacing human adjudication remains a challenging concept.
Questions often posed to experts at legal tech conferences (and debated between colleagues) include whether AI adjudication is possible, how reliable it might be and when it might arrive. But with such a focus on what might be possible, perhaps we overlook the arguably more essential ethical question: should it be done at all?
We have already seen in case law the risks and potential consequences of relying on AI tools for legal research and advice. Only last week, the judiciary issued updated guidance for judicial office holders on the responsible use of AI in courts and tribunals, replacing the original guidance issued only six months earlier. This demonstrates the pace of change in this area and perhaps also reveals the level of concern around appropriate adoption and use of technology within the judiciary.
The guidance outlines key risks associated with AI, emphasising the importance of understanding the capabilities and limitations of the technology. It also highlights best practices for AI use, including in respect of maintaining confidentiality, verifying accuracy and recognising bias and hallucinations. While the use of AI for administrative tasks and for summarising text is encouraged, reliance on AI for legal research and analysis is warned against.
In that context, AI adjudication of disputes feels a more distant prospect. Sackman suggests that it could be with us within the next ten years, and only time will tell. One thing is for sure: AI is here to stay, and the debate over its effective, appropriate and ethical use must continue.