High Court grapples with fake citations and the inevitability of AI lawyers
Two years ago, I attended an application to set aside a possession order in which a litigant in person argued that a section 21 notice was defective unless all the historic tenancy documentation relied upon was served alongside it. He cited Brown v Sunley Homes Ltd [2017] in support of this position. The proposition was untrue, and the case relied upon in support was a ‘hallucination’ produced by Google Bard.
I was reminded of this as the High Court handed down its joined judgment in the “Ayinde” and “Al-Haroun” cases[1]. The two cases were placed before the Divisional Court not for any substantive purpose, but because in both it appeared that lawyers had used AI and similarly produced fake citations in support of their arguments.
At first blush these cases appear to be a cautionary tale about the dangers of using AI, and judging by the excoriating tone of the bench they very much are. However, the judgment also grapples with two fundamental realities of the changing legal landscape: AI use is increasing exponentially, and AI-generated material is becoming ever harder to tell apart from the real thing.
AI is increasingly everywhere, and you are now almost inevitably using it in your everyday life and practice even if you don’t realise it. In Ayinde the barrister, while denying having proactively used AI, admitted it was possible she had relied on the AI-generated summary that Google now places at the top of its search results. We are probably all in the same boat: the absence of clear labelling of generated content, and its increasingly embedded nature, make it ever more difficult to pick out.
The Court focused on the most dramatic form of AI error, ‘hallucination’, where a large language model, asked to undertake research it is not designed for, invents authority in support of its generated prose. These are likely teething problems: as models improve and more advanced reasoning-style models replace purely generative designs, such hallucinatory errors will probably fade. The far more pressing concern is that, as AI becomes capable of generating accurate and compelling content, it creates opportunities for misinformation that the court will find much harder to spot.
In Ayinde the Court, in support of its conclusion that the citations were fake, set out how it investigated the cases online and found no trace of them. But what if there had been some trace: a blog or article referencing the same case? What if, when the query was first raised, a case, generated by the same technology, had been produced to support the proposition? We like to think someone would surely have spotted it, but can we be so sure?
It is trivially easy to ask an AI language model to write a fictitious piece of case law for you. As an experiment, I asked one to produce Brown v Sunley Homes to see whether my litigant could, had his motivation instead been malicious, have gone further. The model dutifully produced a document, and with a few further prompts it morphed into something quite convincing. There were some superficial errors: there was no Arnold LJ on the Court of Appeal in 2017, and the law report it was allegedly drawn from did not exist. In other respects, however, it was remarkably convincing, and I must admit I found myself wondering whether I would have been fooled had it simply been thrust into my hands by an opponent during a quick hearing, with little time to consider it.
We all like to think that we could spot a fake and that we wouldn’t end up in the position of the unfortunate professionals in Ayinde and Al-Haroun. Of course, you can guard against errors when using AI, and the Court was clear that the lawyers in these cases had made serious errors of judgment in how they used it. The reality, though, is that simply ignoring AI, or assuming that not using it will keep you safe from these issues, is equally problematic.
Whether it is an opponent taking advantage of the same technology, a malicious actor seeking to dupe the court, or simply a Google search returning improperly labelled material, any lawyer today needs a clear-eyed awareness of AI and its realities. That is before one even discusses the obvious commercial pressures: after all, could you compete with a firm charging half your fees because it can draft twice as quickly with AI assistance?
The High Court’s decision is in many ways already out of date. AI use is already prolific, and a working group set up this week to consider how to tackle AI in law has a difficult job ahead of it. It is very unlikely that AI’s use will be banned outright, and even if such an attempt were made, it would be practically unenforceable. Any lawyer practising today who doesn’t get a grip on the reality of AI risks being left behind, or worse, sharing the fate of those in Ayinde and Al-Haroun.
[1] R (on the application of Frederick Ayinde) v The London Borough of Haringey and Hamad Al-Haroun v Qatar National Bank QPSC and QNB Capital LLC [2025] EWHC 1383 (Admin)