The rise of artificial intelligence (AI) tools such as ChatGPT has sparked both excitement and concern across the legal industry. While AI promises increased efficiency and innovation, recent court decisions in the UK and the US highlight the serious risks lawyers face when relying on AI-generated material without appropriate verification.
Ayinde, R v The London Borough of Haringey: A Warning from the High Court
In Ayinde, R v The London Borough of Haringey, the High Court confronted a startling misuse of AI-generated legal research. The claimant’s lawyers submitted legal arguments that relied on five fabricated cases – among them, one purporting to be from the Court of Appeal. These authorities bore all the hallmarks of genuine case law: proper names, plausible citations, and a style that appeared entirely authentic. But none of them existed. The citations were false; the cases were fictional.
The court was particularly struck by the fact that these made-up cases were not even necessary. The legal arguments advanced were straightforward and could easily have been supported by actual case law or statutory authority. As the judge remarked in relation to one submission: “The problem with that paragraph was not the submission that was made, which seems to me to be wholly logical, reasonable and fair in law; it was that the case of Ibrahim does not exist, it was a fake.”
This was not an act of desperate lawyering designed to manufacture a winning argument out of thin air. Rather, it was a needless and dangerous shortcut. The invented cases played the same role that genuine authorities could have filled—raising the fundamental question: why fabricate at all?
While the court did not make findings on the lawyers’ motives, it issued a wasted costs order against the solicitors and the barrister representing the Claimant. The local authority – ironically attempting to offset its own litigation failures – suggested that the fake cases may have been generated by a tool such as ChatGPT. The court did not rule on whether AI was involved, but acknowledged that this was the most plausible and perhaps the most charitable explanation.
More broadly, the judgment criticised the very notion of outsourcing legal research to generative AI. The judge was clear: it is negligent for a lawyer to rely on AI-generated research without rigorous verification.
This case serves as a clear message: while technology may assist in legal work, it cannot replace the critical human function of legal judgment, due diligence, and professional integrity.
Lessons from Across the Atlantic
Similar cautionary tales have emerged in the United States. In a widely reported case, two New York lawyers were fined $5,000 in an aviation injury claim after submitting court documents containing six non-existent cases generated by ChatGPT. The judge condemned the lawyers’ actions as a bad faith effort to mislead the court, highlighting the fundamental duty of legal professionals to independently verify all cited authorities. In his written opinion, he clarified that while there is nothing “inherently improper” about using AI to support legal work, lawyers remain fully responsible for ensuring the accuracy of their submissions.
In another case, a federal judge revoked a lawyer’s admission to the bar and ordered him to pay a $3,000 fine after he submitted an application relying on AI-generated content riddled with fabricated citations. The judge concluded that “as attorneys transition to the world of AI, the duty to check their sources and make a reasonable inquiry into existing law remains unchanged”.
A recent Reuters investigation has further highlighted the risks of using generative AI in legal practice. The report focused on a high-profile incident involving Morgan & Morgan, a major US personal injury law firm, which found itself in hot water after two of its lawyers submitted court filings containing fake case citations. The fabricated authorities – produced by an AI tool – were submitted in a lawsuit against Walmart involving an allegedly defective hoverboard toy. A federal judge in Wyoming threatened sanctions, and one of the implicated lawyers admitted in court that he had relied on AI-generated results, which “hallucinated” case law that did not exist.
Even Michael Cohen, former lawyer to Donald Trump, inadvertently submitted fake case law to his own legal team after relying on Google Bard. Though no sanctions were imposed, the court called the incident “embarrassing.”
Despite the controversy, AI tools are increasingly embedded in the workflows of many firms. A Thomson Reuters survey found that 63% of lawyers have used AI in their work, and 12% use it regularly. While AI’s potential to reduce research time and improve efficiency is enticing, its ability to fabricate plausible-sounding but non-existent authorities poses a serious risk. Experts point out that AI generates outputs based on patterns in data – not on verified truth. These so-called “hallucinations” can easily deceive a user who lacks the skill or time to verify results manually.
The Need for Caution: Professional Duties Remain Paramount
These cases collectively highlight a critical lesson: AI may be a powerful tool, but it is no substitute for the lawyer’s own analysis and accountability. The Solicitors Regulation Authority’s Principles make clear that solicitors must act with honesty and with integrity. Delegating legal research or drafting to an AI system does not relieve lawyers of their responsibility to ensure accuracy, truthfulness, and ethical compliance.
Before relying on AI-generated legal arguments or citations, lawyers must verify every source, cross-check authorities, and understand the material thoroughly. The use of generative AI should supplement, not supplant, human legal reasoning.
Broader Lessons for the Legal Profession
These incidents raise broader issues around legal education, training, and firm policies. Law firms should develop clear internal guidelines for the use of AI tools, provide training on AI’s limitations, and consider implementing review protocols when AI-generated content is involved. Equally, legal education must evolve to include AI literacy as a core competency for the next generation of lawyers.
This also points to the potential need for regulatory bodies to provide clearer guidance on the ethical use of AI in legal practice, including whether there should be mandatory disclosures when AI tools are used in drafting or research.
Innovation with Responsibility
While these cautionary tales are serious, they should not stifle innovation. On the contrary, the legal profession is already adapting. The SRA has recently authorised the UK’s first AI-only law firm – Garfield.law. Garfield helps businesses recover small debts in the county court and plans to expand into housing disrepair claims. Described by its founder as “access to justice delivered through responsible AI,” Garfield isn’t a generic chatbot – it’s a hybrid expert system built to follow the Civil Procedure Rules precisely. Users must approve every step, and fees are minimal, such as £2 for a polite chaser and £7.50 for a letter before action. It’s a tightly controlled, rules-based use of AI that stands in stark contrast to recent scandals involving hallucinated case law.
This milestone reflects an openness to new models of legal service delivery and demonstrates that, when implemented responsibly, AI can enhance access to justice and streamline legal processes.
Conclusion
The integration of AI into legal practice is inevitable and potentially transformative. However, the recent cases in the UK and US serve as a stark reminder that technology must be used with caution, care, and competence. AI should be seen as an assistant, not a replacement, for the lawyer’s fundamental duties to the court and to the client.
In a world increasingly shaped by AI, the enduring values of the legal profession – integrity, diligence, and accountability – are more important than ever.