UK judges warn lawyers AI-generated fake legal citations could land them in jail
By Isabelle // 2025-06-09
 
  • Two UK lawyers were formally warned by the High Court after submitting AI-generated legal arguments containing fake case citations, threatening the integrity of the justice system.
  • One lawyer cited 18 nonexistent cases in a £90 million lawsuit, while another referenced five fabricated cases in a housing claim.
  • A judge warned that AI misuse could lead to criminal charges, including perverting the course of justice, an offense punishable by life imprisonment.
  • AI tools like ChatGPT frequently invent false information, with OpenAI admitting hallucination rates as high as 79% in open-ended responses.
  • Similar AI legal scandals have emerged globally, including in the U.S., Australia, and Canada, raising concerns about AI distorting truth in law and beyond.
Two UK lawyers have been formally warned by the High Court after submitting AI-generated legal arguments riddled with fictitious case citations, in a scandal that threatens the very integrity of the justice system. High Court Judge Victoria Sharp issued a dire warning: Misuse AI, and you could face criminal prosecution for contempt of court or even perverting the course of justice, an offense punishable by life imprisonment.

The cases, which emerged in London courts, reveal a disturbing trend of lawyers blindly trusting AI tools like ChatGPT to draft critical legal documents, only to discover later that the technology had "hallucinated" entire case laws out of thin air. One lawyer cited 18 nonexistent cases in a £90 million lawsuit against Qatar National Bank, while another referenced five fabricated cases in a housing claim against the London Borough of Haringey. Both incidents were referred to professional regulators, igniting fierce debate over AI’s role in law and the erosion of public trust in legal institutions.

The AI deception unravels

The first case involved claimant Hamad Al-Haroun, who admitted using "publicly available AI tools" to generate legal research for his lawsuit. The AI not only invented fake cases but also misquoted real ones, rendering his arguments worthless. Shockingly, his solicitor, Abid Hussain, confessed he had relied on his client, rather than his own legal training, to verify the research. Judge Sharp called this "extraordinary," noting that lawyers, not clients, are ethically bound to ensure accuracy.

In the second case, barrister Sarah Forey cited phantom precedents in a tenant’s housing claim. Although she denied intentionally using AI, she acknowledged she may have "inadvertently" absorbed AI-generated summaries while browsing online. The court scolded her for failing to cross-check the cases in the National Archives or her Inn of Court’s law library, a basic professional duty.

Judge Sharp’s ruling cautioned: "There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused." She emphasized that AI tools like ChatGPT "are not capable of conducting reliable legal research" and often produce coherent but incorrect responses, including fabricated sources and quotes.

The warning comes as AI "hallucination" rates skyrocket. OpenAI admits its flagship ChatGPT invents false information 51% to 79% of the time when answering open-ended questions, a terrifying statistic for a tool being used to shape legal outcomes. Vectara, an AI monitoring firm, found that even the best chatbots hallucinate between 0.7% and 2.2% of the time, with errors exploding when generating long-form text.

Global legal chaos fueled by AI

This isn’t just a UK problem. In New York, attorney Jae Lee cited a fake abortion malpractice case conjured by ChatGPT, leading to her referral for disciplinary action. In another U.S. case, a law firm was fined $5,000 after submitting AI-generated "gibberish" in an airline lawsuit. Similar scandals have erupted in Australia, Canada, and Denmark, where judges have caught lawyers peddling AI-invented rulings.

Beyond courtroom farces, this scandal exposes a darker truth: AI is being weaponized to distort reality, whether in law, media, or government. The parallels to corporate and government corruption are unmistakable. If lawyers, who are oath-bound to uphold the truth, can be duped by AI, what hope do ordinary citizens have against AI-generated propaganda, forged documents, or manipulated evidence?

Sources for this article include:

RT.com

DailyMail.co.uk

TheGuardian.com

NYTimes.com