The High Court of England and Wales has issued a stern warning to legal professionals after multiple incidents in which false information generated by artificial intelligence was submitted in court documents, the New York Times reports.
Lawyers found submitting fabricated material may face serious disciplinary consequences — including potential criminal prosecution.
In a judgment released Friday, Dame Victoria Sharp, President of the King’s Bench Division, alongside Justice Jeremy Johnson, expressed concern that existing professional guidance had failed to adequately address the misuse of AI tools. The judges cited two recent cases in which AI-generated legal citations and quotations, later discovered to be fictitious, had been submitted as part of official court filings.
“There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused,” Sharp wrote.
She emphasized that legal practitioners who present fabricated information could be charged with contempt of court or perverting the course of justice — both criminal offenses in the UK.
The ruling noted that AI tools such as ChatGPT are “not capable of conducting reliable legal research” and that their seemingly authoritative outputs may contain “confident assertions that are simply untrue” or reference sources that do not exist.
The High Court detailed two instances that prompted its intervention. In one, a claimant suing two banks admitted to using publicly available AI tools and legal databases to generate citations. Of the 45 references submitted, 18 were found to refer to nonexistent cases or misquote existing judgments. The claimant’s lawyer admitted to relying on the client’s research without independently verifying the material and has since referred himself to the Solicitors Regulation Authority.
In the second case, a lawyer representing a homeless client filed documents containing five fictitious case citations. Opposing counsel discovered that the citations did not correspond to real legal precedents. The lawyer denied knowingly using AI but conceded she had submitted similarly inaccurate citations in another case. She said she may have relied on AI-generated summaries returned by internet search engines but could not verify the origin of the material.
Both cases raised red flags for the High Court, including the presence of Americanized spellings in British filings and what the judges described as a “formulaic style” characteristic of AI-generated text.
Sharp’s ruling, though it does not set a formal precedent, serves as a public signal to the legal community amid growing global concern over the reliability of generative AI in professional environments.
Instances of similar misconduct have been recorded internationally, including in legal proceedings in California, Minnesota, Texas, Australia, Canada, and New Zealand. In these cases, AI tools produced misleading summaries, fabricated legal quotes, and referenced cases that did not exist.
The High Court stressed that while AI can serve as a useful tool in legal practice — such as for drafting or summarizing documents — it should be approached cautiously, particularly when it comes to factual verification and legal citation.
The ruling reinforces that legal professionals have a duty to ensure the accuracy of materials submitted to the court.
“The mere fact that a source is AI-generated does not absolve a lawyer of responsibility,” the judgment said.
Professional misconduct, including the submission of misleading or fabricated evidence, can result in sanctions ranging from suspension to disbarment and, in severe cases, criminal prosecution.
As AI tools become more prevalent, the legal profession — like many others — faces the challenge of integrating new technologies while safeguarding foundational principles such as integrity, transparency, and accountability.
The court concluded that further guidance from legal regulatory bodies is needed and encouraged heightened diligence when using AI in any form for legal research or case preparation.