Anthropic just admitted its own AI created a fake court citation, and it ended up in a real legal filing.

Lawyers representing the generative AI company Anthropic have issued an apology to a U.S. federal court after submitting a court filing that included a fabricated legal citation generated by Anthropic’s own AI model.

The incident occurred in a case before the U.S. District Court for the Northern District of California, where Anthropic is currently involved in a lawsuit brought by major music publishers alleging widespread copyright infringement by the company’s Claude AI models. The error was revealed when opposing counsel and the court flagged a citation in a legal brief that referred to a nonexistent court decision.

According to court documents, the false citation was included in a motion filed by the legal team representing Anthropic. The citation was intended to support a legal argument, but upon further examination, it became clear the referenced case did not exist in any legal database or court record. It was later confirmed that the flawed citation was generated by Anthropic’s own generative AI tool.

In a filing submitted shortly after, Anthropic’s lawyers acknowledged the mistake and issued an apology to the court:

“We regret the oversight and take full responsibility for the inclusion of an inaccurate citation. The reference originated from a generative AI tool and was not verified through standard legal research methods prior to submission.”

They further assured the court that new internal review protocols would be implemented to prevent similar mistakes in future filings.

This adds to a growing list of legal missteps involving generative AI. In recent months, multiple instances have surfaced of lawyers relying too heavily on tools like ChatGPT or other large language models for legal research, only to discover that some of the cases or rulings they cited do not actually exist.

The most widely publicized case occurred in 2023, when two New York lawyers faced sanctions after citing fictional cases in a federal lawsuit, having relied on OpenAI’s ChatGPT for legal references. That case led to heightened scrutiny over AI use in legal work and prompted many firms and courts to begin issuing their own guidelines for responsible AI usage.

The irony of this latest incident is that Anthropic, an AI company developing one of the world’s most powerful large language models, Claude, was tripped up by the very technology it seeks to defend in court. Anthropic has positioned itself as a leader in safe and responsible AI development, often emphasizing transparency and caution in how its models are deployed.

The filing mishap is particularly awkward given the nature of the lawsuit itself, which revolves around whether Anthropic’s AI unlawfully trained on copyrighted lyrics owned by major music publishers including Universal Music Group, Concord, and ABKCO. The publishers allege that Anthropic’s Claude model reproduced copyrighted material without permission and are seeking damages and injunctive relief.

Legal experts say the Anthropic incident should serve as a cautionary tale, not only about AI’s limitations but also about the importance of human oversight.

“This is a classic example of automation bias—trusting the output of a machine without verification,” said Stanford Law School professor Mark Lemley. “Whether you’re representing an AI company or a bakery, the duty of diligence and accuracy in legal filings is the same.”

The court has not yet indicated whether it will impose any sanctions or require further clarification from Anthropic’s legal team.

