
When AI “Hallucinations” Force Legal Apologies

Publication date: 19.05.2025 23:34:00

In the U.S., a lawyer representing AI firm Anthropic had to issue a public apology after chatbot Claude generated false citations to nonexistent articles. The incident occurred in a copyright infringement case filed by Universal Music Group and other publishers.

The core issue lies in evidence submitted by Anthropic staffer Olivia Chen, which relied on Claude’s output—including fabricated scientific references with fictional authors and titles. The court viewed this as a serious violation.

Federal Judge Susan van Keulen demanded clarification. In response, Anthropic’s lawyers admitted they failed to catch Claude’s “hallucinations,” calling it an honest citation mistake, not intentional deceit.

This is not the first time AI has misled lawyers. Just a week earlier, a California court criticized law firms for submitting fake AI-generated citations. In January, an Australian lawyer was caught relying on unreliable legal information produced by ChatGPT.

Meanwhile, legal tech startups like Harvey—which uses generative AI—are raising major investments, reportedly negotiating a $250 million round.

The case raises questions: Can AI be trusted in sensitive legal contexts? Where is the line between convenience and accountability?

Even advanced AI systems remain fallible, especially in law, a domain where precision is essential.

With legal battles between AI developers and content owners intensifying, such cases argue for stricter regulations and higher standards.
