When AI Goes Off Script, the Boss Pays the Bill: US Court Sends a Message to Law Firms

A federal court in San Francisco has drawn a firm line: when artificial intelligence introduces an error into a filing, the blame doesn't stop with the junior lawyer who used the tool. It travels upward.
In a sharply worded order, U.S. Magistrate Judge Peter Kang held that supervisory lawyers cannot sidestep responsibility when filings go wrong, even if the mistake originates from a subordinate experimenting with AI tools. The ruling centered on Lenden Webb, head of a small California-based firm, whose office submitted a legal brief containing a fabricated citation.
The penalty wasn’t symbolic. Webb was reprimanded, fined $1,001, and directed to undergo training focused on oversight duties and the ethical use of artificial intelligence in legal work. The message was unmistakable: delegation does not dilute accountability.
At the heart of the controversy was a filing prepared by junior attorney Katherine Cervantes. The document included a citation that looked legitimate at first glance—real case name, real case number—but on closer inspection, the pieces didn’t belong together. The referenced ruling simply didn’t exist.
Cervantes later explained that she had used Westlaw AI, an artificial intelligence-powered research tool developed by Thomson Reuters. According to her account, something went awry while copying material into the brief—an early misstep in her first attempt at AI-assisted legal research.
The court, however, was less interested in technical glitches and more concerned with human oversight. Judge Kang pointed out that regardless of how the error surfaced, there was a fundamental failure to verify the law being cited. Even a basic read-through could have exposed the mismatch.
The ruling pushes into relatively uncharted territory—what happens when AI becomes part of the workflow, but errors slip through? Courts across the United States have already flagged problems with lawyers leaning too heavily on generative tools. This decision goes a step further, making it clear that responsibility scales with seniority.
Webb acknowledged in court that he had not reviewed the submission closely, despite being aware that AI-generated content can contain inaccuracies. His name, however, was on the filing—enough for the court to conclude that oversight was not just expected, but required.
Meanwhile, Thomson Reuters has distanced its technology from the incident. After reviewing the matter internally, the company stated there was no evidence its AI systems produced the faulty citation, emphasizing that such tools are designed to assist—not replace—legal judgment.
The broader signal from the bench is hard to miss. As AI tools seep deeper into legal practice, they are not rewriting the rules of responsibility. If anything, they are tightening them.
