Federal Judge Requires Lawyers to Certify Human Drafting in Response to AI-generated Filings

In a groundbreaking move, a federal judge in Texas has mandated that attorneys appearing before him sign a pledge affirming that their legal filings were drafted by humans, not solely by artificial intelligence (AI) tools. U.S. District Judge Brantley Starr of the Northern District of Texas issued the requirement on Tuesday, in what is believed to be a first-of-its-kind directive in the federal court system.

Judge Starr explained that the measure was implemented to caution lawyers about the dangers of relying solely on AI-generated content, as these tools can fabricate cases and produce misleading information. He emphasized that attorneys must verify the accuracy of any AI-generated material by cross-referencing it against traditional legal databases.

In a notice posted on his court’s website, the judge acknowledged the significant capabilities of generative AI platforms such as ChatGPT but cautioned against their use in drafting legal briefs. He noted that these platforms are prone to hallucinations and bias, even manufacturing fictitious quotes and citations.

Starr further stressed the fundamental disparity between lawyers and AI platforms: while attorneys swear an oath to uphold the law and act in the best interests of their clients, AI has no sense of duty, honor, or justice, operating according to its programming rather than any moral conviction or guiding principle.

The judge said the inspiration for the mandate came from an artificial intelligence panel discussion hosted by the 5th Circuit U.S. Court of Appeals, where panelists demonstrated how AI platforms could generate spurious legal cases. While Starr initially contemplated an outright ban on AI in his courtroom, consultations with legal experts, including UCLA School of Law professor Eugene Volokh, persuaded him to adopt the certification requirement instead.

Volokh supported the judge’s action, noting that lawyers accustomed to the reliability of traditional legal research databases may wrongly assume that AI platforms are equally dependable. The new mandate serves as a reminder that such assumptions can be erroneous.

The requirement comes shortly after another federal judge, in Manhattan, threatened to sanction an attorney for including citations to fabricated cases generated by ChatGPT in a court brief. The attorney, Steven Schwartz of Levidow, Levidow & Oberman, expressed regret for relying on the AI tool and said he had been unaware that its output could be false.

While Judge Starr said the New York case did not directly motivate the requirement, it did lend additional impetus to its finalization. He also affirmed that neither he nor his staff will use AI in their work, ensuring that no algorithm determines the outcome of any case.

The certification mandate has sparked discussion about the responsible and ethical use of AI in legal proceedings, prompting legal professionals to reconsider their reliance on these tools and reinforcing the importance of human oversight and verification.
