A recent move by the 5th U.S. Circuit Court of Appeals to introduce a groundbreaking rule regulating the use of artificial intelligence (AI) in legal proceedings is facing significant resistance from attorneys. In a series of publicized letters, lawyers from prominent law firms argued against the proposed regulation, calling it “unnecessary” and confusing.
The New Orleans-based court’s proposed rule, unveiled in November, seeks to govern the use of generative AI tools such as OpenAI’s ChatGPT by attorneys and by litigants appearing before the court without legal representation. The rule would require filers to certify that, if an AI program was used in preparing a legal filing, its citations and legal analysis were reviewed for accuracy.
Despite the court’s concerns about AI, critics argue that existing legal frameworks are more than sufficient to handle any potential issues arising from the technology. In a letter, David Coale from Lynn Pinker Hurst & Schwegmann acknowledged the “alarming tendency” of generative AI to “hallucinate” and “make stuff up.” However, he asserted that current court rules and ethical standards already prohibit the citation of “fake law.”
The Institute for Justice, a libertarian public interest law firm, contended in its letter that lawyers are already professionally obligated to provide accurate and well-vetted arguments and citations. They asserted that the court possesses inherent authority to enforce this professional obligation.
While a majority of the 16 letters received by the 5th Circuit criticized or dismissed the proposal, not all commenters agreed. Lawyers from Carlton Fields argued that the rule falls short in addressing the “fundamental dangers” of using generative AI for legal analysis, and urged the court to go further by requiring lawyers to certify that they did not use such technology at all.
Gary Sasso, head of Carlton Fields, emphasized the importance of judges receiving the best possible work product from legal counsel rather than output from generic AI applications.
However, Layne Kruse and Warren Huang from Norton Rose Fulbright noted in their letter that many traditional legal research tools, including LexisNexis and Westlaw, have already begun incorporating AI-related features to enhance their offerings. They cautioned that the proposed rule might discourage attorneys from using tools that could benefit both clients and the court itself.
As the legal community remains divided on the issue, it is evident that the intersection of AI and the legal profession continues to spark heated debates and concerns among practitioners.