Former Trump Associate Blames AI for Fabricated Legal Citations in High-Stakes Trial

In a surprising twist in the legal battles surrounding former U.S. President Donald Trump, Michael Cohen, Trump’s former fixer and attorney, revealed in recently unsealed court documents that artificial intelligence (AI) was responsible for erroneous case citations that made their way into an official court filing.

The revelation stems from Cohen’s sworn declaration in a federal court in Manhattan, where he admitted to unknowingly providing his attorney with fake case citations generated by Google Bard, an AI chatbot developed by Alphabet Inc’s Google. The fabricated citations appeared in a motion filed by Cohen’s attorney, David Schwartz, seeking an early end to Cohen’s supervised release following his imprisonment for campaign finance violations.

U.S. District Judge Jesse Furman, upon reviewing the motion earlier this month, discovered that three of the cited court decisions did not actually exist. He then ordered Schwartz to explain why he should not face sanctions for citing non-existent cases.

Cohen, disbarred almost five years ago, asserted in the filings that the misleading citations originated from his own online research. He expressed surprise that Schwartz included them in the submission without verifying their authenticity, admitting, “I did not keep up with emerging trends (and related risks) in legal technology and did not realize that Google Bard was a generative text service that, like ChatGPT, could show citations and descriptions that looked real but actually were not.”

“I deeply regret any problems Mr. Schwartz’s filing may have caused,” Cohen stated in the court filing.

In response to Cohen’s revelation, Judge Furman has granted Schwartz and prosecutors until Wednesday to provide their responses. The Manhattan U.S. Attorney’s spokesperson declined to comment, and Schwartz’s lawyer did not respond to requests for comment.

This incident underscores the broader challenges faced by courts nationwide in dealing with the rapid proliferation of generative artificial intelligence programs, such as OpenAI’s ChatGPT, and raises questions about how to regulate their use in legal proceedings. Earlier this year, two New York lawyers faced sanctions for including six fictitious case citations generated by ChatGPT in a legal brief.

Cohen, who recently testified in New York state Attorney General Letitia James’ civil fraud case against Trump, is also expected to play a pivotal role in the state criminal case against Trump, which accuses him of falsifying business records to conceal reimbursements to Cohen for a $130,000 payment intended to silence porn star Stormy Daniels before the 2016 presidential election.

The unfolding situation adds another layer of complexity to the ongoing legal battles surrounding the former president, prompting renewed scrutiny over the use of AI in the legal profession.
