Artificial intelligence may feel like a neutral sounding board, but U.S. attorneys are increasingly cautioning clients that what they tell a chatbot could later resurface in court. The warnings intensified after a federal judge ruled that conversations with an AI assistant were not protected by legal confidentiality rules in a securities fraud case.
As reliance on generative AI grows, lawyers are urging people not to treat chatbots like trusted advisers—especially when discussing legal exposure or potential criminal liability. Unlike communications with attorneys, which are typically shielded by attorney-client privilege, exchanges with AI tools may be discoverable by prosecutors or opposing parties.
Law firms across the United States have begun issuing advisories, client emails, and even contractual clauses highlighting the risk. Some agreements now caution that sharing legal advice with an AI platform could undermine confidentiality protections. The concern is straightforward: once sensitive information is shared with a third party—even an automated one—the legal privilege safeguarding it may no longer apply.
The heightened concern stems from a case involving a former executive accused of securities and wire fraud. He used an AI chatbot to prepare defense-related materials, which he then shared with his legal team. Prosecutors argued that these AI-generated documents should be disclosed because they were created outside direct attorney involvement. A judge agreed, ordering the production of dozens of documents generated through the chatbot and emphasizing that no attorney-client relationship exists between a user and an AI platform.
Not all courts are treating AI chats the same way. In another case, a judge ruled that a self-represented litigant’s chatbot interactions could be considered part of her own litigation preparation and did not need to be turned over. Still, the differing outcomes have only added to uncertainty, prompting lawyers to advise caution.
Compounding the issue, privacy policies for major AI providers generally state that user inputs may be shared with third parties under certain circumstances. They also encourage users to consult qualified professionals before relying on AI for legal advice—another reminder that these tools are not substitutes for counsel.
To minimize risks, some firms suggest using closed or enterprise AI systems, though their legal protections remain largely untested. Others recommend explicitly stating in prompts that research is being conducted under legal direction, which may strengthen arguments for privilege. Even so, attorneys say such measures are not foolproof.
Until courts develop clearer standards, legal professionals are falling back on a familiar principle: when it comes to sensitive matters, speak only with your lawyer. In the AI age, that caution now extends to the keyboard.