AI and Privilege After United States v. Heppner

Mar 31, 2026

AI is rapidly becoming embedded in legal workflows, but recent case law is drawing clear boundaries around how far that integration can go. In United States v. Heppner, courts signaled that convenience with AI tools can come at a serious cost: potentially stripping away attorney-client privilege altogether.

AI is the always-on assistant that drafts, summarizes, strategizes, and occasionally hallucinates its way into your legal workflow. But as United States v. Heppner makes clear, treating AI like your lawyer, or even your lawyer's assistant, can come at a steep cost. In this case, that cost was privilege itself.

Welcome to the new frontier of legal risk: where convenience meets confidentiality, and where one careless prompt can unravel protections that took decades of doctrine to build.

The Case That Changed the Conversation

In United States v. Heppner, a federal court in the Southern District of New York addressed a first-of-its-kind question:

Are communications with a generative AI tool protected by attorney-client privilege or the work product doctrine?

The answer: no, at least not under the circumstances presented.

The defendant had used a publicly available AI tool to generate defense-related materials without direction from counsel. These AI-generated documents were later seized by the government, and the defendant argued they should be protected.

The court disagreed, and in doing so set a powerful precedent.

Why Privilege Failed

The court didn’t create new law. Instead, it applied traditional privilege principles to a modern tool and found they didn’t fit.

1. No Attorney Involvement

Attorney-client privilege requires communication between a client and a lawyer. Here, the AI tool was neither.

  • The AI was not a licensed attorney

  • The interaction was not directed by counsel

  • The tool could not form an attorney-client relationship

2. No Reasonable Expectation of Confidentiality

This was the real turning point.

The court found that using a public AI platform, with terms allowing data collection, training, and potential disclosure, destroyed any expectation of confidentiality.

In other words:
If the platform can see it, store it, or share it, you've likely waived privilege.

3. AI as a “Third Party”

Perhaps the most important takeaway:

Communicating with AI is legally treated like disclosing information to a third party.

And under long-standing doctrine, voluntary disclosure to a third party waives privilege.

What About Work Product?

The defendant also argued that the materials were protected as work product: documents prepared in anticipation of litigation.

The court rejected this too.

Why? Because:

  • The materials were not prepared by or at the direction of counsel

  • They did not reflect attorney strategy

  • They were generated independently by the defendant using AI

Result: No privilege. No work product protection. Full exposure.

What This Means for Founders, Operators, and Legal Teams

This decision is less about AI and more about how courts will treat AI under existing legal frameworks.

1. AI Is Not Your Lawyer

No matter how sophisticated the tool feels, it does not create privilege.

The privilege protects communications with your lawyer—not your AI.

2. Confidentiality Is Fragile

Uploading sensitive information into AI tools can:

  • Waive attorney-client privilege

  • Undermine work product protection

  • Potentially destroy trade secret status

3. “Feels Private” ≠ Legally Private

This is the dangerous gap.

AI interfaces feel conversational, secure, even advisory. But legally, they are often:

  • Third-party platforms

  • Data processors with broad rights

  • Systems that retain and reuse your inputs

A Critical Nuance: Could the Outcome Have Been Different?

Yes, and this is where things get interesting.

The court hinted that privilege might apply if:

  • The AI use was directed by counsel

  • The tool functioned as an agent of the attorney

  • Appropriate confidentiality safeguards were in place

This opens the door to a key distinction:

Consumer AI vs. enterprise, lawyer-directed AI workflows

Practical Guidance: How to Use AI Without Waiving Privilege

For startups, Web3 teams, and legal operators, the takeaway isn’t “don’t use AI.” It’s “use it intentionally.”

Build Guardrails Early
  • Prohibit uploading sensitive legal or strategic information into public AI tools

  • Train teams on what not to input

Use the Right Infrastructure
  • Prefer enterprise AI tools with:

    • Confidentiality commitments

    • No-training clauses

    • Data isolation

Keep Counsel in the Loop
  • Ensure AI use is directed or supervised by legal counsel where privilege matters

  • Treat AI like a tool of the lawyer, not a substitute

Assume Disclosure Risk

If you wouldn’t send it to a third-party vendor…
don’t put it into AI.

The Bigger Picture: Law Isn’t Changing, Application Is

Heppner is not a story about courts reinventing privilege.

It’s a story about courts refusing to bend doctrine to fit new technology.

Instead, they are asking a simpler question:

Did you share confidential information with a third party?

If the answer is yes, privilege likely disappears, AI or not.

Final Takeaway

AI is transforming how legal work gets done. But it is not changing the fundamentals of legal protection.

And Heppner makes one thing clear:

Speed without structure is risk.
Convenience without confidentiality is exposure.

For anyone building, advising, or operating in high-stakes environments—this is no longer theoretical.

It’s operational.
