Stop Vibe Lawyering
Claude is not your lawyer (but your lawyers should be using AI)
The Thesis: Founders, operators, and even in-house lawyers are typing their most sensitive legal problems into consumer chatbots and treating the output like privileged counsel. I see it with clients. I see it with friends. I see it with my doctor (he should know better). It is creating risk that most of them have not thought through.
The Framework: Everything you type into a public AI tool is discoverable. A federal court just confirmed it. The court applied the old privilege rules exactly as written.
The Takeaway: Stop vibe lawyering. Start ensuring your lawyers use AI under proper guardrails. The difference between those two things is the difference between a pile of self-inflicted discovery problems and sound legal advice that stays privileged.
1. Everyone I Know Is Doing This
I have a friend who typed the details of an employment dispute, including things he did in violation of his non-compete, into ChatGPT and asked it to draft a separation agreement. A client who fed Claude the facts of a plan with real regulatory risk and asked whether they had exposure or could “work some loopholes.” A founder who used an AI chatbot to figure out whether a prospective employee’s non-solicitation agreement had any teeth. None of them talked to a lawyer first. All of them created a written record of their legal exposure that lives on a third party’s servers.
Vibe coding is the practice of describing what you want to a chatbot and shipping whatever code it gives you. You do not read it line by line. You do not fully understand it. You vibe with it. For a side project, fine. For production software, dangerous. Vibe lawyering is the same instinct applied to legal problems: type your facts or your darkest secrets into a chatbot, get something that reads like legal analysis, and move forward as if you consulted your lawyer. People are doing this for employment disputes, equity negotiations, regulatory questions, IP conflicts, investment deals, strategic partnerships, and contract interpretation. They are doing it because the output is just good enough to feel like advice.
Founder, friend, and family takeaway: If you have typed the details of a live legal issue into a consumer AI tool, you have created a discoverable document. If a dispute arises and the other side requests your communications, that chat log is fair game, including everything you thought, admitted, or gamed out. This is true whether the dispute is with an employee, an investor, a co-founder, a competitor, or a regulator.
2. A Federal Court Recently Confirmed What Should Have Been Obvious
In February 2026, Judge Rakoff of the SDNY ruled in United States v. Heppner that thirty-one documents from a defendant’s Claude conversations were not privileged and not protected as work product. Heppner had been charged with securities fraud tied to an alleged $150 million scheme. Before his arrest, after receiving a grand jury subpoena and retaining counsel, he used the consumer version of Claude to research legal issues and generate reports on defense strategy. He did this without his lawyers directing any of it, then shared the outputs with his defense team.
The court applied ordinary privilege rules and found that Heppner failed on the first two elements of attorney-client privilege, “if not all three”: a communication with an attorney, made in confidence, for the purpose of obtaining legal advice. Claude is not an attorney and owes no fiduciary duty. Anthropic’s privacy policy permits the company to collect inputs, train on them, and disclose data to regulators and parties in litigation, so there was no reasonable expectation of confidentiality. And Heppner was not seeking legal advice from counsel; he was seeking it from a chatbot that tells you, in its own words, that it is not counsel.
Heppner is a criminal case, but the principle carries over to civil litigation and regulatory investigations. The privilege analysis does not change because the stakes are an employment claim instead of a fraud indictment. If an employee sues your company and their lawyer requests all documents related to the dispute, your Claude chat where you asked “can I fire someone for being very annoying?” is producible (this happened). If a regulator brings an action and you typed your version of the facts into a chatbot to gauge your liability, that transcript is discoverable.
Strategic takeaway: Rakoff applied the same rules courts have applied for decades. The novelty is the technology, which seems like it can almost be your lawyer. If you share sensitive information with a third party that owes you no duty of confidentiality and whose terms of service permit disclosure, you have waived privilege. The fact that the third party is software rather than a person changes nothing.
3. You Can’t Unring This Bell
The most common version of this I see goes like this: someone types their legal problem into Claude or ChatGPT, gets an output, then forwards that output to their lawyer. They assume that because the material ended up with counsel, it is now privileged. It is not. Nonprivileged communications do not become privileged because someone later transfers them to a lawyer. This is well-established law. A document that was not protected when created does not gain protection by landing in a privileged inbox. It is analogous to another point we often have to remind people of: CCing a lawyer on an email does not, by itself, make that email privileged. You would be surprised how many otherwise very intelligent people run afoul of this.
Rakoff confirmed this in Heppner. The work product claim failed for the same reason: Heppner created the documents on his own, not at counsel’s direction. Even though the documents later influenced defense strategy, they did not reflect counsel’s strategy at the time of creation.
A founder types “Partner X is threatening to sue over an IP breach, here are the facts, do we have exposure” into a consumer chatbot. That chat is not legal work product. Forwarding the output to your lawyer does not make it legal work product. If litigation happens, opposing counsel can read every word of it.
Founder takeaway: The time to involve counsel is before you generate the material, not after. You cannot launder unprivileged material into privilege by forwarding it to your lawyer.
4. The Right Way to Use AI for Legal Work
The Heppner opinion is worth reading for what it did not hold as much as for what it did. Rakoff noted that the outcome might have been different if counsel had directed Heppner to use Claude. Under the Kovel doctrine, which extends privilege to non-lawyer professionals when an attorney engages them to assist in providing legal advice, a well-structured AI engagement might qualify. If your lawyer directs you to use a specific AI tool as part of the legal representation, with appropriate confidentiality protections and enterprise-grade data handling, that tool might function as counsel’s agent.
Every GC reading this should also pay attention to what Rakoff left open. He did not say AI tools can never be part of privileged work. He said that unsupervised use of a consumer chatbot is not privileged work. That gap is where your AI policy should live.
AI is a powerful tool for legal work. Lawyers who use it to draft, research, and pressure-test arguments are faster and more thorough for their clients. The best practice is to stop vibe lawyering and start using AI through counsel, on platforms with enforceable confidentiality protections, at counsel’s direction.
When your lawyer uses an enterprise AI platform under a data processing agreement to help research your matter, that work is done at counsel’s direction and within the scope of the representation. When you open Claude in your browser at 2 a.m. and, to save on legal fees, start typing out the details of everything you have done or thought that might be bad, you are vibe lawyering. Same AI. Same capabilities. Different legal consequences depending on who directs the work and how the data flows.
Vibe coding broke a lot of production apps before the industry figured out where the guardrails belong. Vibe lawyering is going to break a lot of privilege claims before founders and GCs figure out the same thing. I wrote this because I keep having the same conversation: someone shows me a Claude chat where they laid out all the facts of a legal scenario and asked for legal analysis, and I have to explain that they just created a discoverable document. The privilege framework that protects your most sensitive communications has not changed. Rakoff applied it to new technology the same way he would have applied it to a phone call with a stranger. The question is whether your AI use is structured to be protected or structured to be produced. You have to decide that before you hit enter.


