ChatGPT as Opposing Counsel
July 24, 2025 | By Rad Wood
For years, people have debated how AI might transform the legal profession. Much of the conversation, rightly so, focuses on what the future looks like for younger attorneys still gaining experience, and on the types of tasks that may be delegated to AI. These are meaningful questions, and ones we are still in the process of answering.
In the past few months, I’ve encountered an issue related to AI in the legal profession that has yet to receive the same level of scrutiny: the use of AI, particularly ChatGPT, by non-lawyers addressing legal issues. Of course, AI tools such as ChatGPT can provide fast and efficient legal information, but that information isn’t always accurate.
The frequency of AI errors is both high and alarming. Stanford University’s Institute for Human-Centered Artificial Intelligence found that general-purpose LLMs like ChatGPT hallucinate on legal-specific queries at rates ranging from 69% to 88%. Even AI tools built for the legal industry that lawyers use, like Lexis+ AI (a 17% hallucination rate) and Westlaw’s AI-Assisted Research (a hallucination rate above 34%), often return incorrect information.
The difference is that a seasoned attorney does not (and should not) use AI simply to obtain an answer. Instead, they treat AI as a resource for research, summarization, or testing the strengths of their arguments. Their expertise in the law is the backstop against hallucinations.
But when a non-lawyer plugs an agreement into an AI tool such as ChatGPT, the outcomes can be disastrous. Just this past week, I encountered three of these situations. One of these involved a client with a new board member. That board member approached a long-time executive to inquire why the company’s operating agreement contained “Clause X.” The long-time executive, who was head of legal compliance and knew the operating agreement by heart, responded, “Clause X is not in the operating agreement,” to which the board member responded, “Yes, it is. I put the operating agreement into ChatGPT and it highlighted this provision.” The executive then told the new board member to ask ChatGPT directly if the provision existed. When the board member did so, ChatGPT admitted to fabricating its answer because, “Clause X is common in operating agreements.” Fortunately, this initial AI error did not result in any injury.
However, the next two examples did lead to severe issues. In the second example, a co-founder put his restricted equity agreement and intellectual property assignment agreement into ChatGPT. ChatGPT, as he told me, confirmed his suspicion that the other co-founder was re-trading the deal. After he explained the deal documents to me, I responded that what he described sounded like standard founder agreements. I added that if he wanted additional protections, we could likely negotiate those terms, but it didn’t sound like a re-trade. Unfortunately, by that point, it was too late. He had already sent messages to his co-founder based on ChatGPT’s feedback. They were in the process of breaking up.
In the third example, another founder asked ChatGPT questions related to debt and personal guarantees. ChatGPT provided several ideas that didn’t align with how these concepts operate in the real world. The feedback, in turn, upset the founder and caused more worry than the situation warranted. Eventually, that worry spread to other members of the team.
Kevin recently shared a similar experience with me. A client of his used ChatGPT to review an agreement and was frustrated when it pointed out that a specific transfer restriction was missing. The restriction was absent, however, because it existed in a separate agreement the parties were already bound by. Not knowing this, the client relied solely on ChatGPT. Unfortunately, the resulting confusion held up the deal by several days.
While these informal uses of AI by non-lawyers may not be as egregious as the more than 120 documented court filings by lawyers containing AI hallucinations (over 58 so far in 2025), they still pose a risk to the practice of law and have materially changed how we lawyers have to navigate our communications with clients.
As Dr. Sriraam Natarajan, a professor of computer science at The University of Texas at Dallas, points out, these AI tools do not “think” but instead “predict” and “mimic.” Yet the average user anthropomorphizes AI and does not realize this very important difference. So, when ChatGPT tells a founder that their IP docs are incorrect or suggests changes to the terms of a debt financing, it is not doing so with years of legal practice or an understanding of how these agreements function within the context of the deal. Rather, it is providing a prediction shaped heavily by the prompt it receives.
Thus, my message to all the startup founders out there is to be mindful when using AI. AI can be a powerful, helpful aid when used correctly. Sometimes, its mistakes can even be entertaining, kind of like the time Kevin got really into magic tricks for a while. But ultimately, you cannot rely on AI to think through a legal issue for you. It can predict, mimic, and provide feedback that sparks new ideas, but it can also hallucinate, and you certainly don’t want to rely on those hallucinations during critical legal negotiations or disputes.