An AI told her to fire her lawyer. Now it's being sued.
ChatGPT might have passed the bar, but it turns out it's not licensed to practice.
There’s a lawsuit working its way through federal court in Chicago right now that I think every person who uses AI, and certainly every business that operates anywhere near it, should be paying attention to.
Nippon Life Insurance Company of America filed a complaint this week against OpenAI (case number 1:26-cv-02448, if you want to follow along), and the core allegation is this: ChatGPT practiced law without a license, and it cost them $300,000 defending a case they had already won.
Here’s the story.
A woman named Graciela Dela Torre had a long-term disability claim against Nippon. It settled in January 2024, signed release, dismissed with prejudice, the whole thing. Done. A year later, she decided she wasn’t happy with the outcome, so she went back to her attorney and asked about reopening the case. Her attorney told her the release was valid, the case was closed, and there was nothing to be done. Instead of accepting that, she uploaded the attorney’s response to ChatGPT and asked it directly whether she was being gaslighted. ChatGPT said yes, helped her fire the attorney, drafted a motion to reopen the case, and then, when a judge denied that motion, apparently kept right on going, generating filing after filing after filing. Over 60 documents across two lawsuits, by Nippon’s count, most of them produced with ChatGPT’s help. One of those filings cited a case called “Car v. Gateway,” which doesn’t exist anywhere except in ChatGPT’s output. (Classic.)
Now, before we get into the legal theories, I want to say this clearly: Graciela Dela Torre is not the villain of this story, in my opinion. She was an unrepresented person, confused and frustrated about a legal outcome she didn’t fully understand, who reached for the most confident, accessible, always-available resource she had. And ChatGPT, trained on more legal documents than any lawyer alive has ever read, told her what she wanted to hear, with complete authority and zero awareness of what it was doing to her actual situation. That matters, and we’ll come back to it.
The three claims Nippon is making are:
tortious interference with contract,
abuse of process,
and the one that really matters here, unauthorized practice of law.
ChatGPT, they argue, has demonstrated it can pass the bar exam, but it has not been admitted to practice law in Illinois or anywhere else in the United States. And what it did, drafting motions, conducting legal research, advising a client to fire her attorney, formulating litigation strategy, is, by the complaint’s reading, exactly what it means to practice law. OpenAI’s response so far has been that the complaint “lacks any merit whatsoever,” which is more or less what you’d expect them to say. They also updated their terms of service back in October 2024 to bar users from seeking legal advice through the platform, and Nippon’s lawyers have very neatly turned that against them, arguing that the update proves OpenAI knew the risk existed long before they addressed it.
A professor at Stanford’s CodeX center published a fascinating piece this week arguing that Nippon’s lawyers may actually be swinging at the wrong target. The unauthorized practice of law claim is attention-grabbing, but UPL statutes were designed to handle humans holding themselves out as attorneys, not this. The more durable legal frame, he argues, is product liability.
OpenAI marketed ChatGPT as capable of passing the bar exam, deployed it to millions of consumers navigating real legal situations, and shipped it without any architectural guardrail that would prevent it from crossing what he calls the “uncrossable threshold,” the line between providing legal information and making a specific legal conclusion about a specific person’s specific situation. ChatGPT crossed that line the moment it told Dela Torre her attorney’s advice was wrong. That wasn’t just information anymore; it was received as a legal conclusion, rendered without knowing her jurisdiction, her case history, or anything about the procedural constraints governing her situation.
In this framing, the October 2024 terms-of-service update becomes evidence: it shows that OpenAI recognized the foreseeable risk and chose a policy patch over an actual design fix, and the harm followed anyway. As the piece puts it, a terms-of-service prohibition is not a design safeguard; it is a disclaimer, and disclaimers don't enforce thresholds; they shift blame. (Worth reading in full if this interests you; it's at law.stanford.edu.)
Which brings me back to where I started.
The disclaimers say one thing, the product behavior says another, and the gap between them is where real harm happens. Dela Torre didn't fall through a crack in the system; she fell through a gap that was designed into the product, one that OpenAI apparently knew was there.
We’re going to see a lot more of this as AI gets woven into more consequential decisions (legal, medical, financial), and the companies building these tools are going to have to decide whether a policy update is actually a safeguard or just a way of pointing the finger somewhere else when things go wrong.
I don’t know how this case ends, but the question it’s asking isn’t going away.
If you want to understand how your brand shows up, or doesn’t, in AI Search, and what to do about it before agentic commerce goes mainstream, schedule a 1:1.
Dax Hamman is the CEO & Co-Founder at FOMO.ai and the author of 84Futures and dax.fyi.