"DoNotPay" - FTC Litigation Against AI Litigation tools.
Jul 1
By Steven Barge-Siever, Esq.
Upward Risk Management LLC
In January 2025, the Federal Trade Commission finalized a settlement with DoNotPay, the AI legal startup that once promised to eliminate lawyers altogether. The company must pay $193,000 and can no longer claim its chatbot performs like a real lawyer - unless it can prove it.
That enforcement came more than a year after a separate media firestorm: in early 2023, DoNotPay announced it would send its AI into a courtroom to argue a traffic case, feeding lines to the defendant through an earpiece.
It wasn’t long before state bar officials pushed back. Jail threats followed. The company backed off.

These stories, taken together, reveal something deeper than a startup gone rogue. They highlight a fundamental truth that every founder in legal tech - and every AI builder touching a licensed profession - must understand: AI can process. But AI cannot practice.
The Limits of Automation in Professional Domains
DoNotPay’s original pitch was simple: let an AI argue your traffic ticket in court. Then it got bolder: let it generate court filings, offer legal advice, and represent you in small claims cases.
But legal practice isn’t just about generating words or filling out forms. It’s about judgment, ethics, and accountability - qualities that no AI possesses and no court recognizes.
The backlash was swift because the concept struck at the very heart of licensed practice.
Unauthorized Practice of Law (UPL) laws exist to protect the public from unqualified advice, not to gatekeep innovation. When AI tools cross into the domain of representation - offering advice, interpreting the law, or simulating courtroom behavior - they risk violating these laws and triggering serious penalties.
And it’s not just law.
An AI can recommend medications - but it can’t prescribe.
An AI can review architectural plans - but it can’t stamp them.
An AI can prefill an insurance application - but it can’t place coverage on your behalf.
In short: AI can simulate tasks. But it cannot carry responsibility.
The Human-in-the-Loop Imperative
Every attorney we’ve spoken with - whether litigator or legal tech adopter - shares the same tension. They’re intrigued by automation, but deeply skeptical of anything that substitutes for their judgment.
And with good reason. The legal system is structured around licensed accountability. The moment a tool crosses from document automation to decision-making, it stops being “tech” and starts being “practice.”
That distinction matters - not just for compliance, but for risk, liability, and insurance.
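To make that boundary concrete, here is a minimal sketch of the human-in-the-loop pattern (in Python, with hypothetical names throughout - an illustration of the idea, not our production system or any particular product): the software may draft, but nothing is released without sign-off from a named, licensed reviewer.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Status(Enum):
    DRAFT = auto()         # machine output only - never presented as advice
    APPROVED = auto()      # a named human has taken responsibility
    REJECTED = auto()


@dataclass
class Document:
    text: str
    status: Status = Status.DRAFT
    reviewed_by: Optional[str] = None  # bar number / license ID of the approver


def ai_draft(prompt: str) -> Document:
    """The automation step: generate a draft. (Model call stubbed out here.)"""
    return Document(text=f"[draft generated for: {prompt}]")


def approve(doc: Document, reviewer_license_id: str) -> Document:
    """The accountability step: only a licensed human can release the draft."""
    if not reviewer_license_id:
        raise PermissionError("No licensed reviewer - document cannot be released.")
    doc.status = Status.APPROVED
    doc.reviewed_by = reviewer_license_id
    return doc


def release(doc: Document) -> str:
    """Hard gate: unreviewed output never reaches the client."""
    if doc.status is not Status.APPROVED or doc.reviewed_by is None:
        raise PermissionError("Blocked: output has no accountable human behind it.")
    return doc.text
```

The design choice that matters is the hard gate in release(): the audit trail always ends at a license number, never at a model.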
The Insurance Breakdown: What Happens When You Claim to Practice Law
DoNotPay didn’t just upset regulators - it created insurability issues.
Here’s why:
False Advertising & Misrepresentation: Claims that your AI “is a lawyer” or “performs legal functions” may be deemed deceptive. That’s not just an FTC problem - liability arising from misrepresentation is often excluded from coverage under Tech E&O and D&O policies.
Professional Services Exclusions: Most policies exclude services that require a professional license unless explicitly covered. If your AI crosses the line into unlicensed lawyering, your E&O coverage may vanish right when you need it most.
Regulatory Enforcement Gaps: State bars, FTC regulators, and state attorneys general can all open investigations. Many D&O policies don’t cover informal inquiries - let alone regulatory settlements - unless that coverage is negotiated into the policy.
Designing Insurance That Reflects the Boundaries of AI
The solution isn’t fear - it’s structure. Legal tech startups (and AI platforms across professional services) need to think clearly about what their tools do, what they claim, and where human oversight fits in.
Founders should get three things right:
Marketing Alignment with Risk: Don’t promise AI replaces lawyers. Promise it helps them. Your marketing language can trigger coverage exclusions just as easily as it triggers consumer interest.
Tailored E&O Language: Policies should specifically address the type of automation you’re performing - especially if you're generating legal documents, guiding users through legal processes, or interpreting regulatory language.
D&O That Covers Real-World Scrutiny: Make sure your board is protected - not just from shareholder suits, but from regulatory inquiries, subpoenas, and personal reputational fallout.
The Bigger Picture: AI in the Age of Accountability
The FTC’s DoNotPay order isn’t just about a chatbot gone too far. It’s a referendum on the limits of AI in licensed professions. Until a machine can be licensed, held accountable, and disbarred - it cannot practice.
And so, professionals remain just that - essential.
The smart path forward isn’t “robot lawyer” hype. It’s AI as an assistant, not a substitute. Human in the loop. Professional at the core.
How We Approached Professional Services AI
At Upward Risk Management, we’ve been building AI tools with human-in-the-loop oversight from day one - for exactly the reasons outlined above. Not because AI isn’t fast, powerful, or useful. But because AI cannot be licensed. It cannot be sued. It cannot be held accountable.
That’s not just a technical limitation - it’s a legal and ethical boundary that every company operating in a licensed domain must respect.
The question isn’t whether AI is “good enough” on its own. For now, the answer is simpler: it isn’t allowed to be. The law doesn’t permit software to assume the mantle of a licensed professional. And no insurance policy will cover liability once that line is crossed.
Which is why we’ve designed our AI to assist professionals - not replace them.
We’ll be releasing our AI-assisted Attorney Policy Review system in the coming weeks, built for firms that want faster analysis, not riskier shortcuts. It will be followed by the broader rollout of Undr AI, our platform for AI-powered risk analysis, policy reviews, and insurance benchmarking - starting with fintech.