
Interview with A Vampire: AI Litigation Example / Strategy from a Plaintiff Attorney

  • Writer: Steven Barge-Siever, Esq.
  • May 17
  • 6 min read

Updated: May 19

By Steven Barge-Siever, Esq.


Example: How Plaintiff Attorneys Build an AI Litigation Case


A seasoned plaintiff attorney lays out how litigation is likely to take shape as AI enters high-stakes decisions - and why early-stage companies should take notice before plaintiffs do.


AI Litigation - A Legal Analysis

Example AI Litigation vs. Classic Plaintiff Malpractice

I’ve done this before - not with AI, but with buildings, surgeries, and drug labels. I’ve sued developers who blamed subcontractors. I’ve cross-examined doctors who swore the outcome wasn’t their fault.


Every industry assumes its complexity will protect it. But in a courtroom, complexity doesn’t protect you - it isolates you. When juries don’t understand how a system works, they look for someone to hold accountable. And the more opaque the system, the more they want a human face to blame.


Add to that a headline number - like a $100 million valuation - and they don’t think “runway” or “Series B.” They think you’re a cash cow. To most jurors, $100 million might as well be infinite.


It shifts the emotional posture of the case. You’re no longer a scrappy startup. You’re a company that can afford to do better.


So when I look at a Series A or B tech company using AI to make credit decisions, offer dynamic pricing, or flag user risk, I see a familiar pattern emerging. If litigation takes off in this space - and I believe it will - here’s what I expect it to look like.


Step 1: AI Litigation Starts with the Human Story

Plaintiff attorneys typically begin with a narrative that feels intuitively unfair - because that’s what resonates with juries.


Maybe a borrower is denied a loan - same income and same region as another applicant, but a different outcome, and a different demographic profile.


Or maybe a user is flagged as suspicious - they clicked through too quickly, or their IP address changed. They don’t know why they were blocked. They just know they were.


Attorneys can make a fairly confident assumption that the company hasn’t fully documented or audited how that decision was made - and that gives them space to frame the harm not as bad luck, but as a systemic failure buried in automation.
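
To make that concrete, here’s a minimal sketch of the kind of screen a plaintiff’s expert might run once outcome data surfaces in discovery - a simple approval-rate comparison across two groups, loosely modeled on the “four-fifths” rule used in disparate-impact analysis. Every number and group label here is hypothetical.

```python
# Minimal sketch: comparing approval rates across two applicant groups.
# All figures and group labels are hypothetical - this shows the shape of
# the analysis, not anyone's actual data or a legal standard.

def approval_rate(approved: int, total: int) -> float:
    """Share of applicants in a group who were approved."""
    return approved / total

group_a = approval_rate(approved=720, total=1000)   # 72% approved
group_b = approval_rate(approved=510, total=1000)   # 51% approved

# The "four-fifths" screen borrowed from disparate-impact analysis:
# if one group's rate falls below 80% of the other's, the disparity
# is usually treated as something the defendant has to explain.
impact_ratio = min(group_a, group_b) / max(group_a, group_b)

print(f"Group A approval rate: {group_a:.0%}")
print(f"Group B approval rate: {group_b:.0%}")
print(f"Impact ratio: {impact_ratio:.2f}  (flagged if below 0.80)")
```

A disparity like that doesn’t prove anything on its own - but it’s exactly the kind of number that anchors a complaint.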


Attorneys looking at these cases today will often point to Mobley v. Workday as a sign of where courts are heading.


In that case, the court allowed discrimination claims to move forward against Workday itself, the AI vendor - reasoning that a vendor whose tool screens applicants can be treated as an agent of the employers that rely on it.


“You built the system. You enabled the outcome. You share responsibility.”

That’s the shift. Courts are beginning to recognize AI tools as real decision-makers - and they’re willing to treat both the deployers and the creators as accountable.


If they’re litigating against an AI fintech, they’ll cite Mobley to show this reasoning is already gaining traction.


Step 2: Follow the Promises

Next, plaintiffs will examine the company’s public-facing materials - the website, onboarding language, and investor decks.


If they find statements like:

  • “Bias-free lending”

  • “Democratizing financial access”

  • “Objective, automated decisions”

Those statements become commitments. And any divergence from them becomes a liability.

That gap between what’s promised and what’s delivered supports claims like:

  • Misrepresentation

  • Unfair or deceptive business practices

  • Negligent oversight or design


The case doesn’t require technical failure. Just inconsistency that creates the appearance of harm.


Step 3: Push for Discovery

To move forward, plaintiff attorneys file claims that are likely to survive the initial stages - just enough to unlock discovery. That’s where they start building leverage.


They look for:

  • Slack messages about flagged users or product changes

  • A/B test results showing unequal outcomes

  • Lack of model audits or explainability standards

  • Vendor relationships that shift liability but not accountability


Fast-moving startups are rarely buttoned up at this stage. And that’s not unethical - it’s just operational reality. But in litigation, missing documentation often reads as missing diligence.


Step 4: Break the Firewall

Most companies will try to position themselves as neutral facilitators:

  • “The AI made the decision.”

  • “That was a third-party tool.”

  • “We just show the result.”

But attorneys have seen these firewalls before in healthcare, construction, and product liability.


Courts don't often buy it.


The typical argument against that firewall:

  • The company profited from the outcome

  • It had the power to test or question the tool

  • It positioned the system as trustworthy or unbiased


That’s often enough for a theory of shared liability, or negligent oversight - especially in the eyes of a jury.


Step 5: Pressure the Policy Limits

This is where things turn strategic.


If the company has $1M in D&O, that’s not protection - that’s a legal budget. It’s often just enough to retain outside counsel, get through early motions, and respond to discovery.


And it’s usually less than it looks. Most startup policies include sublimits that quietly cap:

  • Regulatory defense at $250K

  • Employment-related claims at $100K

  • Nothing at all for third-party bias or consumer allegations unless specifically endorsed

So while founders may think, “We’ve got $1M in D&O,” what they actually have is a fragmented policy - barely enough to defend, and often nothing left to settle.
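
Here’s a minimal back-of-the-envelope sketch of that math, using hypothetical figures of the kind described above. It isn’t a quote or a real policy - just an illustration of how a defense-inside-limits structure and sublimits erode a headline number.

```python
# Minimal sketch: how a headline D&O limit shrinks under sublimits and
# defense costs. Every figure here is a hypothetical illustration.

policy_limit = 1_000_000              # headline D&O limit

# Hypothetical sublimits - each caps what the policy pays for that
# bucket, regardless of the headline number.
sublimits = {
    "regulatory defense": 250_000,
    "employment-related claims": 100_000,
    "third-party bias / consumer claims": 0,   # often excluded unless endorsed
}

# Assume a defense-inside-limits policy: every dollar spent on defense
# reduces what's left to settle with.
defense_costs = 600_000               # outside counsel, motions, discovery

remaining_to_settle = policy_limit - defense_costs

print(f"Headline limit:      ${policy_limit:>9,}")
print(f"Defense spend:       ${defense_costs:>9,}")
print(f"Left to settle with: ${remaining_to_settle:>9,}")
for bucket, cap in sublimits.items():
    print(f"  {bucket:<35} capped at ${cap:,}")
```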


From a plaintiff attorney’s perspective, real settlement leverage starts at $3M to $5M+ in total limits across D&O, Tech E&O, and excess. That’s when they know there’s room to negotiate without bankrupting the company or risking board resignations.


But here's the trick most founders miss: Attorneys know how to construct claims that blur the line between what’s covered and what’s not.


They’ll include allegations that trigger the policy - while weaving in uncovered elements (like discrimination, regulatory theories, or intentional acts) that:

  • Create conflict between the company and the carrier

  • Force the company to consider partial denials

  • Introduce the risk of personal liability for executives


That’s where the real pressure builds - not in court, but in the boardroom and the carrier’s claims department.


And when a company has a $100M+ valuation, that perception becomes fuel. Jurors don’t think in terms of burn rate or future capital needs. They assume a company worth that much can (and should) afford to pay.


So while the startup is calculating runway, the plaintiff is calculating the exact moment when the board will agree to a policy-limit settlement just to make it all stop.


Because even if the claims are weak or the facts are gray, once legal costs spike and reputational risk surfaces, settling within the insurance tower becomes the path of least resistance.


But Is This Actually Happening Yet?

It’s not making headlines yet, but it’s coming.


Right now, most plaintiff attorneys are still watching and waiting. The companies are early-stage. The claims are complex. And there aren’t enough public rulings yet to support a full wave.


But once companies reach Series B or C, and more outcomes become public, lawsuits will follow. The claims will get simpler. Patterns will emerge. And eventually, the first few cases will become templates.


That’s when the second-tier firms get involved - not the elite litigators, but the high-volume operators who thrive on filing dozens of similar cases at once. It happened with wage-and-hour law. It happened with data privacy. It’s only a matter of time in tech and AI.


What They’ll Look For

These early cases won’t require a perfect set of facts. They’ll only need:

  • A rejected borrower without a clear explanation

  • A flagged user who looks like an edge case

  • A marketing claim that doesn’t line up with internal controls


From there, attorneys can apply pressure through regulators, the press, and settlement strategy. The goal is not typically trial. It’s leverage.


What Founders Should Take From This

None of this requires bad intent.


In fact, the real risk is building fast, automating early, and trusting a model that no one inside the company can fully explain. That’s what makes these cases appealing - not because they’re egregious, but because they’re gray areas that will play badly in court.


And once you're in litigation, the language of “good faith” gets overpowered by the language of outcomes.


If You’re Building in This Space

Start documenting early. Know what your model is doing and who it might disadvantage.
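
What “documenting early” can look like in practice: a minimal sketch of a per-decision audit record capturing the model version, inputs, outcome, and the reason codes surfaced to the user. The field names and values are illustrative assumptions, not a compliance standard or any particular vendor’s format.

```python
# Minimal sketch of a per-decision audit record for an automated credit
# or risk decision. Field names and values are illustrative only.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    applicant_id: str
    model_version: str               # which model/config produced the decision
    inputs: dict                     # the features actually fed to the model
    outcome: str                     # e.g. "approved", "denied", "flagged"
    reason_codes: list[str]          # the explanation surfaced to the user
    reviewed_by: str | None = None   # human-in-the-loop reviewer, if any
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    applicant_id="app-1042",
    model_version="credit-model-2025-05-01",
    inputs={"income": 85_000, "region": "NE", "credit_score": 662},
    outcome="denied",
    reason_codes=["debt_to_income_above_threshold"],
)

# Append-only log: one JSON line per decision is cheap to write today
# and far easier to stand behind in discovery later.
print(json.dumps(asdict(record)))
```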


Don’t settle for insurance that’s “standard” for startups. It won’t hold up under real pressure.


Because once litigation reaches your doorstep, preparation is the only thing you can’t buy retroactively.



What URM and Undr AI Do Differently

At URM, we don’t just shepherd quotes - we engineer insurance around litigation strategy.


We work with AI and fintech companies who know that standard coverage won’t hold up when the claims get serious - especially when you’re scaling, onboarding users, or preparing for your next round.


That means:

  • Audit-ready insurance architecture that anticipates where coverage breaks: sublimits, exclusions, and gray areas.

  • Custom D&O and Tech E&O towers aligned with real-world exposure - not back-of-the-envelope benchmarks.

  • Litigation-informed coverage based on how attorneys actually structure claims - not what underwriters assume is low risk.


And behind it all is Undr AI - our proprietary risk intelligence system that helps us analyze exposure, benchmark peer companies, and forecast how litigation could unfold against your product.


Undr AI turns legal theory into actionable underwriting insight, and URM builds your coverage around it.


Because once a claim hits your inbox, it’s too late to rethink your insurance. And when the lawsuit comes from someone who knows how to engineer pressure, your best defense is having already anticipated the strategy.


Upward Risk Management LLC

Where Litigation Strategy Meets Insurance Design.


Steven Barge-Siever, Esq.

Founder | CEO


