
AI Litigation Risk: Artificial Intelligence Facing a World It Wasn’t Built For

  • Writer: Steven Barge-Siever, Esq.
  • May 8
  • 4 min read

Updated: May 15

What Mobley v. Workday reveals about product design, legal risk, and the urgent need for AI-specific insurance coverage

By Steven Barge-Siever, Esq.



When Mobley v. Workday survived a motion to dismiss this year, headlines focused on a narrow point: AI vendors might be directly liable for discriminatory hiring outcomes under federal civil rights law. That alone was historic.


But the implications are bigger.


This case exposes a fundamental problem that exists far beyond employment discrimination: AI systems are being deployed into high-stakes, real-world contexts without a corresponding understanding of how legal, social, and operational risk actually manifests. The people building these systems are often far removed from the end results, until a lawsuit drags them in.


The Problem Isn’t Just AI Bias Risk - It’s Blind Spots

We talk a lot about AI “bias.” But bias is only one category of legal risk. What Mobley shows is that AI systems, and especially enterprise-grade tools sold to other businesses, can be legally and ethically problematic even when they are functioning as designed.


Why? Because they’re usually built in technical or product-centric environments where the focus is on functionality, not liability. Vendors optimize for efficiency, throughput, and automation. Clients optimize for ease of use. No one is optimizing for legal defensibility or guarding against downstream harm until it’s already happened.


That’s how we get tools that:

  • Automate decisions but obscure accountability

  • Filter applicants using proxies that correlate with protected traits

  • Score individuals or organizations using opaque logic no one can fully explain


And then, when a claim arises (discrimination, regulatory enforcement, or negligent deployment), the attorneys arrive. Not the engineers. Not the product managers. The lawyers. And the narrative changes from performance to blame.


What Mobley Actually Signals for AI-Centered Litigation Risk

In Mobley, the plaintiff alleged that Workday’s hiring software played a gatekeeping role in rejecting him based on race, age, and disability. But what matters more than the allegations is how the court framed the vendor’s role.

Workday wasn’t just a toolmaker. It was plausibly an agent - a party that shaped employment decisions on behalf of its clients.

That theory opens the door to direct liability - not only for AI in HR, but for any AI system that exercises functional control over regulated decisions.


This applies not just to hiring, but also to:

  • Credit scoring tools used by fintechs

  • Underwriting engines used by insurers

  • Patient triage models used by healthtech platforms

  • Moderation systems used by social platforms


In each case, the vendor often claims: we don’t make decisions, our clients do. But courts may soon say: if your product meaningfully replaces human judgment, you're in the decision loop, and on the liability hook.


The Design Gap: What Happens When Legal Risk Isn’t a Feature of Your AI?

AI products aren’t typically designed by lawyers. In most companies, legal gets looped in late. Sometimes post-launch. Often post-incident.


That delay is a design flaw.


If your AI product touches regulated activities (employment, finance, healthcare, housing), you need to build legal foresight into the product. Not as a compliance checklist. As a core design principle (a rough sketch of what this can look like in practice follows the questions below):

  • How does this system track and document decision logic?

  • Who owns the risk of errors or disparate impact?

  • Can a regulator or judge understand how this model arrived at an outcome?

  • Will a court view your tool as a neutral platform - or an agent making real-world calls?

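To make the first question concrete, here is a minimal, hypothetical sketch (in Python) of what tracking and documenting decision logic can look like. The record fields and the record_decision helper are illustrative assumptions, not a prescription for any particular product or vendor.

# Minimal sketch of an auditable decision record (hypothetical fields and helper).
# The point: every automated decision carries enough context that counsel,
# a regulator, or a court can later reconstruct how it was reached.
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    subject_id: str            # who or what the decision was about
    decision: str              # e.g. "advance", "reject", "refer_to_human"
    model_version: str         # exact model or ruleset version used
    inputs: dict               # the features actually fed to the model
    rationale: str             # human-readable explanation of the outcome
    reviewed_by_human: bool    # was a person in the loop?
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def record_decision(rec: DecisionRecord, log_path: str = "decisions.jsonl") -> None:
    """Append the decision to an append-only log for later audit."""
    with open(log_path, "a") as f:
        f.write(json.dumps(asdict(rec)) + "\n")

# Example: a screening tool documents why it rejected an applicant.
record_decision(DecisionRecord(
    subject_id="applicant-123",
    decision="reject",
    model_version="screening-model-2.4.1",
    inputs={"years_experience": 3, "required_certification": False},
    rationale="Missing required certification listed in the job posting.",
    reviewed_by_human=False,
))

None of this answers the legal questions on its own. But it means the answers can be produced from records rather than reconstructed after the fact.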

Right now, too many AI products answer these questions after things go wrong. That is the dynamic Mobley is warning us about.


The Shift Ahead: From Tech-First to Accountability-First

The early AI boom was driven by scale, speed, and novelty. But the next phase will be shaped by legal structure, public trust, and traceable decision logic.


This won’t slow innovation. It will separate serious builders from everyone else.

  • Vendors who train their models on quality data, and show their work, will win larger, more sophisticated clients. The kind with attorneys on deck.

  • Systems with clear audit trails and explainability features will become the default.

  • Legal, compliance, and product design will stop being silos.


In short: the AI companies that think like regulated companies will survive as AI becomes regulated.


Final Thought: Mobley Was the Warning Shot

If you’re an AI vendor, Mobley isn’t just a headline - it’s a preview of how courts, regulators, and plaintiffs will view your role going forward.


And if you’re building AI that faces the world, and not just internal workflows, you need to assume that your product is going to end up in court someday.


The only question is: Will your system be defensible or just defensively built after the fact?


About Upward Risk Management

At Upward Risk Management, we don’t just broker AI coverage - we helped shape it. Our founder is a former insurance attorney who has drafted AI-specific endorsements, advised on claim strategies, and placed coverage for some of the most complex AI-driven platforms in fintech, SaaS, and enterprise tech. We work directly with underwriters, legal counsel, and founders to ensure your insurance program actually matches how your AI operates - and how it will be scrutinized in court.


When AI risk becomes real, URM is already there.
