
AI Washing: The Dirty Truth of AI Litigation

  • Writer: Steven Barge-Siever, Esq.
  • 3 days ago
  • 6 min read

What AI Washing Means for Litigation, Risk, and Insurance - and How Underwriters Are Getting Smarter


By Steven Barge-Siever, Esq.

Upward Risk Management LLC


Almost a year ago, I spoke with an underwriter who had adopted an early-stage AI tool designed to streamline data collection for risk evaluation. It sounded promising - until they discovered the “automation” was actually being performed by offshore contractors, manually scraping and inputting the data.


Fast forward nine months: I had a conversation with a broker using a platform marketed as an AI solution for policy placements and coverage reviews. But the turnaround time was slow. Suspiciously slow. The kind of delay you’d expect if a person - not a model - was doing the heavy lifting. The broker concluded the tech was likely human labor disguised behind an AI-branded front end - and that the vendor was probably just harvesting data.


These aren’t isolated anecdotes - they’re early warning signs of a broader trend: AI washing, and its legal sibling, AI litigation.


AI Washing and AI Litigation

What Is AI Washing?

AI washing refers to the exaggeration or misrepresentation of a company’s use of artificial intelligence in products, services, or operations. It’s often used to attract capital, customers, or media coverage, and it's becoming a serious legal, regulatory, and insurance risk.


Where greenwashing distorts environmental credentials, AI washing distorts technological credibility. And when that distortion shows up in investor decks, press releases, or SEC filings, it opens the door to AI litigation, securities fraud, and regulatory enforcement.


Puffery or Material Misrepresentation?

Startups live in a world where puffery is a survival strategy. Pitch decks are built on optimism. Founders are taught to “sell the vision,” sometimes well before the product is fully built. In places like San Francisco, statements like “We’re building world-class AI” or “This is the smartest platform on the market” aren’t just tolerated - they’re required. Investors reward bold claims over modest realism.


But that kind of puffery doesn’t fully translate to New York, London, Delaware, or Washington, D.C.


The startup playbook doesn’t hold up in a courtroom.

In legal terms, puffery refers to vague, promotional statements that no reasonable investor would rely on as fact. Courts generally excuse language like “cutting-edge,” “best-in-class,” or “industry-defining” as non-actionable sales talk.


But AI misrepresentation is different.


When a company claims “Our platform is AI-powered” - and it turns out to be driven by human labor, simple scripts, or unmodified third-party tools - that’s not puffery. That’s a material misstatement, and in legal contexts - especially SEC filings or investor pitches - it becomes actionable fraud.


This is where real exposure begins:

  • Securities litigation

  • Regulatory enforcement

  • Insurance coverage denial


It’s one thing to impress a seed-stage investor in Palo Alto. It’s another to explain the same claim to a regulator or judge in Delaware.


For brokers, underwriters, and VC-backed founders, this distinction isn’t academic - it’s survival.


Puffery may be harmless in the pitch. AI washing, by contrast, is prosecutable.


Real AI Litigation: Enforcement and Lawsuits Are Already Here

We’re no longer in hypothetical territory - AI washing is already triggering lawsuits, enforcement actions, and securities fraud charges:


  • Presto Automation: Target of the SEC’s first formal AI washing enforcement action. The company allegedly misrepresented the AI capabilities of its voice product, which in reality relied heavily on off-the-shelf third-party tools.

  • Nate (Albert Saniger): Marketed as an AI-powered app that automated online purchases. In reality, transactions were executed manually by overseas workers. The founder was charged with securities fraud for misleading investors.

  • Innodata, AppLovin, and Skyworks: Each faced securities litigation from investors alleging that the companies overstated their use and integration of AI, distorting market value and misleading shareholders.


These cases are leading indicators of growing regulatory attention and plaintiff activity in the AI sector.


Is There Insurance for AI Misrepresentation? It Depends.

What Might Be Covered

  • Securities class actions alleging AI misrepresentation may trigger Directors & Officers (D&O) insurance for legal defense and settlements, provided there's no final adjudication of fraud.

  • Defense costs during SEC investigations may be covered under sublimits, depending on policy wording.

Where Coverage Breaks Down

  • Fraud exclusions: If a court or regulator finds intentional deception, coverage can be voided.

  • Prior knowledge: If executives knew the AI claims were false when applying for coverage, the insurer may walk away.

  • Regulatory fines and penalties: Often sublimited or excluded completely.


For venture-backed startups, accurate AI disclosures aren’t just a legal risk - they’re a risk transfer issue. Missteps can void the very insurance meant to protect the leadership team.


AI Underwriting Is Evolving, Fast

Underwriters have already been misled by “AI tools” that were little more than front ends to human work. That experience is now shaping how insurers evaluate technology companies.


How Can Underwriters Spot AI Washing?

AI Risk Flags: Signals of Misrepresentation in Startup Disclosures

  • Red flag: Vague references to “AI capabilities” with no architectural detail.
    What it likely indicates: Absence of true ML infrastructure; likely reliance on deterministic logic or human input masked as automation.

  • Red flag: Claims of “proprietary AI” with no IP filings, technical whitepapers, or codebase access.
    What it likely indicates: White-labeled or third-party models rebranded as in-house innovation; elevated risk of IP misrepresentation.

  • Red flag: Operational delays inconsistent with claimed automation.
    What it likely indicates: Underlying processes are manual or pseudo-automated; potential breach of SLA or deceptive product claims.

  • Red flag: Overreliance on buzzwords like “LLM,” “neural net,” or “deep learning” without system-level specificity.
    What it likely indicates: Superficial understanding of ML; marketing-driven narratives unsupported by technical substance or deployment rigor.
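
To operationalize that table, a screening rubric can be as simple as the sketch below. The flag weights and the escalation threshold are hypothetical illustrations for this article, not an actuarial model.

```python
# Hypothetical AI-washing screening rubric. Flag names, weights, and the
# escalation threshold are illustrative assumptions, not an actuarial model.
from dataclasses import dataclass

@dataclass
class RedFlag:
    description: str
    weight: int  # relative severity, 1 (mild) to 3 (severe)

RED_FLAGS = [
    RedFlag("Vague 'AI capabilities' claims with no architectural detail", 2),
    RedFlag("'Proprietary AI' with no IP filings, whitepapers, or code access", 3),
    RedFlag("Operational delays inconsistent with claimed automation", 3),
    RedFlag("Buzzwords (LLM, neural net) without system-level specificity", 1),
]

def screening_score(observed: list[bool]) -> int:
    """Sum the weights of the flags observed during diligence."""
    return sum(f.weight for f, seen in zip(RED_FLAGS, observed) if seen)

# Example: a submission shows vague claims plus automation-inconsistent delays.
score = screening_score([True, False, True, False])
ESCALATE_AT = 4  # assumed threshold for referral to technical review
if score >= ESCALATE_AT:
    print(f"Score {score}: refer to technical due diligence")
```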

Smarter Underwriting Questions for AI Risk Evaluation

  • What foundational model or architecture underpins your system (e.g., LLaMA, GPT-4, Mistral)? (Follow-up: Is it open-source, licensed, or internally developed?)

  • Who performed the model training or fine-tuning, and on what data? (Look for in-house vs. outsourced work, use of proprietary vs. scraped vs. synthetic datasets, and data governance controls.)

  • Which tasks are fully autonomous, and which involve human intervention or post-processing? (You’re looking for clarity on decision boundaries, confidence thresholds, and fallback protocols - a minimal routing sketch follows this list.)

  • Do you manage inference on your own infrastructure, or are you dependent on third-party APIs like OpenAI, Anthropic, or Cohere? (This affects latency, security posture, and contractual control over your core functionality.)

  • How do you monitor for model drift, hallucinations, or system degradation over time? (Absence of a model monitoring strategy is a serious technical and operational liability - see the monitoring sketch further below.)
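
On the autonomy question, here is a minimal sketch of what a disclosed human-in-the-loop decision boundary can look like in code. The 0.85 threshold and the classify() stub are hypothetical stand-ins for a real model call.

```python
# Minimal human-in-the-loop routing sketch. The threshold and the classify()
# stub are hypothetical; a real system would invoke its model here.
CONFIDENCE_THRESHOLD = 0.85  # assumed decision boundary

def classify(document: str) -> tuple[str, float]:
    """Stand-in for a model call; returns (label, confidence)."""
    return ("low_risk", 0.62)  # placeholder output for illustration

def route(document: str) -> str:
    label, confidence = classify(document)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"automated:{label}"  # fully autonomous path
    return "human_review"            # fallback protocol: queue for a person

print(route("sample submission"))  # -> human_review (confidence below threshold)
```

A company that can articulate its system at this level of specificity - where the boundary sits, and what happens below it - is disclosing; one that can't, or won't, may be washing.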


If the answers are evasive, vague, or overly polished - price the risk accordingly.
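
And on the monitoring question: even a bare-bones drift strategy can be sketched in a few lines - track a rolling window of model confidence and alert when it falls away from a deployment baseline. The window size, baseline, and tolerance below are illustrative assumptions.

```python
# Bare-bones drift monitor: compare a rolling window of model confidence
# against a fixed baseline. All constants are assumed, illustrative values.
from collections import deque
from statistics import mean

BASELINE_CONFIDENCE = 0.90   # assumed average confidence at deployment
TOLERANCE = 0.05             # assumed acceptable drop before alerting
WINDOW = 200                 # assumed rolling-window size

recent = deque(maxlen=WINDOW)

def record(confidence: float) -> None:
    """Log one prediction's confidence and alert on sustained drift."""
    recent.append(confidence)
    if len(recent) == WINDOW and mean(recent) < BASELINE_CONFIDENCE - TOLERANCE:
        print(f"DRIFT ALERT: rolling mean {mean(recent):.3f} "
              f"below baseline {BASELINE_CONFIDENCE:.2f}")

# Example feed: sustained low-confidence predictions trigger the alert.
for _ in range(WINDOW):
    record(0.80)
```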


Model Architecture Red Flags (What Founders Say vs. What Underwriters Hear)

As underwriters probe more deeply into AI operations, these three claims are increasingly scrutinized:

  • “We built our own LLM.” Probably not - at least not without multi-million-dollar compute access and deep ML talent.

    • Often a red flag for exaggeration unless accompanied by technical proof (training logs, architecture specs, compute documentation).

  • “We fine-tuned an open-source model.” Feasible (and something I've been building), but triggers questions about model choice (e.g., Mistral, LLaMA, DeepSeek), training data provenance, data handling practices, and post-deployment safeguards (e.g., hallucination filters, PII obfuscation).

  • “We use GPT/Claude via API.” Perfectly valid - but only if disclosed.

    • Risk escalates when companies rebrand hosted inference as “proprietary AI,” omit dependency disclosures, or lack security protocols around sensitive data transmission.
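
For context, the hosted-API pattern those claims describe is often just a thin wrapper like the sketch below. It assumes the official openai Python client; the model name and the regex-based PII scrub are illustrative, not a complete safeguard. The design itself is perfectly legitimate - the exposure comes from rebranding it as proprietary or skipping the data controls.

```python
# Hedged sketch of the "we use GPT via API" pattern: a thin wrapper around a
# hosted model, with a naive PII scrub before transmission. The regex patterns
# and model name are illustrative assumptions, not a complete safeguard.
import re
from openai import OpenAI  # assumes the official openai client is installed

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scrub_pii(text: str) -> str:
    """Redact obvious identifiers before sending data to a third party."""
    return SSN.sub("[SSN]", EMAIL.sub("[EMAIL]", text))

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(document: str) -> str:
    """All 'AI' functionality here is hosted inference - and should be disclosed as such."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user",
                   "content": f"Summarize this claim file:\n{scrub_pii(document)}"}],
    )
    return response.choices[0].message.content
```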


Where the First AI Lawsuits Will Hit Startups

Litigation won’t wait for IPO.

  • Enterprise buyers may sue over breach of contract if the AI solution is really human-powered.

  • Investors may sue under Rule 10b-5 when they realize valuations were built on inflated claims.

  • Regulators (SEC, FTC) may investigate companies for deceptive trade practices, especially those with consumer exposure.

These claims are already surfacing at Series A and B - not just in the public markets.

Evidence of AI misrepresentation at early stages: In 2023, the SEC charged the founder of Joonko, a Series B AI hiring startup, with fraud for exaggerating its AI capabilities and customer base. The company had raised $38M before collapsing.


The Acceleration of AI Litigation: What’s Next

AI litigation is poised to accelerate - driven by overstated capabilities, opaque disclosures, and a flood of startups racing to capitalize on hype. Regulators are watching. Plaintiff firms are circling. And underwriters are starting to adapt.


At URM, we expect this pressure to grow - not just for public companies, but for any startup leaning into AI as a core feature. That includes:

  • SaaS tools claiming automated decision-making

  • Fintechs promoting AI-driven underwriting or lending

  • HR tech and recruiting platforms using AI for hiring or compensation decisions

  • Cyber companies offering AI threat detection

If your company is making AI claims, you need to know what’s covered - and what won’t be.


Conclusion: Insurance Protects the Truth, Not the Hype

AI is revolutionizing risk - and AI litigation is redefining it.


Founders must understand that the language used to raise capital is now subject to scrutiny from regulators, insurers, and courts. Brokers must prepare clients for the legal and coverage fallout of AI misrepresentation. Underwriters must dig deeper than pitch decks - and adjust pricing, limits, and exclusions accordingly.


Because in this next wave, insurance doesn’t protect the promise of innovation. It protects the consequences of exaggeration.


About Upward Risk Management (URM)

Upward Risk Management is a specialist insurance advisory firm built for the modern risk economy. We advise venture-backed companies, PE and VC funds, brokers, and underwriters on risk strategy, insurance structuring, and securing coverage in high-complexity areas - especially where AI, automation, and regulatory exposure converge.


Whether reviewing Tech E&O, D&O, or bespoke policy language, URM delivers clarity in chaos.


We translate legal risk into insurance strategy - and expose where traditional coverage falls short in the face of emerging technology.


We don’t insure buzzwords. We insure what's real.



