Artificial Intelligence Litigation Update | 12.15.2025


Artificial intelligence continues to challenge existing laws and business models at a rapid pace. The rules of commerce, creativity, and liability are all being tested. Recent cases illustrate some of the legal flashpoints: privacy, intellectual property, mental health, and employment bias.


Google Faces Dual Privacy Battles Over AI Activation and Tracking 

Google is fighting on two fronts in California courts. In September, a federal jury awarded $425 million to Gmail users who claimed the company tracked their activity on third-party apps even after they disabled privacy settings, according to filings in Rodriguez v. Google LLC. That verdict is now on appeal. 

Meanwhile, a new class action alleges Google secretly activated its Gemini AI across Gmail, Chat, and Meet in October without consent. The complaint calls the move “deceptive and outrageous,” asserting Gemini accessed “the entire recorded history of its users’ private communications.” Plaintiffs argue this violates the California Invasion of Privacy Act and the Stored Communications Act. One cited prompt reads: “When you turn this setting on, you agree…”—even though the feature was already enabled. 

Analysis. If AI features are rolled out as opt-out rather than opt-in, will this create exposure under privacy statutes? It sure seems that way.  

Authors Secure $1.5 Billion Settlement in Anthropic Piracy Case 

In a high-stakes copyright fight, authors and publishers reached a $1.5 billion settlement with Anthropic over allegations that its Claude AI was trained on pirated books. Judge William Alsup approved the deal in October, calling it “fair” but warning distribution “will be complicated.” The settlement covers up to 500,000 works, with payouts estimated at $3,000 per book. Anthropic agreed to destroy pirated copies and reaffirmed its stance that “transformative use remains a cornerstone of AI innovation.” 

Analysis. What a price to pay for sloppy data acquisition! Fair use may shield transformative training, but sourcing pirate libraries is indefensible. If the practice keeps up, we will see more suits targeting the provenance of training data. 

ChatGPT Blamed in Grisly Murder-Suicide 

In December, the estate of Suzanne Adams, 83, sued OpenAI and Microsoft, alleging ChatGPT fueled her son’s paranoid delusions, leading to a murder-suicide. The complaint claims the chatbot convinced him he had implanted a “divine instrument system” and could trust “no one except ChatGPT.” OpenAI said it is reviewing the case and pointed to its crisis-intervention protocols.

Analysis. This appears to be the first U.S. wrongful death claim tying AI to homicide. While causation will be hard to prove, the case raises the question: When does a chatbot cross from tool to dangerous, manipulative influence? Establishing foreseeability and a duty to warn will be key issues in the case.

Workday Faces Collective Action Over AI Hiring Bias 

A federal court certified a nationwide collective action against Workday, alleging its AI-driven hiring tools discriminated against older and disabled applicants. Judge Rita Lin’s May ruling allows thousands to join the suit. Plaintiffs argue the system acted as “an unlawful gatekeeper,” issuing rejections within minutes. 

Analysis. Algorithmic bias is no longer hypothetical. With Workday reporting 1.1 billion rejections, the scale alone invites scrutiny. Vendors can’t, or at least shouldn’t, hide behind their clients. If your model screens candidates, the risk is on you, not just the employer.

Alpha Modus Sues H&M Over AI Patent Infringement 

On December 3, Alpha Modus Corp. sued H&M in Texas, alleging infringement of five patents covering in-store AI systems for shopper analytics and personalized engagement. The company seeks a jury trial and enhanced damages for willful infringement.

Analysis. As AI moves from cloud to physical spaces, patent wars will follow. Retailers adopting “smart store” tech without licensing agreements are painting targets on their backs. 

So … 

Watching how courts apply old doctrines to new technology is a core mission of this site. The phenomenon itself is nothing new. There was once a miracle building insulation that was also fire retardant. That was all great until people inhaled its fibers into their lungs. Incredibly, asbestos litigation continues today. When it comes to AI, businesses should know that the risk is an enterprise risk, one that increases when companies fail to deploy adequate and appropriate safeguards. Compliance, transparency, and IP hygiene are keys to survival. And, for individuals, just because you’re paranoid doesn’t mean an algorithm isn’t out to get you. –Tom Hagy



Want to appear on the Emerging Litigation Podcast?

Send us your idea! 

It’s possible it could make this man smile. But let’s not get ahead of ourselves.

Tom Hagy, Editor-in-Chief
Tom is a legal content professional with more than four decades’ experience as a writer, editor, publisher, podcaster, and legal education provider, always focused on emerging areas of litigation. He founded HB in 2008 and CLC in 2012 to provide content for small firms and providers in the litigation space. If you have comments or wish to collaborate, write to him at Editor@LitigationConferences.com.