A massive payout tied to artificial-intelligence training practices is drawing an extraordinary response from the writing community. Nearly 120,000 authors and copyright holders have stepped forward seeking compensation from a proposed $1.5 billion settlement involving AI developer Anthropic, according to a filing in a California federal court.
The numbers tell their own story. Claims have been submitted for roughly 91% of the more than 480,000 works covered by the settlement, an unusually high participation rate that far exceeds the response typical of consumer class actions. Supporters have pointed to that level of engagement as evidence that the agreement resonates widely with affected creators.
A judge is expected to weigh final approval next month. If approved, the agreement would stand as the largest copyright settlement in U.S. history.
The dispute traces back to allegations that Anthropic used unauthorized copies of books to train its AI system, Claude. The lawsuit, filed in 2024, accused the company of relying on pirated materials without consent or compensation. Anthropic, backed by major technology investors, opted to settle rather than push forward to a damages trial that could have exposed it to staggering financial risk.
The settlement also followed a pivotal ruling in the case last year. The court determined that using copyrighted works for AI training could qualify as fair use, but faulted the company's storage of millions of pirated titles in a centralized repository not necessarily limited to training purposes. That distinction left Anthropic exposed to liability and helped set the stage for settlement negotiations.
Not everyone is satisfied. Some authors argue the total compensation is too modest, while others question the size of the requested attorneys’ fees or say certain foreign rights holders were left out. Law firms representing the class are seeking 12.5% of the fund—about $187.5 million—after scaling back an earlier, larger request.
The upcoming approval hearing will determine whether the agreement becomes final. If it does, it could reshape how disputes over AI training data are resolved—and signal to both tech companies and creators that the financial stakes of generative AI have firmly entered blockbuster territory.