AI and the Perils of Intellectual Property Law
FastrackPR
June 30, 2025

In a test case for the artificial intelligence industry, a judge in San Francisco ruled that Anthropic, an AI company that builds Large Language Models (LLMs), did not break the law by training its Claude chatbot on millions of copyrighted books. LLMs are AI programs designed to understand, generate, and manipulate human language. They are trained primarily on large sets of text to model intelligent responses. The issue was to what extent using copyrighted works to train LLMs qualifies as “fair use” under US copyright law.
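For readers curious what “training on large sets of text” means in practice, here is a purely illustrative Python sketch. It is a toy word-frequency model, not Anthropic’s actual technology, but it shows the basic idea in miniature: the program learns from a body of text which words tend to follow which, then uses those statistics to produce new passages.

```python
# Toy illustration of the idea behind training a language model:
# learn from text which words tend to follow which, then generate.
# Real LLMs like Claude use neural networks and vastly larger corpora.
from collections import Counter, defaultdict
import random

corpus = "the law protects the rights of the authors and the artists"

# "Training": count which word follows each word in the text.
follows = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

# "Generation": starting from a word, repeatedly pick a likely next word.
def generate(start, length=6):
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        choices = list(options)
        weights = list(options.values())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```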

The judge ruled that Anthropic did not break the law by training Claude on millions of copyrighted books, because distilling thousands of written works to produce its passages of text was “quintessentially transformative.” In other words, the training process itself does not violate intellectual property (IP) rights. While many people see this as a huge win for AI companies, the ruling highlights the continued dangers that IP infringement poses for organizations building LLMs.

Much of the opinion deals with how the books were acquired. Anthropic obtained books both by downloading pirated digital copies online and by purchasing paper copies. Purchased books were scanned without seeking the permission of the rights holders. Because it was easier to bypass the copyright holders, Anthropic simply did so.

Unsurprisingly, the judge ruled that stealing is still wrong. To quote the ruling, “From the start, Anthropic had many places from which it could have purchased books, but it preferred to steal them to avoid [the] ‘legal/practice/business slog’”. 

IP law is designed to protect and enforce the rights of creatives (authors, musicians, and artists) over their writings, music, designs, and other works, and it is one of the most important areas of business law in the U.S. At the University of California, San Francisco (UCSF), copyrighted web pages were regularly scanned by AI bots without UCSF’s permission, so it seems fairly likely that pirating works for LLM training is not just an Anthropic problem.

Interestingly, the judge ruled that converting a paper book to a digital version and then destroying the hard copy was not infringement. While this is labor intensive, it does provide a way for LLM builders to acquire training data legally.

If this ruling stands, look for publishers and other rights holders to become major players in AI and the creation of LLMs. The often onerous legal reviews that slow down approval processes, and which Anthropic was trying to avoid, could become an essential part of building every LLM. Purchasing rights and clearances are going to become very important, meaning LLM owners will need to keep records of the content they acquire, digitize, and destroy.
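As a rough illustration of what such record-keeping might look like, the hypothetical Python sketch below tracks the purchase, digitization, and disposal of a single work. The field names are our own assumptions for illustration, not an established standard or anything prescribed by the ruling.

```python
# Hypothetical sketch of provenance records an LLM builder might keep
# for each work it acquires, digitizes, and destroys. Field names are
# illustrative assumptions, not any established standard.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class AcquisitionRecord:
    title: str
    rights_holder: str
    purchase_date: date        # when a lawful copy was bought
    digitized_on: date         # when the paper copy was scanned
    hard_copy_destroyed: bool  # per the ruling, the original is not retained
    license_notes: str = ""    # any clearance or permission obtained

record = AcquisitionRecord(
    title="Example Novel",
    rights_holder="Example Publishing House",
    purchase_date=date(2025, 1, 15),
    digitized_on=date(2025, 2, 1),
    hard_copy_destroyed=True,
    license_notes="Purchased retail copy; no separate training license.",
)

# Persist the record so clearances can be audited later.
print(json.dumps(asdict(record), default=str, indent=2))
```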

In a perfect world, creatives whose work becomes part of an LLM would receive just compensation. In our current societal environment, the risk is that big tech companies acquire the publishers, media companies, and other rights holders, and use their new IP ownership to thwart competition, leaving creatives out in the cold. 

IP law reforms are needed that ensure creatives receive additional compensation from AI companies. This would encourage creatives to continue to create and innovate for the benefit of society.
