In “Worried About AI Monopoly? Embrace Copyright’s Limits,” Michael Carrier and Derek Slater argue that we should “embrace copyright’s limits” in order to preserve competition in artificial intelligence. They start from a premise that sounds intuitive: if copyright owners can insist on permission for the use of their books, music, journalism, and art in AI training, then only the richest companies will be able to afford it. In their view, this would lock in the dominance of companies like OpenAI, Meta, and Google, so the solution is to loosen copyright to keep the field competitive.
This argument has things exactly backward. It misdiagnoses the source of AI concentration, and worse, it proposes to fix the problem not by focusing on the large, entrenched companies themselves but by sacrificing the very people whose work makes those companies’ technologies possible.
At the core of this argument is a simple question: are authors entitled to negotiate with those who want to extract value from their creative work?
Nobody questions the premise that AI engineers are entitled to their salaries, that chip manufacturers can get paid for their GPUs, or that power companies can charge for the considerable electricity used by the data centers AI companies require. Nobody argues that their participation in the market will threaten competition.
It seems that only authors, creators, and copyright owners, whose work is a critical and valued input to the AI supply chain, are singled out as illegitimate participants in these markets. But the right to negotiate is not a novel privilege; it is a basic feature of free markets.
What this argument is really saying is that AI companies should be exempt from that basic principle because obtaining permission would be hard at the scale they want, on the timeline they prefer, in order to pursue the commercial goals they set for themselves. But inconvenience measured against a company’s own ambitions is not a legal standard, and it’s certainly not a justification for overriding the rights of the people whose work supplies much of the value. No other industry gets to say, “Our business model would be simpler if we didn’t have to negotiate for inputs,” and then have the law reshape itself to accommodate that preference.
“Move fast and break things”
There’s another, deeper problem with the “licensing is too hard” claim: it’s driven in part by circumstances the AI companies themselves created. With some exceptions, they did not attempt to license. They did not test collective solutions or support the development of licensing intermediaries. Instead, they proceeded almost immediately on the argument that everything was fair use, scraped the entire internet, absorbed vast amounts of copyrighted material, and even downloaded millions of obviously infringing files from pirate sites. (On this last point, Carrier and Slater are more bothered by the possibility that prohibiting the use of pirate sites could reinforce the market power of large developers than by the fact that large companies are building their businesses on sites that have been subject to criminal and civil actions around the world.)
Meta, for example, initially pursued licensing for training materials. But after learning that most of the works it wanted to license were available on the pirate site LibGen, and after escalating the question to CEO Mark Zuckerberg, Meta abandoned its licensing efforts. Anthropic’s cofounder and CEO Dario Amodei referred to licensing as a “legal/practice/business slog,” and the company also mass-downloaded from pirate sites rather than attempting to work with copyright owners.
Because they treated all content as free for the taking, the companies ensured that no normal licensing market could develop. Markets need recognized, enforceable property rights and willing buyers and sellers. The conduct of the major AI companies short-circuited that process before it could even begin.
Licensing and Competition
Yes, licensing involves costs, but so does every other input, including unlicensed data. Even in a copyright-free world, acquiring, scraping, cleaning, and processing data at the scale AI companies want is expensive. Entrenched incumbents already hold a clear advantage in that world.
Consider the following illustration. Court documents revealed that Anthropic hired Tom Turvey, the former head of partnerships for the Google Books book-scanning project, to undertake a similar scanning project that would provide the company with a training dataset of high-quality, professionally edited text that other companies wouldn’t have. Anthropic said this effort cost “tens of millions of dollars.” Even the existence of the dataset “was a closely guarded trade secret.” That kind of hiring and spending is out of reach for many other companies, especially new entrants.
USC Gould Professor Jonathan Barnett, author of the recent book The Big Steal, discusses how weaker property rights undermine competition in his forthcoming article, A ‘Minority Report’ on Antitrust Policy in the Generative AI Ecosystem. As he explains:
Without property rights, the informational assets used by generative AI model and applications developers cannot be priced, in which case either the producers (and curators) of those assets will struggle to find financing, or organizational structures will be distorted in a manner that favors business models that cross-subsidize content and data production through revenue flows sourced from integrated organizational structures that are difficult and costly to imitate. The paradoxical result: in an environment with weak IP rights, the content and data production segments of the AI industry are likely to experience higher entry costs and increased concentration since those functions can only be supported when embedded within integrated structures that necessitate increased capital and technical requirements.
Licensing doesn’t introduce new barriers; it creates incentives for innovation and investment. When copyrighted works become tradeable, marketable inputs, firms can compete to build better datasets, offer flexible licensing terms, and develop tools that make rights clearance easier. Training datasets could be offered at various tiers and price points, much like software products and services today.
And if licensing markets were embraced instead of obstructed, they would become more efficient over time. Carrier and Slater treat licensing as if it would always be expensive and chaotic, but history shows otherwise: collective licensing organizations such as ASCAP and BMI emerged precisely because individual negotiation over music performances was impractical, and they made licensing routine at scale. If there’s demand for high-quality, rights-cleared training materials, companies will find ways to meet it, and if bottlenecks appear, mechanisms like voluntary collective licensing, standardized contracts, or new intermediaries can emerge to address them.
Conclusion
The most striking thing about Carrier and Slater’s argument is that it treats the people who create the underlying material as a kind of externality: background noise that must be quieted for the “real” innovation to proceed. But the novels, journalism, scholarship, photographs, illustrations, and music used in training are not debris on the internet. They reflect the work of human beings and the investment of publishers, labels, and studios. And their continued creation is not inevitable.
The question is not whether AI companies can innovate. The question is whether that innovation requires erasing the rights of everyone whose work makes that innovation possible. There is no reason in law, economics, or basic fairness that it should.
In the end, the proposal to “embrace copyright’s limits” does not solve the real problem of AI concentration; it distracts from it. Weakening copyright won’t address the causes of that concentration, but it will harm the creative ecosystem that has supplied the raw material for these technologies and that will continue to supply it as long as its rights are respected.
The real path forward is not to strip creators of the rights “designed to assure contributors to the store of knowledge a fair return for their labors.” It is to acknowledge that AI does not exist separate from the creative world. That means treating creators as participants, not obstacles, and recognizing that sustainable innovation cannot be built on uncompensated appropriation.