The UK's AI Copyright Update Is Newtonian. Style Protection Requires Einstein.
The initial battle is over, but there's still a significant change needed to protect creators in the long run.
For centuries, physics worked perfectly well. Newton's laws explained how objects moved, how forces interacted, how the universe behaved at a human scale. Then we started probing the edges: speeds approaching light, particles smaller than atoms. The old laws didn't break down gradually. They simply stopped applying. The universe had always operated on deeper principles. We just hadn't needed them until then.
The UK government's recent decision to scrap the proposed "text and data mining" copyright exemption is a genuine win. The exploit that would have allowed AI companies to legally scrape every creative work in the UK for commercial purposes without compensating a single creator has been closed. The artists, writers, and advocates who pushed to make that happen deserve real credit. It mattered.
But while this victory has stopped an outrageous interpretation from becoming law, it does not rewrite the underlying physics. And the underlying physics are what is actually broken.
The Loophole That Remains
The broad exemption is gone, but the "commercial research" framing is not going away. An AI company that characterizes its latest model as a research project until the moment it monetizes it can still argue its way through the gap, stubbornly claiming fair use. Rights holders may have more footing than they did, but they are still standing on uncertain ground.
More importantly, the 10,000 writers who protested at the London Book Fair — including a personal favorite, Kazuo Ishiguro — aren't primarily worried about their sentences being ingested by a machine. They are worried about their voice being replicated. That is a different problem entirely, and the current legal framework has no answer to it.
Why Style Is the Deeper Problem
Traditional copyright law was built on a clear distinction: you cannot copyright an idea, only its expression. The specific words on the page, the particular arrangement of notes in a composition — those are what is protected. The underlying concept is not. This was workable in a world where copying required effort and produced identical or near-identical output. Plagiarism was detectable. Theft was visible.
AI changes the physics of this entirely. A model trained on everything Kazuo Ishiguro ever wrote doesn't copy his sentences. It reverse-engineers the statistical DNA of his creativity — his syntax, his rhythm, the particular gravity of his silences, the way he constructs interiority. When someone prompts that model to write a new novel in Ishiguro's voice (his style), no specific sentence is being reproduced. But something more fundamental is being taken.
Copyright law was not built for this. Copyright was designed to protect output — the book, the song, the painting. It has no framework to protect the creative fingerprint that makes an artist's work distinctively theirs. Amending existing statutes won't solve this. We need new laws built from different first principles, the way Einstein's theory of relativity wasn't an amendment to Newton — it was a new framework that operated at a different scale entirely.
Allied with the 10,000
We stand with the writers who say the UK's move is not enough. The frustration coming from the creative community is legitimate and precise. They are being told, in effect: we won't reproduce your work verbatim, but we will study the exact frequency of your mind until we can build a system that replaces you. That is an existential threat dressed in the language of progress.
At Credtent, we believe that creative rights must extend beyond the literal text. A creator's voice is not just another data point to be harvested. If the law hasn't updated its architecture to recognize that, then we build the infrastructure ourselves. That is what we have done.
The Transparency Problem
One of the most pressing unresolved issues is visibility. Creators currently have no reliable way to know whether a generative model has already consumed their work. The UK government has gestured toward transparency without mandating it in any enforceable way. Without a verifiable record of what data was used and where it came from, rights remain theoretical. You cannot enforce what you cannot see.
This is precisely why we built the Credtent Registry. It is about establishing provenance — planting a verifiable flag that signals, "This work is mine, I am reserving my rights, and those rights extend not just to the words but to the creative identity behind them."
The Constitutional Stakes
This matters in a particular way for Americans. Copyright is not a legislative afterthought in the United States. It is enshrined in Article I of the Constitution, which grants Congress the power to secure for authors and inventors the exclusive right to their creations. The founders understood from the start that protecting what people create is not a cultural nicety but an economic foundation.
The American creative economy is one of the most powerful export industries the US has. Music, film, publishing, and software generate hundreds of billions of dollars annually precisely because creators have had legal standing to own what they make. Strip that right away, or allow it to erode through technological loopholes the law hasn't caught up to, and you are not only doing a disservice to artists. You are dismantling a core principle of American economic identity.
If we reach a point where a person's creative voice can be extracted, replicated, and monetized without their consent or compensation, we have not modernized copyright. We have abandoned it. We can't let this happen.
Building the New Framework
Moving forward requires two things operating in parallel.
The first is technical verification. The era of taking AI companies at their word about what they trained on and how is over. The army of technologists claiming that "the LLM just learned about your work, it didn't keep it," or comparing LLM training to the way a human reads books, looks at art, and listens to music before being inspired to create something new, needs to stop. It is not the same thing in any way, shape, or form.
As much as the technology industry wants us to anthropomorphize AI, artists know we must do the opposite: remember that we are dealing with a technology product designed with specific business goals, not another human being with a personal point of view and a unique mind of their own, shaped by lived experiences, by a perspective earned from walking the earth and interacting with other people, and, notably, by the things they don't know.
That is quite different from a machine with access to nearly the entirety of human knowledge, enormous processing power, and a fundamental lack of grounding in physical space, making predictions often without understanding how gravity feels, what it's like when a cat rubs against your leg, or the mouthfeel of a piece of coconut, things a five-year-old child would understand more clearly.
Trust needs to be restored. Creators need tools to certify their work as human-composed and to explicitly control whether, and under what terms, it is used for training or generative referencing. Our Creative Origin Badges (and the Human-Composed Creation badge in particular) are the first universal tool an artist or creator can use to implement this, not just a label but a declaration of human agency and a foundation for enforceable rights. They are free for commercial use.
The second is a new licensing standard that recognizes style as a licensable asset. If a studio wants to deploy an AI that writes in a specific author's voice, that author should be at the table, in control of the terms of use, and compensated. This is not a radical position. It is the founding American principle applied to new technology.
Physics aside, both Newton and Einstein would likely agree on one thing: If AI works to reshape the laws of creativity, then humanity must consciously choose the laws that govern AI.
The UK decision has stopped a bad outcome for now. What comes next has to build something better. The physics of creativity in the age of AI requires timely action, and we need to demand it.