Who Owns AI Art? A Deep Dive into Copyright, Intellectual Property, and Liability
In 2019, AI-generated art was still a niche curiosity, the province of researchers and a handful of experimental artists.
But the thing about advancement is that change begets change.
Fast forward to 2022, when the release of ChatGPT changed everything. Suddenly, generative AI was not just a cool tech demo; it became accessible to everyone—from hobbyists to professionals to everyday users. What seemed like a fascinating novelty quickly turned into a disruptive force in creative industries. Now, a much louder conversation is emerging, one that dives straight into legal and ethical quicksand: who owns AI-generated art? Does ownership lie with the user who prompted the algorithm? The developer who wrote the code? Or the company that owns the model? And what about the artists whose original works were scraped to train these systems?
Studio Ghibli Embroiled in an IP Struggle
Generative AI tools such as ChatGPT, Midjourney, and Stable Diffusion can now produce images that mimic Studio Ghibli's distinctive hand-drawn style from a single text prompt.
Online forums and social media are flooded with Ghibli-inspired AI artworks—some so convincing that fans mistake them for unreleased concept art. While impressive, this has sparked outrage among creatives who see it as theft, plain and simple. Advocates are now calling for tighter regulations to protect artists’ rights in the age of machine-made media.
Mr. Chua, an intellectual property lawyer, explained to The Straits Times that “no one can claim exclusive rights over a style,” citing a notable legal precedent involving musician Ed Sheeran, who was sued for allegedly copying another song. The case was ultimately dismissed because chord progressions—like art styles—cannot be copyrighted. However, this doesn’t mean artists are without recourse. If AI-generated content crosses the line into direct reproduction of copyrighted elements (like specific characters or original compositions), that’s when legal trouble begins.
The Case for Ethical Oversight
Many creatives hope that Studio Ghibli, as one of the most beloved animation studios in the world, will lead the charge in confronting the unchecked rise of AI art. The concern isn’t just about imitation—it’s about dilution of creative labor. Artists who spend years refining their craft are now competing with machines that can mimic their styles in seconds, without credit, compensation, or consent.
OpenAI already faces lawsuits from authors and news publishers who allege that their copyrighted works were used to train its language models without permission, along with a very public complaint from Scarlett Johansson, who objected that one of its voice assistants sounded strikingly like her. Visual artists are now demanding similar accountability. But so far, unless an AI blatantly recreates iconic characters like Totoro or Princess Mononoke, current copyright law remains toothless.
Who Owns AI-Generated Images?
Here lies the core dilemma. In most jurisdictions, copyright is only granted to works created by humans. This legal principle was reinforced in the infamous “monkey selfie” case, where a macaque used a photographer’s camera to take a selfie. The courts ruled that because the image was not created by a human, it wasn’t eligible for copyright.
So, if AI isn’t human—and if it autonomously generates a piece of art—then who can own the rights to that creation?
Currently, many AI platforms state in their terms of service that users own, or are at least free to use, the images they generate. But a contract between a platform and its users cannot create a copyright that the law itself does not recognize, and it says nothing about the artists whose work was used to train the model in the first place.
If an AI learns to draw “like” a particular artist by analyzing thousands of their works, is that theft or inspiration? And should that artist be entitled to royalties or recognition when their style is emulated?
The Grey Zone: Copyright, Culture, and the Future
Right now, the legal consensus leans toward treating purely AI-generated images as public domain—meaning nobody owns them. But this could change as litigation continues and as governments begin crafting new legislation. Some countries are already taking tentative steps. The European Union’s AI Act, for example, imposes transparency requirements on generative systems. Meanwhile, the U.S. Copyright Office has been refusing to register AI-generated content unless a human contributed sufficient creative authorship.
This leaves us in a grey zone—a sort of copyright limbo where users believe they own what they create, platforms claim limited liability, and original artists have little to no recourse. It’s a situation that benefits tech companies the most, while creatives are left to navigate an evolving digital Wild West.
What Happens Next?
The path forward is murky. Any meaningful solution will likely require an unprecedented level of international cooperation. After all, the internet doesn’t recognize borders, and neither does AI-generated content. For copyright laws to truly be effective in this space, they’ll need to be harmonized across jurisdictions rather than patched together one country at a time.
In the meantime, artists are fighting back in the ways they can: through lawsuits, public campaigns, and even by creating “poisoned” data that disrupts how AI systems train on their work. There’s also a growing call for new forms of digital licensing that would allow artists to opt in—or opt out—of having their work used to train generative systems.
Whether AI art becomes a democratizing force or a destructive one depends on how we, as a society, choose to regulate it. One thing is clear: the conversation is only just beginning, and the rules of ownership in the digital age are still being written.