Who Owns AI-Generated Art?
The central question puzzling legal experts, artists, and tech companies is simple yet deeply complex: who is the creator of a work produced by AI? The user who wrote the prompt? The company that developed the model? Or nobody at all?
In the United States, the Copyright Office has made it clear that works created solely by machines cannot receive copyright protection. This principle is rooted in the Constitution, which limits rights to “Authors,” meaning human creators. Thaler v. Perlmutter (2023) confirmed that an AI system cannot hold a copyright, echoing Naruto v. Slater, in which a court held that a monkey could not own the copyright in its “monkey selfie.”
However, in January 2025, the Copyright Office clarified that works where human creative expression remains evident — through selection, arrangement, or adaptation of AI outputs — can be protected. “A Single Piece of American Cheese” became the first work composed entirely of AI outputs to be registered as a composite work, thanks to the human curation (selection, arrangement, coordination) involved.
The Théâtre D'opéra Spatial Case
Jason Allen used Midjourney to create “Théâtre D'opéra Spatial,” which won a prize at an art competition, sparking outrage. When he sought copyright, the Copyright Office denied registration, finding that the work contained “more than a de minimis amount” of AI-generated content, which Allen declined to disclaim in his application.
Training AI on Copyrighted Works
Perhaps the most contentious issue is whether using copyrighted material to train AI models constitutes infringement or falls under fair use. Deep learning models are trained on massive datasets of images, text, and audio scraped from the internet, often without the consent of their creators.
Proponents of AI companies argue that this use is “transformative” and limited, similar to the Google Books case. In 2015, the Second Circuit Court of Appeals ruled that Google's book scanning was fair use because the use was highly transformative: Google displayed only short snippets, and its search function did not substitute for the books themselves.
On the other hand, Judge Vince Chhabria expressed serious doubts: "You have companies using copyright-protected material to create a product capable of producing an infinite number of competing products. You are dramatically changing, you might even say obliterating, the market for that person's work." Even so, in June 2025 he ruled in Meta's favor, finding that the plaintiffs had failed to demonstrate market harm from Meta's AI outputs.
Major Artist Lawsuits
In January 2023, three American artists — Sarah Andersen, Kelly McKernan, and Karla Ortiz — filed a class-action lawsuit against Stability AI, Midjourney, and DeviantArt, claiming these AI tools were trained on 5 billion images without consent. The suit alleged copyright infringement, unfair competition, and violation of publicity rights.
Judge William Orrick dismissed most claims but maintained the copyright claim against Stability AI. In August 2024, additional copyright and trademark infringement claims were accepted, significantly strengthening the case.
Getty Images vs Stability AI
Getty Images sued Stability AI in both London (January 2023) and the US (February 2023). The suit alleges Stable Diffusion was trained on millions of Getty images without a license, and even generated images bearing Getty's watermark, which Getty cites as evidence that its images were copied.
Authors vs OpenAI and Meta
The Authors Guild, representing 17 writers including George R.R. Martin, sued OpenAI in September 2023. The New York Times sued Microsoft and OpenAI in December 2023; in March 2025, the judge refused to dismiss the core copyright claims, allowing the case to proceed. Sarah Silverman, Paul Tremblay, and Mona Awad also filed lawsuits against OpenAI and Meta.
The Anthropic Settlement — $1.5 Billion
In August 2025, Anthropic proposed a $1.5 billion settlement, roughly $3,000 per work across approximately 500,000 affected books, the largest in copyright infringement history. Judge William Alsup initially declined to approve the deal, warning that inadequate terms risked being “forced down the throat of authors.”
Style Imitation vs Copyright
A particularly interesting issue involves the imitation of an artist's style. AI models can faithfully reproduce a creator's characteristics — brushstrokes, color palette, perspective — but generally, an artist's overall style is not subject to copyright protection.
In March 2025, a ChatGPT update triggered a viral “Ghiblification” trend of creating images in the style of Studio Ghibli and Hayao Miyazaki. The trend drew criticism given Miyazaki's outspoken distaste for AI, and OpenAI subsequently restricted the generation of images mimicking the styles of living artists.
In June 2025, Disney and NBCUniversal sued Midjourney, calling it a “bottomless pit of plagiarism,” for training on characters from Star Wars, The Simpsons, and more. Warner Bros. followed suit in September 2025.
Legislation by Country
United States
The American approach relies on the fair use framework. The Copyright Office issued guidelines in March 2023, while its May 2025 report examined two critical factors: (1) whether AI creation is derivative or transformative and (2) the commercial impact on the original market. Representative Adam Schiff proposed the Generative AI Copyright Disclosure Act in April 2024, which would require AI companies to submit training data to the Register of Copyrights.
European Union
The EU framework builds on the text and data mining (TDM) exception of the 2019 Copyright Directive, which the EU AI Act (2024) incorporates, with a key condition: copyright holders can opt out by reserving their rights in machine-readable form. From August 2025, providers of “general-purpose” AI models must also publish a detailed summary of training content. The TDM exception is considered a strong legal basis for training, a view supported by a 2024 decision of the Hamburg Regional Court (Kneschke vs LAION).
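In practice, many rights holders express an opt-out through crawler directives on their own sites. A minimal sketch of a robots.txt blocking several publicly documented AI crawlers follows; the user-agent tokens shown (GPTBot, Google-Extended, CCBot) are real and documented by their operators, but whether any given directive satisfies the Directive's “machine-readable reservation” requirement remains legally untested:

```
# robots.txt: opt out of AI training crawlers

# GPTBot is OpenAI's web crawler
User-agent: GPTBot
Disallow: /

# Google-Extended controls use of content for Google's AI models
User-agent: Google-Extended
Disallow: /

# CCBot is Common Crawl, a frequent source of training datasets
User-agent: CCBot
Disallow: /
```

Note that robots.txt is voluntary by design: it signals a reservation of rights but cannot technically prevent a non-compliant scraper from collecting the content.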
United Kingdom
British law (Copyright, Designs and Patents Act 1988) explicitly recognizes “computer-generated works” and assigns rights to the person who “made the necessary arrangements” for their creation. The government proposes a new TDM exception for any purpose, with safeguards for rights holders.
China
In November 2023, the Beijing Internet Court recognized copyright in an AI-generated image, finding that the plaintiff's prompt writing and repeated parameter adjustments amounted to sufficient intellectual input. The decision stands out globally, as most jurisdictions refuse such recognition.
AI, Music, and Digital Media
Legal issues extend far beyond visual art. In June 2024, the RIAA and major record labels sued Suno AI and Udio, two AI music generation services, claiming they were trained on copyrighted music without consent. German collecting society GEMA sued OpenAI in November 2024 over the reproduction of song lyrics and won: the Munich court found that ChatGPT reproduced lyrics nearly verbatim.
Multiple news organizations — The New York Times, Tribune Publishing, Canadian media outlets, ANI (India) — have sued AI companies for using their articles to train models. In September 2025, even Encyclopædia Britannica sued Perplexity AI, alleging the search engine copies content without compensation and generates false results attributed to Britannica.
Theft vs Fair Use: The Arguments
For Fair Use
- Transformative use: AI models don't copy — they learn patterns, just as a painter studies techniques
- Creative tool: AI is analogous to a paintbrush or camera — simply a tool
- Technological neutrality: If pressing a camera button grants copyright, why not a prompt?
- Licensing impossible: Datasets are so vast that licensing every element is practically impossible
Against Fair Use
- Commercial purpose: Usage serves the profit of multi-billion-dollar corporations
- Market destruction: AI outputs directly compete with original works
- Reproduction: Extraction studies show image models can emit near-copies of training images (one study found roughly 0.03% of sampled generations were memorized)
- Ethical dimension: Creators were neither asked nor compensated — web scraping without consent
- LLM memorization: Studies estimate large language models can reproduce on the order of 1–7% of some training content verbatim
The Future of AI Art and Rights
Legal uncertainty surrounding AI art will continue for years. Each court ruling sets a new precedent, but the questions remain open. Key issues to be resolved include:
- What is the minimum human involvement required for copyright in AI-assisted works?
- Should AI companies pay licensing fees or compensation?
- When does style imitation become infringement?
- How will the EU AI Act be applied in practice?
- Should an entirely new legal framework be created specifically for AI?
Tech giants — Google, Microsoft, Meta — can afford licensing fees. If courts rule that licensing is required, this will benefit the giants at the expense of smaller companies and independent developers. On the other hand, complete exemption means creators will never be compensated.
What This Means for You
If you use AI art tools, always check the terms of service. If you sell AI-generated works, know that legal coverage remains unclear. If you're an artist, consider whether your tools respect other creators' rights. The situation is evolving rapidly and each new court decision changes the rules.
