AI’s Copyright Dilemma: Recent Lawsuits and Implications

Over 25 copyright infringement lawsuits against AI companies are currently pending in federal court.
As AI advances, copyright law is being tested on two critical fronts: the inputs that train AI models and the outputs they generate. These disputes are creating new opportunities for plaintiffs' attorneys to challenge the alleged unauthorized use of copyrighted material.
In February 2025, the United States District Court for the District of Delaware (D. Del.) issued its decision in Thomson Reuters v. Ross Intelligence, marking the first major ruling in the United States to directly address the legality of using copyrighted content without authorization for training artificial intelligence models. The court ruled in favor of Thomson Reuters, finding that Ross’s use of Westlaw’s legal headnotes harmed the market for the original work and failed to meet the threshold of transformative use.
On the issue of ownership, the US Court of Appeals for the District of Columbia Circuit upheld the lower court’s ruling in Thaler v. Perlmutter on March 18, 2025. This was a significant ruling affirming that human authorship is a prerequisite for copyright protection under US law.
As cases continue to set new precedents, plaintiffs' attorneys will play an important role in shaping the future of copyright protections in the AI era.
Who Owns AI-Generated Content?

Copyright law traditionally protects original works of authorship, including literature, music, and visual art, but only when created by humans. Since early 2023, the US Copyright Office has been investigating how AI impacts copyright law and policy, focusing on the copyrightability of AI-generated works and the use of copyrighted content in AI training.
After conducting public listening sessions and webinars, the Office issued a notice of inquiry in the Federal Register in August 2023, which prompted more than 10,000 public comments.
Based on these comments, the Office has issued two out of three parts of its Report on Copyright and Artificial Intelligence. Part Two, published on January 29, 2025, addresses whether outputs created by generative AI can be copyrighted.
In the report, the Office draws the following conclusions and recommendations on these questions:
- Questions of copyrightability and AI can be resolved pursuant to existing law, without the need for legislative change.
- The use of AI tools to assist rather than replace human creativity does not affect the availability of copyright protection for the output.
- Copyright protects the original expression in a work created by a human author, even if the work also includes AI-generated material.
- Copyright does not extend to purely AI-generated material, or material where there is insufficient human control over the expressive elements.
- Determining whether human contributions to AI-generated outputs are sufficient to constitute authorship requires case-by-case analysis, with copyright protection extending to perceptible human authorship in the outputs, as well as the creative selection, coordination, or arrangement of material or creative modifications of the outputs.
- Based on the functioning of current generally available technology, prompts alone do not provide sufficient control over the output.
- The case has not been made for additional copyright or sui generis protection for AI-generated content.
While the Office will continue to monitor technological and legal developments to determine whether any of these conclusions should be revised, as it currently stands, copyright protection in the US still requires human authorship. This requirement is grounded in the Copyright Clause of the Constitution and the Copyright Act (17 U.S.C. § 102(a)) as interpreted by the courts.
The Courts Weigh In
In 2023, the US District Court for the District of Columbia became the first court to specifically address whether AI-generated outputs could be copyrighted in Thaler v. Perlmutter. In this case, Dr. Stephen Thaler attempted to register a copyright for an artwork, A Recent Entrance to Paradise, created entirely by his AI system, the Creativity Machine. The US Copyright Office and the court denied the request, stating that the "plaintiff played no role in using the AI to generate the work" and did not meet the human authorship requirement.
Thaler's appeal was likewise rejected when the D.C. Circuit affirmed the ruling in March 2025, reinforcing that US copyright law remains centered on human creativity.
In an interview with Law.com in early March, intellectual property attorney Ryan Phelan, who contributed to an amicus brief in the case, argued that the law needs updating. He likens AI to a paintbrush: an advanced tool used by human creators, suggesting that copyright should recognize human involvement in AI-assisted works.
Phelan draws a parallel to early legal debates over photography, where courts ultimately determined that the photographer’s choices in framing, lighting, and composition constituted human authorship. Similarly, he believes AI-generated content should be eligible for copyright when human creativity plays a meaningful role in its creation.
The First AI-Assisted Image Is Granted Copyright
In February 2025, the US Copyright Office did grant copyright protection to the first AI-assisted image, A Single Piece of American Cheese, marking an important milestone.
Created by Kent Keirsey, CEO of Invoke, this artwork involved approximately 35 iterative edits using Invoke's AI inpainting features. Keirsey meticulously documented his creative process, demonstrating substantial human authorship in selecting, arranging, and coordinating AI-generated elements.
Initially, the US Copyright Office had reservations but reversed its stance after reviewing Keirsey's workflow video, acknowledging that the combined AI elements formed a unified image reflecting human creativity. This decision shows that while purely AI-generated content remains ineligible for copyright, works exhibiting meaningful human input can qualify for protection.
The Controversy Over AI Training Data

While questions of authorship and copyrightability focus on the outputs of AI systems, there’s another major dispute over how these models are trained. The datasets used to develop generative AI, including large language models (LLMs) and image generators, often contain copyrighted works, raising concerns about unlicensed use and infringement. Courts are now deciding whether AI developers can freely use publicly available content for training, or if they must obtain explicit permission from rights holders.
Since AI became a mainstream tool, artists, writers, and other content creators have brought several class actions under copyright infringement and unfair competition laws against major companies including GitHub, Stability AI, OpenAI, and Meta, arguing that these companies have improperly used their works without permission.
In January 2023, a group of visual artists also filed a putative class action lawsuit in federal court in the Ninth Circuit against AI companies Stability AI, Midjourney, and DeviantArt. The original complaint alleged the companies violated the Copyright Act by unlawfully inputting copyrighted material to train their AI models and outputting images that are derivative of the protected material.
In August 2024, US District Judge William Orrick allowed the plaintiffs' copyright claims to proceed, marking a significant development in the case.
In September 2023, the Authors Guild and more than a dozen well-known authors sued OpenAI and Microsoft, claiming their copyrighted works were used to train AI models without permission or compensation. The lawsuit argues that AI-generated content could qualify as “derivative works,” potentially harming the market for their books. That same month, authors Michael Chabon, Ayelet Waldman, and Matthew Klam also took legal action, filing a copyright infringement suit against both OpenAI and Meta.
Thomson Reuters v. Ross Intelligence
In the world of legal tech, we've also witnessed some significant case law develop. Thomson Reuters, the owner of Westlaw, recently won a case against Ross Intelligence, a now-defunct AI legal research startup, which it accused of copying Westlaw's proprietary legal headnotes to train its AI model. This case marks the first major US court decision addressing whether using copyrighted materials to train an AI model constitutes fair use.
Ross defended itself under the fair use doctrine, which allows limited use of copyrighted material under certain conditions. Courts assess fair use based on four factors:
- Purpose of use: Whether the use was transformative, adding new meaning or function.
- Nature of the copyrighted work: Whether the material is more factual or highly creative.
- Amount used: Whether the portion copied was a significant part of the original work, either quantitatively or qualitatively.
- Market effect: Whether the use negatively impacted the market value of the original work or its derivatives.
The judge found that two factors favored each side, but the fourth, market harm, ultimately tipped the scales in favor of Thomson Reuters. Informed by the decision in Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith, 598 U.S. 508 (2023), the court ruled that Ross Intelligence’s use of Westlaw’s headnotes was not transformative because it directly contributed to the development of a competing product, affecting Westlaw’s market position and potential licensing opportunities for AI training data.
Additionally, the court rejected Ross’s intermediate copying defense, which applies in cases where computer code is copied for functional purposes. The judge clarified that this defense does not apply to text-based works like Westlaw’s headnotes.
This ruling strengthens protections for publishers and legal databases, making it harder for AI developers to use copyrighted materials without permission. It also sets an important precedent for future lawsuits against AI companies, including ongoing litigation involving OpenAI, Stability AI, and others, reinforcing the need for proper licensing agreements.
Key Takeaways for Plaintiffs' Attorneys

The recent rulings in AI copyright cases signal that courts are prepared to protect original content, and they offer a roadmap for future claims. Plaintiffs' attorneys should keep the following points in mind when evaluating these cases:
- AI training data practices: Courts are increasingly defining the boundaries of permissible use of copyrighted materials in AI training. The Thomson Reuters v. Ross decision suggests that AI developers may be required to obtain licenses for training datasets, presenting opportunities to challenge unauthorized use.
- Derivative works claims: There is a growing recognition that AI-generated content can replicate key elements of copyrighted works. Cases against companies like OpenAI and Stability AI highlight the potential for claims based on unauthorized reproductions.
- Fair use analysis: Courts are demonstrating a readiness to reject broad fair use defenses, especially when AI-generated content competes directly with original works. Emphasizing market harm will be crucial in these arguments.
- Class action strategies: Given the extensive use of copyrighted content in AI training, class actions can effectively address widespread infringement, as evidenced by lawsuits filed by the Authors Guild.
- Licensing negotiations: The precedent set by Thomson Reuters v. Ross may encourage AI companies to seek licensing agreements to avoid litigation. Attorneys should explore both litigation and strategic licensing as viable enforcement paths.
Uncover Legal Violations and Build Stronger Cases

Copyright law is evolving rapidly, establishing new boundaries and creating a growing need for attorneys to stay ahead of these changes while identifying potential violations and building cases supported by strong evidence and strategic insight.
This is where Darrow comes in.
Whether you're preparing for discovery, challenging fair use, or building a class action, Darrow gives plaintiffs' attorneys the tools, insights, and partnership they need to litigate complex cases with confidence.
Our Legal Intelligence Platform uses anomaly detection algorithms to surface potential legal violations. Combined with human insight and supervision, the Platform scans vast volumes of public and proprietary data to help attorneys uncover high-value infringements and patterns that might otherwise go unnoticed, laying the groundwork for strong, evidence-backed claims.
In addition to case discovery, our in-house legal consultants provide strategic litigation support throughout the process. We partner with plaintiff firms to evaluate claims, refine arguments, and anticipate defenses, offering deep subject-matter expertise every step of the way.
Interested in learning how Darrow can help you connect with and litigate impactful cases? Contact us.