Reflecting on Key AI Developments of 2025 and Considering Practical Implications for 2026
- Jan 12
- 8 min read
By: MacKinzie Neal, Matt Sumner and Emily Crone
2025 marked a pivotal year for the intersection of intellectual property and artificial intelligence (AI). High-profile cases, including those discussed in greater detail below, raised, and in some instances began to resolve, fundamental questions concerning the scope of fair use and risk allocation in AI transactions. These cases, alongside various legislative and regulatory developments in 2025, offer meaningful guidance for licensing strategy and compliance in this constantly evolving legal landscape. This article summarizes key copyright decisions from this past year, identifies additional legal and regulatory developments to monitor, and outlines practical considerations for technology agreements in the year ahead.
Notable Copyright Decisions in 2025
Multidistrict Litigation Against OpenAI and Microsoft
Among the most closely watched disputes at the intersection of AI and copyright law is the multidistrict litigation against OpenAI and Microsoft in the Southern District of New York (In re OpenAI, Inc. Copyright Infringement Litig. (S.D.N.Y.)). This dispute includes plaintiffs such as The New York Times Company and other notable media publishers. The Times alleges that OpenAI trained ChatGPT on millions of copyrighted articles, including paywalled content scraped from the web, in violation of copyright law. Specifically, the Times points to ChatGPT’s production of substantially similar or near-verbatim passages from articles (identified in the Times’s original complaint in 2023) as evidence of infringement (The New York Times Company, et al. v. Microsoft Corporation, et al., S.D.N.Y. Case No. 1:23-CV-11195). OpenAI and Microsoft have defended data scraping as fair use, arguing that using large troves of text to train an AI model is a transformative process that does not replace the original works, analogizing the training to how search engines index content.
In April 2025, Judge Stein denied most of OpenAI’s motion to dismiss, allowing the Times’s core claims to proceed. OpenAI and Microsoft filed answers denying the allegations, and the case moved to discovery alongside other consolidated disputes in the multidistrict litigation.
The consolidated cases remain in the discovery phase, and whether any parties will reach settlement in 2026 or proceed to trial remains to be seen. On January 5, 2026, Judge Stein affirmed an order compelling OpenAI to produce 20 million anonymized ChatGPT conversation logs. Judge Stein rejected OpenAI’s objections, which were based on privacy obligations under the EU’s General Data Protection Regulation and U.S. state statutes, as well as OpenAI’s proposal to produce only conversations that implicated plaintiffs’ specific works. This ruling illustrates the need to consider how data retention policies and privacy compliance frameworks may be impacted by litigation exposure as disputes in this area continue to develop.
Bartz et al. v. Anthropic
Bartz et al. v. Anthropic PBC (N.D. Cal. 2025) produced one of the year’s most significant copyright rulings along with its largest settlement. In Bartz, authors and publishers alleged that Anthropic trained its Claude model on copyrighted books without permission, including titles downloaded from pirate websites. In June 2025, Judge Alsup granted partial summary judgment, finding that Anthropic’s training on lawfully purchased books was “highly transformative” and likely fair use, but rejected any fair use defense for the downloaded pirated copies as “inherently, irredeemably infringing,” regardless of whether those works were ultimately used for model training.
This ruling underscores that outcomes in AI training cases will continue to be highly fact-specific. Further, the provenance of training data, not merely its use, may be dispositive in a fair use analysis, even where fair use arguments might otherwise succeed.
The resulting $1.5 billion settlement, reached in September 2025, demonstrates the substantial financial exposure that AI companies may face in similar litigation. Key terms of the settlement included coverage of past conduct only (with no license for future AI training), exclusion of claims based on infringing outputs, and required destruction of all pirated materials. The precedential effect of this settlement on AI-related litigation in 2026 remains to be determined.
Thomson Reuters v. Ross
Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc. (D. Del. Feb 11, 2025) arose from Ross’s effort to build an AI-enabled legal research business using Westlaw’s editorial materials, including headnotes and its “Key Number” classification system. After Thomson Reuters declined to license Westlaw content, Ross allegedly commissioned third-party contractors to generate training data derived from Westlaw headnotes.
On February 11, 2025, Judge Bibas (a Third Circuit judge sitting by designation in the District of Delaware) held that Westlaw’s headnotes and related editorial features reflect sufficient originality to merit copyright protection and granted summary judgment with respect to copying and substantial similarity regarding the materials Ross commissioned from the third-party contractors. The Court also rejected Ross’s fair use defense as a matter of law, emphasizing the commercial and competitive character of Ross’s use and the risk of market substitution, since the training activity was directed at enabling a competing legal research product. Judge Bibas distinguished this case from prior intermediate copying cases, noting that, among other things, those cases involved functional computer code necessary for innovation, whereas Ross’s copying of text-based headnotes was not necessary to create its product.
The case is currently under appeal, where the Third Circuit will address whether Westlaw headnotes and the Key Number System contain sufficient originality for copyright protection and whether using those materials to train Ross’s AI models qualifies as fair use.
Although the case does not involve generative AI, the Third Circuit’s resolution, expected in 2026, is anticipated to be a bellwether for how courts analyze AI training on proprietary datasets.
Additional Developments to Watch in 2026
Continued Copyright Litigation: The cases discussed above represent only a subset of the ongoing AI-copyright disputes. Numerous suits involving artists, musicians, and software developers remain pending in various jurisdictions. Further decisions in 2026, along with the influence of the Bartz settlement, may clarify or complicate the legal landscape.
U.S. Copyright Office Reports: In 2025, the U.S. Copyright Office released the remaining two installments of a three-part report on copyright and AI. These installments, “Copyright and Artificial Intelligence: Copyrightability” (January 2025) and “Copyright and Artificial Intelligence: Generative AI Training” (May 2025), analyzed, among other issues, whether unauthorized use of copyrighted materials for AI training may qualify as fair use. Although the installments offer a useful analytical framework and key insights into the Copyright Office’s current views on the topic, they are nonbinding and were notably issued during a period of leadership transition at the Office. Companies should monitor whether the Office issues additional guidance or proposes regulatory changes in the coming year.
U.S. AI Legislation and Regulatory Developments:
At the state level, several states enacted or are advancing AI-focused laws. California, for example, has adopted transparency and safety disclosure requirements for certain AI systems through the Transparency in Frontier Artificial Intelligence Act, and Colorado’s AI Act imposes obligations on developers and deployers of high-risk systems to mitigate algorithmic discrimination, among other notable state-level developments. States have also enacted sector-specific legislation, including legislation addressing hiring, consumer chatbot safety, and healthcare.
At the federal level, several bills addressing AI transparency and algorithmic accountability remain under consideration. In May 2025, the “Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act” (known as the “Take It Down Act”) was signed into law, prohibiting any person from “knowingly publishing” intimate visual depictions of minors or non-consenting adults and requiring covered platforms (which include certain websites that host user-generated content) to establish a related notice-and-takedown process. Covered platforms must comply by May 19, 2026.
On December 11, 2025, President Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence,” directing federal agencies to pursue a unified, “minimally burdensome” national standard and authorizing efforts to challenge state laws viewed as conflicting with that framework.
While the executive order does not itself preempt state laws, it directs the Department of Justice to form an “AI Litigation Task Force” to identify and potentially challenge state laws on constitutional or Commerce Clause grounds. The executive order also requires other agencies to consider preemptive federal standards. For example, the Federal Trade Commission is directed to issue guidance on how existing consumer protection laws (primarily, the Federal Trade Commission Act’s prohibition on unfair or deceptive acts) apply to AI models.
Overall, uncertainty remains regarding the implementation of these directives and the likelihood of success of any resulting legal challenges. Companies should continue to monitor both federal and state AI policy developments.
EU AI Act: The EU AI Act entered into force in August 2024 and continues its phased implementation. Certain requirements become fully applicable on August 2, 2026, with EU authorities gaining enforcement authority at that time. AI providers and deployers serving European markets, including non-EU companies, should assess applicability and ensure readiness for compliance.
Practical Considerations for 2026
The 2025 decisions and regulatory developments offer guidance on how technology agreements should consider AI-related intellectual property risks. The following considerations may help guide contract negotiations and risk allocation in the year ahead.
Indemnification for AI-Related IP Infringement:
Given the magnitude of potential liability in AI-related disputes, as demonstrated by the Bartz settlement, indemnification provisions in agreements involving the use of AI tools continue to warrant careful consideration. Agreements should clearly address whether IP indemnification covers claims arising from training data used in any underlying AI model, outputs, or both (as applicable). The scope of coverage can matter significantly: a narrow indemnity may leave gaps if infringement claims arise from how an underlying AI model was trained or from the outputs the AI tool produces.
Parties should also carefully evaluate any carve-outs or limitations. For example, a provider may disclaim any warranty that AI outputs will be free of third-party intellectual property or exclude liability for claims arising from the use of publicly available training data. These provisions may limit recourse if infringement claims materialize. In some cases, parties may also consider negotiating caps on indemnification liability or requiring the indemnifying party to maintain insurance coverage for AI-related claims.
Clearly Defining AI Use: License agreements should expressly address whether and how licensed materials may be used for AI development or training, as ambiguity on this point could ultimately drive disputes. General-purpose license grants (e.g., “use for internal business purposes”) leave AI training rights unclear and may invite disputes. Licensors should therefore ensure that the scope of use of any licensed intellectual property or data is clearly defined, and may want to consider whether the license should prohibit AI training outright, permit it subject to specific conditions, or require separate negotiation and compensation for such uses in the future. Licensees, in turn, should confirm that any intended AI-specific use cases fall clearly within the scope of the license grant.
Representations and Warranties: The Bartz decision demonstrates that fair use defenses are weakened where underlying materials were improperly procured. Customers should consider representations that training data has been lawfully acquired and does not infringe third-party intellectual property rights. AI providers should maintain detailed records of training data sources and implement remediation processes.
Ownership and Use of AI-Generated Outputs: Technology agreements should also address ownership of AI-generated outputs and whether license rights are granted to either party. Parties should consider addressing the risk that outputs may inadvertently contain protected third-party material (e.g., through covenants that AI outputs will not contain substantial portions of third-party copyrighted material or warranties that tools will not output more than a de minimis amount of verbatim content from any single source). Parties should also consider including processes for handling claims in the event an output is found to infringe third-party rights, including notice obligations, cooperation requirements, and remediation steps.
Monitoring Legal Developments and Flexibility: The legal framework governing AI and copyright remains in flux. For long-term agreements, parties should consider provisions requiring good-faith cooperation to amend the agreement if legal requirements materially change, or clauses addressing how changes in applicable law will affect obligations, including potential termination rights or price adjustments. Even without such provisions, companies should continue to monitor ongoing litigation and legislative developments to adjust their strategic and contractual approaches proactively rather than reactively.
Conclusion
The developments of 2025 demonstrate that AI and copyright law present increasingly complex challenges, highlighting the importance of proactive legal risk management.
By incorporating lessons from 2025 and strengthening agreements with clear AI usage clauses, warranties, indemnities, and ownership terms, companies can more effectively mitigate risk in the coming year. Companies that adopt responsible AI governance practices and engage proactively with rightsholders will be better positioned as the legal landscape continues to evolve.