Courts are setting new boundaries for AI copyright, while creators unite against unauthorised use of their works, signalling a shift towards greater protection and accountability in AI development.
What happened?
- Courts are establishing boundaries for AI copyright: Multiple federal court rulings have allowed copyright lawsuits against AI companies to proceed, including The New York Times’s case against OpenAI and Microsoft. At the same time, courts have ruled that AI-generated content without meaningful human input cannot be copyrighted, preserving public domain status for purely AI-created works.
- Creative industries are mobilising against unauthorised AI training: Global creators – from authors to musicians to visual artists – are uniting against AI companies’ use of copyrighted works without permission or compensation. The New Zealand Society of Authors highlighted thousands of books being illegally copied for AI training, while Meta faces accusations (which it denies) of pirating 53 copyrighted works among 7.5 million books used to train Llama 3.
- Industry self-regulation efforts are emerging: The Authors Guild launched a “Human Authored” certification system to distinguish human-created content from AI-generated works. Meanwhile, blockchain solutions like the Oasys-AnimeChain partnership aim to protect intellectual property rights in the anime industry against unauthorised AI use.
- Governments are recalibrating regulatory approaches: The EU has updated Model Contractual Clauses for AI Procurement, while UK ministers are reassessing AI regulations following pushback from prominent artists like Sir Elton John and Sir Paul McCartney. These developments signal a shift toward greater protection for creators while attempting to balance innovation needs.
- OpenAI’s image generation capabilities have sparked fresh controversy: The company’s GPT-4o image generation feature, particularly its ability to mimic Studio Ghibli’s distinctive style, has reignited debates about copyright, artistic integrity and the boundaries of fair use in AI development.
What do we think?
The ethics debate in AI publishing is undergoing a significant shift, with the balance of power gradually tilting toward greater protection for creators and increased accountability for AI companies. Several key developments support this assessment:
The legal landscape is evolving in favour of content creators, with courts allowing major copyright lawsuits to proceed against AI companies while simultaneously establishing that AI cannot claim copyright protection. These judicial decisions are creating important precedents that will likely shape the future relationship between AI development and creative industries.
We’re witnessing the emergence of a more organised resistance from creative communities. Rather than isolated complaints, we now see coordinated action across different creative sectors – from authors to visual artists to musicians. This collective approach amplifies their influence and increases pressure on both AI companies and regulators to address their concerns.
The regulatory pendulum appears to be swinging back toward creator protection after an initial period that heavily favoured technological innovation. The UK government’s reconsideration of its text and data mining proposal following creator pushback exemplifies this shift, as does the EU’s increasingly structured approach to AI governance. The US, for now, appears to remain a more tech-friendly outlier.
Industry self-regulation initiatives like the Authors Guild’s “Human Authored” certification and blockchain-based IP protection systems indicate a maturing market that recognises the need for clearer boundaries and standards. These developments suggest the industry is moving beyond the “Wild West” phase of AI development toward more sustainable models.
However, significant challenges remain. The definition of “fair use” in the context of AI training remains contentious and inconsistent across jurisdictions. Economic models for fairly compensating creators whose work contributes to AI development are still underdeveloped. And the technical challenge of enforcing creator rights in a digital environment where copying is effortless continues to present obstacles.
We anticipate that the next phase of this debate will focus on developing practical frameworks for balancing innovation with creator rights – moving beyond legal battles toward collaborative solutions that recognise the legitimate interests of all stakeholders.
Scenarios and probabilities
Base case (probability 50%)
A gradual strengthening of creator protections will emerge alongside continued AI innovation. Courts will establish clearer boundaries for fair use in AI training, while industry-led certification systems gain wider adoption. AI companies will implement more transparent data sourcing and opt-out mechanisms, but full compensation frameworks remain incomplete. Regulatory approaches will vary by region, creating compliance challenges for global operators. Publishers will develop hybrid human-AI workflows with clear attribution standards.
Upside scenario (probability 20%)
Collaborative frameworks emerge that effectively balance creator rights and AI advancement. Major AI developers establish comprehensive licensing programs with fair compensation models. International standards for AI training data usage are harmonised across key markets. Technical solutions for content provenance and attribution become widely implemented. New business models create win-win opportunities for both creators and AI companies. Public trust in AI-assisted publishing increases due to transparent practices.
Downside scenario (probability 30%)
The regulatory landscape fragments further, creating significant legal uncertainty. High-profile lawsuits result in contradictory rulings across jurisdictions. AI companies face substantial financial liabilities for past copyright infringements. Creative industries experience economic disruption as compensation models remain unresolved. Public backlash against AI-generated content intensifies, leading to more restrictive regulations. Innovation slows as legal risks deter investment in generative AI technologies.
How to interpret this/What to do
• Monitor evolving legal precedents in copyright cases against AI companies, particularly the New York Times lawsuit against OpenAI and Microsoft. These cases will establish important boundaries for AI training practices and fair use interpretations.
• Assess your organisation’s AI training data sources for potential copyright vulnerabilities. Companies using AI should conduct audits to ensure proper licensing and permissions for training materials, with particular attention to creative content.
• Implement clear attribution and transparency policies for AI-assisted content creation. Organisations should develop and enforce standards that clearly distinguish between human and AI contributions in published materials.
• Consider adopting or supporting certification systems like the Authors Guild’s “Human Authored” initiative. These emerging standards will likely become increasingly important for maintaining trust and authenticity in publishing.
• Engage with industry associations and regulatory consultations on AI ethics and copyright. The collective voice of industry stakeholders is proving influential in shaping both government policy and industry practices.
• Develop contingency plans for different regulatory outcomes, particularly if operating across multiple jurisdictions. The fragmented regulatory landscape requires flexible approaches to compliance.
• Explore collaborative models between creators and AI developers that ensure fair compensation and recognition. Forward-thinking organisations are moving beyond adversarial positions toward mutually beneficial frameworks.
Market impact summary
• Publishing industry: Traditional publishers face both challenges and opportunities in the evolving AI ethics landscape. While unauthorised use of content threatens revenue models, emerging licensing frameworks and human-authored certification systems offer new value propositions. Publishers who develop clear AI policies and embrace transparent attribution standards will gain competitive advantage. Expect increased investment in content provenance technologies and rights management systems as the industry adapts.
• Technology sector: AI companies face heightened legal and regulatory scrutiny that will impact development strategies and cost structures. Companies will need to allocate more resources to proper data licensing, compliance frameworks, and potential legal liabilities. This shift may temporarily slow development cycles but will ultimately create more sustainable business models. Smaller AI startups with limited legal resources may struggle with compliance, potentially leading to industry consolidation.
• Creative economy: Individual creators and rights holders are gaining leverage in negotiations with technology companies, though economic benefits remain unevenly distributed. New collective licensing models and blockchain-based rights management systems offer promising avenues for fair compensation. The distinction between human and AI-created content is becoming economically significant, creating premium markets for certified human work. Creative professionals who develop expertise in directing and refining AI outputs will find expanding opportunities.
• Legal services: Demand for specialised legal expertise in AI ethics, copyright and intellectual property is surging. Law firms with capabilities in these areas are experiencing significant growth. The complex and evolving nature of AI regulation across jurisdictions creates sustained demand for compliance advisory services. Litigation related to AI training data and outputs will continue to increase, creating both costs and opportunities for stakeholders throughout the ecosystem.
Did you find this report useful? Tell us what you think by emailing [email protected]
- https://constitutioncenter.org/blog/federal-court-rules-artificial-intelligence-machines-cant-claim-copyright-authorship – This article supports the claim that courts have ruled against AI-generated content being copyrighted without meaningful human input. The specific case of Thaler v. Perlmutter highlights that AI cannot be listed as the author of a work for copyright purposes.
- https://www.mass.gov/guide-to-evidence/article-xi-miscellaneous – Although not directly related to AI copyright, this article discusses legal evidentiary standards, which are relevant to ongoing legal debates surrounding AI and copyright issues.
- https://www.jw.com/news/insights-federal-court-ai-copyright-decision/ – This article discusses a federal court’s decision regarding AI copyright, specifically focusing on the use of copyrighted material to train AI models, highlighting the legal complexities and challenges surrounding AI and copyright law.
- https://pmc.ncbi.nlm.nih.gov/articles/PMC10311201/ – This article touches on digital evidence and its impact in legal proceedings, which is tangentially relevant to AI copyright discussions, particularly regarding digital data and legal implications.