The case before London’s High Court accuses the AI firm of illegally using millions of Getty’s images to train its Stable Diffusion model.
Getty Images has launched a high-profile copyright lawsuit against artificial intelligence firm Stability AI, with proceedings now under way at London’s High Court. The case is expected to set significant legal precedents for how copyright law applies to artificial intelligence.
Getty alleges that Stability AI unlawfully scraped millions of images from its website to train its text-to-image model, Stable Diffusion. The tool generates images from written prompts and, Getty claims, relies on creative content taken without permission or payment. It argues that this use amounts to copyright infringement and undermines its business.
Stability AI denies the allegations and has framed the dispute as a broader test of how copyright should function in the context of emerging technologies. A spokesperson said its models build on “collective human knowledge,” and suggested the training process aligns with fair use principles.
The case comes amid a wider global reckoning over the use of copyrighted content in AI training. As generative tools such as ChatGPT and image models like Midjourney have grown in prominence, artists, photographers and other rights holders have called for stronger legal protections. Prominent voices, including Elton John, have warned of the risks posed to creators if their work is reused without consent.
Lawyers expect the Getty case to be closely watched by governments and regulators. A win for Getty could open the door to a wave of legal actions from other content owners. Rebecca Newman, a partner at Addleshaw Goddard, described the case as “uncharted territory,” with potentially far-reaching consequences for copyright enforcement in the AI era.
Cerys Wyn Davies, a partner at Pinsent Masons, said the decision could also influence investment in the UK’s AI sector. “The outcome may affect how the UK is viewed as a market for developing and deploying AI technologies,” she said.
The lawsuit also highlights growing concerns about the limits of current copyright frameworks. In the US, the Copyright Office has published reports outlining the challenges of regulating AI-generated content and deepfakes, and there have been calls for new federal laws to protect people’s likenesses and rein in unauthorised use of digital replicas.
Source: Noah Wire Services
- https://www.aol.com/news/gettys-landmark-uk-lawsuit-copyright-051103907.html – Please view link – unable to access data
- https://www.reuters.com/sustainability/boards-policy-regulation/gettys-landmark-uk-lawsuit-copyright-ai-set-begin-2025-06-09/ – Getty Images has initiated a significant copyright lawsuit against Stability AI at London’s High Court, alleging that Stability AI unlawfully used millions of its images to train the Stable Diffusion system, which generates images from text inputs. Stability AI denies the allegations, asserting that the dispute centres on technological innovation and freedom of ideas. This case is part of a broader global trend of legal actions concerning the use of copyrighted material to train AI models, with potential implications for AI development and copyright law.
- https://www.reuters.com/legal/legalindustry/report-deepfakes-what-copyright-office-found-and-what-comes-next-ai-regulation-2024-12-18/ – The U.S. Copyright Office released a report addressing the issue of deepfakes, noting that current copyright and intellectual property laws are insufficient to tackle the harm posed by AI-generated digital replicas. The report urges new federal legislation to protect individuals’ likenesses and ensure accountability. The NO FAKES Act, introduced by a bipartisan group of Senators, seeks to create a national standard by granting individuals the right to control the use of their voice and likeness and holding platforms accountable for unauthorized replicas.
- https://www.ft.com/content/8e02f5e7-a57c-4e99-96de-56c470352eff – Brenda Sharton of Dechert and Andy Gass of Latham & Watkins are leading the charge in navigating new legal territories surrounding generative AI, with Sharton successfully defending Prisma Labs against class action allegations. Despite AI’s long-standing development history, the recent surge in cases, including those involving giants like OpenAI and Anthropic, brings complex issues of copyright and privacy to the forefront. Legal experts are tasked with educating judges on intricate AI functionalities while determining lawful use of copyrighted material.
- https://www.breitbart.com/tech/2023/01/19/getty-images-sues-stability-ai-for-scraping-millions-of-copyrighted-photos/ – Getty Images is suing Stability AI, the creators of the popular AI art tool known as Stable Diffusion, for scraping millions of images from its site. The stock photo company claims the creators of the AI tool have engaged in copyright violation and “chose to ignore viable licensing options and long-standing legal protections in pursuit of their stand-alone commercial interests.”
- https://www.thefashionlaw.com/getty-names-stability-ai-in-copyright-lawsuit-over-ai-generator/ – In the wake of Getty announcing that it had “commenced legal proceedings” in the High Court of Justice in London against Stability AI, Getty Images (US), Inc. has lodged what might be the most notable domestic lawsuit currently on the artificial intelligence (“AI”) front amid a larger rise in cases that center on companies’ unauthorized use of others’ works to train AI models. According to the complaint that it filed with the U.S. District Court for the District of Delaware on Feb. 3, Getty claims that as part of a “brazen infringement of [its] intellectual property on a staggering scale,” Stability AI has copied millions of photographs from its collection “without permission from or compensation to Getty Images, as part of its efforts to build a competing business.”
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 10
Notes: The narrative is current, with the lawsuit commencing today, 9 June 2025. No earlier versions of this specific content were found, indicating high freshness.
Quotes check
Score: 10
Notes: The direct quotes in the narrative are unique and do not appear in earlier material, suggesting originality.
Source reliability
Score: 10
Notes: The narrative originates from a reputable source, Reuters, enhancing its credibility.
Plausibility check
Score: 10
Notes: The claims made in the narrative are plausible and align with known facts about the lawsuit and the parties involved.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary: The narrative is fresh, original, and sourced from a reputable outlet, with all claims being plausible and supported by current information.