
Where Art Meets Algorithm


By Hannah Lau and Isha Ursekar

Editorial Committee + UKSLSS Essay Competition 2025 Submission


Introduction

The recent landmark case of Getty Images Inc v Stability AI Ltd is the UK’s first significant ruling on the complex relationship between copyright law, trademark law and generative AI.


Handed down on 4 November 2025, the judgment arose from Getty Images (“Getty”) suing Stability AI (“Stability”) for infringing Getty’s IP rights by using Getty’s images to train Stability’s image-generating AI model, Stable Diffusion. The Court dismissed all but Getty’s trademark infringement claims. In the course of the proceedings, Mrs Justice Joanna Smith:


  1. clarified that the interpretation of an ‘article’ includes intangible goods;

  2. established that the threshold for trademark infringement is whether users will generate the watermark with general prompts; and

  3. held that the final AI model trained on copyrighted images does not amount to copyright infringement.


This essay covers three key aspects of the judgment, alongside what they mean for the future of IP law: first, the court’s decision on what ‘secondary infringement’ includes; second, the threshold for trademark infringement and its fairness; third, the difference between the training and released versions of Stable Diffusion.


What does a ‘secondary infringement’ entail?

There are two notable aspects of the secondary infringement claim: first, the finding that intangibles can constitute ‘articles’; second, the finding that no infringing copies were imported, possessed or dealt with.


It is first crucial to outline how an ‘article’ and ‘secondary infringement’ are defined in the Copyright, Designs and Patents Act 1988 (“CDPA”). The term ‘article’ is ordinarily used with reference to tangible objects such as books or artworks. [1] Intangibles do not fall neatly within this definition. [2] Until now, it had been unclear whether ‘articles’ can include intangibles that store information, as evidenced by the scarcity of relevant authority, with the exception of Sony v Ball. [3] The court’s interpretation of an ‘article’ as including intangibles opens the possibility, within the statutory framework, for other intangible forms that store information to be recognised as ‘articles’. [4]


Another point to note is the court’s departure from EU copyright principles. The court found ‘articles’ to include intangible goods, whereas retained EU law had previously confined their scope to tangible objects. [5] In doing so, the court misaligned UK law with retained EU law. For the sake of legal coherence, this case may have been an opportunity for the courts to differentiate the rights and infringement conditions applicable to intangibles from those applicable to tangibles.


Second, the court’s approach in finding no infringement should be reassessed. Under sections 22 and 23 of the CDPA, secondary infringement includes importing, possessing or dealing with an article which is an infringing copy.


It was held that even though the images were used to train the AI models, they were not stored or reproduced in the final, published versions of the model. Unlike Sony, where a RAM chip had stored infringing copies of information, the published model contained only a residue of the information, not the information itself. [6] However, this is not a meaningful comparison. AI models, unlike RAM chips, are developed over time. The assessment of the published model should encompass the training data and methods behind it, rather than treating the training and published models as two separate articles. This point is explored further below.


The threshold for trademark infringement

The basis of this claim is the Getty watermarks, which Stable Diffusion synthetically reproduced. On this front, the court found infringement only in respect of models for which there was evidence that the marks were generated. Given that trademark law centres on protecting consumers and their recognition of brands, this case continues to protect that interest: the criterion for determining infringement is the ‘likelihood that the average consumer would generate an infringing watermark’.


However, the burdens on Stability and Getty are unevenly balanced. The burden lies with Getty to prove that infringing watermarks were generated in the normal course of use of Stability’s models. This is a high threshold to meet. The question then turns on the difference between consumers who deliberately generate Getty’s watermark and those who do so unintentionally. Section 10(1) of the Trade Marks Act 1994 provides that a registered trademark is infringed if it is ‘[used] in the course of trade’. [7] In our opinion, this should also cover circumstances where Getty’s watermark is deliberately prompted. Generating the watermark, whether purposefully or accidentally, falls within ‘the course of trade’, and hence both constitute infringement. The fact that Stable Diffusion can generate the watermark at all should warrant liability.


Even so, some burden is placed on Stability, not consumers, to prevent watermarks from being produced. This requires strict filtering measures and improvements to Stable Diffusion’s model to reduce the likelihood of trademark-bearing outputs. [8] The imbalance lies in Getty’s difficult task of proving watermark generation, while Stability carries a minimal burden of proof and even benefits from developing and refining its filters.


Evidently, the courts are carving out a space in the law for technological development, carefully avoiding overly restrictive burdens on AI learning. AI learning essentially requires extensive use of existing data to develop intelligence and accuracy. The courts, recognising the essential role of technological advancement in society, may therefore be wary of cutting off AI models’ access to such information altogether. However, whether this approach is fair, especially to the creative industry, warrants greater academic debate. Given that Getty failed on most of its claims, digital content creators may become increasingly cautious about releasing their work online, fearing insufficient safeguards against AI. The balance struck between technological advancement and recognition of creative contributors is a potential area for development in the law.


The different versions of AI models

The High Court held that the training of Stable Diffusion had infringed Getty’s works, whereas the final released model had not. This implies that the status of an AI model is determined by its final composition rather than by its prior use of infringing data. The difference lies in the data each version contained: the training of the model involved “the reproduction (by means of storage) of the Copyright works”, [9] while the final model merely contained statistical patterns derived from the data gathered during training.


While developers may still incur liability for infringement during the training stage, the current approach is unfair to content providers such as Getty, who lack insight into how their works may be used and must shoulder the burden of monitoring and proving misuse. Even if the final model no longer stores or reproduces copyright material, liability should not be precluded, for the final model’s functionality is the product of infringement. A potential approach may involve the courts imposing a duty of transparency on developers, requiring them to document the sources they use for training. The burden of proof would be shifted away from rightsholders, which may deter developers from engaging in infringing behaviour. However, we concede that such a duty would not resolve the fundamental issue of treating a model’s final form as lawful. While imposing such stringent measures on developers could stifle innovation, rightsholders may instead rely on mechanisms such as direct licensing, whereby they grant developers a licence to use their content in exchange for monetary compensation.


Infringement by AI in the Singaporean Context

While Singapore has yet to deal with similar cases, its Copyright Act 2021 offers a potential point of comparison. The Act introduced the Computational Data Analysis (CDA) exception, which allows the use of copyrighted materials to train machine learning models, subject to the specific conditions stipulated in s 244 of the Act.


CDA is defined non-exhaustively as “using a computer program to identify, extract and analyse information or data from the work”. [10] The exception covers the extraction and analysis of data for input, but remains silent on the generation of output. Hence, if a dispute akin to Getty Images Inc v Stability AI Ltd were to arise in Singapore, a statutory gap could be exposed, as Singapore’s legislation does not currently address whether AI-generated output constitutes copyright infringement. Ultimately, it remains for the courts to develop this area of law. Given AI’s growing presence in the legal landscape, it will be interesting to see how the courts develop such issues in the future.


Conclusion

While protecting trained AI models from infringement liability encourages innovation in generative AI, it diminishes the ability of copyright and IP law to regulate the unauthorised use of data. The judgment signals a more flexible judicial approach toward emerging technologies. As generative AI continues to burgeon, there is a need for more explicit legislative guidance to ensure a fair balance between developers and other relevant stakeholders.



References

[1] Getty Images v Stability AI [2025] EWHC 2863 (Ch) [569]

[2] S Goossens and B Shandler, ‘Getty Images v. Stability AI: English High Court Rejects Secondary Copyright Claim’ (Latham & Watkins LLP, 13 November 2025) <https://www.lw.com/en/insights/getty-images-v-stability-ai-english-high-court-rejects-secondary-copyright-claim> accessed 9 December 2025

[3] Sony v Ball [2004] EWHC 1984 (Ch)

[4] Osborne Clarke, ‘Getty v Stability AI: Stability AI generates big win in English court’s landmark first judgment on AI and IP infringement’ (Osborne Clarke, 18 December 2025)

[5] Directive 2001/29/EC (Information Society Directive)

[6] Getty Images v Stability AI [2025] EWHC 2863 (Ch) [600]

[7] Trade Marks Act 1994, s 10(1)

[8] O Yaros and others, ‘Getty Images v Stability Ai: What the High Court’s Decision Means for Rights-Holders and AI Developers: Insights: Mayer Brown’ (Insights | Mayer Brown, 19 November 2025)

[9] Getty Images v Stability AI [2025] EWHC 2863 (Ch) [549]

[10] D Tan, ‘Copyright Fair Use in the Face of Technological Developments: Staying Ahead or Limping Behind? – Part 2’

