Marsya Amnee Mohammad Masri ¹ Lawrence Tan²
¹ Executive – Strategy and Communications, Sunway iLabs ² Founder of LAWENCO | Advocates & Solicitors, Director of IP Gennesis
Remember when ChatGPT rolled out its GPT-4o image generator earlier this year and suddenly half the internet became “Ghiblified”? Social media feeds were flooded with AI-generated images that looked straight out of a Studio Ghibli film, from celebrities and politicians to movie scenes, memes, and even people’s pets.
It was fun, nostalgic, and wildly impressive, but what felt like a harmless creative fad quickly resurfaced a deeper question:
If AI can imitate creative works so convincingly that it feels almost… too good, where do we draw the line between inspiration and imitation — and who owns it?
That question is no longer theoretical. It’s now being debated in courtrooms, policy discussions, and creative industries around the world.

This AI-generated image is for illustrative purposes only. Not for reuse or redistribution.
As AI-generated content increasingly resembles familiar artistic styles, courts and policymakers have begun grappling with how existing copyright rules apply when creativity is assisted by machines.
Over the past year, lawsuits involving AI and intellectual property (IP) have been popping up with increasing frequency, reflecting growing concerns about whether frameworks designed to protect human creativity still hold up.
Cases involving Anthropic, OpenAI, Google, and others have produced mixed outcomes across jurisdictions. One case, however, offers a particularly insightful lens into how courts are currently thinking about these issues: a lawsuit brought by a group of authors against Meta, commonly known as Kadrey v. Meta Platforms, Inc.
A group of authors led by Richard Kadrey sued Meta, accusing it of using books downloaded through “shadow libraries” to train its Llama AI models. The court ultimately ruled in Meta’s favour, deciding that training AI models qualifies as fair use under U.S. copyright law (Taft, 2025).
The reasoning centered on the idea that training AI models is transformative: AI models do not copy or redistribute creative works as they are, but instead learn statistical patterns from vast amounts of data and generate new content based on probabilities. Students of copyright law have described this process as “highly creative in its own right” (Kadrey v. Meta Platforms, Inc., 2025).
A similar line of reasoning has appeared in other recent cases, including Bartz v. Anthropic, where the court likewise recognised that training AI models can be highly transformative. In that case, however, Anthropic still faced consequences for how some of its training data was obtained, highlighting that while courts may accept the purpose of AI training, the methods used to acquire data still matter (Bartz et al. v. Anthropic PBC, 2025).