
What Gen AI court rulings mean for content owners and creators

Recent court decisions in two highly publicized Gen AI cases favored the platforms and may start to reduce the concerns over using Gen AI. But content owners and those working for them should still understand the legal ambiguity of Gen AI and, regardless of legal limitations, be wary of how Gen AI use might affect the authenticity and perception of their brands.

What happened

In Kadrey v. Meta Platforms, 13 authors sued Meta, alleging it downloaded their copyrighted books without permission to train its Gen AI platform. A federal court found the training was a fair use but limited the ruling to the specific facts of the case.

In Bartz et al. v. Anthropic PBC, another group of authors sued Anthropic for using their copyrighted books to train its Gen AI model. That court likewise found that training on lawfully acquired books is permissible under fair use, as the training does not compete with or substitute for the original works. But it denied summary judgment on claims involving the acquisition and retention of potentially unauthorized copies, which will proceed to trial.


Together, these cases may show an early trend toward a finding that training Gen AI models on copyrighted works can be fair use under some conditions. However, content owners and creators should be wary of the limitations of these rulings, particularly as both decisions indicate that acquiring and retaining unauthorized content may not be protected.

Other recent controversies

An outcry recently developed after an investigation revealed a major online video platform had used a significant portion of uploaded videos to train Gen AI models. Many creators complained they weren’t explicitly informed that their work could be used for training. In response, the company introduced new safeguards and opt-out options.

Similarly, a popular video editing app updated its terms, raising online commentary about whether the uploaded content could be used by the parent company or for advertisements without explicit consent. While the prior terms already allowed broad usage rights, the confusion highlights ongoing concerns by content owners and creators about content use in the evolving Gen AI landscape.

In the brand space, Coca-Cola last year released a holiday ad generated entirely by Gen AI. The commercial reimagined its 1995 “Holidays are Coming” campaign with classic glowing trucks. Despite the nostalgic imagery, the ad was criticized as soulless, with unrealistic animation and an eerie tone.

Where we were

When Gen AI first gained widespread attention in 2023, content owners generally prohibited its use. There was confusion about how Gen AI worked and concern over the risks of infringement through both training data and outputs. Another concern was whether content created with Gen AI could be legally owned. Many content owners also worried that platforms might retain rights to both input and output content.

Some creators were early adopters, mostly experimenting or signaling they were on trend. Some creators embraced Gen AI openly, but agencies and production companies were often limited by conservative clients enforcing no-Gen AI policies.

By late 2024 and into 2025, some of the initial fear began to ease as Gen AI’s behind-the-scenes use became widely recognized and adoption seemed inevitable. Many content owners began permitting Gen AI use under strict conditions: no input of personal data or confidential or third-party content, no training on user data, and legal review of prompts and outputs.

By mid-2025, agencies and production companies are pushing content owners to go even further and permit Gen AI use. Some now allow it provided the work is done on enterprise-grade platforms that do not train on user data (unless the trained model is dedicated solely to that content owner), offer strong data security, guarantee that all rights to input and output remain with the content owner, and provide some form of indemnity or insurance.

Careful content owners are also requiring substantial human involvement in Gen AI-assisted material to support ownership arguments, along with legal review of prompts and outputs to reduce copyright risks.

Where we are going

While these legal cases are unlikely to immediately change Gen AI adoption, they may start to ease some concerns and contribute to the continuing relaxation of restrictions. If concerns about litigation lessen, platforms could gain better insurance coverage and offer more indemnity protection, encouraging broader Gen AI adoption.

Still, these early rulings don’t resolve whether training Gen AI models on content without permission, including scraping internet content, is lawful. Gen AI platforms must be carefully considered, and creators must proceed with caution.

At a minimum, creators should use enterprise-grade platforms that train on user data only when the resulting model is dedicated solely to that content owner, maintain robust security, and ensure all rights remain with the content owner, with a preference for platforms offering insurance and indemnity. Significant human involvement in Gen AI creation should also be required to address copyright concerns, along with legal review of inputs and outputs.

Most importantly, what these cases don’t address is the potential impact Gen AI may have on how a creator’s original brand is perceived. Public-facing Gen AI use may not suit all creators, especially those with reputations built on trust or creativity. Some may ultimately decide Gen AI is useful for ideation, but not a replacement for the traditional creative process.


Tags

advertising & media, advertising marketing & promotions