| 2 minute read

Key Takeaways from the Advertising Roundtable at Loeb & Loeb’s AI Summit

At Loeb & Loeb’s AI Summit in New York City on February 11, 2026, I hosted the roundtable focused on how advertisers and their agencies are using AI and navigating the legal risks arising from its use. The discussion reflected on the efficiencies AI can bring to the production process and advertising generally and the growing legal and reputational risks that come with its use. Below are some of the key takeaways from our discussion: 

  • AI Disclosures: One of the most notable topics was when advertisers are required to, or should voluntarily, disclose the use of AI in their ads. Some states have recently passed laws that set clear requirements for when a disclosure is required in ads that use AI, such as New York Senate Bill S8420, signed on December 11, 2025, which requires a disclosure when AI-generated “synthetic performers” appear in video ads. AI disclosures may also be required under existing state and federal truth-in-advertising laws if the use of AI would be “material to a consumer’s understanding of the ad.” However, when the use of AI in an ad is “material to a consumer” is not always clear, which puts advertisers in the position of having to decide whether an AI disclosure is needed to comply with truth-in-advertising laws. Although taking a conservative approach of always including the AI disclosure (even when the use may not necessarily be “material to a consumer”) is safer from a legal risk perspective, many advertisers expressed concern that AI disclosures may turn off consumers who have a more negative view of the use of AI. These concerns sparked a larger conversation on consumer sentiment toward the use of AI in ads generally, and on how AI may shape the perception of the advertiser’s brand from a PR perspective.
  • Approval of AI Tools: The roundtable also discussed to what extent advertisers should be involved in approving the AI tools used by their agencies. While the traditional approach has been for the advertiser to approve every AI tool used by its agencies, this has become less practical as AI is now embedded in everyday productivity software, such as word processing and email, and agencies are racing to license as many new AI tools as they can for their staff in order to meet client needs. In addition, questions were raised about why approval of AI tools is needed at all when those tools are subject to the same data security requirements in client MSAs, including with respect to protecting client information, as any other IT systems. The participants discussed working toward a more tailored approach of requiring approval only for “high risk” use cases, such as when AI-generated outputs are incorporated into publicly released ads, when an AI tool processes sensitive client information, or when an AI tool is used in decision-making that may trigger legal requirements or regulatory oversight.
  • Allocating Liability for AI-Related Risk: Another key topic was how liability for risk arising out of the use of AI should be allocated between advertisers, their agencies and AI vendors. Participants discussed which of the three parties is best positioned to bear responsibility for claims arising out of AI-generated content, and the tensions involved in shifting risk contractually among them. The conversation highlighted a broader trade-off advertisers face: while AI offers significant cost savings and efficiencies, it may also introduce heightened legal risks, and advertisers must weigh those benefits against that added exposure.

Tags

advertising & media, advertising marketing & promotions, advertising technology, artificial intelligence, emerging technologies, technology