
Disclosures Required When Using AI-Generated Actors/Voices

As advertisers and agencies continue to determine how best to use artificial intelligence (AI) to market their products and services, they should consider how to disclose to consumers that the materials they are viewing were generated via AI. Below are some common disclosure questions that have arisen around the use of AI in ads:

We are using an AI-generated actor (or voice) in our marketing materials; are there laws that require disclosures for this type of use?

(We’ll call this our “core AI question” for purposes of this article.)

Utah currently mandates this disclosure. The law requires that, if asked, a company using generative AI to interact with a person must disclose that fact (Utah SB 149). No private right of action exists. This is worth keeping in mind when using AI chatbots or other AI technology to interact directly with your customers. While the Utah law requires disclosure only "if asked," this type of disclosure may be a "best practice" even if your consumer hasn't specifically asked whether they are interacting with AI, as the FTC has previously cautioned about the potential for deceiving consumers using "AI chatbots." Additionally, California's chatbot disclosure law, which took effect in 2019, before the recent AI hype, prohibits the use of a chatbot online with the intent to mislead a consumer about the bot's artificial identity in order to incentivize a purchase. The central idea here is simple: consumers should not be deceived about whether they're interacting with a human. Advertisers should consider applying this principle across their marketing to be more transparent with consumers.

There are currently no U.S. federal laws that specifically require AI-related disclosures. However, various state and federal false and deceptive practices statutes could potentially be applied to your use of AI.

The guidelines most applicable to our core AI question are the FTC's Guides Concerning the Use of Endorsements and Testimonials in Advertising (16 CFR Part 255) and the FTC Policy Statement on Deception. The Endorsement Guides are based on the basic truth-in-advertising principle that endorsements must be honest and not misleading, reflected in two primary requirements: (1) if there is a connection between an endorser and the marketer that a significant minority of consumers wouldn't expect and that would affect how they evaluate the endorsement, that connection should be disclosed clearly and conspicuously; and (2) endorsements must reflect the truthful experiences and opinions of the endorsers.

The Policy Statement on Deception applies to all advertising, not just ads that include endorsements. It asks a similar question: Is the practice likely to affect the consumer's conduct or decision? If not, then it is not "material," is not likely misleading, and probably does not require a disclosure.

Whether a disclosure is necessary, and what type, depends in large part on whether the marketing materials using AI actors/voices include an endorsement. The FTC has said that "an endorsement is an advertising message that consumers are likely to believe reflects the opinions or beliefs of someone other than the sponsoring advertiser," whether the endorsement appears in traditional or social media.

So, what do the Endorsement Guides tell us about our core AI question: We are using an AI-generated actor (or voice) in our marketing materials; are there laws that require disclosures for this type of use?

Given the novelty of these issues, here is our preliminary thinking, which may change over time as regulators and case law provide more guidance.

Advertisements that do not include an endorsement but that use an AI-generated voice/actor may not currently require any disclosure. For example, an ad that uses an AI-generated voice/actor to say something like "you can buy XYZ lawnmower at local hardware stores today" would not be an endorsement, as it is not an "opinion or belief." If there's no endorsement, the Endorsement Guides do not apply. Additionally, since the AI is not purporting to represent someone's thoughts or beliefs, how important to a consumer's purchasing decision is the identity of that speaker, human or not? Here it seems that the fact that your actor/voice was AI-generated isn't necessarily material to the consumer. Of course, the more specific the statements/actions of the actor/voiceover (e.g., using a video that looks like a real consumer's social media post, or identifying the actor/voiceover as a real person: "Hi, I'm Brian Heidelberger from Loeb & Loeb…"), the more likely the statements could be understood as endorsements and be material to consumers.

There are likely a variety of exceptions to this general rule. Your particular industry may also have regulations and/or best practices to take into account when determining whether disclosure is required.

Marketing materials using AI actors/voices that do include an endorsement carry more significant risk, even with a disclosure such as "fictionalization using an AI-generated actor." It could be inherently and incurably deceptive to use an endorsement purporting to be spoken by a real human about a personal experience that never happened, when in fact the person and experience do not exist. The FTC has been clear that a "disclosure may be used to clarify the intended meaning of an implied claim, however, the disclosure may not contradict the claim."

Marketing materials using AI actors/voiceovers to recreate real endorsements from a real person probably carry less risk than the situation above, provided a disclosure is included, such as "real customer testimonials voiced using generative AI." The material aspect is the endorsement itself, which is truthful in our example; the fact that the voice you are hearing isn't the real person should be disclosed for clarity, but it isn't necessarily tantamount to contradicting the claim. To be clear, however, regardless of disclosure, the AI actor/voiceover must reflect what other consumers can expect from the advertiser's product. The FTC has specifically stated that statements like "results not typical" or "individual results may vary" won't change that interpretation.

As a final note, we recommend paying careful attention when using AI to market to children. BBB National Programs' Children's Advertising Review Unit (CARU) recently issued a new compliance warning on the application of CARU's Advertising and Privacy Guidelines to the use of AI. According to the warning, marketers should be particularly careful to avoid deceiving children about what is real and what is not when children engage with realistic AI-powered experiences and content.

Tags

advertising marketing & promotions, advertising & media