
California Enacts 18 New AI Laws: What Advertisers Need to Know

This September, California Governor Gavin Newsom signed 18 bills related to artificial intelligence into law. The newly enacted laws deal with topics ranging from accountability and transparency for AI technology to addressing AI literacy in school curricula and deepfakes in political campaigns. We reviewed all 18 laws to identify the key takeaways for advertisers:

  • AB 2885: Definition of AI

AB 2885 establishes a broad definition of “artificial intelligence” that is not limited to just generative AI (i.e., the type of AI technology that responds to prompts or generates images that we typically think of first when we hear “artificial intelligence”). Some of the new laws apply only to generative AI, but many apply to all types of AI technology, including artificial intelligence that is used for automation, analyzing data, making decisions or predictions, etc.

  • SB 942: California AI Transparency Act

SB 942 requires providers of certain generative AI systems to offer a free AI-detection tool that allows users to upload content and determine whether that content was created or altered by that AI system. It also requires that providers give users an option to include a conspicuous disclosure in content generated using the AI system that identifies such content as AI-generated, and that providers include a latent disclosure in all generated content. This means that content generated using covered generative AI systems will be easily identifiable as created or modified using AI, and advertisers using AI-generated content may therefore want to consider whether there could be risks for their brand associated with such use or whether this should be worked into brand messaging. For example, if a brand that emphasizes “self-acceptance” uses AI to modify images of individuals featured in its ads, there may be additional public relations considerations around using an AI system where consumers can easily identify this use and potentially raise it as an inconsistency in the brand’s messaging.

  • AB 2013: Training Data Transparency

AB 2013 also addresses transparency for developers of generative AI systems, requiring them to disclose information about training data on their websites before public release, including a high-level summary of the datasets used in the development of the AI system. The summary must include the sources or owners of the datasets, a description of how the datasets further the intended purpose of the AI system, and whether the datasets include any data protected by copyright, trademark or patent or are entirely in the public domain, among other disclosures. While training data transparency is useful for advertisers looking to leverage generative AI systems, advertisers developing their own AI tools (such as a chatbot for a website) could face increased compliance costs due to the requirement to track and document training data.

  • AB 2905: Use of Artificial Voices with Auto-Dialers

AB 2905 amends existing law regarding disclosures that must be made for calls using an auto-dialer to specifically address call content generated or modified by AI. The law requires that an “unrecorded, natural voice announcement” be made by the caller that states the nature of the call and the name, address and telephone number of the business being represented, asks for consent to hear the message and informs the individual if the message was generated using AI. For advertisers that make automated calls, this law expands the onerous requirement to have a live human obtain consent before playing a prerecorded message so that it applies to AI-generated messages as well, and noncompliance could result in a fine for each violation.

  • AB 2355: Use of AI in Political Ads

AB 2355 requires disclosure when a political ad is generated or substantially altered by artificial intelligence if the generated media falsely appears to a reasonable person to be authentic or if the altered media would cause a reasonable person to have a fundamentally different understanding as compared to the unaltered version. For advertisers creating political ads, any time AI is used in any part of production, a decision must be made about whether that use needs to be disclosed. And note that the law contains detailed disclosure requirements for different types of media (e.g., spoken clearly at the beginning/end of a radio ad, displayed for at least 5 seconds for video ads, and so on).

  • AB 2602: Contracts for Use of Digital Replicas

AB 2602 gives individuals transparency into, and control over, the creation of digital replicas of themselves, and ensures certain protections during contract negotiations for use of a digital replica in lieu of work that the individual would have otherwise performed. The law specifies certain ways in which such a contract may be deemed unenforceable. At a high level, advertisers contracting with talent for use of a digital replica should ensure such contracts include a specific description of the use (versus broad license grants) and put those terms in a separate clause of the agreement. Contracts should also include a severability clause to increase the likelihood that the remainder of the contract will survive even if the provision dealing with the use of a digital replica is deemed unenforceable. Finally, advertisers should also consider whether the individual who will provide services under the contract is represented by legal counsel and/or a labor union with a collective bargaining agreement that expressly addresses uses of digital replicas. If the applicable contract terms do not meet the specificity requirements mentioned above, they could nevertheless be saved if the individual otherwise had adequate representation during negotiations and included a clear statement in the contract detailing commercial terms for use of a digital replica (or if this is addressed in an applicable collective bargaining agreement).

  • AB 1836: Use of Digital Replicas of Deceased Personalities

AB 1836 amends California’s right of publicity statute to prohibit the production, distribution or making available of a digital replica of certain deceased personalities’ voice or likeness without prior consent from the appropriate rights owner. The rights owner could be the deceased personality’s estate or any other party to which the rights were subsequently granted, and these rights survive for 70 years after the personality’s death. It is important to note that there are certain exceptions to the requirement to obtain consent for use, and these track closely with common “fair use” exceptions (such as use in connection with news or public affairs, use for purposes of comment, criticism or scholarship, and fleeting or incidental use). However, the law establishes statutory damages for violations in the amount of the greater of $10,000 or the actual damages suffered by the rights owner.

  • AB 2655: Defending Democracy from Deepfake Deception Act

AB 2655 requires online platforms (such as websites, search engines, social media platforms and advertising networks) that meet certain web traffic thresholds to develop and implement state-of-the-art processes to identify and remove materially deceptive content (including deepfakes) relating to a candidate in an election, as well as a means for users to report such content. These requirements do not apply to content that constitutes satire or parody and that includes an appropriate disclosure, but there are no other exceptions. As with any takedown process developed by online platforms, advertisers should be aware that ads featuring content about a political candidate could be caught up in this process and (rightly or wrongly) removed or challenged.

  • AB 2839: Deceptive Media in Election Ads

Similar to AB 2655, AB 2839 prohibits the knowing distribution of materially deceptive content relating to elections. Notable for advertisers involved with political ads, if a candidate uses manipulated media in their own campaign, a disclosure is required, even if the use constitutes satire or parody.

  • AB 1008: Personal Information and AI Systems

And finally, AB 1008 amends the California Consumer Privacy Act to expressly address artificial intelligence systems. Specifically, the law clarifies the definition of personal information by adding that personal information can exist in various formats, including in artificial intelligence systems that are capable of outputting personal information. It does not otherwise expand obligations under the CCPA, but advertisers collecting or processing consumer personal information will now need to ensure all compliance efforts take into account any AI use or output. This may include ensuring that any AI technology used to process or generate personal information complies with privacy laws (including honoring consumer rights requests for information processed or generated by those systems and providing opt-outs where use within a particular AI system constitutes a sharing or sale of data), incorporating AI systems that process or generate personal information into data mapping and other internal compliance efforts, and updating consumer-facing privacy policies and disclosures where appropriate to reflect the use of AI systems, to name just a few areas of potential impact.

Tags

artificial intelligence, advertising & media, advertising marketing & promotions, advertising technology, privacy security & data innovations