
How the Recent Investigation of OpenAI (ChatGPT) Will Impact its Users

The New York Times recently reported that the Federal Trade Commission (FTC) has opened an investigation into OpenAI, the startup behind ChatGPT, over possible violations of consumer protection laws.

Currently, the investigation relates entirely to OpenAI, not to any of its users, and I would be surprised if it expanded to users in the near future. So, in that regard, I think it has relatively little direct significance for a day-to-day user of ChatGPT, although there are some good concepts we can take away from it.

The first issue raised by the investigation is whether OpenAI has engaged in deceptive practices with respect to data collection and privacy. In that regard, lawsuits have already been filed against OpenAI alleging that it used hundreds of millions of people's data to train its models without proper consent.

In addition, as you have probably seen, a variety of intellectual property owners, including comedian Sarah Silverman, have sued OpenAI over its use of their intellectual property to train its AI models. These suits represent another step toward protecting the rights of those whose work is used without permission to create AI products.

Given that AI is in its nascent stage, I'm not sure that a brand or agency merely using AI faces substantial risk in this area. But as AI products develop and are created and brought in-house by brands and agencies, these parties certainly have to seriously consider how and where they get the data they will use to train their AI models. This is especially true for companies creating their own AI products for in-house use, which need substantial amounts of data to make the tool useful.

The investigation also concerns whether OpenAI has engaged in deceptive practices causing reputational harm to those about whom it provides information. As you may have seen in the news, there have been a variety of allegations of ChatGPT responding to prompts in a manner that arguably misrepresents various persons' lives, businesses, and personal histories. In some cases, there are allegations that the answers ChatGPT gives are possibly false and defamatory. For example, a radio host has filed a lawsuit against OpenAI claiming that ChatGPT produced the text of a legal complaint accusing him of embezzling money, when he argues that no such complaint ever existed.

While I believe it's somewhat unlikely that the FTC would be interested in an individual company using a single individual's information garnered from ChatGPT, this investigation certainly underscores that not all information one receives from ChatGPT is truthful, and that information garnered from ChatGPT needs independent verification before one relies on it in any significant manner.

While not directly applicable, you should be aware of New York's recently enacted Automated Employment Decision Tool Law, which requires employers who use AI as part of their hiring process to tell candidates that they are doing so, and requires those companies to submit to independent audits to prove that their systems are not biased.

Of course, let's remember that this is only an initial investigation: no charges have been filed, and OpenAI has not admitted to any wrongdoing. For now, we should simply take the investigation under advisement and watch as it unfolds.

I do think that the FTC's most recent guidance regarding Endorsements and Testimonials provides excellent food for thought when considering how regulators like the FTC, state AGs, or local district attorneys view a company's use of AI. In the updated Endorsement Guides, which came out just a few weeks ago, the FTC clarified that companies should have policies in place covering their obligations, and specifically stated, "While not a safe harbor, [creating and following a policy] should reduce the incidence of deceptive claims and reduce an advertiser's odds of facing a Commission enforcement action." To me, this is another good reason for a company to establish a "Company Acceptable AI Use Policy." Check out our recorded webinar on this topic.

As a final note, the investigation does not address allegations of copyright infringement, which have been numerous with respect to AI tools. It may be possible for a user of AI tools to face secondary liability for infringement. But when those lawsuits come, I think secondary liability will be an alternate claim, and the primary allegation will be that the user directly infringed a copyright by taking content created by the AI tool and republishing it in an advertisement, entertainment vehicle, or other form. Let's be careful out there!

Though the reported letter is hardly the first time the agency has taken on one of AI's many forms, it does seem to announce that the field's present undisputed leader, OpenAI, must be ready to justify itself.

Tags

chatgpt, openai, ftc, new york times, investigation, artificial intelligence, ai, technology, litigation, consumer class action & regulatory defense, privacy security & data innovations