
This is not a test: FIRE opposes FCC’s plan to regulate AI in political ads


Right now, many state and federal officials are looking for ways to regulate speech created with artificial intelligence tools. The Federal Communications Commission — an agency charged with regulating TV and radio broadcasters — is no exception. The FCC appears to be gearing up for a regulatory power grab, looking for opportunities to plant its flag in a burgeoning new field.

In August, the FCC issued a proposed rule that, if finalized, would require TV and radio broadcasters to issue a disclaimer every time they air a political ad with AI-generated content. In other words, if a broadcaster becomes aware that an ad contains AI-generated content, it must let the audience know that the ad “contains information generated in whole or in part by artificial intelligence.”

This proposal has significant problems, as FIRE explained in a formal comment submitted to the FCC.

First, the FCC lacks jurisdiction over the content in political ads and simply doesn’t have the legal authority to regulate the “transparency of AI-generated content,” let alone to compel speech through mandated disclosures. Second, the proposal does not pass constitutional muster. And third, the proposal will not address voter confusion and may suppress beneficial uses of AI for election-related speech. 

The FCC says it wants to stem “confusion and distrust among the voting public,” mentioning the use of deepfakes to deceive potential voters into thinking a political candidate said or did something they didn’t say or do. On its surface, the FCC’s plan appears noble, particularly because deepfakes could convincingly convey false statements purporting to be fact to manipulate voters. 

In reality, while some uses of AI may very well constitute conduct that falls outside of First Amendment protection and implicate laws prohibiting defamation, false light, and the like, the FCC’s definition of AI-generated content is so broad that it would cover the use of AI for even innocuous or beneficial purposes. 


For example, AI can be used as an editing tool to upscale audio and video, improving their digital quality to achieve a professional look. Since that use, and other uses like image and audio editing, will also be subject to a disclaimer, alerting viewers and listeners of that fact does little to inform them whether AI was used in a deceptive manner. Instead, viewers or listeners may very well believe that every ad containing AI is deceptive, even if the ad is factually accurate.

These tools can also cut the costs of producing political ads, making advertising more accessible for candidates who otherwise lack the economic means to do so. AI, therefore, could further democratize political campaigns and our elections by creating new entry points. The same is true for candidates with disabilities who may rely on AI to communicate directly with potential voters.

Testifying before Congress earlier this year, FIRE President and CEO Greg Lukianoff warned that the government must proceed cautiously and respect First Amendment principles when considering regulating artificial intelligence. We extended a similar warning here to the FCC, calling on it to withdraw the proposed regulation altogether.
