
FIRE Comment to FCC on Disclosure and Transparency of AI-Generated Content in Political Advertisements


The Federal Communications Commission appears to be gearing up for a regulatory land rush, looking for opportunities to plant its flag in the burgeoning new field of artificial intelligence. But FIRE argues the Commission lacks the power to enact the rules it has proposed.


Before the

FEDERAL COMMUNICATIONS COMMISSION

Washington, D.C. 20554

In the Matter of                                             )

Disclosure and Transparency of Artificial     )

Intelligence-Generated Content in Political )

Advertisements                                              )

 

COMMENTS OF THE FOUNDATION FOR INDIVIDUAL RIGHTS AND EXPRESSION

 

Robert Corn-Revere, Chief Counsel

Ronnie London, General Counsel

700 Pennsylvania Ave., SE, Suite 340

Washington, DC 20003

(215) 717-3473

bob.cornrevere@thefire.org

Aaron Terr, Director of Public Advocacy

John Coleman, Legislative Counsel

510 Walnut Street, Suite 900

Philadelphia, PA 19106

(215) 717-3473

aaron.terr@thefire.org

September 19, 2024

 

EXECUTIVE SUMMARY

FIRE submits that with its Notice of Proposed Rulemaking (NPRM) on Disclosure and Transparency of Artificial Intelligence-Generated Content in Political Advertisements, the Federal Communications Commission’s reach exceeds its grasp. The Commission appears to be gearing up for a regulatory land rush, looking for opportunities to plant its flag in a burgeoning new field. But the Federal Election Commission, the federal agency that has historically and statutorily regulated campaign communications, is already considering the issue. And the 2022 White House Blueprint for an AI Bill of Rights did not contemplate a role for the FCC in federal AI policy or the regulatory means the NPRM advances. There is good reason for the FCC’s absence: The Commission does not possess statutory authority to regulate the “transparency of AI-generated content” either generally or for political ads, let alone to compel speech through mandated disclosures. Simply put, the agency lacks the power to enact the rules it has proposed. 

But even if the Commission did possess the jurisdictional and statutory authority to act, its proposed rules raise serious constitutional concerns. The FCC is an agency that “works in the shadow of the First Amendment,” FCC v. Fox Television Stations, Inc., 556 U.S. 502, 556 (2009) (Breyer, J., dissenting), and any regulation of broadcast content requires constitutional scrutiny. While the NPRM nods to constitutional constraints and seeks comment on the First Amendment implications of its proposal, the Commission underestimates the level of First Amendment scrutiny that would attend any decision to enact regulations to compel broadcast content. The proposed rules are content-based regulations that compel speech to address a conjectural problem—and, even then, fail to address that supposed problem in a direct and material way. Accordingly, the NPRM’s disclosure requirements would likely fail any level of constitutional scrutiny. This should not be a surprise. Decades of precedent make clear that using regulatory mandates to police the political marketplace inevitably leads to First Amendment problems.

Finally, the proposed rules present a host of practical problems. By broadly mandating disclosure of AI in broadcast ads, the rules would impact even uses of AI that in no way mislead viewers. Using AI on a political broadcast ad to edit audio or video or to upscale production quality would require disclosure, for example, providing little if any information to viewers—and inadvertently implying the ad’s use of AI must be somehow deceptive to warrant such a warning. Requiring disclosures will discourage innovative and empowering uses of artificial intelligence, chilling campaigns and grassroots organizations from employing technological advances to their benefit. It will also invite abuse of the complaint process by political opponents, putting both the agency and broadcasters in a bind while doing little if anything to better equip Americans with the information necessary to make informed decisions at the ballot box.

For these significant jurisdictional, statutory, constitutional, and pragmatic reasons, FIRE strongly urges the FCC to withdraw the proposed regulations.

The Foundation for Individual Rights and Expression (“FIRE”) hereby comments on the Notice of Proposed Rulemaking in the captioned matter[1] to urge the Commission to forgo enacting the proposed regulations as ill-conceived on a variety of jurisdictional, statutory, constitutional, and pragmatic grounds.

I. INTRODUCTION

The Commission’s proposed rules requiring mandatory disclosure for political advertising using artificial intelligence illustrate the danger ill-considered regulations pose to Americans’ First Amendment rights. The FCC appears to be gearing up for a regulatory land rush, looking for opportunities to plant its flag in a burgeoning new field that threatens to usher in boom times for regulators.[2] But the agency is looking in the wrong place: Public concerns over artificial intelligence do not arise from broadcast advertising, and the FCC lacks jurisdiction to address the perceived issues. The proposed rulemaking is an illustration of the “streetlight effect”: Like someone searching for their lost keys under the streetlight, the Commission is looking for regulatory opportunities in the area where it believes they are most likely to be found. But this is not the right place for the search.[3] By stretching to assert jurisdiction, the FCC exceeds its statutory and constitutional authority, threatens to violate the First Amendment, and proposes an ill-considered solution that likely will have the opposite of its intended effect.

II. THE COMMISSION LACKS JURISDICTION AS WELL AS STATUTORY AND CONSTITUTIONAL AUTHORITY TO ADOPT THE PROPOSED RULES

A. The FCC Lacks Jurisdiction to Adopt the Proposed Rules

With these proposed rules, the Commission seeks to regulate beyond its purview. FIRE shares Commissioner Carr’s concern that “Congress gave the Federal Election Commission—not the FCC—the exclusive statutory authority to interpret, administer, and enforce the Federal Election Campaign Act. That includes the authority to establish disclosures for political communications on television and radio.” NPRM at 38 (Carr, Comm’r, dissenting). Moreover, Commissioner Carr notes “the FEC is actively considering the very issues implicated by the FCC’s proposal.” Id. As of the date of this comment, the FEC will have taken up AI use in political advertisements:[4] it is set to hear an AI proposal at its September 19, 2024 meeting.[5] In seeking to impose regulatory mandates on AI disclosures where the FEC has historically and statutorily regulated campaign communications, the FCC would create confusion and potential for regulatory overlap.

The Commission’s overreach is reflected in the fact that the 2022 White House Blueprint for an AI Bill of Rights (Blueprint)[6] did not envision development of federal policy concerning AI through the FCC, nor did it contemplate the regulatory approach the NPRM advances. The Blueprint identified AI as a developing technology with significant implications in both the private and public sectors and emphasized the importance of its development without sacrificing civil rights and democratic values.[7] And where it provided examples of the actions some federal agencies have taken with respect to AI that exemplify the Blueprint’s principles, it did not mention any approaches like those in the NPRM.[8] It instead advocated for careful steps “to help protect the public from harm”[9] without infringing on individual freedoms, stressing that AI regulation must not undermine civil rights or democratic principles, including free speech and due process.[10]

Where the Blueprint did discuss involvement of federal agencies in developing best practices for AI transparency, it highlighted ongoing research at federal agencies.[11] The National Institute of Standards and Technology, for example, has already taken significant steps in developing principles for the responsible use of AI and is actively conducting research into synthetic content and transparency.[12] At present, other agencies are likely better equipped to address the complexities of issues like AI transparency than the FCC, given that they have been cultivating specialized expertise.

Meanwhile, Congress is presently considering several bills that specifically address AI-related issues.[13] Enactment of legislation would ensure that AI regulation is grounded in statutory authority, which the FCC lacks in this area. Moreover, the legislative process is better suited to consider potential constitutional problems when regulating new technology.

B. The FCC’s Statutory Authority Over Content Is Limited and Does Not Empower It to Impose the Disclosure Rules the NPRM Proposes

The answer to the NPRM’s question of “whether the Commission has the authority to adopt the proposed . . . requirements for AI-generated content in political ads,” NPRM ¶ 27, can be only a resounding “no.” The Commission claims a legal basis for the proposed rules under a pastiche of Communications Act provisions.[14] But some are simply nonstarter general grants of power.[15] Others have no bearing at all on the issue at hand.[16] And even those involving the general subject matter — political candidate ads[17] — do not come close to authorizing what the NPRM proposes.

  1. General Grants of Statutory Authority Cannot Support Content Rules

The FCC, like other agencies, “literally has no power to act . . . unless and until Congress confers power upon it,” Louisiana PSC v. FCC, 476 U.S. 355, 374 (1986), and it is “axiomatic” that the agency “may issue regulations only pursuant to authority delegated . . . by Congress.” American Library Ass’n v. FCC, 406 F.3d 689, 691 (D.C. Cir. 2005). Its power to promulgate legislative rules is thus limited to the scope of authority Congress delegates. Id. (citing Bowen v. Georgetown Univ. Hosp., 488 U.S. 204, 208 (1988)). See also NAB v. FCC, 39 F.4th 817, 819 (D.C. Cir. 2022). The fact that the Commission believes regulatory action to be in the “public interest,” as the NPRM repeatedly recites or implies,[18] is alone insufficient—an “agency’s power to regulate in the public interest must always be grounded in a valid grant of authority from Congress.” FDA v. Brown & Williamson, 529 U.S. 120, 161 (2000); Lyng v. Payne, 476 U.S. 926, 937 (1986).

Further, statutes must be construed to avoid constitutional conflicts. See Jones v. United States, 526 U.S. 227, 239 (1999). This basic rule of statutory construction has special relevance to the FCC as its “‘public interest’ standard necessarily invites reference to First Amendment principles.” CBS, Inc. v. Democratic Nat’l Comm., 412 U.S. 94, 122 (1973). The Commission has thus always had to “walk a ‘tightrope’” to preserve the free speech values embedded in the Act, a balancing act the Supreme Court called “a task of great delicacy and difficulty.” Id. at 117. And there is “something about a government order compelling someone to utter . . . speech,” especially in the political arena, as the proposed rules would require, “that rings legal alarm bells.” Arkansas AFL-CIO v. FCC, 11 F.3d 1430, 1443 (8th Cir. 1993) (en banc) (Arnold, C.J., concurring).

Compelling disclosures ancillary to content regulates speech,[19] making the need for specific statutory authority here critical. This is true not only because “such regulations invariably raise First Amendment issues,” Motion Picture Ass’n of Am. v. FCC, 309 F.3d 796, 805 (D.C. Cir. 2002), but also given the Act’s provisions expressly limiting content regulation. See 47 U.S.C. §§ 326, 544(f).[20] As the D.C. Circuit emphasized, “Congress has been scrupulously clear when it intends to delegate authority to the FCC to address areas significantly implicating program content.” MPAA, 309 F.3d at 805.[21]

Despite the requirement that “[t]o regulate in the area of programming, the FCC must find its authority in provisions other than” general grants of power, id. at 804, much of the statutory authority the NPRM cites falls precisely into that category. But nothing in the Act comes even close to authorizing the proposed rules, and the extent to which the NPRM casts about so broadly for statutory authority is a red flag: any argument that “regulations are permissible because the statute does not expressly foreclose the construction advanced by the agency” is “entirely untenable.” ALA, 406 F.3d at 705. In any event, even if its statutory powers under its general grants of authority may be deemed “broad,” they are “not without limits,” especially when the Commission seeks to promulgate rules that “significantly implicate” content. ALA, 406 F.3d at 704.

The primary source of such general authority relied upon here, Section 303(r),[22] is one the D.C. Circuit has already held the Commission cannot invoke in the manner the NPRM attempts. As in MPAA, any “claim[] that the regulations are justified under § 303(r), which permits the FCC to regulate in the public interest . . . to carry out the provisions of the Act . . . simply cannot carry the weight of the [] argument.” 309 F.3d at 806. As the court explained:

The FCC cannot act in the “public interest” if the agency does not otherwise have the authority to promulgate the regulations at issue. An action in the public interest is not necessarily taken to “carry out the provisions of the Act,” nor is it necessarily authorized by the Act. The FCC must act pursuant to delegated authority before any “public interest” inquiry is made under § 303(r). 

Id. That analysis applies equally to the other generic grants of authority the NPRM cites in Sections 307(a), 309(a), 309(k)(1)(A), and 335. “A generic grant of rulemaking authority to fill gaps . . . does not allow the FCC to alter the specific choices Congress made.” NAB v. FCC, 39 F.4th at 820. The Commission must accordingly look elsewhere.

  2. No Provision in the Act Authorizes What the NPRM Proposes

The Commission also cannot find statutory authority for the proposed AI political ad disclosures in any of the specific grants of power the NPRM cites. Provisions that have nothing to do with political advertising certainly cannot suffice. The NPRM generally invokes Section 317 of the Act, NPRM ¶¶ 6, 27–28, 45 & App. B ¶ 45, but cites nothing specific within it as justifying political advertisement AI disclosures. Nor could it. Section 317 involves only on-air sponsorship identification when consideration is paid or promised in exchange for the broadcast of program material, id. ¶ 6, and has no relevance in the present context. And the D.C. Circuit has confirmed that Section 317 cannot be stretched beyond “the means [Congress] has deemed appropriate, and prescribed, for the pursuit of [statutory] purposes.” NAB v. FCC, 39 F.4th at 820 (citation omitted). Sections 325(c)-(d), governing broadcasts to foreign countries for rebroadcasts to the U.S. and applications for permits to do so, 47 U.S.C. § 325(c)-(d) (cited NPRM ¶¶ 24–26, 34, 45 & App. B ¶ 5), are even less relevant as a source of statutory authority.

That leaves Sections 315 and 312(a)(7), which do involve uses of broadcast stations by legally qualified candidates, including campaign ads,[23] but fall well short of statutorily authorizing political ad AI disclosures. As an initial matter, Section 312(a)(7) simply allows license revocation for violations of, effectively, Section 315, so any statutory authority in the former is derivative of that in the latter. And while Section 315 governs in some respects the content of candidate ads, far from allowing the Commission to force broadcasters to append disclosures to them, Section 315(a) acts as a limit, providing that those who air candidate ads “have no power of censorship over the material broadcast under . . . this section.” 47 U.S.C. § 315(a). That alone is fatal to the NPRM’s reliance on Section 315 to authorize the proposed rules.

Were there any question about this, courts have interpreted Section 315 as prohibiting broadcasters from altering candidate ads, e.g., Farmers Educ. Coop. Union of Am. v. WDAY, Inc., 360 U.S. 525, 527 (1959), which is completely at odds with reading into it implied statutory authority to force broadcasters to append AI disclosures. The Commission itself has admitted that “censorship” as used in Section 315 “encompasses more than the refusal to run a candidate’s advertisement or the deletion of material contained in it.” See Becker v. FCC, 95 F.3d 75, 83 (D.C. Cir. 1996). If “the no-censorship provision of Section 315 prohibits any interference, direct or indirect” with candidate ads, such that broadcasters “may not require a candidate to execute an agreement to indemnify the licensee against liability resulting from the candidate’s political ad,” id. at 83–84 (quoting D.J. Leary, 37 FCC.2d 576, 578 (1972)), it certainly does not authorize broadcasters to add disclosures that call into question the ad’s efficacy. See infra Part C.1. Even the NPRM concedes that “section 315(a) prohibits broadcast licensees from censoring candidate ads.” NPRM ¶ 4, ¶ 16 n.54.

Further, to the extent Section 315 regulates ad content at all, it does so with regard to what candidates must do if they want to receive the lowest unit charge, not anything broadcasters may or must do. See 47 U.S.C. § 315(b)(2)(A). And more to the point, it says nothing about ancillary disclosures in connection with candidate ads and is altogether silent regarding the kinds of issue ads the rules proposed here would govern. Those silences are not construable as free rein for the Commission to regulate, but rather must have meaning. Section 315 requires access and prescribes the conditions under which candidates may claim it,[24] but otherwise does not authorize the FCC to regulate ads. When coupled with the lack of authority under § 1, this “clearly supports the conclusion that the FCC is barred” from doing what the NPRM proposes. MPAA, 309 F.3d at 802.

There is no basis, either, for the Commission’s premise that it is seeking to mandate only “content neutral disclaimers,” or its tentative conclusion that such disclaimers are consistent with section 315(a) and do not constitute censorship. NPRM ¶ 16 n.54 (citing South Ark. Radio Co., 5 FCC Rcd 4643, 4644 (MB 1990)). At the outset, political ad AI disclosures are not “content-neutral.” See infra Part C.2. More to the point, the only authority on which the NPRM relies is a Notice of Apparent Liability that led to a forfeiture order that is silent on the cited point.[25] Not only is this of questionable precedential value,[26] but the logic of the NAL undermines rather than supports the disclosures proposed here.

Specifically, while the NAL allowed “content neutral disclaimers,” it prohibited broadcasters from applying them to some candidate ads but not others. If a broadcaster used disclaimers at all, the NAL required that they be affixed to all candidate ads, to avoid selective application that would denigrate particular candidates’ ads in violation of Section 315’s no-censorship rule. South Ark. Radio, 5 FCC Rcd at 4644. Conversely, the proposed AI disclaimer would serve no purpose but to potentially denigrate ads. Bottom line, the NPRM’s conclusion about content neutral disclaimers “ignores the limits that the statute places on broadcasters’ narrow duty,” NAB v. FCC, 39 F.4th at 820, with regard to candidate ads.[27]

Even if the Commission disregards all the foregoing, and somehow deems Section 315 to support the proposed rules for candidate ads, there is still no statutory authority to mandate AI disclosures for issue ads as the NPRM proposes. NPRM ¶¶ 2 n.4, 9, 16, 28. The NPRM effectively concedes as much, noting “section 315 imposes specific programming obligations only with respect to candidate ads, and not issue ads,” which “suggest[s] that [it] provides authority to adopt the proposed on-air disclosure requirements only for candidate ads.” Id. ¶ 28. As the Supreme Court found in CBS v. DNC, broadcaster treatment of political issue advertising falls “within the sphere of journalistic discretion which Congress has left with the licensee.” 412 U.S. at 119.

“Great caution is warranted” where “regulations rest on no apparent statutory foundation and, thus, appear to be ancillary to nothing.” ALA, 406 F.3d at 702. Or as Commissioner Simington put it, “authority to accomplish this regulation doesn’t exist.” NPRM at 44 (Simington, Comm’r, dissenting). Congress has not authorized the FCC to regulate the “transparency of AI-generated content” either generally or for political ads, NPRM ¶ 1, let alone to compel speech through mandated disclosures. Especially in light of this term’s decision in Loper Bright Enterprises v. Raimondo, 144 S. Ct. 2244 (2024), attempts to read into the Act statutory authority to require disclosures of AI use in political ads cannot withstand scrutiny and are unlikely to survive judicial review.

C. The FCC Lacks Constitutional Authority to Impose Disclosure Requirements

The FCC is an agency that “works in the shadow of the First Amendment,” FCC v. Fox Television Stations, Inc., 556 U.S. 502, 556 (2009) (Breyer, J., dissenting), and any regulation of broadcast content requires constitutional scrutiny.[28] The NPRM overstates the Commission’s authority to enact regulations to compel broadcast content and underestimates the level of First Amendment scrutiny that would attend any decision to impose such rules. Where the Commission lacks specific statutory authority—as it does in this instance—its claim that it may impose disclosure requirements is “very frail” because “such regulations invariably raise First Amendment issues.” MPAA, 309 F.3d at 803, 805. The NPRM provides no basis for its conclusion that such rules would be subject to only “heightened rational basis” scrutiny, or that the proposed rules could “satisfy any standard of First Amendment review that may apply.” NPRM ¶¶ 29, 31.

  1. The Commission Lacks General Authority Over Broadcast Content

Although the FCC historically was allowed somewhat more latitude in regulating broadcast content than for traditional print media, the Supreme Court has characterized this added authority as “minimal.” Turner Broad. Sys., Inc. v. FCC, 512 U.S. 622, 652 (1994). And even when the Commission’s authority to regulate broadcast content was at its zenith, reviewing courts recognized the “power to specify material which the public interest requires or forbids to be broadcast … carries the seeds of the general authority to censor denied by the Communications Act and the First Amendment alike.” Banzhaf v. FCC, 405 F.2d 1082, 1095 (D.C. Cir. 1968); see Anti-Defamation League of B’nai B’rith v. FCC, 403 F.2d 169, 172 (D.C. Cir. 1968) (“the First Amendment demands that [the FCC] proceed cautiously [in reviewing programming content] and Congress … limited the Commission’s powers in this area”).[29] See also CBS, Inc. v. FCC, 453 U.S. 367, 395 (1981) (“the broadcasting industry … is entitled under the First Amendment to exercise ‘the widest journalistic freedom’”) (quoting CBS v. DNC, 412 U.S. at 110).

The NPRM ultimately relies on Red Lion Broadcasting Co. v. FCC, 395 U.S. 367, 388–89 (1969), to support a general public interest mandate predicated on spectrum scarcity, NPRM ¶ 29 n.99, but fails to recognize that this rationale for programming regulation has become far more attenuated. To whatever extent a general reference to the “public interest” standard might have permitted certain types of content regulation in the past, courts have reduced the latitude given the FCC with the passage of time and changing conditions. See, e.g., Greater New Orleans Broad. Ass’n v. United States, 527 U.S. 173 (1999) (restrictions on casino advertising struck down without reference to Red Lion); Radio-Television News Directors’ Ass’n v. FCC, 229 F.3d 269 (D.C. Cir. 2000) (per curiam) (personal attack and political editorial rules struck down because of tension with First Amendment).[30] It is highly doubtful any new broadcast programming regulations could be justified based on the scarcity rationale.[31]

It has been over a half-century since the Supreme Court decided Red Lion, which was based on “‘the present state of commercially acceptable technology’ as of 1969.” News America Publ’g, Inc. v. FCC, 844 F.2d 800, 811 (D.C. Cir. 1988) (quoting Red Lion, 395 U.S. at 388). Even during that formative period, courts observed that “some venerable FCC policies cannot withstand constitutional scrutiny in the light of contemporary understanding of the First Amendment and the modern proliferation of broadcasting outlets.” Banzhaf, 405 F.2d at 1100; CBS v. DNC, 412 U.S. at 102 (“the broadcast industry is dynamic in terms of technological change; solutions adequate a decade ago are not necessarily so now, and those acceptable today may well be outmoded 10 years hence”). And in the years since, courts have emphasized that “the rationale of Red Lion is not immutable.” E.g., Meredith Corp. v. FCC, 809 F.2d 863, 867 (D.C. Cir. 1987).

In the mid-1980s, for example, the Commission “found that the ‘scarcity rationale,’ which historically justified content regulation of broadcasting … is no longer valid.” Id. (citing Report Concerning General Fairness Doctrine Obligations of Broadcast Licensees, 102 FCC.2d 143 (1985) (“1985 Fairness Doctrine Report”)). See Syracuse Peace Council v. FCC, 867 F.2d 654, 660–66 (D.C. Cir. 1989) (discussing 1985 Fairness Doctrine Report and upholding FCC’s decision to repeal the fairness doctrine).[32] Congress also has cast doubt on the continuing validity of the scarcity rationale as a basis for content regulation. For example, the Telecommunications Act of 1996’s legislative history suggested traditional justifications for FCC regulation of broadcasting require reconsideration. The Senate Report noted that “[c]hanges in technology and consumer preferences have made the 1934 [Communications] Act a historical anachronism.” Telecomms. Competition and Deregulation Act of 1995, S. Rep. No. 104-23, at 2–3 (1995). The House legislative findings were even more direct, pointing out that the audio and video marketplace has undergone significant changes over the past 50 years “and the scarcity rationale for government regulation no longer applies.” Communications Act of 1995, H. Rep. No. 104-204, at 54 (1995).

Consequently, it is far from certain that in 2024 any reviewing court would accept the technological assumptions upon which Red Lion is based. Courts will be far less inclined to defer to agency assessments regarding the scope of their authority. Loper Bright Enterprises, 144 S. Ct. at 2263 (reaffirming the judicial role to “fix the boundaries of delegated authority”) (cleaned up). For some time, courts have been reluctant to rubber-stamp FCC assertions of authority over programming. In MPAA, 309 F.3d 796, for example, the D.C. Circuit explained that it interpreted the Commission’s powers narrowly because any regulation of programming content “invariably raise[s] First Amendment issues.” Id. at 805. The same conclusion follows from the D.C. Circuit’s decision in RTNDA v. FCC, 229 F.3d 269, where the court ordered the Commission to repeal the personal attack and political editorial rules. Because of constitutional concerns, the court was unwilling to allow the FCC to continue to enforce the content restrictions (which already had been subject to protracted review) while the Commission assessed their validity.

  2. The Proposed Disclosure Requirements Would Likely Fail Any Level of First Amendment Scrutiny

The Commission’s assumption that “merely requir[ing] a factual statement indicating that [an] ad contains information generated in part using artificial intelligence” would “satisfy any standard of First Amendment review that may apply” is flawed. NPRM ¶¶ 29 & n.97, 31. The NPRM misstates the applicable level of scrutiny by suggesting that content neutral rules are subject to review under what it calls “heightened rational basis,” id. ¶ 29, claims “a compelling interest in providing greater transparency regarding use of AI-generated content,” and concludes the proposed requirements would “promote the goals of the First Amendment” by, among other things, “enhancing the public’s ability to assess the substance and reliability of political ads, thus fostering an informed electorate and improving the quality of political discourse.” Id. ¶¶ 30, 33. None of these conclusions are warranted.

First, a diminished level of scrutiny the Commission describes as “heightened rational basis” is not appropriate for any programming mandate. The NPRM drew this reference from Ruggiero v. FCC, 317 F.3d 239, 247 (D.C. Cir. 2003) (en banc), a case upholding a bar on issuing low-power radio station licenses to individuals who had previously operated unlicensed pirate radio stations in violation of federal law. But that case involved a character qualification based on the applicant’s prior conduct. Id. at 244. It did not address the standard for a “content neutral speech regulation,” since the rule at issue did not regulate speech at all. Accordingly, the minimal scrutiny suggested in the NPRM does not apply to the proposed disclosure rule, which necessarily regulates speech.

Moreover, the NPRM incorrectly assumes the proposed disclosure requirement would be content-neutral. The proposed rule applies specifically to political advertising, and “[g]overnment regulation of speech is content based” and subject to strict scrutiny if it “applies to particular speech because of the topic discussed or the idea or message expressed.” Reed v. Town of Gilbert, 576 U.S. 155, 163 (2015). In this regard, imposing a restriction on political messages but not messages on other topics is content-based. See, e.g., Barr v. Am. Ass’n of Pol. Consultants, Inc., 591 U.S. 610, 636 (2020) (plurality op.).

Second, any disclosure requirement inherently compels speech, which necessarily requires heightened First Amendment scrutiny.[33] This is because freedom of speech comprises “decision of both what to say and what not to say.” Riley v. Nat’l Fed’n of the Blind of N.C., Inc., 487 U.S. 781, 797 (1988); Janus v. Am. Fed’n of State, Cty., & Mun. Emps., Council 31, 138 S. Ct. 2448, 2463 (2018) (“freedom of speech ‘includes both the right to speak freely and the right to refrain from speaking at all’”); Pac. Gas & Elec. Co. v. Pub. Utilities Comm’n of Cal., 475 U.S. 1, 16 (1986) (“For corporations as for individuals, the choice to speak includes within it the choice of what not to say.”). “Mandating speech that a speaker would not otherwise make necessarily alters the speech’s content,” Riley, 487 U.S. at 795, and “content-based regulations . . . ‘are presumptively unconstitutional,’” Nat’l Inst. of Fam. & Life Advocs. v. Becerra (NIFLA), 138 S. Ct. 2361, 2371 (2018) (quoting Reed, 576 U.S. at 163); accord Volokh v. James, 656 F. Supp. 3d 431, 440–42 (S.D.N.Y. 2023).

The Commission cannot minimize the level of constitutional scrutiny by calling its proposed rule a “transparency” measure. X Corp. v. Bonta, 2024 WL 4033063, at *8 (9th Cir. 2024) (“Even a pure ‘transparency’ measure, if it compels non-commercial speech, is subject to strict scrutiny.”). Nor can it avoid heightened scrutiny because “the proposed rules would merely require a factual statement indicating that the ad contains information generated in part using artificial intelligence.” NPRM ¶ 29 n.97. The Supreme Court has emphasized that there is no constitutional difference between “compelled statements of opinion” and “compelled statements of fact” because “either form of compulsion burdens protected speech.” Riley, 487 U.S. at 797–98. The Commission’s assurance that a disclosure requirement would mandate little more than carrying “a line or two of factual information” is thus of little solace. Wash. Post v. McManus, 944 F.3d 506, 518 (4th Cir. 2019).

Third, it is not enough for the Commission to assert that it is seeking to promote the “goals of the First Amendment.” That was a fatal flaw with the fairness doctrine—the FCC’s aspiration might have been to promote First Amendment values, but the doctrine’s mandates conflicted with First Amendment commands. In re Complaint of Syracuse Peace Council, 2 FCC Rcd 5043, 5046 (1987). Ultimately, after decades of operation, the Commission finally concluded that its regulation of broadcast programming could not be reconciled with constitutional requirements. Id. at 5051–52. The rule proposed in this NPRM presents a similar paradox—the idea that freedom of speech can be obtained through content mandates. As discussed below, the rule is a poor vehicle for achieving the FCC’s asserted goals. And as the Supreme Court recently observed in another context, “[o]n the spectrum of dangers to free expression, there are few greater than allowing the government to change the speech of private actors in order to achieve its own conception of speech nirvana.” Moody v. NetChoice, LLC, 144 S. Ct. 2383, 2407 (2024).

Fourth, the Commission’s bare assertion that a disclosure mandate could satisfy “any standard of First Amendment review,” NPRM ¶ 31, lacks any support. The NPRM prudently stops short of directly stating the proposed rules could satisfy strict scrutiny, but merely posits the elements of the test and asserts somewhat obliquely that the proposal would survive “regardless which level of scrutiny applies.” Id. ¶ 29. But a law subject to strict scrutiny is presumptively invalid unless the government shows it is necessary to achieve a compelling interest and uses the least restrictive means. See United States v. Playboy Ent. Grp., 529 U.S. 803, 813 (2000). This is a “demanding standard” that vanishingly few restrictions of speech meet. Brown v. Ent. Merchs. Ass’n, 564 U.S. 786, 799 (2011). Although the NPRM “tentatively concludes” the proposed rule would survive, it does not directly analyze the particular elements of the test. However, the government “must present more than anecdote and supposition” to satisfy strict scrutiny. Playboy, 529 U.S. at 822. Conclusory statements and the Commission’s “predictive judgment” are not enough. Brown, 564 U.S. at 799–800.

The Commission’s assertion that the proposed rule would also satisfy intermediate scrutiny underplays the rigor of that test and is incorrect. It describes intermediate scrutiny as a “less rigorous standard applied to content-based restrictions on [the broadcast] medium,” but that conclusion flows from the agency’s reliance on spectrum scarcity as justification for diminished scrutiny. NPRM ¶ 29 & n.99. Moreover, as the Supreme Court explained in FCC v. League of Women Voters of Cal., 468 U.S. 364, 380 (1984), even under the more permissive regime of Red Lion, content regulation may be upheld only where the government can prove the rule is narrowly tailored to serve a substantial government interest.[34] It is worth noting this level of scrutiny was sufficient to void bans on editorializing in League of Women Voters and on casino advertising in Greater New Orleans Broad. Ass’n, 527 U.S. at 183 (“[T]he Government bears the burden of identifying a substantial interest and justifying the challenged restriction”).

The burden under intermediate scrutiny is substantial: A speech restriction survives constitutional review only if the government proves the law (1) serves a “real” and “not merely conjectural” government interest “unrelated to the suppression of free expression,” and (2) “will in fact” serve that interest in “a direct and material way” (3) that is narrowly tailored to suppress no more speech “than is essential to the furtherance of that interest.” Turner Broad. Sys., 512 U.S. at 662–64 (citation omitted). The NPRM’s analysis falls far short of the necessary showing.

The asserted interest is conjectural. The NPRM’s justification for imposing disclosure requirements is quite thin.[35] It devotes two paragraphs to use of AI in political advertising, in which the Commission suggests “use of AI technologies in political ads could provide a number of benefits,” but also “creates a potential for providing deceptive, misleading, or fraudulent information to voters.” NPRM ¶¶ 9–10. However, intermediate scrutiny requires the government to go beyond “mere speculation or conjecture” and “demonstrate that the harms it recites are real and that its restriction will in fact alleviate them to a material degree.” Edenfield v. Fane, 507 U.S. 761, 770–71 (1993). The Supreme Court has made clear it will not uphold speech restrictions backed only by “unsupported assertions,” Ibanez v. Florida Dept. of Bus. & Prof. Reg., 512 U.S. 136, 143 (1994), or even “anecdotal evidence and educated guesses.” Rubin v. Coors Brewing Co., 514 U.S. 476, 490 (1995). See also 44 Liquormart, Inc. v. Rhode Island, 517 U.S. 484, 505 (1996).

Courts have declined to accept the argument that new rules are required simply because a given practice has the potential to be misleading. As the Supreme Court has put it, “[w]ere we to accept [this] argument … we would have little basis for preventing the government from suppressing other forms of truthful and nondeceptive advertising simply to spare itself the trouble of distinguishing such advertising from false and deceptive advertising.” Zauderer v. Office of Disciplinary Counsel, 471 U.S. 626, 646 (1985). The Court is particularly suspicious of broad, prophylactic rules that anticipate the mere possibility of deception, noting “First Amendment protections … would mean little indeed if such arguments were allowed to prevail.” It is the burden of “would-be regulators,” therefore, to distinguish “the truthful from the false, the helpful from the misleading, and the harmless from the harmful.” Id. When free speech values are at stake, the government must supply rationales that are “far stronger than mere speculation about serious harms.”

The disclosure requirement will not address the asserted problem in a direct and material way. The Commission merely asserts that imposing disclosure rules for broadcast advertising would serve the goals of “enhancing the public’s ability to evaluate political ads, thus promoting an informed electorate and improving the quality of political discourse.” NPRM ¶ 31. However, as with other elements of intermediate scrutiny, the government has the burden to prove that any proposed regulation would serve its asserted interest in a “direct and material way.” This requires findings of fact and “evidentiary support” that the regulation “will significantly advance” the government’s interest. 44 Liquormart, 517 U.S. at 505–06. Accordingly, “a regulation may not be sustained if it provides only ineffective or remote support for the government’s purpose.” Greater New Orleans Broad., 527 U.S. at 188. 

This requirement is critical; otherwise, the government “could with ease restrict … speech in the service of other objectives that could not themselves justify a burden on … expression.” Id. It cannot just point to “common sense,” but must gather solid evidence to support its conclusions. Rubin, 514 U.S. at 480; Edenfield, 507 U.S. at 770. Under this standard, the government cannot simply assume a proposed regulation will be sufficient. In Lorillard, for example, the government failed to satisfy its burden to demonstrate that indoor point-of-sale advertising regulations would directly and materially advance its asserted goal of reducing underage use of tobacco products. Lorillard Tobacco Co. v. Reilly, 533 U.S. 525, 566 (2001). Likewise, the Commission cannot here base disclosure requirements upon mere unsupported speculation that they will solve a presumed problem.

Despite these stringent requirements, the Commission offers no evidence or explanation to support its conclusion that requiring disclosure that a political broadcast advertisement “contains information generated in whole or in part by artificial intelligence” would do anything to enhance “the public’s ability to assess the substance and reliability of political ads,” “foster[] an informed electorate,” or “improv[e] the quality of public discourse.” NPRM ¶ 33. Given the assumptions that motivated this proceeding, with warnings of possible “deep fakes” and other nefarious AI techniques, such a disclosure is likely to be interpreted by the public as an admission that the political ad is likely to be false or misleading.[36] The disclosure would thus create a false perception by the public, notwithstanding the Commission’s acknowledgement that AI can be used to help candidates “tailor their messages to specific communities,” “produce content in the candidate’s voice in multiple languages,” or “automate the generation of political ads,” enabling campaigns “to create new content quickly in the final days leading up to an election.” Id. ¶ 9.

The proposed disclosure requirement does not distinguish candidates using advanced technology to expand and improve political discourse from those who might be up to no good. Ads that might employ AI for either benign or malign uses would be required to make the exact same disclosure. It is difficult to imagine how such a rule could possibly help the public identify “deceptive, misleading, or fraudulent information” or how a rule forcing the public to guess whether a politician is using “good AI” or “bad AI” will contribute to “a more informed electorate.”

More significantly, the Commission’s limited jurisdiction undermines any possible showing that a disclosure requirement for broadcast advertisements could meaningfully advance the NPRM’s asserted objectives. Broadcasting represents only a small subset of the many ways political information and advertising reaches the public, and, as the Chair of the Federal Election Commission cautioned, the FCC has a far more limited role to play in this area.[37] Generally, courts will not sustain a restriction on speech that provides “only ineffective or remote support for the government’s purpose.” Greater New Orleans Broad., 527 U.S. at 188–89. Where, as here, the regulations would apply to political ads on broadcast stations but not to other media, they would be unlikely to survive judicial review. In Rubin, 514 U.S. at 489, for example, the Supreme Court struck down a federal restriction on disclosing alcohol content on beer labels after finding that “exemptions and inconsistencies” regarding wine and distilled spirits “bring into question the purpose of the labeling ban.” The Court concluded that the restriction could not directly and materially achieve its purpose.

The Fourth Circuit applied the same reasoning to invalidate a Maryland law that required disclosure and recordkeeping for online political advertisements. McManus, 944 F.3d 506. Determined to combat foreign interference with U.S. elections, Maryland compelled online platforms to disclose information about political ads that appeared on their websites and to retain records pertaining to ad purchases. Id. at 511–12. The court described the law as “a compendium of traditional First Amendment infirmities.” Id. at 513. For starters, “the Act’s publication and inspection requirements ultimately present compelled speech problems twice over.” Id. at 514. But more germane to the state’s obligation to demonstrate a direct and material effect, the court found the law did “surprisingly little to further its chief objective” because it applied only to political advertising, while “Russian influence was achieved primarily through unpaid posts on social media.” Id. at 522 (cleaned up). Likewise, the Commission’s proposed disclosure requirement would fail First Amendment scrutiny because—at best—it could reach only a small slice of the potential uses (or misuses) of AI in political messaging. As Commissioner Simington observed, “viral videos shared in the unregulated space of social media by unaccountable entities will be the setting for the moving action of this story.” NPRM at 43 (Simington, Comm’r, dissenting).

The proposed rule is not narrowly tailored. Finally, the NPRM does not explain how the proposed rule could satisfy the narrow tailoring requirement. The Commission’s scant analysis consists of a single line concluding that “disclosure is a less restrictive alternative to more comprehensive regulations of speech.” NPRM ¶ 33 (quoting Citizens United v. FEC, 558 U.S. 310, 369 (2010)). Perhaps so, but this statement is a tautology. Suggesting that a disclosure rule is less burdensome than some other onerous regulation says nothing about whether this disclosure requirement is narrowly tailored as the First Amendment demands. Moreover, the disclosures contemplated in the language the NPRM lifted from Citizens United involved disclosures by actual participants in the political process—candidates and their contributors—not the media who carry their messages. Citizens United, 558 U.S. at 369–70.

As the Fourth Circuit held in McManus, Maryland’s disclosure law was unconstitutional because it “burdens platforms rather than political actors.” McManus, 944 F.3d at 515. The court explained that the First Amendment calculus that “makes sense for direct participants in the political process . . . falters when extended to neutral third-party platforms that view political ads no differently than any other.” Id. at 516. To the extent the Constitution permits imposing any disclosures involving AI in political advertising, it obviously would be less restrictive (and more effective) to impose such requirements on the speakers, not the messengers. Id. at 523 (“[W]hat Maryland wishes to accomplish . . . can be done through better fitting means. Indeed, it seems plain that Maryland can apply the Act’s substantive provisions to ad purchasers directly.”).[38]

It also is the Commission’s obligation to prove that counter-speech exposing questionable AI-generated ads would be insufficient to serve the asserted interest. United States v. Alvarez, 567 U.S. 709, 728–29 (2012). As the Supreme Court has cautioned, “[t]he remedy for speech that is false is speech that is true. This is the ordinary course in a free society.” Id. Artificial intelligence is a hot topic (which may help explain the Commission’s interest in staking out a claim in this area), and it also is an issue that would attract significant press coverage and public condemnation if a politician were shown to be using AI to create false and deceptive ads.[39] When it comes to outing false or deceptive political speech, “[p]ossibly there is no greater arena wherein counterspeech is at its most effective.” 281 Care Committee v. Arneson, 766 F.3d 774, 793 (8th Cir. 2014). While the Commission may claim that the normal processes for exposing such potential deception will be inadequate because of time pressures or the difficulty of detecting AI-generated ads, it does not explain how an administrative process would be superior in addressing such problems. 

Commissioner Carr warned that “the FCC is wading into an area rife with politicization,” as “[i]t is not difficult to see how partisan interests might weaponize the FCC’s rules during an election season.” He predicted “[t]he FCC’s proposal will invite highly motivated politicos to file a flood of complaints alleging ‘AI-generated content,’ not for the sake of the truth, but as a cudgel to chill opponents’ speech.” NPRM at 40 (Carr, Comm’r, dissenting). In this regard, experience teaches that using regulatory mandates to police the political marketplace inevitably leads to First Amendment problems. See, e.g., Susan B. Anthony List v. Driehaus, 573 U.S. 149, 164 (2014) (constitutional claim may lie where state statute prohibiting “false” political statements “allows ‘any person’ with knowledge of the purported violation to file a complaint”); 281 Care Committee, 766 F.3d at 790–92 (“[A]s a practical matter, it is immensely problematic that anyone may lodge a complaint with the [election commission] alleging a violation” of a state law prohibiting false political statements); Susan B. Anthony List v. Driehaus, 814 F.3d 466 (6th Cir. 2016) (invalidating Ohio law prohibiting false political statements).

III. THE NPRM OPENS A PANDORA’S BOX OF UNINTENDED CONSEQUENCES

A. The Proposed Disclosures Are Likely to Mislead

Although the Commission states that its proposed rules seek to inform potential voters, the most likely effect of the mandated disclosures of AI use will be to mislead the public. The NPRM acknowledges that “AI could help political advertisers provide timely, accurate, and relevant information to potential voters,” or could be used to provide “misleading or deceptive information.” NPRM ¶ 14. But the proposed disclosure would be required regardless of whether AI was being used to mislead or deceive voters. Disclosure would be required when political ads employ AI to increase production efficiency, edit audio or video, upscale production quality, or generate ideas for content. Disclosure, therefore, would provide little, if any, information to potential voters about the use of AI in any given ad, raising significant doubt that the FCC’s approach can fulfill its stated goal.

Legitimate uses of AI in developing and generating political messages abound. They include completing tasks more efficiently: automating routine work, providing information, streamlining workflows, enhancing employee collaboration, and helping identify inefficient processes.[40] Video and audio editing has long been an industry norm.[41] Like Adobe Photoshop, which has been used for decades to digitally edit campaign material, AI can be used as an editing tool to upscale audio and video by improving their digital quality.[42] AI may be used to generate ideas for political campaigns.[43] According to one 2020 report, “political campaigns use data on more than 200 million voting-age Americans to inform their strategies and tactics.”[44] With AI, candidates can quickly sift through such data to provide targeted messaging. It may also be used to draft campaign material, including the text of a political ad based on its understanding of voter sentiment, or to suggest images and videos that reflect the same.[45]

These tools can cut production costs for political campaigns. By facilitating the creation of low-cost, high-quality ads for candidates who cannot afford professional audio and video production, AI may lower the economic threshold to seek public office.[46] According to the Brennan Center for Justice:

New AI software products are inexpensive, require almost no training to use, and can generate seemingly limitless content. These tools can support personalized advertising at scale, reducing the need for large digital teams and leveling the playing field for campaigns that lack substantial resources.[47]

AI, therefore, could be used to further democratize political campaigns and our elections by creating new entry points for those who otherwise lack the economic means to run an effective campaign. AI may also give candidates with speech-related impairments or disabilities the ability to communicate directly with potential voters in campaign ads.[48] In late July, for example, U.S. Representative Jennifer Wexton addressed the House floor using a voice generated entirely by AI because a progressive disease has weakened her natural speaking voice.[49] It is foreseeable that AI will soon assist others, including through English translation and interpretation, potentially opening opportunities to engage in political campaigns for individuals who otherwise could not.

AI has also demonstrated a capacity to educate and inform. In 2018, BuzzFeed famously published a video of an AI-generated former President Barack Obama to raise awareness about deepfakes.[50] Others have used AI to visualize or critique public policies. In one video released last year, AI-generated images were used to criticize immigration policies of the federal government.[51] In that instance, AI helped make a political point, even if not presented as a literal depiction of actual events. The same point, if made by a human artist’s rendering, would likewise use visual imagery to illustrate a current political issue. It is not clear why one form of production would require a government-mandated disclosure while the other would not.

These examples should not be taken to suggest AI is always a positive force for shaping political discourse. Of course it isn’t. Deepfakes and other techniques can be used to create false or deceptive messages. But a scarlet letter disclosure requirement that applies to certain political messages, without regard for their truth or falsity, carried on one medium of communication will do little to address that possibility. Requiring broadcasters to disclose when a political advertisement “contains information generated in whole or in part by artificial intelligence,” NPRM ¶ 17, will do nothing to alert viewers and listeners which ads might contain deceptive material. To the contrary, it will lead them to assume that every ad using AI is a false message. This would do more to confuse potential voters than to inform them.[52] And as legitimate uses of AI are likely to far outstrip the nefarious ones, the disclosures themselves would be deceptive.

B. Political Opponents Will Likely Manipulate the Complaint Process

The potential misuse of AI disclosure requirements for political advantage should cause the Commission to exercise great caution before it adopts rules in this area. As Commissioner Carr warned, “The FCC should not be offering itself up as a political football just as the big game is kicking off.” NPRM at 40 (Carr, Comm’r, dissenting). The NPRM’s proposed rules would create an avenue for political opponents to gain an unfair strategic advantage during an election cycle by reporting violations to the FCC or broadcasters for the purpose of subjecting rival candidates to inquiries and investigations that take time and resources away from their campaigns. The potential for abuse may do more harm than good.

In 2016, on remand from the Supreme Court, the Sixth Circuit invalidated an Ohio law prohibiting certain “false statements” during an election because the process lacked procedural protections from frivolous complaints. Driehaus, 814 F.3d at 474. When the Supreme Court considered the question of standing in the same case, the burdens the election commission was able to impose were “of particular concern” because they permitted a private party to use the complaint process for campaign advantage merely by setting the agency’s proceedings into motion. Driehaus, 573 U.S. at 165. The Court noted that complainants could time their submissions so the ultimate result would come after the election, while “the target of a false statement complaint may be forced to divert significant time and resources to hire legal counsel and respond to discovery requests in the crucial days leading up to an election.” Id.

While the FCC complaint process may differ in form from that scrutinized in Driehaus, the resulting peril and potential harms closely resemble those sufficient to confer standing for a pre-enforcement challenge to the Ohio law. The potential for abuse is obvious. Additionally, such complaints would disproportionately impact smaller, cash-strapped campaigns and grassroots organizations that lack the resources to navigate or defend against regulatory investigations. This creates a chilling effect where campaigns might avoid using AI tools altogether.

Given the many legitimate uses of AI in political campaigns, adopting the rules the NPRM proposes could also open the Commission to a flood of sincere but mistaken complaints about opponents’ ads. Either way, it would be unwise for the Commission to inject itself (along with broadcast licensees) into the middle of these political disputes.

C. Imposing Rules Will Unfairly Put Broadcasters in a Dilemma Given the Obligation to Run Political Advertising Without Alteration

FIRE shares Commissioner Carr’s concerns that the FCC has not explained “how its proposal to impose liability on broadcasters for airing covered political ads without a disclosure can be squared with broadcasters’ federal obligation to run them” under Section 315, which, while requiring equal opportunities, also prevents broadcasters from exercising control over candidate ad content. NPRM at 39 (Carr, Comm’r, dissenting). See also supra § II.B.2 (discussing 47 U.S.C. § 315 and, inter alia, Becker v. FCC, 95 F.3d 75). The “no censorship” provision of Section 315(a) has been construed broadly to bar actions by licensees that would impede candidate advertising. For example, in Becker v. FCC, 95 F.3d at 83, the D.C. Circuit struck down an FCC order interpreting its political broadcasting rules that would have allowed broadcasters to “channel” candidate ads containing graphic anti-abortion content to late-night hours on the theory that such ads might harm young viewers. The court made clear “the no-censorship provision of Section 315 prohibits any interference, direct or indirect” with candidate ads. Id. at 84 (citation omitted).

That provision historically has been interpreted to bar altering or refusing to air candidate ads that, for example, the broadcaster might believe are defamatory. WDAY, Inc., 360 U.S. at 527. And, as the D.C. Circuit explained in Becker, it also restricts the limits or conditions broadcasters may impose on political advertising, even when they believe there is an important public interest reason to do so. Broadcasters cannot restrict advertising in ways that might “force [a candidate] to back away from what he considers to be the most effective way of presenting his position on a controversial issue lest he be deprived of the audience he is most anxious to reach.” Becker, 95 F.3d at 83.

If the Commission requires broadcasters to ask candidates whether their ads contain any AI elements and to append disclosures based on the answers, there will be unavoidable tension with these statutory requirements. Can broadcasters refuse to air candidate ads if the campaign declines to disclose whether it used AI in producing them (or if the broadcaster doubts the answers it receives)? As Commissioner Carr observed, if the proposed rules would require a broadcaster to “deny candidates access for failure to reveal the scope and scale of AI-generated content, it is not clear how the Communications Act would permit it” to do so. NPRM at 39 (Carr, Comm’r, dissenting). The Commission should not adopt rules that would confront broadcasters with this dilemma.

FOUNDATION FOR INDIVIDUAL RIGHTS AND EXPRESSION

Robert Corn-Revere, Chief Counsel

Ronnie London, General Counsel

700 Pennsylvania Ave., SE, Suite 340

Washington, DC 20003

(215) 717-3473

bob.cornrevere@thefire.org

Aaron Terr, Director of Public Advocacy

John Coleman, Legislative Counsel

510 Walnut Street, Suite 900

Philadelphia, PA 19106

(215) 717-3473

aaron.terr@thefire.org

john.coleman@thefire.org


Notes

[1] Disclosure and Transparency of Artificial Intelligence-Generated Content in Political Advertisements, 89 Fed. Reg. 63381 (FCC 2024) (“NPRM”). The Foundation for Individual Rights and Expression (“FIRE”) is a nonpartisan nonprofit dedicated to defending the individual rights of all Americans to free speech and free thought—the essential qualities of liberty. Since 1999, FIRE has done so on campuses nationwide in matters implicating expressive rights, and in June 2022, it expanded its public advocacy beyond the university setting to defend First Amendment rights both on campus and in society at large.

[2] Testifying before Congress earlier this year, FIRE President and Chief Executive Officer Greg Lukianoff warned that government must proceed cautiously and respect First Amendment principles when considering the regulation of artificial intelligence. Greg Lukianoff, Written Testimony of Greg Lukianoff, Hearing Before the Comm. on the Judiciary, Select Subcomm. on the Weaponization of the Federal Government, Congress.gov (Feb. 6, 2024).

[3] See Wikipedia, Streetlight effect (last edited Mar. 11, 2024).

[4] See David Oxenford, FEC to Consider AI in Political Ads at Their September 19 Meeting – A New Compromise Proposal is Advanced, Broadcast Law Blog (Sept. 13, 2024).

[5] Id.

[6] See White House Office of Sci. & Tech. Policy, Blueprint for an AI Bill of Rights (Oct. 2022).

[7] Id. at 3.

[8] See id. at 21–22.

[9] Id. at 8.

[10] Id. at 10.

[11] Id. at 21.

[12] See Nicky Mouha & Morris Dworkin, Report on the Block Cipher Modes of Operation in the NIST SP 800-38 Series, Nat’l Inst. of Standards & Tech., U.S. Dep’t of Com., NIST IR 8459 (Sept. 2024); P. Jonathan Phillips et al., Four Principles of Explainable Artificial Intelligence, Nat’l Inst. of Standards & Tech., U.S. Dep’t of Com., NISTIR 8312 (Sept. 2021).

[13] See Protect Elections from Deceptive AI Act, S. 2770, 118th Cong. (2023); AI Transparency in Elections Act of 2024, S. 3875, 118th Cong. (2024); Preparing Election Administrators for AI Act, S. 3897, 118th Cong. (2024).

[14] NPRM ¶¶ 27-28; see also id. ¶ 45 & App. B ¶ 5.

[15] E.g., 47 U.S.C. §§ 151, 154(i), 303(r), 307(a), 309(a), 309(k)(1)(A), 335 (cited NPRM ¶¶ 24, 27-28, 37, 45, & App. B ¶ 5).

[16] Id. §§ 317, 325(c)-(d) (cited NPRM ¶¶ 6, 24-26, 27-28, 45, & App. B ¶ 5).

[17] See infra Part B.2 (discussing §§ 312(a)(7) & 315).

[18] See, e.g., NPRM ¶¶ 1–3, 22, 26, 27 & § III.A.

[19] X Corp. v. Bonta, --- F.4th ---, 2024 WL 4033063 (9th Cir. Sept. 4, 2024); Free Speech Coal., Inc. v. Paxton, 95 F.4th 263, 283–84 (5th Cir. 2024), cert. granted, 144 S. Ct. 2714 (2024); Volokh v. James, 656 F. Supp. 3d 431 (S.D.N.Y. 2023), appeal pending, No. 23-0356 (2d Cir. argued Feb. 16, 2024). There can be no doubt any rules arising out of the NPRM will necessarily implicate content—it’s right in the title: Disclosure and Transparency of Artificial Intelligence-Generated Content, NPRM p.1 (emphasis added).

[20] Section 326 prohibits censorship and expressly withholds authority to “interfere with the right of free speech by means of radio communication.” 47 U.S.C. § 326. This denies to the FCC “the power of censorship” as well as the ability to promulgate any “regulation or condition” that interferes with speech. Id. Similarly, Section 544(f)(1) states that no “Federal agency,” defined to include the Commission, id. § 522(8), “may . . . impose requirements regarding the . . . content of cable services, except as expressly provided” in the Act. Id. § 544(f)(1). 

[21] Insofar as MPAA lists Section 315 as a “clear” delegation of authority over content, 309 F.3d at 805 (including 47 U.S.C. § 315 in list of content-related authorizing Act provisions), Section 315 still does not authorize the disclosure rules proposed here, as explained infra Part B.2.

[22] The NPRM’s Ordering Clauses and Initial Regulatory Flexibility Analysis also cite Sections 151 and 154(i), NPRM ¶ 45 & App. B ¶ 5, though its “Statutory Authority” section invokes only §§ 312(a)(7), 315, 317, and 303(r), id. ¶¶ 27–28—for good reason: The general grants of authority in Sections 151 and 154(i) are nonstarters. See MPAA, 309 F.3d at 803 (where “regulations significantly implicate program content” resort to Section 151 is a “very frail argument”); id. at 806 (quoting Implementation of Video Description of Video Programming, 15 FCC Rcd 15230, 15276 (2000), and specifically, then-Commissioner Powell’s dissent that “says it all” on Section 4(i)).

[23] See NPRM ¶¶ 4-5, 17, 27–28, 45 & App. B ¶ 5. 

[24] Where a statute provides authority for an action but is silent as to a similar, related action, it must be interpreted as authorizing only the former. See, e.g., NextWave Personal Commc’ns, Inc. v. FCC, 2001 WL 702069, at *21 (D.C. Cir. 2001); Tennessee Valley Auth. v. Hill, 437 U.S. 153 (1978). “A statute listing the things it does cover exempts, by omission, the things it does not list. As to the items omitted, it is a mistake to say that Congress has been silent. Congress has spoken – these are matters outside the scope of the statute.” Original Honey Baked Ham Co. v. Glickman, 172 F.3d 885, 887 (D.C. Cir. 1999).

[25] See South Arkansas Radio Co., 6 FCC Rcd 5130 (MB 1991).

[26] See, e.g., Remarks of Michael O’Rielly, Commissioner, Federal Communications Commission, “FCC Enforcement: Questionable Priorities & Wrong Directions,” 2015 WL 3645773, at 3.

[27] For that reason, the NPRM’s attempt to invoke Section 315(d) specifically, which instructs the Commission to “prescribe appropriate rules and regulations to carry out the provisions of this section,” fails on the same grounds as those stated above for similar general grants of authority. See supra § II.B.1. And because neither Section 315 nor Section 312(a)(7) provides the necessary authority, the requirement in Section 335(a) to extend them to DBS services, see NPRM ¶ 5 n.26, offers no separate or additional support. 

[28] The NPRM also asks for comment on the First Amendment implications of extending the proposed rules to cable operators, DBS providers and SDARS licensees that engage in origination cablecasting. NPRM ¶ 29. As discussed in these comments, the FCC’s historic justifications for broadcast content regulations do not apply beyond licensed broadcasting, which means that the constitutional hurdles are even higher for these other media. E.g., Turner Broadcasting System v. FCC, 512 U.S. 622, 637 (1994) (“[T]he rationale for applying a less rigorous standard of First Amendment scrutiny to broadcast regulation, whatever its validity in the cases elaborating it, does not apply in the context of cable regulation.”).

[29] It is important to recognize, as noted, that the Act contains specific provisions expressly limiting content regulation. See supra note 20 and accompanying text.

[30] Other circuit court opinions have raised similar questions. In Lutheran Church-Missouri Synod v. FCC, the D.C. Circuit invalidated FCC equal employment opportunity rules that were predicated on promoting diverse programming. 141 F.3d 344 (D.C. Cir. 1998). Although the court did not analyze program content regulation based on spectrum scarcity, it noted the dilemma the FCC faces if it is either too general or too specific when it attempts to regulate programming. It observed the notion of “diverse programming” may be “too abstract to be meaningful,” but that “[a]ny real content-based definition of the term may well give rise to enormous tensions with the First Amendment.” Id. at 354. Accordingly, the court struck down the FCC regulations as violating equal protection. The D.C. Circuit reached a similar conclusion in MD/DC/DE Broadcasters Ass’n v. FCC, 236 F.3d 13 (D.C. Cir. 2001).

[31] See, e.g., Robert Corn-Revere, The Mind of the Censor and the Eye of the Beholder: The First Amendment and the Censor’s Dilemma 170–72, 185–88 (Cambridge Univ. Press, 2021); Thomas Winslow Hazlett, The Political Spectrum (Yale Univ. Press, 2017).

[32] A 2005 FCC staff study picked up where the 1985 Fairness Doctrine Report left off and concluded that the spectrum scarcity rationale “no longer serves as a valid justification for the government’s intrusive regulation of traditional broadcasting.” John W. Berresford, The Scarcity Rationale for Regulating Traditional Broadcasting: An Idea Whose Time Has Passed (Media Bureau Staff Research Paper, March 2005) at 8. It criticized the logic of the scarcity rationale for content regulation, adding that “[p]erhaps most damaging to The Scarcity Rationale is the recent accessibility of all the content on the Internet, including eight million blogs, via licensed spectrum and WiFi and WiMax devices,” and that content regulation “based on the scarcity of channels, has been severely undermined by plentiful channels.”

[33] A lower level of scrutiny is permitted for compelled commercial disclosures under Zauderer v. Office of Disciplinary Counsel of the Supreme Court of Ohio, 471 U.S. 626 (1985), but that standard is inapplicable to political speech. See X Corp. v. Bonta, 2024 WL 4033063, at *7 (9th Cir. 2024); Nat’l Inst. of Fam. & Life Advocs. (NIFLA) v. Becerra, 138 S. Ct. 2361, 2372 (2018) (“The Zauderer standard does not apply here.”).

[34] The Court in League of Women Voters acknowledged Red Lion’s scarcity rationale had been subject to intense criticism but noted “[w]e are not prepared … to reconsider our longstanding approach without some signal from Congress or the FCC that technological developments have advanced so far that some revision of the system of broadcast regulation may be required.” 468 U.S. at 376 n.11. However, that signal was given in the 1985 Fairness Doctrine Report, where the Commission found the scarcity rationale was no longer valid, and it observed “it seems unlikely that the First Amendment protections of broadcast political speech will contract further, and they may well expand.” General Fairness Doctrine Obligations of Broadcast Licensees, 50 Fed. Reg. 35418, 35421 n.35 (Aug. 30, 1985) (citing Loveday v. FCC, 707 F.2d 1443 (D.C. Cir. 1983)).

[35] As Commissioner Simington observed, the Commission should not “cast about for regulatory solutions to problems that do not exist.” NPRM ¶ 43 (Simington, Comm’r, dissenting).

[36] See, e.g., Taylor Orth & Carl Bialik, Majorities of Americans are concerned about the spread of AI deepfakes and propaganda, YouGov (Sept. 12, 2023) (“Recent polling by YouGov shows a great deal of concern among Americans about potential uses of AI, particularly with regard to the spread of deepfake audio and video, and political propaganda.”).

[37] See Letter from FEC Chair Sean Cooksey to FCC Chair Jessica Rosenworcel (June 3, 2024).

[38] The McManus court rejected the state’s comparison to broadcast regulation under Red Lion by distinguishing broadcasting and online media. Id. at 519-20. However, that court had no occasion to question whether the technological assumptions underlying Red Lion remain valid or whether the disclosure requirements at issue would survive intermediate scrutiny.

[39] See, e.g., Alex Seitz-Wald, Democratic operative admits to commissioning fake Biden robocall that used AI, NBC News (Feb. 25, 2024); Alex Isenstadt, DeSantis PAC uses AI-generated Trump voice in ad attacking ex-president, Politico (July 17, 2023) (noting broadcast, text message, and digital advertisement using AI-generated voice of former President Trump “does not sound entirely natural” and quoting criticism from Trump campaign staff).

[40] See, e.g., Jewell July, 40+ AI Productivity Tools to Help You Get More Done, Nutshell (Aug. 28, 2024); Rhett Power, How AI Is Changing the Formula For Efficiency in 2024, Forbes (Jan. 11, 2024).

[41] Lois Yoksoulian, How will generative artificial intelligence affect political advertising in 2024?, Univ. of Illinois Urbana-Champaign News Bureau (Mar. 7, 2024).

[42] Jess Weatherbed, Adobe’s impressive AI upscaling project makes blurry videos look HD, The Verge (Apr. 24, 2024).

[43] Christina LaChapelle, Generative AI in Political Advertising, Brennan Center for Justice (Nov. 28, 2023).

[44] Elizabeth Culliford, How political campaigns use your data, Reuters (Oct. 12, 2020).

[45] Rebecca Klar, How AI is changing the 2024 election, The Hill (June 18, 2023).

[46] Marty Swant, AI Briefing: How political startups are helping small political campaigns scale content and ads with AI, Digiday (July 26, 2024).

[47] Christina LaChapelle, Generative AI in Political Advertising, Brennan Center for Justice (Nov. 28, 2023).

[48] Robert Chesney & Danielle Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 Cal. L. Rev. 1753, 1771 (2019).

[49] Steven Overly, How AI Is Transforming a Lawmaker’s Life After a Terrible Diagnosis, Politico (Sept. 9, 2024).

[50] See Craig Silverman, How to Spot a Deepfake like the Barack Obama–Jordan Peele Video, BuzzFeed (Apr. 17, 2018).

[51] Elizabeth Elkind, House GOP campaign arm slams Democrats in new AI-generated ad turning national parks into migrant tent cities, Fox News (Dec. 4, 2023).

[52] See, e.g., Rehan Mirza, How AI deepfakes threaten the 2024 elections, Journalist’s Resource (Feb. 16, 2024) (noting that “outsized media coverage” of the deceptive potential of AI may itself serve to “undermine trust in information,” and thus “[i]t may, therefore, not be deepfakes themselves, but the narrative around them that undermines election integrity.”).

