Texas legislators file unconstitutional bills to prohibit use of AI in election campaigns

Texas state capitol building in Austin. (Ken Herman / USA TODAY NETWORK via Imagn Images)

Earlier this year, Rep. Jennifer Wexton addressed the House of Representatives using a voice generated entirely by AI because a progressive disease has weakened her natural speaking voice. However, if she used her AI-generated voice in a campaign ad in Texas, newly introduced legislation would likely make that a crime.

Rep. Jennifer Wexton in 2022.

The Texas laws would broadly criminalize AI-generated content related to elections. But one major obstacle to this effort is that a Texas state appeals court already struck down a similar law on First Amendment grounds, writing that the law was an overly broad content-based regulation of core political speech. The law, known as the True Source law, made it a criminal offense to pretend that campaign communications are coming from someone else with the intent to cause injury to a candidate or influence the election’s outcome.

The court did the right thing, but that hasn’t stopped Texas legislators. If enacted, these bills will lead to more censorship in the Lone Star State.

Ex Parte Stafford

The doomed True Source law was used to prosecute John Stafford, a self-described Democratic Party activist who was indicted for sending text messages that looked like they came from conservative or Republican campaigns, with the intent of exposing the political leanings of candidates running in local nonpartisan races.

The law made it a crime to “knowingly represent[] in a campaign communication that the communication emanates from a source other than its true source . . . with intent to injure a candidate or influence the result of an election.”

The Texas Court of Criminal Appeals, agreeing with a lower court ruling, held that the True Source statute was a content-based restriction on political speech, on the grounds that the law burdened “core political speech.” Under First Amendment doctrine, this meant the law had to satisfy strict scrutiny, which requires the state to show that the law is necessary to serve a compelling government interest and is narrowly tailored to serve that interest.

While the state was able to show a compelling interest in protecting the election process, the court found that the law’s burden on freedom of speech was neither “sufficiently narrow” nor did it impose “as few restrictions as possible to meet the state’s goals.” The law was overly broad, encompassing communications that were neither false nor misleading.

The court emphasized the law’s broad sweep, observing that it’s hard to imagine political speech that would not “influence the result of an election.” The statute covered even neutral and accurate statements. It also reached parody, which is squarely protected by the First Amendment.

Moreover, the law covered statements merely “related” to a campaign, casting an even wider net over “innocuous statements.” The vague language left too much to interpretation, leaving people at the mercy of local prosecutors.

Texas legislation related to use of AI in election campaigns

A major takeaway from Stafford is the court’s recognition of the True Source law as a content-based regulation. Political speech in particular receives strong First Amendment protection because it is essential to our system of government. The Supreme Court stated this explicitly in Buckley v. Valeo (1976): “Discussion of public issues and debate on the qualifications of candidates are integral to the operation of the system of government established by our Constitution.”

Stafford demonstrates how broad regulations of speech, particularly core political speech, have difficulty passing constitutional muster. Despite this, Texas legislators have begun filing bills for the 2025 legislative session that seek to broadly regulate election-related, AI-generated content. These bills do little, if anything at all, to avoid the major constitutional issues the court found with the True Source statute.

Below are two examples of bills that have been filed so far.

Texas House Bill 556

House Bill 556 would make it a criminal offense to create “artificially generated media” and then to publish or distribute it within 30 days of an election with the “intent to injure a candidate or influence the result of an election.”

The bill defines the term “artificially generated media” broadly: it includes pictures, audio recordings (such as a person’s voice), videos, or text “created or modified using generative artificial intelligence technology or other computer software with the intent to deceive.”

Given the decision in Stafford, it is difficult to imagine this bill surviving a court challenge. The bill similarly targets core political speech by broadly going after media that seeks to influence an election, if it was made or modified with software “with intent to deceive.”

This could include all kinds of speech that is “deceptive,” but does no harm to the integrity of elections. For example, candidates use software to produce and send thousands of communications at once purporting to be from the candidate and personally addressed to individual supporters. Is it deceptive to make it look like candidates are the ones actually sending these messages when software helps them rather than campaign staff?

Or consider an AI reproduction of a candidate’s voice, as illustrated by Rep. Wexton’s floor speech. The AI-generated voice of Rep. Wexton, who was diagnosed in 2023 with a rare brain disease known as progressive supranuclear palsy, is an example of AI providing greater opportunities for those who have a speech-related impairment or disability to connect with other people. Although arguably deceptive under HB 556, since it was not her actual voice, this use of AI helped her communicate. If her AI-generated voice were used in a campaign ad instead of a floor speech, it could run afoul of HB 556.

AI can also improve the production quality of campaign ads. It can make it easier to create high-quality ads for candidates who cannot afford professional audio and video production, lowering the economic barrier to seeking public office. Whether this means upscaling the pixel quality of images or suggesting layouts to make an ad look more professional, AI may actually further democratize our elections by giving everyday Americans more effective tools to run competitive races.

While these examples paint a rosy picture of AI as a positive force for shaping political discourse, they should not be taken to suggest that AI is incapable of being used for nefarious purposes. On the contrary, we’ve said before that exceptions to the First Amendment apply to AI-generated speech the same as they do to other speech. These exceptions include incitement to imminent lawless action, true threats, fraud, defamation, and speech integral to criminal conduct. 

But fraud, defamation, and true threats are already covered by existing Texas law. HB 556 instead criminalizes a broad swath of political speech beyond what is necessary to protect elections, and so it would very likely fail the narrow tailoring that the First Amendment requires.

Even if the bill could survive judicial review, it would chill election-related speech in troubling ways. As described above, it would ban benign uses of AI or software in elections. 

And while the intent might be to prevent other people from altering a video of a candidate for some unlawful purpose, the criminal prohibition might very well apply to candidates who alter their own image or audio. If that’s the case, a law like this could easily be weaponized by political opponents to chill the opposing side’s speech. It could also empower prosecutors to investigate candidates they oppose on any suspicion that they used any software in their campaign ads or other messages that might count as “deceptive.” 

Texas Senate Bill 228

Another proposed piece of legislation, Senate Bill 228, would prohibit a person from knowingly distributing political advertising that has been changed by AI technology. SB 228 would make it a criminal offense to publish or share “political advertising that includes an image, audio recording, or video recording” of a politician’s “appearance, speech, or conduct that has been intentionally manipulated using generative artificial intelligence technology” to cause harm. This specifically criminalizes “a realistic but false or inaccurate” image or recording that results in depicting something that didn’t happen in a way that would give a reasonable person “a fundamentally different understanding or impression” than the original, unaltered version.

Like HB 556, this bill is not narrowly tailored, meaning all of the examples of legitimate and helpful uses of AI above could very well be prohibited under this bill as well. 

A similarly worded California law has actually been blocked by a federal court on First Amendment grounds. In Kohls v. Bonta, the judge assessed the California law as a content-based speech restriction and concluded that it likely failed strict scrutiny review because “counter speech is a less restrictive alternative to prohibiting videos.” In other words, the cure for false or deceptive speech is more speech.

The court declared that fears of AI-generated content, while justified, do “not give legislators unbridled license to bulldoze over the longstanding tradition of critique, parody, and satire protected by the First Amendment.” The court stated further that “YouTube videos, Facebook posts, and X tweets are the newspaper advertisements and political cartoons of today, and the First Amendment protects an individual’s right to speak regardless of the new medium these critiques may take.” 

Texas shouldn’t take a page from a California law that’s already been blocked. As Stafford affirmed, overly broad regulations that stifle protected speech simply won’t pass constitutional muster.

Looking ahead

While FIRE is just beginning to explore AI legislation that might be introduced in 2025, fighting against content-based regulations is not new to us.

We will keep our readers updated on any further developments.
