
Deepfakes, democracy, and the perils of regulating new communications technologies


Are deepfakes protected by the First Amendment? When are deepfakes illegal? Read FIRE's answers to legal questions about deepfakes since the launch and widespread use of popular AI image-generation platforms.


“Superficial, sudden, unsifted, too fast for the truth.” 

No, that’s not a criticism of the internet or social media. That’s from an 1858 newspaper editorial against the telegraph.

New communications technologies have consistently generated predictions of humanity losing contact with reality. After Orson Welles’ 1938 radio adaptation of “The War of the Worlds” convinced some listeners that Martians really were invading Earth, a Mississippi resident asked the Federal Communications Commission, “How will we know when news is news, or when it is just fiction, if that is an example of future radio programs?”

In 1990 — the same year Adobe released Photoshop — Newsweek published an article titled, “When Photographs Lie,” about the “scary” implications of digital photo alteration.

Today, one of the culprits is social media, which is often blamed for the rapid spread of misinformation and disinformation. But the most recent tools to stoke such fears are artificial intelligence and deepfakes that could trick people into thinking that fake images, videos, or audio are actually real. 

While some deepfakes may push or exceed the boundaries of the law, they’re not all malicious. In this explainer, FIRE will address some of the concerns and legal questions about deepfakes as we approach the first U.S. presidential election since the launch and widespread use of popular AI image-generation platforms.

What is a deepfake?

The “deep” in “deepfake” comes from deep learning, an AI technique that trains a computer model to recognize and reconstruct patterns based on large amounts of data. A deep learning model trained on images, video, or audio of a person can learn to replicate their voice, appearance, or other characteristics, though the most lifelike deepfakes still require refinement through the use of audio or graphics editing tools.
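To make that description concrete, here is a minimal, hypothetical sketch in Python (using PyTorch) of the core pattern-reconstruction idea: an autoencoder that learns to compress images into a compact representation and rebuild them from it. This is not how production deepfakes are made; real systems use far larger face-swapping or generative architectures alongside manual editing, and the stand-in training data below is random noise.

```python
# Minimal, illustrative sketch of deep learning's "recognize and reconstruct
# patterns" idea: an autoencoder that compresses 64x64 RGB images into a
# small vector and rebuilds them. Purely hypothetical; real deepfake systems
# are vastly more complex and train on large datasets of a person's likeness.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: distill an image down to a 256-number "pattern" vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )
        # Decoder: reconstruct the image from that vector.
        self.decoder = nn.Sequential(
            nn.Linear(256, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

images = torch.rand(8, 3, 64, 64)  # random stand-in for real training images
for step in range(10):
    reconstruction = model(images)
    loss = loss_fn(reconstruction, images)  # penalize reconstruction error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Once a model like this can faithfully rebuild a person’s appearance or voice from its learned representation, that representation can be steered to synthesize footage of things the person never said or did, which is the step that gives deepfakes their name.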


Why are people concerned about deepfakes?

Because deep learning technology is so sophisticated, many worry it can easily dupe the public into thinking fake audio and video are real. Conversely, as fakes become more common, they may undermine trust in audiovisual material. Down we go into a post-truth, post-trust dystopia. That’s the fear, at least.

The approaching 2024 presidential election has fueled anxiety over political deepfakes, in particular. In July, Moody’s warned that this “election is likely to be closely contested, increasing concerns that AI deepfakes could be deployed to mislead voters, exacerbate division and sow discord.”

A couple of political deepfakes have already gone viral. Ahead of the New Hampshire Democratic primary, a robocall used AI to mimic President Joe Biden’s voice, falsely suggesting that voting in the primary would make voters ineligible to cast ballots in the general election. More recently, controversy erupted when Elon Musk shared a deepfake parody of a Kamala Harris political ad.

Does the First Amendment protect deepfakes?

When people use technologies to communicate information and ideas — whether a printing press, radio, television, megaphone, or online platform — the First Amendment protects them. That’s no less true for AI. 

Deepfakes can fall outside the First Amendment’s protection, just like an email message or social media post can. But that doesn’t justify categorically banning them or passing overly broad regulations that stifle protected speech.

While deepfakes are often associated with misinformation or disinformation, they aren’t inherently deceptive. Consider these creative applications of generative AI:

  • Satire and political commentary (the Kamala Harris ad, or the deepfake video of Mark Zuckerberg giving a sinister speech about his control over billions of people’s data)
  • Parody (social media accounts and channels dedicated to lampooning Tom Cruise)
  • Art and entertainment (the Dalí Museum bringing the eponymous artist back to life to interact with visitors, or the popular TV series “The Mandalorian” recreating a young Luke Skywalker as he appeared in the original “Star Wars” trilogy)
  • Education and outreach (a deepfake video of David Beckham, made with his consent, raising awareness about malaria in nine different languages)
  • Helping the  

Even when it comes to misleading deepfakes, there is no general First Amendment exception for misinformation or disinformation. That’s for good reason — the government could easily abuse such an exception to suppress dissent and criticism.


So if you've been asking yourself, "Are deepfakes illegal?" the answer is: usually not, though it depends. Some false or deceptive speech that causes specific, targeted harm to individuals is punishable under narrowly defined First Amendment exceptions. If, for example, someone creates and distributes a deepfake that is intended to and actually does deceive others into thinking someone did something they didn’t do, the depicted individual could have a claim for defamation or false light if they suffered reputational harm.

Or take fraud, which generally involves a false representation intended to deceive someone into giving up money or something of material value. Earlier this year, a Hong Kong finance worker was tricked into transferring $25 million to scammers who used deepfake technology to impersonate the company’s chief financial officer in a video call. Even if those scammers had been operating in the U.S., the First Amendment wouldn’t protect them.

Are laws regulating political deepfakes constitutional?

A recent survey of state legislation indicates at least 26 states have passed or are considering laws regulating election-related manipulated media. To simply say they target deepfakes would be misleading. Some bills define “deepfake” so broadly as to encompass content made or edited without the use of AI, or content that doesn’t depict an identifiable person.

Many of these bills and laws contain vague or broad language that includes expression protected under the First Amendment. And by targeting political or election-related speech, they threaten what the Supreme Court has called “the essence of self-government.”

Because these laws regulate speech based on content, they must satisfy the most demanding test in constitutional law: strict scrutiny. To pass strict scrutiny, the government must prove that a real problem exists, that it has a compelling reason to address it, that restricting speech is necessary to do so, and that the regulation doesn’t restrict more speech than necessary. Existing deepfake laws are highly unlikely to meet all these requirements.


A Minnesota law, for instance, criminalizes dissemination of a “deep fake” in the lead-up to an election if done “with the intent to injure a candidate or influence the result of an election.” But what does it mean to “injure” a politician? And isn’t the point of political speech to influence opinions and how people vote? Defamation is one thing, but this law reaches well beyond that. Even if the content is not shared with intent to deceive, and nobody falls for it, and the depicted person suffers no harm, the person who shared it could go to jail. What’s more, the person disseminating the content need not even know it’s fake. It’s enough that they showed “reckless disregard” for that possibility.

It’s common for deepfake laws to place no duty on the government to show that those who share such content knew it was fabricated, intended to deceive anyone, or actually caused harm. That means a social media user who reposts a deepfake, or a media outlet that reports on one, could be committing a crime in some states even if they believed the content was authentic.


Like “deepfake” laws in certain other states, the Minnesota law also makes no exception for satire or parody, which the First Amendment jealously guards from government censorship. Parody and satire can be devastatingly effective tools for criticizing the powerful and for exposing absurdities in our culture. While Minnesota’s law applies only to content “so realistic that a reasonable person would believe” the content actually shows someone saying or doing something they never did, that may not stop enforcement against parodies. 

Take the response to the Kamala Harris parody ad. Even though a faked version of Harris’s voice says she is “the ultimate diversity hire” and doesn’t “know the first thing about running the country” — which, along with general public awareness of AI technology, should make clear the video is a parody — critics, including politicians such as California Gov. Gavin Newsom, denounced the video as an example of dangerously misleading manipulated media. In fact, Gov. Newsom promised to sign a bill that would make the video illegal—and made good on that promise, only to have a federal court block enforcement of the law on First Amendment grounds just two weeks later.

As FIRE and other groups said in a letter opposing two congressional AI bills, “unclear protections for free expression will not only be difficult to parse for courts, but for users, social media platforms, and providers of AI tools as well, creating a chilling effect on legitimate uses of this technology at every stage of content creation and distribution.”

The California law blocked by a federal court is just one of several AI-related laws the state has passed in recent months. Another one forces large online platforms to censor “deceptive content” about politicians, respond to every single complaint about such content within 36 hours, and filter and block content “substantially similar” to that previously removed. The law is a potent political weapon, easily wielded by powerful figures to suppress content that mocks or criticizes them. 

Given its overbroad definition of “deceptive content,” the massive amount of content that large platforms host, the short timeline for review, and the threat of constantly being dragged into court, platforms are likely to censor protected speech even beyond what the law requires. 

The law doesn’t just infringe platforms’ First Amendment right to editorial discretion. It flies in the face of Section 230, the federal law that protects Americans’ ability to speak online by preventing the platforms they use from being sued out of existence for what users say. 


Another common provision in deepfake-related laws is mandatory disclosure. A Wisconsin law, for example, requires political advertisements to display text indicating whether they contain content generated by AI. FIRE recently opposed a proposed FCC rule that would establish a similar requirement for broadcast political ads, in part because, as we explained, the agency’s definition of AI-generated content is so broad that the rule would cover the use of AI for even innocuous or beneficial purposes.

Though they may be less onerous than outright bans, disclosure mandates still face steep constitutional barriers. Courts are deeply hostile to compelled speech, especially in noncommercial contexts like these. And mandates that require disclosure for any use of AI are excessively broad. Again, not all or even most uses of AI or deepfakes are malicious. AI use could, for example, improve a video’s production quality or generate footage of a rolling cornfield. Requiring disclosure even in these circumstances doesn’t help viewers know whether an ad is deceptive. Ironically, it could mislead viewers, causing them to distrust content that isn’t deceptive.

What are the alternatives to censorship?

Generative AI is a neutral tool of expression, like a pen or a paintbrush. It can be used for good or ill. Persistent focus on the negatives of AI obscures its potential to enrich communication, nurture creativity, and contribute to the project of human knowledge — benefits that a regulatory panic puts at risk. Established legal frameworks for addressing specific harms like fraud and defamation are better equipped to deal with AI’s nefarious uses than imprecise laws that single out neutral expressive tools and invite serious unintended consequences.

The rush to regulate deepfakes and AI also overlooks the non-legal ways society can address the misuse of these technologies, including counterspeech, media literacy, and advancements in deepfake-detecting technology (which itself uses AI).
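On that last point, AI-based deepfake detection is typically framed as binary classification: a model is trained on labeled real and fake media and outputs a score for new content. The sketch below shows only that basic shape, in Python with PyTorch; it is hypothetical and untrained, whereas real detectors learn from large labeled corpora of authentic and synthetic media.

```python
# Hedged, hypothetical sketch of the basic shape of AI-based deepfake
# detection: a small convolutional classifier that scores an image or video
# frame as real vs. fake. Untrained here, so its outputs are meaningless;
# real detectors are trained on large labeled datasets.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 1),  # single logit: higher means "more likely fake"
)

frame = torch.rand(1, 3, 224, 224)           # stand-in for one video frame
prob_fake = torch.sigmoid(detector(frame))   # map the logit to a probability
print(f"Estimated probability the frame is fake: {prob_fake.item():.2f}")
```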


As one federal appeals court observed, “Possibly there is no greater arena wherein counterspeech is at its most effective” than the arena of political debate. Over the past few years, the press and others have quickly flagged and debunked deepfakes involving politicians and other public figures (even when it seems unlikely the deepfakes were intended to or actually did fool anyone to begin with). The public is certainly alert to the issue.

Dating back to the invention of writing, humanity has repeatedly grappled with new modes of communication and their potential to blur the line between truth and fiction. But each time, society adapted, learning to approach emerging media with a more critical eye. “The old saying that ‘photographs do not lie’ must go to join the growing host of exploded notions,” wrote the New-York Tribune in 1897. We can draw on this historical precedent as we confront the challenges posed by AI. Media literacy is essential and, unlike the ineffective band-aid of censorship, gets to the root of the problem while avoiding collateral damage to free expression.

Final thoughts

While it’s reasonable to be concerned about the negative impacts of AI, warnings about the imminent demise of truth and democracy are dubious. Despite widespread fears that AI-powered disinformation would upend elections this year, those fears have largely failed to materialize. And researchers have found that alarmist narratives about the effects of online misinformation are themselves misleading. Speculative doomsday scenarios are a poor and constitutionally inadequate basis for censorship.

The focus on deepfakes’ potential to deceive may largely miss the point. Writing for the think tank RAND, Peter Carlyon cites the example of a poor-quality yet immensely popular deepfake depicting thousands of soccer fans waving a giant Palestinian flag. Why would people engage with a deepfake they know is fake? “The influence of deepfakes is not necessarily to make people believe something that is not true,” Carlyon writes, “but to create engaging content as part of a broader narrative—just like many other forms of media.”

As the Supreme Court recently put it:

Whatever the challenges of applying the Constitution to ever-advancing technology, the basic principles of the First Amendment do not vary. … Those principles have served the Nation well over many years, even as one communications method has given way to another.

The Court is right: freedom of speech is timeless. If, confronted with the challenges of AI, we erode the fundamental rights that form the bedrock of a free society, we’ll have much bigger problems than deepfakes.


Last updated: Oct. 11, 2024.
