
Artificial intelligence, free speech, and the First Amendment


The rapid advance of artificial intelligence (AI) technology has the potential to profoundly shape our world. Some predict that artificial intelligence could be more transformative than the internet and smartphones. “A.I. has been the fundamental dream of computer science going all the way back to the 1940s,” said technology entrepreneur Marc Andreessen. The consulting firm McKinsey & Company estimates that AI could automate about two-thirds of our current work time over the next 20 years.

Advances in communications technology often raise questions about how protections afforded by the First Amendment for free expression will apply to the emerging technologies. Because artificial intelligence is in its nascency, cases involving its use are just beginning to arrive on dockets. We have yet to see how lawmakers and judges will address the challenges posed by this emerging technology. But they will not be writing on a blank slate. First Amendment doctrine does not reset itself after each technological advance, and modern First Amendment jurisprudence has navigated the advent of television, motion pictures, personal computers, the internet, smartphones, and more. 

Below, we venture an early analysis of commonly asked questions about artificial intelligence and its possible implications for free speech and the First Amendment.

What is artificial intelligence?

Artificial intelligence, or AI, is a label applied to a range of computer technologies and programs that perform tasks similar to human learning, reasoning, and response generation based on user input. AI has long referred to things like a computer-controlled opponent in a chess program (or other video game) that responds to user interactions to try to defeat the user. 

Over the past year or so, the explosive rise of generative AI has made AI go mainstream. Generative AI uses models trained on vast amounts of human-generated text and images. These models power apps that can automatically generate novel, high-quality content based on simple user inputs, or “prompts.” Some of these apps, such as ChatGPT, Bard, and Stable Diffusion, have now become widely used and well known. For example, a user enters a prompt into an app, such as “create a picture of a car running into a fire hydrant,” and the model predicts what such an image might look like, based on the pictures it was trained on that match components of the description. The model then generates a picture as its output. 
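To make that prompt-to-output flow concrete, here is a minimal sketch of how a text prompt might be turned into an image with an open-source model such as Stable Diffusion, using the Hugging Face diffusers library. The model name and settings are illustrative assumptions, not a description of any particular product’s internals:

```python
# A minimal, illustrative sketch of prompt-to-image generation with the
# open-source Hugging Face `diffusers` library. The model ID and settings
# here are assumptions for illustration, not any product's actual internals.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained text-to-image model (downloads weights on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU is available

# The user's prompt: the model predicts what such an image might look like
# based on statistical patterns in the images it was trained on.
prompt = "a picture of a car running into a fire hydrant"
image = pipe(prompt).images[0]

image.save("car_vs_hydrant.png")
```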


For the purpose of this article, AI refers to these generative and predictive models. Additionally, in this article, we’ll discuss AI used for expressive purposes. Our discussion does not extend to the legal implications of non-expressive uses of AI, such as self-driving car algorithms and AI-automated evaluation of job or college applications.

Who is involved in the creation, development, and use of artificial intelligence?

Because law regulates the actions of human beings (or their interactions), understanding how artificial intelligence interacts with free speech requires understanding who might be regulated by proposed solutions, or found to have violated existing laws. 

While there are potentially dozens of roles involved in the creation and training of a given artificial intelligence, you can think of them as fitting into one of a few general categories:

  1. Developers. These are the people who “created” the AI model, which can include researchers who developed the underlying algorithms, data scientists who took those theories and made them into models, and engineers who took those models and put them into production. It also includes trainers who decide the datasets on which the models will be trained, and ethicists who provide “guardrails” to prevent unintended or harmful outputs. Individual AI developers may take on some or all of these roles.
  2. Dataset generators. These are the people responsible for the content on which the AI is trained, who indirectly, and often unknowingly, contribute to the final output of the AI. For AIs trained on public blogs and social media posts, any poster could be a dataset generator. You could be a dataset generator every time you solve a CAPTCHA to prove you are a human when logging into a website. Increasingly, dataset generators are workers in foreign countries hired by major tech companies to label and annotate the data that are fed into the model.
  3. Users. These are the people actually asking questions to generate the output and using it. In the case of some models, their queries are folded back into the AI’s library, meaning users might be dataset generators, too. 

As of right now, artificial intelligence needs these humans to operate. Artificial general intelligence (AGI) doesn’t currently exist. AGI is a hypothetical artificial intelligence that (as currently envisioned in science fiction stories) may one day reach a state where it takes on a “mind of its own,” with abilities that regularly surpass those of humans and behavior that will not always be predictable to the humans who programmed it. Would such an artificial entity acquire legal rights? Currently, there is no legal framework to answer that question.

Is artificial intelligence protected by the First Amendment to the U.S. Constitution?

People, not technologies, have rights. People create and utilize technologies for expressive purposes, and technologies used for expressive purposes, such as to communicate and receive information, implicate First Amendment rights.


The printing press, radio, television, and the internet are all technologies used to disseminate and receive information. Similarly, AI tools like ChatGPT, Bard, Stable Diffusion, and Runway AI help people write essays, design images, edit movies, and more. Additionally, courts have recognized that computer code is speech and receives the protections of the First Amendment.

As a result, the use of artificial intelligence to create, disseminate, and receive information should be protected by the First Amendment to the U.S. Constitution. Any government restriction on the expressive use of AI needs to be narrowly tailored to serve a compelling governmental purpose, and the regulation must restrict as little expression as is necessary to achieve that purpose.

Do the exceptions to the First Amendment apply to artificial intelligence?

Yes, the same exceptions to the First Amendment should apply in the artificial intelligence context as they would in any other multimedia context. These exceptions include incitement to imminent lawless action, true threats, fraud, defamation, and speech integral to criminal conduct.

Defamatory messages are no more permissible, for example, via artificial intelligence than via blog post, printing press, or film. If someone entered a prompt into ChatGPT asking it to write a story about FIRE President Greg Lukianoff burning down the FIRE office, and then spread that story as if it were true, they wouldn’t be any less liable for defamation than if they had written that story without AI tools.

Similarly, if a person asked Stable Diffusion to generate a picture of themselves doing grievous bodily harm to another person, and then sent that person the image in an effort to frighten and intimidate them, the fact that the image was AI-generated would not change the analysis of whether it was a true threat.

Who is liable for expression generated by artificial intelligence?

It depends. While modern artificial intelligences often have emergent properties that aren’t always easy to anticipate, that alone does not mean the creators, owners, and users of the technology will escape liability; at least one of them (or maybe some, or all) may still be held responsible.

“You have to have a person to sue,” said Alison Schary, a partner at the law firm Davis Wright Tremaine. It is thus likely, she continued, that “if somebody is going to bring a case, they’re going to sue the person who distributed the speech, or the person or the entity that takes ownership of, or develops, the AI system.” Whether they succeed, however, will depend in no small part on what First Amendment protections will bear.

At least one defamation lawsuit relating to generative AI has already been filed, and we will be watching the result with great interest.

Are so-called “deepfakes” protected by the First Amendment?

Deepfakes are hyper-realistic renderings of actual people’s images or voices produced by machine learning and artificial intelligence. Generally, there is no “hyper-realistic” exception to the First Amendment. Just as a realistic painting of an actual person cannot be censored merely for looking realistic, neither can a deepfake be censored for its realism alone.

However, the “fake” in deepfake may suggest some uses in which deepfakes fall outside of First Amendment protection. Some deepfakes may implicate laws prohibiting forgery, fraud, defamation, false light, impersonation, and appropriation. In short, existing legal tools often applied (and, sometimes, misapplied) in other contexts can help us address unlawful deepfakes.

Currently, the most convincing deepfakes are generated when their creators combine AI tools with more traditional special effects and photo- and video-editing techniques. The amount of human intervention currently required is a significant barrier to producing such deepfakes at scale, but advances in generative AI and improvements in the ease of use of traditional editing technology may change that.


There are also non-legal tools we can use to avoid being duped by deepfakes. For one, deepfake-detecting technologies exist, and they use the same machine learning technology that generative AI uses. Experts say there is concern that deepfake generation technology is outpacing detection, but it’s an open question whether that will continue to be the case. For example, audio may be easier to convincingly generate than video, and also more difficult to detect. 
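As a rough illustration of how detection draws on the same machine learning toolkit as generation, a basic deepfake detector can be an ordinary image classifier fine-tuned to separate real images from synthetic ones. The sketch below is a simplified, assumption-laden example: the PyTorch/torchvision model choice, the data/real and data/synthetic folder layout, and the training settings are all illustrative, not a description of any production detector.

```python
# A simplified sketch of a deepfake detector: fine-tune a pretrained image
# classifier to label images as "real" or "synthetic." Folder layout, model,
# and hyperparameters are illustrative assumptions, not a production system.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Start from a pretrained network and replace its final layer with 2 classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumes a directory layout such as data/real/*.jpg and data/synthetic/*.jpg.
train_set = datasets.ImageFolder("data", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```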

There is also media literacy. Historically, humanity has gone through periods where we are easily deceived by the hyper-realism and apparent authority of new technologies. Early moviegoers are said to have jumped out of their seats as a train appeared to barrel toward them on screen. Orson Welles’ radio broadcast of “War of the Worlds” reportedly created a panic among New York listeners who thought a real alien invasion was happening. And people were easily duped by the perceived authority of anything written down when the world was far less literate.

In all cases, we eventually adapted and became more media literate and less credulous.

Could the use of artificial intelligence violate intellectual property laws, including copyright?

Probably. The same intellectual property principles that apply in other multimedia contexts should apply in the artificial intelligence context as well. “AI could spit out potentially infringing materials the same way that any other tool could,” according to David Greene, senior staff attorney and civil liberties director at the Electronic Frontier Foundation.

For example, artificial intelligence outputs that incorporate copyrighted material may not receive protection if they are not sufficiently transformative or fail to satisfy the factors courts use to determine whether a particular creation is a fair use. Generated content that doesn’t meet the standards for transformative use can also implicate false light and appropriation laws.

One major question is how to address artificial intelligence inputs: Is it a violation of intellectual property rights to train your artificial intelligence on copyrighted material — say, a copyrighted photo of Mickey Mouse or the text of the latest New York Times bestseller?

Here, again, drawing parallels to non-AI contexts can prove helpful. Generally speaking, authors who conduct research in a library or on the open internet using copyrighted material do not violate copyright. A painter who looks at a painting by Mark Rothko and is inspired to emulate his style does not violate Rothko’s copyright. A filmmaker who studies John Ford films and takes lessons from them to create their own western-style film does not violate Ford’s (or his movie studio’s) copyright.


One can think of artificial intelligence as a very efficient and powerful research assistant. If you can hire a research assistant to comb large troves of publicly available information, why not an AI? 

In the future, more otherwise free, publicly accessible repositories of data — such as Yelp, Medium, and Substack — may restrict access to their websites via password protection to limit the ability of AI trainers to scrape them. In June 2023, Elon Musk announced that only logged-in accounts could access Twitter because AI companies were "scraping vast amounts of data" from the site, which required the company to bring more costly servers online. Other websites might require users to agree to terms of service before accessing the site. These could expressly prohibit using the website’s data to train AI.

In the meantime, to avoid these tricky input questions, some AI creators proactively license the data they use to feed their machine learning. Others are deliberately vague about what data they use to train their AI.

Courts will certainly have more to say about artificial intelligence and intellectual property in the future.

Who owns the copyright to works generated by artificial intelligence?

In some cases, perhaps nobody.

“The copyright office — at least so far — has declined to copyright things that were solely generated by AI, unless there was substantial human involvement,” according to Schary, who believes this will limit the utility of artificial intelligence as a tool for some content creators. 

It would be unreasonable for the use of AI-retouch features to invalidate a photographer’s copyright over the resulting image. But what about a user simply prompting an image generator to create an image of a knight saving a princess? What qualifies as substantial human involvement will likely be subject to legal debate.

What about attempts to remove Section 230 immunity from generative AI platforms?

Section 230 of the Communications Decency Act is a law that gives online platforms immunity from legal liability stemming from speech provided by their users. An AI’s outputs are generated in response to a user’s prompts — that is, an AI’s outputs rely on information provided by someone other than the AI’s creator. As such, the AI creator may be able to claim immunity for the outputs.

Despite AI’s nascency, we have seen the introduction of legislation that proposes to remove Section 230 immunity for online platforms from claims or charges involving the use or provision of artificial intelligence capable of generating content such as text, video, images, and/or audio based on prompts or other data provided by a user. This means, at a minimum, that in the earlier hypothetical, not only would someone be liable for spreading a defamatory story that Greg Lukianoff committed arson, but ChatGPT’s creator, OpenAI, could also be held liable for defamation for giving that person exactly what they asked for. Worse, this approach would reach beyond models like ChatGPT and DALL-E and likely apply to AI-trained search and recommendation systems like Google and YouTube, which compile, excerpt, and display results that rely to some degree on artificial intelligence for their creation.


While there may be open and complicated questions about platform liability in the context of generative AI, sticking AI creators with blanket liability for user-generated content produced through their services will be a disaster for technological advancement and free speech on the internet. As FIRE has argued in the context of proposals to strip Section 230 protection from social media platforms:

 [W]ebsites would be left with a menu of unattractive options to avoid lawsuits over their users’ speech. Many would likely change their business model and stop hosting user-generated content altogether, creating a scarcity of platforms that sustain our ability to communicate with each other online.

[ . . . ] 

Surviving platforms would moderate content more aggressively and maybe even screen all content before it’s posted. That isn’t a recipe for a thriving, free-speech-friendly internet.

The same would be true for AI providers — fear of lawsuits stemming from the bad actions of users would destroy incentives for innovation and advancement of this potentially transformative technology. Wholesale removal of Section 230 protection would effectively smother AI in the crib.

Can artificial intelligence be biased?

Yes, artificial intelligence can be biased.

AI tools are created by humans who make choices that influence the outputs produced by artificial intelligence. Some of these choices influence more mundane outputs, such as how the AI displays information on a webpage to improve readability for end users. Other choices may be more ideological, such as declining to write a script about why fascism is good but agreeing to write a script about why communism is good. Parallels lie in how humans create content moderation algorithms for social media, which can also utilize aspects of artificial intelligence.

Some biases in the outputs of artificial intelligence may not be the result of deliberate choices made by the AI’s creator. For example, generative artificial intelligences — such as ChatGPT and Bard — are typically built on so-called large language models (LLMs), which are trained on vast caches of data that inform their outputs. These data caches can include information on the internet written by humans. To the extent these data caches are themselves biased, the AI outputs may be biased. In this way, AI may incorporate the biases of society writ large. 


Over time, as AI content makes up more of the content on the internet and it becomes more difficult to sift genuine content from AI-generated content, this AI-generated content will also feed the LLMs, further influencing AI outputs and potentially ingraining certain biases. Some have warned this could eventually lead to “model collapse,” where AI-generated content becomes so widespread that it pollutes the training data and AI models gradually fail to reflect reality.
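To make the “model collapse” idea concrete, here is a deliberately simplified toy simulation. A one-dimensional Gaussian stands in for an AI model, an assumption made purely for illustration: each new “generation” is fit only to samples produced by the previous one, rather than to fresh human-generated data, and the fitted distribution drifts away from the original.

```python
# Toy illustration of "model collapse": each generation of a model is fit
# only to synthetic data sampled from the previous generation, not to real
# data. A 1-D Gaussian stands in for an AI model; this is an analogy only.
import numpy as np

rng = np.random.default_rng(seed=0)

# Generation 0: "real" human-generated data.
real_data = rng.normal(loc=0.0, scale=1.0, size=1000)
mean, std = real_data.mean(), real_data.std()

for generation in range(1, 11):
    # Train the next model solely on the previous model's synthetic output.
    synthetic = rng.normal(loc=mean, scale=std, size=200)
    mean, std = synthetic.mean(), synthetic.std()
    print(f"generation {generation:2d}: mean={mean:+.3f}, std={std:.3f}")

# Across generations, the estimates drift and the spread tends to shrink,
# so later models reflect the original data less and less.
```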

Regardless of how one feels about the biases incorporated into artificial intelligence tools by LLMs and their human programmers, the incorporation of these attitudes into an AI used for expressive purposes may be an editorial decision made by AI creators. FIRE believes those decisions are protected by the First Amendment. 

Can artificial intelligence censor speech?

An artificial intelligence creator can program “guardrails” into their AI models to prevent responses that the creator finds offensive or otherwise politically objectionable. Parallels again lie in how some social media companies moderate content on their platforms. 


The decision by private individuals and companies — and not the government — to not display certain information is an editorial decision and therefore protected by the First Amendment. The First Amendment protects the right of individuals and companies to publish — or not to publish — information as they see fit. However, the fact that these editorial decisions are legally protected does not mean they are immune from criticism.

There are also concerns about the government funding individuals and institutions to create AI tools that will monitor and police speech online. As artificial intelligence tools become more prevalent, the government may wish to begin using them to police speech, but the First Amendment prohibits the government from policing protected speech, regardless of what tools it uses.

Can artificial intelligence produce inaccurate results?

Yes, artificial intelligence can produce inaccurate results. Generative AI tools are typically built on large language models trained on data from across the internet and elsewhere. Not all information online is accurate. As a result, sometimes AIs are trained on inaccurate information. Generative AI is also known to “hallucinate” new, inaccurate information when it cannot find a specific answer to a user’s question. For example, a lawyer was sanctioned for citing nonexistent cases in a legal brief that were generated by ChatGPT. Users should double-check the accuracy of AI-generated information before using or disseminating it.

Do proposals to regulate artificial intelligence violate the First Amendment?

It depends on the regulation and the type of AI to which the regulation would be applied. 

Artificial intelligence is an emerging technology with many uses. Some of those uses are non-expressive, such as the AI used by Tesla for its self-driving cars. Other AIs can be tools for expression, such as the generative tools discussed above (ChatGPT, Stable Diffusion, Bard, etc.). To the extent a tool serves an expressive purpose, the First Amendment applies. In that case, the regulation must be narrowly tailored to serve a compelling governmental purpose, and the regulation must restrict as little expression as is necessary to achieve that purpose. 

Can the government compel disclosure or watermarking when an AI tool is used?

In most cases, the government can no more compel an artist to disclose whether they created a painting from a human model as opposed to a mannequin than it can compel someone to disclose that they used artificial intelligence tools in creating an expressive work. Government may be able to compel disclosure in a few narrow circumstances, such as in election ads, or when seeking a copyright, but these circumstances are exceptions to the general rule.

Similarly, proposals have been made to require AI creators to embed “watermarks” in AI-generated content to reduce that content’s ability to fool viewers. AI makers are free to implement such watermarking features, but the First Amendment restricts the government from compelling companies to do so.
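To give a sense of what “watermarking” can mean in practice, below is a highly simplified sketch of one technique: hiding a short marker in an image’s least significant pixel bits, using Pillow and NumPy. The function names and the chosen tag are hypothetical, and real provenance schemes (for example, cryptographically signed metadata) are far more robust; this only illustrates the concept.

```python
# A toy sketch of invisible watermarking: hide a short tag in the least
# significant bit of an image's red channel. Real watermarking/provenance
# systems are far more sophisticated; this only illustrates the concept.
import numpy as np
from PIL import Image

TAG = "AI-GENERATED"  # hypothetical marker text

def embed_watermark(in_path: str, out_path: str) -> None:
    pixels = np.array(Image.open(in_path).convert("RGB"))
    bits = [int(b) for byte in TAG.encode() for b in f"{byte:08b}"]
    red = pixels[..., 0].flatten()
    red[: len(bits)] = (red[: len(bits)] & 0xFE) | bits  # overwrite LSBs
    pixels[..., 0] = red.reshape(pixels.shape[:2])
    Image.fromarray(pixels).save(out_path, format="PNG")  # lossless format

def read_watermark(path: str) -> str:
    red = np.array(Image.open(path).convert("RGB"))[..., 0].flatten()
    nbits = len(TAG.encode()) * 8
    bitstring = "".join(str(b & 1) for b in red[:nbits])
    return bytes(
        int(bitstring[i : i + 8], 2) for i in range(0, nbits, 8)
    ).decode()
```

A provider could call embed_watermark on each output before returning it, and anyone could call read_watermark later to check for the tag; note that a mark like this does not survive lossy re-encoding, which is part of why robust watermarking remains an open technical problem.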

Hasn’t experience with AI in countries other than the U.S. suggested a role for regulation?

Other countries not subject to our First Amendment and with fewer speech protections have greater leeway for governmental regulation of AI. For example, the European Union has introduced the AI Act, which bans AI uses it considers an “unacceptable risk,” and, for generative AI, requires a published summary of copyrighted material used for model training, guardrails to prevent the creation of illegal content, and disclosure of content created by AI models. Many of the AI Act’s regulations would not likely pass constitutional muster in the United States. For example, “unacceptable risk” is a vague standard, and outside of purely factual and noncontroversial disclosures that seek to avoid potentially misleading commercial speech, compelled disclosures raise serious constitutional concerns.

Wow! Doesn’t all of this pose more questions than it answers?

Sure does! Like the technology itself, the law surrounding artificial intelligence is poised to evolve rapidly. These and other questions about artificial intelligence and the First Amendment will surely be subject to extensive litigation in the years to come. FIRE will be tracking these issues closely and writing more on the intersection of AI and First Amendment rights. Follow us for the latest.


VIDEO: Host Nico Perrino discusses the free speech implications surrounding artificial intelligence technologies with guests Eugene Volokh, David Greene, and Alison Schary.

Artificial intelligence: Is it protected by the First Amendment?

So to Speak: The Free Speech Podcast

What does the rise of artificial intelligence mean for the future of free speech and the First Amendment? Who is liable for what AI produces? Can you own a copyright for works produced by AI? Does AI itself violate intellectual property rights when it uses others' information to generate content? What about that Morgan Freeman "deep fake"? And is ChatGPT going to make all of our jobs irrelevant?
