In wake of Buffalo shooting, New York AG’s livestreaming policy recommendations threaten First Amendment rights
On May 14, 2022, a gunman traveled to Buffalo, New York, and opened fire in a grocery store while broadcasting on the livestreaming platform Twitch. The shooting, which killed ten people and injured three others, outlasted the broadcast — within two minutes, Twitch detected that the video depicted a shooting in progress, cut the feed, and took it down.
Twitch’s response was not good enough for New York Attorney General Letitia James, who last month released a report examining the role of social media in the shooting and urging changes to federal and state law to rein in livestreaming and video sharing. Unfortunately, many of those policy recommendations — which include a forced “tape delay” for unverified livestreams and civil penalties for “distribution” of footage of “violent criminal content,” and which rely on expansive definitions of “incitement” and “obscenity” — would violate the First Amendment.
Understandably, public officials like Attorney General James are seeking ways to reduce the incidence of unspeakably tragic mass shootings in this country. But they must be careful not to sacrifice Americans’ civil liberties in the process. As ACLU Executive Director Anthony Romero wrote in the wake of 9/11, “Pursuing security at the expense of freedom is a dangerous and self-defeating proposition for a democracy.”
In the case of the Attorney General’s recommendations, it is far from clear that the illiberal policies proposed will purchase any security at all. They rest on the deeply unconvincing premise that the law can deter a mass killer from filming their murderous attack when the law is not enough to deter the attack itself.
A slew of unconstitutional proposals
The attorney general’s report makes a series of recommendations that, if enacted, would threaten the First Amendment rights of internet users and platforms. The report recommends:
- Criminalizing the creation of videos or images of a homicide by the person committing it, or by others acting “in concert” with the killer.
- Imposing civil penalties on individuals who distribute or transmit such content and on platforms that fail to take “reasonable steps to prevent unlawful violent criminal content (and solicitation and incitement thereof) from appearing on the platform.”
- Reforming Section 230 of the Communications Decency Act to remove platforms’ immunity from liability for user-generated content if they fail to take these “reasonable steps.”
- Defining “reasonable steps” to include restrictions on livestreaming, including broadcast delays and limiting algorithmic promotion for livestreams by users who are unverified, have few followers, or fail to meet other “trust factors.”
The report attempts — and fails — to justify the constitutionality of these restrictions using two primary rationales.
Broadening incitement and obscenity
Broadly speaking, the First Amendment significantly limits the government’s power to regulate, burden, or prohibit speech unless it falls into an unprotected category such as “true threats,” “child pornography,” or “perjury” — content-based restrictions outside of these categories are subject to, and rarely survive, strict scrutiny. To justify its proposed speech regulations, New York’s report attempts to stretch two of these unprotected categories — “incitement” and “obscenity” — beyond their strict legal definitions.
Referring to videos of homicide, like the one livestreamed by the Buffalo shooter, the report claims: “Such videos are an extension of the original criminal act and serve to incite or solicit additional criminal acts. In addition, these videos are obscene on their face.”
As legal analysis, both of these contentions miss the mark.
While “incitement” and “obscenity,” properly defined, are not protected by the First Amendment, those exceptions capture a far narrower range of speech than the report suggests.
Incitement
“Incitement” is shorthand for the category of unprotected speech the Supreme Court described in Brandenburg v. Ohio, which held that speech provoking unlawful activity loses First Amendment protection only if it is “directed to inciting or producing imminent lawless action and is likely to incite or produce such action.” Notably, courts, including the Supreme Court in Brandenburg and Hess v. Indiana, have rejected the idea that general advocacy for violence at some unspecified future time falls outside the First Amendment’s protection.
Nevertheless, the report claims, “Even a short video of a mass shooting can be used to incite others to engage in copycat crimes and serve the criminal goals of the perpetrator.” But even granting that such a video is intended to incite others to commit a copycat mass shooting, it doesn’t meet the imminence requirement of the incitement standard, as it is extremely unlikely anyone would plan and execute a similar crime immediately after witnessing a mass shooting. Generally speaking, these videos and their accompanying manifestos are not intended to lead to immediate action, but to urge others to begin thinking about doing the same. In this sense, the report’s authors may mean “incite” in a colloquial — rather than legal — sense, akin to “inspire.” But inspiring crime doesn’t render speech unprotected.
Our courts were right to set the bar for incitement so high. The First Amendment protects an enormous amount of political advocacy in all political quarters. It is untenable to hold a speaker legally responsible for the criminal actions of others who might have been inspired by the speaker’s words. That unjust rationale has been used to justify the prosecution of organizers of peaceful protests for the actions of a small number of violent participants. (See, for example, NAACP v. Claiborne Hardware.)
Nor should speakers face punishment based on a prediction that their public expression will inspire others to commit violent or unlawful acts at some point in the future. A standard based on such a tenuous and speculative connection between speech and action would inevitably invite abuse by those who seek to silence their political opponents. Rev. Dr. Martin Luther King, Jr.’s calls for nonviolent resistance prompted accusations that he was inciting violence, leading the FBI, which infamously surveilled King, to call him “the most dangerous Negro of the future in this Nation.”
Our national political conversation is filled with impassioned speech on polarizing, high-stakes issues like abortion, policing, gun control, immigration, and climate change. Empowering the government to crack down on speech that might inspire someone, somewhere, at some time to commit violence would be calamitous for free expression, and the government would no doubt repurpose that authority to target disfavored views. As writer Kevin Drum put it, “We can’t allow the limits of our political spirit to be routinely dictated by the worst imaginable consequences.”
Even advocacy that intentionally urges people to break the law merits protection. In Hess, an anti-war protestor was convicted for saying, “We’ll take the fucking street later,” before the Supreme Court overturned his conviction because his speech “amounted to nothing more than advocacy of illegal action at some indefinite future time.” Encouragement of civil disobedience has had a central role in the evolution of First Amendment jurisprudence — the incitement standard in Brandenburg replaced the looser “clear and present danger” test announced in Schenck v. United States, a case that upheld a conviction under the Espionage Act for encouraging men to dodge the World War I draft.
Various political philosophies, including various strains of anarchism, advocate lawbreaking to institute new political systems. In fact, our First Amendment was written and ratified by men who advocated for overturning a standing government — and did so. Proscribing the general advocacy of lawbreaking would be untenable, ahistorical, and arguably un-American.
As the Supreme Court has recognized, speech “may indeed best serve its high purpose when it induces a condition of unrest, creates dissatisfaction with conditions as they are, or even stirs people to anger.” Of course, people are free to criticize inflammatory or revolutionary rhetoric and its potential consequences, but the First Amendment properly restrains the government from extinguishing speech it considers too fiery.
Obscenity
The report’s claim that images or video of a homicide are “obscene on their face” is similarly unsupported by legal precedent. As a threshold matter, the Supreme Court in Miller v. California made clear that for something to be legally obscene, it must be sexual in nature.
Graphic violence is not what the obscenity exception contemplates. As the Supreme Court bluntly put it in Brown v. Entertainment Merchants Association, striking down a law barring the sale of certain violent video games to minors, “speech about violence is not obscene.”
None of the attorney general’s proposed restrictions on the creation or dissemination of images and videos can be justified under the incitement or obscenity exceptions to the First Amendment. And, as described below, such restrictions would have the unfortunate effect of suppressing significant amounts of valuable speech.
From questionably constitutional to blatantly unconstitutional
The report recommends establishing a criminal penalty to punish the creation of videos or images of a homicide by the person committing it, or by others acting “in concert” with the killer. (Notably, the report recommends that those drafting the law be careful not to penalize bystanders or police with body cameras.)
First and foremost, this penalty seems extraordinarily unlikely to deter a killer from filming or photographing a murder and trying to disseminate it. If the law against murder won’t deter the killer from killing, why would a law deter them from filming that murder, especially when they often don’t plan on surviving their crime? Legally, it’s at best unclear whether the government can impose criminal liability for a perpetrator’s act of filming their crime, as distinct from their commission of the crime itself. The act of filming a crime is generally a protected exercise of First Amendment rights, and the report’s authors do not provide a persuasive justification for why it would be constitutional to prohibit filming in this circumstance.
The report then shifts from recommending questionably constitutional speech-related penalties for the shooter to certainly unconstitutional civil penalties on people who simply share a video of a homicide. Neither the person who shares the video nor the online platform that carries it can constitutionally be held liable for those acts. Again, speech does not lose First Amendment protection merely because it depicts violence.
Troublingly, the report’s authors apparently fail to see, or care, that someone might share such images for reasons other than glorifying a killer or encouraging violence. Some of those other purposes are squarely in the public interest — for example, to highlight police or policy failures during the shooting, to engage in the academic study of violence or murder, or to train police and members of the public in how to respond to such situations.
Images of violence and death have carried profound social and political significance: Think of the video of George Floyd’s murder, the Zapruder film, video of the 9/11 attacks, the photo of Emmett Till’s dead body, the photograph of Kim Phuc Phan Thi (“Napalm Girl”), and various photos and videos of war crimes. These images shock the conscience and can evoke feelings of sympathy or disgust that are extremely powerful tools when advocating for political changes intended to stop such acts in the future. And that holds true whether the image was captured by the perpetrator or a bystander.
The report’s authors, in arguing those uninvolved in the underlying crime should face legal penalties for video distribution, rely on court decisions involving child sexual abuse material:
The distribution of CSAM material has been upheld as speech integral to illegal conduct — without a market for CSAM material, there would be no motivation to create such material.
But the operative logic of child pornography jurisprudence falls apart when you substitute one situation for the other:
The distribution of [videos of murder] has been upheld as speech integral to illegal conduct — without a market for [videos of murder], there would be no motivation to [make videos of murder].
There is no evidence that prohibiting distribution of videos of murder would deter those who make the videos — who, again, may not plan to survive — from murdering in the first place. Nor is there any reason to think they would be deterred from filming the crime should they possess the knowledge that it will be illegal for others to share. Mass murderers are not known for their regard for others’ well-being.
As UCLA Law professor and First Amendment scholar Eugene Volokh has written:
Some of the mass killers may be motivated by the desire for fame, but that will generally come entirely apart from the images of the killings themselves (as we’ve often seen with regard to [past] mass killings). It’s hard to imagine someone who’s committing the killing simply to have other people see the images that he or his coconspirators have taken, and who would be deterred by the prospect that those images would no longer be legally available.
A killer’s desire for notoriety or fame isn’t solely reliant on a film they themselves produce. Given that the report’s recommendations would exempt bystander or police footage, there’s no reason to think the killer couldn’t achieve similar notoriety without livestreaming or otherwise filming themselves, casting further doubt that this restriction on speech would have any deterrent effect.
If the purpose is to shut down avenues by which a killer could achieve notoriety, the same logic can be used to argue that journalists should not be allowed to cover mass murders — a similarly unconstitutional result.
Imposing liability on platforms will incentivize broad censorship
The report also recommends imposing liability on “online platforms that fail to take reasonable steps to prevent unlawful violent criminal content from appearing on the platform.”
Unfortunately for the report’s authors, Section 230 of the Communications Decency Act, which provides online platforms with immunity from liability for user-generated content, stands in their way. Undaunted, the report calls for removing the protections of Section 230 from livestreaming platforms that don’t take these “reasonable steps,” which would include broadcast delays and limits on algorithmic promotion for livestreams by users who are unverified, have few followers, or fail to meet other “trust factors.”
The report cites the two minutes that elapsed before Twitch took down the Buffalo shooter’s stream, characterizing this timespan as too long. It recommends requiring platforms to invest in greater content moderation, and specifically suggests that websites such as 4chan should face liability for user-posted content due to their unwillingness to moderate.
The loss of Section 230 immunity would expose platforms to scores of potential lawsuits, creating broad incentives for them to censor much more content than the law would require, with the effect of dramatically reducing freedom of speech on the internet. That’s why EFF has called Section 230 “perhaps the most influential law to protect the kind of innovation that has allowed the Internet to thrive since 1996.” Retrenching that protection seems, at a minimum, ill-advised and myopic.
Notably, the recommendations provide no exception for smaller and new platforms. The technological — and, thus, financial — burden created by compliance with the proactive scanning requirements of these recommendations would threaten the ability of any new platform to emerge, stifling innovation and ossifying the existing order.
Livestreams on tape delay
The report’s non-exhaustive list of “reasonable steps” — which includes a “tape delay” for unverified users or those with few followers — is anything but reasonable.
A tape delay would be significantly more burdensome on livestreaming than on other modes of live entertainment due to the importance of audience participation. Streamers on platforms like Twitch, for instance, interact with audiences through live chat. Creating that sense of community through interaction directly affects streamers’ bottom lines, as they make money through donations and subscriptions from their audiences. Separating streamers from their audience by a lengthy delay makes this impossible. While already-popular streamers would be exempt from the tape delay, this restriction would make it substantially harder to become a popular streamer.
“Livestreaming” most often refers to people using platforms like Twitch or YouTube to broadcast live to a public audience, but the precise definition is somewhat murky. Would the verification requirement apply to other live video streaming apps, such as Zoom and Google Meet? Would it apply to your family’s FaceTime calls with grandma? Such a requirement would burden a lot of businesses and individuals — but if those platforms are carved out from this law, the resulting loophole would undermine any possible effectiveness, as a killer would just use whatever platform is exempted.
Or the killer could just get verified. If verification is required to livestream without a broadcast delay, presumably the platforms like Twitch whose bottom lines depend on livestreaming would want to make the verification process as streamlined as possible. One would think that a mass murderer planning to stream their murder spree would simply add getting verified into their planning process.
In other words, either verification is burdensome and selective, in which case you doom the business model of livestreaming; or verification is easy and open to everyone, in which case it is useless as a screen for possible killers.
Further, while government regulation that specifically targets unpopular speech is bad enough, worse still is regulation that forces users to verify their identities. This burdens anonymous speech, which is protected by the First Amendment for good reason: Anonymity shields speakers from political or economic retaliation, harassment, and threats.
Technology isn’t magic, and filters will censor much more than intended
The report also claims a tape delay “would permit livestreaming services to apply automated technology to detect violent crimes, including gunshot detection technology.” But such technology does not currently exist in a form that would be useful for this application. Given that much of livestreaming involves playing video games, and video games famously feature a lot of gunshots and violent crimes, designing a detection engine that can tell when an actual violent crime is being committed will not be trivial, and will likely lead to a lot of false positives — that is, a lot of censorship of protected speech. (Notably, groups like the ACLU and the Electronic Frontier Foundation have raised concerns about the use of gunshot detection technology in other contexts.)
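To see how lopsided the false-positive problem gets, consider a rough base-rate sketch in Python. Every number below is an assumption chosen for illustration, not a measurement: a detector that correctly flags 99% of real shootings, wrongly flags just 1% of ordinary streams, and a world where one livestream in a million actually depicts a real attack.

```python
# Illustrative base-rate arithmetic (all figures are assumptions, not data):
# even a highly accurate detector mostly flags innocent streams when the
# thing it is hunting for is vanishingly rare.

TRUE_POSITIVE_RATE = 0.99   # assumed: detector catches 99% of real shootings
FALSE_POSITIVE_RATE = 0.01  # assumed: wrongly flags 1% of ordinary streams
PREVALENCE = 1e-6           # assumed: 1 in 1,000,000 streams is a real shooting

# Bayes' rule: P(real shooting | stream was flagged)
p_flagged = (TRUE_POSITIVE_RATE * PREVALENCE
             + FALSE_POSITIVE_RATE * (1 - PREVALENCE))
precision = (TRUE_POSITIVE_RATE * PREVALENCE) / p_flagged

print(f"Share of flagged streams that are real shootings: {precision:.5f}")
# ~0.0001: roughly 10,000 lawful streams cut for every genuine detection.
```

Under those generous assumptions, about one flag in ten thousand would be a true detection; the rest would be lawful speech taken down by mistake. And real-world detectors would almost certainly be far less accurate than assumed here.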
For an example of how this provision could easily go awry, take Content ID, Google’s program for identifying unauthorized use of copyrighted materials. It works by having rights holders send their copyrighted music and videos to Google, which fingerprints the works, then scans YouTube for matching content on unauthorized channels. Google can then delete the video or redirect proceeds of that video to the rights holder.
In practice, Content ID has caused a host of issues, especially around the doctrine of “fair use,” which authorizes certain uses of copyrighted materials without the rights holder’s permission. Because Content ID is incapable of the careful, contextual analysis fair use requires, the system has deleted or burdened vast amounts of lawful content over the years. Indeed, the shortcomings of Content ID are so well known that some police officers have figured out that they can play copyrighted music to inhibit people from livestreaming them — or from uploading videos of the officers at all.
Given that Content ID works from an existing database of known copyrighted material, it has, technologically speaking, a much easier job than any system that would scan livestreams for violent crime: such a program would have to scan live video for criminal content that has never been seen before, then make decisions within whatever broadcast delay has been imposed.
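The difference is easy to see in miniature. Below is a minimal, hypothetical sketch of a Content ID-style lookup, using a plain byte hash as a stand-in for the perceptual fingerprinting real systems use. The point is that matching known material reduces to a database lookup, while judging never-before-seen footage is an open-ended recognition problem with no database to consult.

```python
import hashlib

# Toy stand-in for Content ID-style matching. Real systems use perceptual
# audio/video fingerprints that survive re-encoding and cropping; a raw
# SHA-256 hash is used here only to keep the sketch self-contained.

def fingerprint(clip: bytes) -> str:
    """Reduce a media clip to a compact identifier."""
    return hashlib.sha256(clip).hexdigest()

# Rights holders register their works ahead of time...
known_works = {
    fingerprint(b"<registered copyrighted clip>"): "Example Song (Example Label)",
}

def check_upload(clip: bytes) -> str:
    """...so checking an upload is a fast, precise dictionary lookup."""
    match = known_works.get(fingerprint(clip))
    return f"matched known work: {match}" if match else "no match"

print(check_upload(b"<registered copyrighted clip>"))  # matched known work
print(check_upload(b"<brand-new livestream frame>"))   # no match

# A violent-crime scanner has no such registry to consult: the footage it
# must judge is new by definition, so the lookup strategy cannot apply.
```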
Severe privacy concerns
The report proposes nothing short of a government-mandated surveillance program carried out by tech companies. A state government is recommending a proactive regime of scanning and reporting users for potential violent content. If it is accepted, what’s next? A requirement that Twitter scan and forward tweets to police? What about direct messages? This isn’t a slippery-slope argument conjured out of nowhere. You can find support for it in the report itself, which laments:
Likewise, the shooter documented his plans in detail over the course of several months, but these logs were not flagged. Drafters of reasonableness guidelines should consider whether a platform should implement automated scanning of content to identify violent plans at a moment when long-standing private content is suddenly distributed to a larger group of people.
Widespread government-mandated monitoring of citizens’ expression is characteristic of authoritarian regimes, not free societies. Here, it’s worth noting that the Office of the New York State Attorney General is not the first to recommend delays of livestreams to achieve a policy goal; a similar proposal surfaced in September.
Conclusion
The New York attorney general’s report advances unsound recommendations that would be unconstitutional if implemented. These are just recommendations, and they do not carry the force of law. But it is disappointing to see a state attorney general’s office propose laws that would violate the First Amendment. Offering unconstitutional salves is an expedient way for the state to look like it is doing something about horrific violence while effectively doing nothing at all except threatening civil liberties.
We hope these recommendations go unheeded, and we will be watching to ensure they do.