
WASHINGTON — The U.S. Supreme Court will hear oral arguments Tuesday in Gonzalez v. Google, a case that centers on whether online companies should be held legally liable for harmful third-party content their platforms host or recommend to users. 

The case hinges on the liability protections in Section 230 of the Communications Act, and on whether Google provided “material support” to a terrorist group by hosting or amplifying recruitment videos for the group that carried out a deadly 2015 attack in Paris. Section 230 typically protects platforms and other interactive websites from liability for the content their users post, while also allowing them to vet, filter and moderate content in accordance with their own community standards.

Both the Ninth Circuit and the federal district court that first heard this case held that Section 230 barred such liability claims. In congressional testimony in 2021 and in an amicus brief filed with the Supreme Court in this case, Free Press Action argued in support of retaining Section 230’s core protections. It has also asserted the need to carefully clarify platforms’ liability for their own knowingly harmful actions.

Free Press Action Vice President of Policy and General Counsel Matt Wood said:

“The facts of the case are tragic, and the attempts by the victim’s family to seek justice cannot be overshadowed by the legal questions in play. However, the Section 230 questions are also of tremendous consequence for the future of the internet and the open exchange of ideas on it.

“Section 230 lets platforms serve different communities. It empowers platforms to moderate while also giving them protection against liability in the first instance for what their users say. If the Supreme Court weighs in at all, it has to get this balance right. The law must continue to ensure that people can speak online without intermediaries and gatekeepers policing their every utterance. But it must also ensure that platforms take some responsibility — and have some agency — in stemming the spread of disinformation, bullying and hate that are so often targeted at people of color, women, LGBTQIA+ individuals and other impacted communities.

“In Free Press Action’s amicus brief, we argue against interpretations of Section 230 that would outright ban engagement algorithms or severely curtail their use. Yet we also think that a platform could be liable as a distributor of user-generated information even when it is not recommending, promoting or monetizing that content, so long as it has knowledge of the harmful character of what it hosts.

“Free Press Action believes that platforms could indeed be liable for unlawful content they distribute. In our view, however, this turns on the platforms’ knowledge of the unlawful nature of the third-party content they host, not on whether they recommend or promote it.

“Platforms’ liability shouldn’t rest on their filtering, amplification or recommendation actions. Such an on-off switch for liability would be too blunt a solution, ultimately forcing platforms either to leave harmful materials untouched and free to circulate or to take down far more user-generated political and social commentary than they already do. That said, Section 230’s ban on treating an interactive computer service as the publisher or speaker of user-generated content should not protect platforms from potentially being held liable as the distributor of such content when they knowingly distribute harmful material.

“The Court should clarify that platforms cannot be liable as publishers, yet still could potentially be liable under a higher standard for knowingly distributing unlawful or tortious user-generated content. Of course, plaintiffs would still need to plead their cases and show that platforms actually played a role in causing the alleged harm. Section 230 and the First Amendment would still protect platforms from many causes of action and theories of liability. If courts adopted our view, they could allow more of those suits to proceed instead of tossing them out at the outset.

“Standing up for the free and open internet means not only defending Section 230 but standing up for users too. At its best, Section 230 encourages the open exchange of ideas while allowing for takedowns of hateful and harmful material. Without those paired protections, we’d risk both losing moderation and chilling expression. That risk is especially high for communities that suffer discrimination from powerful entities all too willing to sue just to silence statements they don’t like.

“Yet members of these targeted communities can suffer catastrophic harms from platform inaction too. Speakers must fear being silenced, harassed and harmed not just in the courtroom but in chat rooms, on social media, in comment sections and in other interactive apps. Rebalancing Section 230 would provide more potential relief for those kinds of harms.”
