WASHINGTON — On Wednesday, Free Press Action filed an amicus brief in the U.S. Supreme Court’s case Gonzalez v. Google LLC, an appeal of a 2021 Ninth Circuit Court of Appeals decision about Section 230 of the Communications Act. (Read the full brief.)

The case hinges on whether Google should bear any responsibility for hosting or amplifying terrorist-recruitment videos linked to a 2015 attack in Paris, conduct that the plaintiffs allege provided “material support” to a terrorist group. Section 230 typically protects platforms and other interactive websites from liability for the content their users post, while also allowing them to vet, filter and moderate that content in accordance with their own community standards.

Both the Ninth Circuit and the federal district court that first heard this case held that Section 230 barred such claims of liability. 

As Free Press Action explained in congressional testimony in 2021 and in the amicus brief today, the media-democracy organization supports retaining Section 230’s core protections while favoring careful interpretations of the law that clarify platforms’ liability for their own knowingly harmful actions.

Kevin Russell, Kathleen Foley and Erica Oleszczuk Evans of Goldstein & Russell, a firm focused on Supreme Court and appellate litigation, filed today’s amicus brief for Free Press Action.

Free Press Action Vice President of Policy and General Counsel Matt Wood said:

“The facts of the case are tragic, and the attempts by the victim’s family to seek justice cannot be overshadowed by the legal questions in play. However, the Section 230 questions are also of tremendous consequence for the future of the internet and the open exchange of ideas on it.

“In our brief, Free Press Action argues that Section 230 is a foundational and necessary law that lowers barriers to people sharing their own content online. Without it, platforms would be forced to vet any and all user content posted on their networks in advance to avoid being liable for everything their users say and do. 

“Platforms that filter, amplify or make any content recommendations at all should not automatically be subject to suit for all content they allow to remain up. That kind of on/off switch for the liability protections in the law would encourage two bad results: either forcing platforms to leave harmful materials untouched and free to circulate, or requiring them to take down far more user-generated political and social commentary than they already do.

“The law rightly protects platforms from being sued as publishers of other parties’ information. It permits and encourages these companies to make content-moderation decisions while retaining that initial protection from liability. In effect, Section 230 encourages the open exchange of ideas and takedowns of hateful and harmful material. Without a careful balancing of those paired protections, we’d risk losing moderation and removal of the very same kinds of videos at issue in this case. 

“Losing the core of Section 230 could risk chilling online expression, since not all plaintiffs suing to remove ideas they don’t like would be proceeding in good faith, as the victim’s family here clearly did. That would disproportionately harm Black and Brown communities, LGBTQIA+ people, immigrants, religious minorities, dissidents, and all people and ideas targeted for suppression or harassment by powerful forces.

“But as our amicus brief explains, when platforms have actual knowledge of the grievous harm caused by some unlawful content — yet they still continue to host it or even amplify it — Section 230 doesn’t and shouldn’t grant them complete immunity for their decisions. 

“Section 230 allows injured parties to hold platforms liable for those platforms’ own conduct, as distinct from the content they merely host and distribute for users. Platforms could and often should be liable when they knowingly amplify and monetize harmful content by continuing to distribute it even after they’re on notice of the actionable harms traced to those decisions.

“That’s why we’re cautiously supportive of efforts to clarify the meaning of Section 230’s present text in ways that allow suits against platforms’ continued distribution of harmful content once they have actual knowledge of the harm it causes.”
