
WASHINGTON — On Thursday, The Washington Post and The New York Times published reports on a handful of recent studies showing that Meta’s “platforms play a critical role in funneling users to partisan information with which they are likely to agree.” The researchers, who relied on access to Facebook and Instagram data to run experiments, analyzed “polarization and people’s understanding and opinions about news, government and democracy” during a brief period before the 2020 election and shortly after.

Meta has seized on these limited studies to claim “there is little evidence that key features of Meta’s platforms alone cause harmful ‘affective’ polarization, or have meaningful effects on key political attitudes, beliefs or behaviors.” But the company is misrepresenting these studies’ findings since Meta allowed researchers to look at data only from a narrow time period. And while Meta hails the research partnership as “unprecedented,” a company spokesperson told WIRED that Meta does not plan to allow similar research in 2024.

The Free Press report “Empty Promises: Inside Big Tech’s Weak Effort to Fight Hate and Lies in 2022” finds that Meta shows little regard for the real-world harms caused by its failure to enforce proper safeguards against hate and lies proliferating on its platforms.

Facebook’s own internal research flagged similar concerns much earlier in the 2020 election cycle. In addition, the Jan. 6 Committee’s draft report on social media found that President Trump’s supporters used Facebook to closely track “his claims about a stolen election and subsequently his calls to descend on D.C. to protest the Joint Session of Congress on January 6th, 2021.” The report also found that Facebook’s “delayed response to the rise of far-right extremism—and President Trump’s incitement of his supporters—helped to facilitate the attack on January 6th.”

Nora Benavidez, Free Press’ senior counsel and director of digital justice and civil rights, said:

“The tech companies’ dramatic retreat from election-integrity efforts should concern us all as we look to 2024. We know from whistleblowers like Frances Haugen that Meta’s 2020 break-glass measures helped keep violent and extremist content from going viral ahead of that year’s presidential election. When Meta turned off those functions, violent content surged and fueled support for the insurrection on January 6.

“The January 6 Committee’s draft report on social media also found that Trump used the platform to help incite the violent insurrection. This is something that Meta CEO Mark Zuckerberg himself admitted to when he said that Trump's posts helped spur the insurrection against a democratically elected government.

“Meta execs are seizing on limited research as evidence that they shouldn’t share blame for increasing political polarization and violence. This calculated spin of these studies is simply part of an ongoing retreat from accountability for the scourge of political disinformation that has spread online and undermined free, fair and safe elections worldwide.

“Platforms should not use this research to justify rolling back efforts to keep violent and extremist content from going viral, particularly in the context of the elections, when access to credible information is so critical for voters and our democracy.

“Manipulation campaigns on platforms don't get spun up just a couple of months before an election — disinformation targets people year-round. It's pure negligence for platforms to treat lies and conspiracy on their services as anecdotal or seasonal. We need a more comprehensive view of political misinformation on platforms to fully understand its impact on elections.

“Studies that Meta endorses, which look piecemeal at narrow time periods, shouldn’t serve as excuses for allowing lies to spread. Social-media platforms should be stepping up more in advance of elections, not concocting new schemes to dodge accountability.”
