
Free Speech Social Media Doesn’t Exist

Why laws banning hate speech and misinformation are already redundant.

By Jacob Mchangama, CEO of The Future of Free Speech.
The logos of Google, Facebook, Twitter, TikTok, Snapchat, and Instagram shown on a computer screen in Lille.

On the day Meta’s new app, Threads, launched, CEO Mark Zuckerberg explained that it would be “an open and friendly public space for conversation.” In a not-so-subtle dig at Twitter, he argued that keeping the platform “friendly” as it expands would be crucial to its success. Within days, however, Media Matters claimed that “Nazi supporters, anti-gay extremists, and white supremacists” were “flocking to Threads,” posting “slurs and other forms of hate speech.” The group argued that Meta did not have strict enough rules, and that Instagram, the platform that Threads is tied to, has a “long history of allowing hate speech and misinformation to prosper.”

Such concerns about hate speech on social media are not new. Last year, EU Commissioner for Internal Market Thierry Breton called efforts to pass the Digital Services Act “a historic step towards the end of the so-called ‘Wild West’ dominating our information space,” which he described as rife with “uncontrolled hate speech.” In January 2023, experts appointed by the United Nations Human Rights Council urged platforms to “address posts and activities that advocate hatred … in line with international standards for freedom of expression.” This panic has led to an explosion in laws that mandate platforms remove illegal or “harmful” content, including in the EU, Germany, Brazil, and India.

These concerns imply that social media is lawless mayhem when it comes to hate speech. But this characterization is wrong. Most platforms have strict rules prohibiting hate speech, which have expanded significantly over the past several years. Many of these policies go far beyond both what’s required and what’s permissible under international human rights law (IHRL).

We know this because the Future of Free Speech project at Vanderbilt University, which I direct, published a new report analyzing the hate speech policies of eight social media platforms—Facebook, Instagram, Reddit, Snapchat, TikTok, Tumblr, Twitter, and YouTube—from their founding until March 2023.

While none of these platforms are formally bound by IHRL, all except Reddit and Tumblr have committed to respect international standards by signing on to the U.N. Guiding Principles on Business and Human Rights. Moreover, in 2018, the U.N. special rapporteur on freedom of opinion and expression proposed a framework for content moderation that “puts human rights at the very centre.” Accordingly, we compared the scope of each platform’s hate speech policy to Articles 19 and 20 of the U.N.’s International Covenant on Civil and Political Rights (ICCPR).

Article 19 ensures “everyone … the right to freedom of expression,” including the rights “to seek, receive and impart information and ideas of all kinds, regardless of frontiers … through any … media of his choice.” However, this right can be subjected to restrictions that are “provided by law and are necessary” for compelling interests, such as “respect of the rights or reputations of others.” Article 20 mandates that “any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law.” Any restrictions on freedom of expression under Articles 19 and/or 20 must satisfy strict requirements of legality, legitimacy, and necessity. These requirements are meant to protect against overly vague and broad restrictions, which can be abused to prohibit political and religious dissent, and to safeguard speech that may be deeply offensive, but doesn’t reach the threshold of incitement.

So how do platform hate speech policies measure up to these standards? In some areas, they align closely. A decade ago, more than half of the eight platforms did not have an explicit hate speech prohibition. In 2014, only 38 percent of the analyzed platforms prohibited “hate speech” or “hateful content.” By 2018, this figure had risen to 88 percent—where it remains today. Similarly, a decade ago, only 25 percent of platforms banned incitement to or threats of violence on the basis of protected characteristics, but today, 88 percent of the platforms do. These changes generally align with the prohibition on incitement to hatred under IHRL.

In other ways, however, platforms’ hate speech restrictions have mushroomed beyond the human rights framework. In 2014, no platforms banned dehumanizing language, denial or mocking of historical atrocities, harmful stereotypes, or conspiracy theories in their hate speech policies—none of which are mentioned by Article 20. By 2023, 63 percent of the platforms banned dehumanization, 50 percent banned denial or mocking of historical atrocities, 38 percent banned harmful stereotypes, and 25 percent banned conspiracy theories. It is doubtful that these prohibitions satisfy Article 19’s requirements of legality and necessity.

Many platforms’ hate speech policies also cover identity-based characteristics that are not included in Article 20. The average number of protected characteristics covered by platform policies has grown from fewer than five before 2011 to 13 today. Several of the platforms prohibit hate speech targeting characteristics such as weight, pregnancy, age, veteran status, disease, or victimhood in a major event. Under IHRL, most of these characteristics do not enjoy the same protected status as race, religion, or nationality, which have frequently been used as the basis to incite discrimination and hostility against minorities, sometimes contributing to mass atrocities.

Our research cannot identify the exact causes of this scope creep, but platforms have clearly faced mounting financial, regulatory, and reputational pressure to police additional categories of objectionable content. In 2020, more than 1,200 business and civil society groups took part in the Stop Hate for Profit boycott, which used advertisers’ financial clout to pressure Facebook into policing more hateful content. Such concerted pressure creates an incentive to take a “better safe than sorry” approach when it comes to moderation policies. The expansion in protected characteristics may reflect what University of California, Los Angeles, law professor Eugene Volokh calls “censorship envy,” where groups pressure platforms to afford them protection based on the inclusion of other groups, making it difficult for platforms to deny any group without appearing biased.

Most platforms refuse to share raw data with researchers, so identifying any causal link between changes in policy scope and enforcement volume is difficult. However, studies in the United States and Denmark suggest that hate speech comprises a relatively small proportion of social media content. There are also numerous examples of hate speech policies causing collateral damage to political speech and dissent. In May 2021, Meta admitted that mistakes in its hate speech detection algorithms led to the inadvertent removal of millions of pro-Palestinian posts. In 2022, Facebook removed a post from a user in Latvia that cited atrocities committed by Russian soldiers in Ukraine, and quoted a poem including the words “kill the fascist,” a decision that the platform’s Oversight Board overturned partially based on IHRL.

The enforcement of hate speech policies can also lead to the erroneous removal of humor and political satire. Facebook’s own data suggests a massive drop in hate speech removals due to AI improvements that allowed it to identify posts that “could have been removed by mistake without appropriate cultural context,” such as “humorous terms of endearment used between friends.” In 2021, the U.S. columnist and humorist David Chartrand described how it took Facebook all of three minutes to remove a post of his that read “Yes, Virginia, there are Stupid Americans” for violating its hate speech policies.

Our research shows that the hate speech policies of many platforms currently don’t comply with the human rights standards they claim to respect. So perhaps the right analogy for social media is not a lawless Wild West—but rather a place where no one knows when or how the ever-changing rules will be enforced. If so, the right path forward is not to make these rules even more complex.

Instead, platforms should consider directly tying their hate speech rules to international human rights law. This approach would cultivate a more transparent and speech-protective environment, though it would not eliminate erroneous or inconsistent policy enforcement and would leave up a lot of offensive speech.

Alternatively, platforms could decentralize content moderation. This option would give users the ability to opt out of seeing content that is offensive to them or contrary to their values, while also protecting expression and reducing platform power over speech. Meta seems to envisage steps in this direction by making Threads part of the so-called fediverse, which would let Threads users connect with users on platforms and protocols not controlled by Meta. Combining IHRL and decentralization is also possible: content moderation and curation could be decentralized, with the requirement that third-party algorithms still respect international human rights law. None of these options will be perfect or satisfy everyone. But despite the very real challenges and trade-offs that they entail, they are preferable to the status quo.

Jacob Mchangama is CEO of The Future of Free Speech, the author of Free Speech: A History From Socrates to Social Media, and Senior Fellow at the Foundation for Individual Rights and Expression.

