The Atlantic
Technology
Yair Rosenberg

Mark Zuckerberg Is Doubly Wrong About Holocaust Denial

This week, Mark Zuckerberg kicked off another firestorm about Facebook when he appeared to defend the intentions of Holocaust deniers on the platform. In an interview with Recode’s Kara Swisher, the Facebook CEO was asked why the site doesn’t just remove malevolent misinformation, like the claim that the Sandy Hook massacre never happened. Zuckerberg responded with an example from his own experience.

“I’m Jewish, and there’s a set of people who deny that the Holocaust happened,” he said. “I find that deeply offensive. But at the end of the day, I don’t believe that our platform should take that down because I think there are things that different people get wrong. I don’t think that they’re intentionally getting it wrong, but I think it’s hard to impugn intent and to understand the intent.” Unless individuals are “trying to organize harm against someone, or attacking someone,” he went on, “you can put up that content on your page, even if people might disagree with it or find it offensive.”

This position is so bizarre, it’s hard to know where to begin. For one, surely spreading hateful misinformation about the Holocaust—designed to mislead the masses and undermine societal awareness of historic anti-Semitic prejudice—constitutes “trying to organize harm.” Moreover, even if we accept Zuckerberg’s questionable claim that some deny the Holocaust out of ignorance rather than malice, this does not absolve Facebook of responsibility for uncritically hosting and spreading that content.

But Zuckerberg’s absurd acrobatics on Holocaust denial don’t mean that his critics, who call for simply erasing any mendacious material, have offered a better solution to the problem.

There are two reasons why censorship is not an adequate response to bigoted misinformation. The first is that censorship suppresses a symptom of hate, not the source. Silencing speech does not rebut it, and punishing those who express hateful views can just as easily make them into martyrs and lend their views greater notoriety. This is one reason why Holocaust denial and anti-Semitism continue to thrive across Europe, far more so than in the United States, despite the many laws against Holocaust denial in countries from France to Belgium. You cannot legislate away a worldview; you need to counter it.

The second reason that censorship is not an effective response to internet anti-Semitism and racism is that erasing online hate erodes awareness of the bigotry in the real world. After all, it’s easy to pretend your society doesn’t have a prejudice problem when your social-media platforms are systematically suppressing all evidence of it.

To take one telling example: In 2012, a blatantly anti-Semitic hashtag went viral in France and soon became the third-most trending topic on Twitter in the entire country. Following the threat of a lawsuit from anti-racist activists, Twitter took down all the offending tweets. In the years since, Jews in France have been victimized by a wave of brutal and violent attacks. In retrospect, the viral anti-Semitic hashtag was a warning that was swept under the rug. Its popularity exposed France’s anti-Semitic underbelly and made it impossible to ignore, splashing the evidence inconveniently across the country’s social-media feeds—until it was made to disappear.

Now imagine that Twitter had implemented preemptive censorship protocols like those being urged on Facebook and had immediately squelched the hashtag with an anti-hate algorithm: The public at large would never have known about this outpouring of prejudice, leaving it free to propagate unchecked. Think about all the instances of bigotry by those in power—from the racist internet rants of police officers to the D.C. politician who posted a video claiming Jewish bankers control the weather—that we wouldn’t know about if Facebook had instantly censored them. Think of the lost opportunities to address and counter those sentiments.

Truly tackling the problem of hateful misinformation online requires rejecting the false choice between leaving it alone or censoring it outright. The real solution is one that has not been entertained by either Zuckerberg or his critics: counter-programming hateful or misleading speech with better speech.

How would this work in practice?

Take the Facebook page of the “Committee for Open Debate on the Holocaust,” a long-standing Holocaust-denial front. For years, the page has operated without any objection from Facebook, just as Zuckerberg acknowledged in his interview. Now, imagine if instead of taking it down, Facebook appended a prominent disclaimer atop the page: “This page promotes the denial of the Holocaust, the systematic 20th-century attempt to exterminate the Jewish people, which left 6 million of them dead, alongside millions of political dissidents, LGBT people, and others the Nazis considered undesirable. To learn more about this history and not be misled by propaganda, visit these links to our partners at the United States Holocaust Memorial Museum and Israel’s Yad Vashem.”

Obviously, this intervention would not deter a hardened Holocaust denier, but it would prevent the vast majority of ordinary readers who might stumble across the page and its innocuous name from being taken in. A page meant to promote anti-Semitism and misinformation would be turned into an educational tool against both. The same could easily be done for pages and posts promoting conspiracy theories ranging from 9/11 trutherism to Islamophobic obsessions with impending Sharia law, working with partners from the NAACP to the Anti-Defamation League to craft relevant responses and source materials.

Conspiracy theorists and racists rely on algorithms to surface their content for unsuspecting readers, whether in a YouTube sidebar or in a news feed. Adding the internet equivalent of a “surgeon general’s warning” to their misinformation, along with specific links to rebut it, would significantly undercut their ability to mislead. Such an approach would have the added advantage of demoralizing the propagandists, who revel in the prospect of ensnaring naïve readers but would now find themselves undermined at the outset. The trolls would find themselves trolled. Louis Farrakhan could no longer rely on Facebook shares to gain new converts, and white-supremacist outlets like the Occidental Observer would not easily be able to insinuate themselves into respectable conversations.

It’s a technique I’ve tested myself against some of Twitter’s most pernicious bad actors: impersonator trolls. Among its colorful array of bigots, Twitter plays host to racists who assume the identities of Jews and other minorities, insinuate themselves into conversations with high-profile users, and then proceed to say horrifying and offensive things, defaming the group they claim to belong to in the process. To counter this insidious attempt to slander minorities, together with the developer Neal Chandra, I built Impostor Buster, a Twitter bot that unmasked the impersonators by automatically alerting any unsuspecting user they attempted to engage. The bot proved remarkably effective in derailing the deception—so effective, in fact, that the impersonators deluged Twitter with complaints until the service removed it.
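The article does not publish the bot’s internals, so the following is only a minimal, hypothetical sketch of the general pattern a bot like that could follow: scan recent replies, check the author against a curated list of accounts already identified as impersonators, and post a public warning to the user being engaged. The client wrapper, the blocklist, and the warning text here are illustrative assumptions, not the actual Impostor Buster implementation.

```python
# Hypothetical sketch of an impersonator-alert bot in the spirit of
# Impostor Buster. Nothing here is the real bot's code: the client
# wrapper, the blocklist, and the warning text are placeholders for
# how such a counter-speech bot could be structured.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Tweet:
    tweet_id: int
    author: str              # screen name of the account that tweeted
    reply_to: Optional[str]  # screen name being engaged, if any


class TwitterClient:
    """Stand-in for a real Twitter API wrapper (assumed, not an actual library)."""

    def recent_replies(self) -> List[Tweet]:
        # A real implementation would poll the API for fresh reply tweets.
        return []

    def post_reply(self, tweet_id: int, text: str) -> None:
        # A real implementation would post the reply through the API.
        print(f"reply to {tweet_id}: {text}")


# Curated list of accounts previously identified as impersonators (illustrative).
KNOWN_IMPERSONATORS = {"example_impersonator_handle"}


def run_once(client: TwitterClient) -> None:
    """Scan recent replies and warn any user an impersonator has engaged."""
    for tweet in client.recent_replies():
        if tweet.author.lower() in KNOWN_IMPERSONATORS and tweet.reply_to:
            client.post_reply(
                tweet.tweet_id,
                f"@{tweet.reply_to} Heads up: @{tweet.author} has been "
                "identified as an account impersonating members of the "
                "community it is defaming.",
            )


if __name__ == "__main__":
    run_once(TwitterClient())
```

The point of the sketch is the shape of the intervention the article describes: detection paired with an automatic, public correction addressed to the person being deceived, rather than deletion of the offending account.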

To be clear, such interventions should be restricted to extreme cases of fairly universal agreement: Holocaust denial, Pizzagate, Sandy Hook truthers, and so on. Facebook and other services should not be in the business of adjudicating political disputes, something no corporation should be trusted to do. But simply tackling the obvious instances of misinformation on Facebook in this way would do much to alleviate the problem.

In any case, whether or not Facebook adopts this specific approach, the key to countering today’s propagandists is to observe how they operate and turn their tools against them. We need more creative efforts like these to turn back the tide against online abuse. Only by learning to troll for truth in this manner will we be able to rescue our platforms from those who would corrupt them.
