Pressured by governments around the world, four companies operating some of the world’s most popular Internet sites and services – Facebook, Twitter, Google’s YouTube and Microsoft – announced this week a joint effort to censor “violent terrorist imagery or terrorist recruitment videos or images.” It’s an effort to fight a bad use of technology with more technology, in the hope of curtailing the use of social media by Islamic State and other terrorist organizations to recruit followers and promote their murderous agendas.

According to a blog post by Google, whenever one of the four companies deletes a terrorism-related image or video, it will have the option of submitting the file’s unique identifier to a shared database. The other companies will then review the file to see whether it violates their terms of service, and if so, they can use its unique identifier to delete it from their pages.
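In rough terms, the mechanism is a shared blacklist of file fingerprints. The sketch below is purely illustrative and not any company's actual system: it uses an exact cryptographic hash (SHA-256) for simplicity, whereas shared databases of this kind typically rely on perceptual hashes that still match a file after it has been cropped or re-encoded.

```python
# Illustrative sketch only -- not the companies' actual implementation.
# Each participant hashes a file it has removed and contributes the hash;
# others can check new uploads against the shared pool. A real system would
# use a perceptual hash that survives re-encoding; SHA-256 matches exact
# copies only.
import hashlib

shared_hashes: set[str] = set()  # stand-in for the cross-company database

def fingerprint(file_bytes: bytes) -> str:
    """Return a unique identifier for a file's exact contents."""
    return hashlib.sha256(file_bytes).hexdigest()

def contribute(file_bytes: bytes) -> None:
    """A company removed this file and opts to share its identifier."""
    shared_hashes.add(fingerprint(file_bytes))

def is_flagged(file_bytes: bytes) -> bool:
    """Check an upload against the shared pool. A match triggers review
    against the checking company's own terms of service; it does not
    automatically delete the file."""
    return fingerprint(file_bytes) in shared_hashes
```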

The database is similar to the one the National Center for Missing and Exploited Children uses to catalog and remove child pornography. The goal of the companies' effort is similar, too: to push jihadist material out of the most widely used spaces online, making it harder for the Islamic State and its ilk to radicalize people remotely.

The announcement is both welcome and worrisome. Counterterrorism experts have long sought to close the spigot of jihadist propaganda online, arguing that social media plays an integral role in sustaining groups like the Islamic State. Yet Monday's brief, relatively low-key announcement leaves unanswered many questions about what criteria the companies will use to decide what to block, how proactive they will be in censoring content, and whether users will be able to appeal when their images or videos are blacklisted.

This isn’t mere quibbling over the finer points of free-speech rights (and besides, the First Amendment isn’t applicable here because it limits only censorship by the government, not by private companies). The four entities involved have enormous power to shape users’ experiences online, which is one reason they should remain as neutral as possible when it comes to the content they host. We may all agree that jihadists shouldn’t be allowed to use these platforms to distribute beheading videos, but that’s just a fraction of the material used to recruit and radicalize. Should speeches by Islamic State leaders and sermons by extremist clerics be censored too? What about news photographs of, say, victims of Israeli or American air strikes, or photos of detainees tortured in Abu Ghraib? Bear in mind that the database doesn’t distinguish between different potential uses of blocked imagery; if Facebook censored a video circulated by Islamic State propagandists, CNN wouldn’t be able to use that footage on its Facebook pages either.

Clearly, the line between what is and isn't acceptable will be hard to draw. To their credit, at least some of the four companies plan to take a narrow view of what to block, at least initially, and to have humans, not software, decide when a file needs to be censored. That will give them more time to develop clearer standards for determining what content is unacceptable, as well as avenues for users to appeal decisions to censor.


Admittedly, the threat posed by jihadist groups is so great that the usual bromides about defeating bad speech with more speech don't offer much reassurance. And the longer the companies maintained their hands-off posture, the more they risked being compelled by Congress or European governments to act — a development that could have been far more threatening to the free flow of information online.

Yet the slope the companies have started down is slippery. Governments will surely push to extend the blocking effort to more online sites and services, such as Telegram (a messaging app that's popular with extremists) and Google's search engine. They may also pressure companies to use technology to scour their sites proactively for suspected terrorist content, as some already do for child pornography, rather than letting human reviewers decide what to block.

And if the effort to interdict jihadist propaganda is successful, what other kinds of content will governments want these platforms to exclude? Hate speech? Indecent material? Fake news? Cartoons that ridicule the government — a form of expression that’s a crime in some parts of the world? The precedent being set here is that a handful of powerful private companies could take the place of courts and juries in setting limits on speech that, because of these companies’ collective dominance online, would apply broadly across the Internet.

That’s why the collaboration on violent terrorist imagery causes anxiety in spite of the benefits it could yield in the battle against jihadism.

The participating companies need to offer much more clarity and transparency about the way they will judge content, as well as an expeditious process for appealing decisions to block files. And the public needs to guard against the government pressuring these companies to expand their coordinated takedowns from truly dangerous material to more kinds of speech that the government simply doesn’t like.

