Facebook chief executive Mark Zuckerberg was “making false and deceptive statements” when he told Congress that the company removes content that violates its hate-speech rules, a lawsuit alleges.

The suit, filed Thursday in D.C. Superior Court, alleges that since 2017, civil rights groups and other experts have brought hundreds of anti-Muslim groups and pages on the platform to Facebook’s attention, but that the company has failed to penalize more than half of them.

It also alleges that Facebook and its top executives violated the D.C. Consumer Protection Procedures Act, under which it is illegal for a company to make material misrepresentations about a good or service. Civil rights group Muslim Advocates, the law firm Gupta Wessler and University of Chicago law professor Aziz Z. Huq brought the suit.

“Every day, ordinary people are bombarded with harmful content that violates Facebook’s own policies on hate speech, bullying, harassment, dangerous organizations, and violence,” the suit alleges. “The anti-Muslim hate that’s pervasive on Facebook presents an enormous problem – both online and in real life.”

Facebook has created an atmosphere where “Muslims feel under siege,” the suit says.

Social media platforms have long faced criticism for facilitating a climate of bigotry, and they are now under even more pressure to monitor their services after mass demonstrations against police brutality and rising violence against Asian Americans. Meanwhile, congressional leaders have repeatedly summoned tech CEOs to testify on the issue, and as recently as last month they said at a hearing that they want an overhaul of how Big Tech and online speech are regulated.


While the First Amendment protects even hate speech – meaning it’s unlikely the government will regulate it – officials including Sen. Mark R. Warner, D-Va., have proposed legislation that would hold tech companies more accountable for discrimination facilitated by their platforms.

Zuckerberg and other Facebook executives have repeatedly told Congress and the public that content that could cause violence or imminent physical harm, or that violates hate-speech policies, will be taken down. The company’s hate-speech policy bans dehumanizing speech, harmful stereotypes, statements of inferiority, expressions of contempt and other attacks against people on the basis of a protected characteristic such as race, disability or gender.

The company has said it is working to overhaul its hate-speech algorithms, and it has banned more than 250 white supremacist groups and 890 militarized social movements. In 2019, the company took down 12 million pieces of content in Facebook groups for violating policies on hate speech.

Between December 2017 and 2019, Muslim Advocates and Megan Squire, a computer science professor at Elon University who studies right-wing extremism online, periodically brought Facebook lists of at least 227 anti-Muslim groups and pages with names like “Infidels Unite Against Islam,” “Death to Islam Undercover,” “Veterans Against Islamic Filth,” “Purge Worldwide,” “The Cure for the Islamic disease in your country,” “Filth of Islam” and “Death to Murdering Islamic Muslim Cult Members,” according to the filing.

In some cases, Facebook did not respond, despite numerous follow-ups, said Muslim Advocates Legal Director Mary Bauer. In other instances, the company refused to remove the content.

Muslim Advocates says 120 of these groups and pages are still active.


Among those still active, Muslim Advocates alleges, are groups associated with violence.

Zuckerberg told the House Financial Services Committee in October 2019: “If anyone, including a politician, is saying things that can cause, that is calling for violence or could risk imminent physical harm … we will take that content down.”

But the company historically has given wide latitude to divisive content that falls short of outright calls for violence – and has sometimes ignored content with more directly violent ties.

For years, Facebook allowed the proliferation of QAnon, an extremist ideology that has radicalized its followers, even though perpetrators of several violent acts had cited such beliefs as a motivation for their crimes. The company took a much harder line against QAnon last year.

In 2015, when then-candidate Donald Trump posted a video calling for a ban on Muslims entering the United States, the company made an exception to its hate-speech rules for newsworthy content. In 2020, when Trump lifted language from a segregationist to warn that racial justice demonstrators might be shot, Facebook determined that the divisive language did not constitute hate speech.

A militia group called the Kenosha Guard last year created a Facebook event page that encouraged people to “take up arms” and defend the Wisconsin city from “evil thugs.” Two protesters opposing police violence there were subsequently fatally shot; the man accused of killing them had traveled there armed. BuzzFeed reported that the page was flagged to Facebook more than 450 times, yet it was not taken down until after the shootings – and then by the page’s administrator, not by Facebook.
