WASHINGTON — Twitter is now using spam-fighting technology to seek out accounts that might be promoting terrorist activity and is examining other accounts related to those flagged for possible removal, the company announced Friday.

The announcement showed Twitter's efforts to automatically identify tweets supporting terrorism, reflecting increased pressure from the U.S. government for social media companies to respond to abuse more proactively.

Child pornography was previously the only kind of abuse automatically flagged for human review on social media, using a different technology that matches content against a database of known images.

Twitter also said Friday it has suspended more than 125,000 accounts for threatening or promoting terrorist acts, mainly related to Islamic State militants, in the last eight months.

Tech companies are dedicating more resources to tracking reports of violent threats. Twitter said Friday that it has increased the size of its team reviewing reports in order to reduce its response time “significantly.” The San Francisco-based company also changed its policy in April, adding language to make clear that “threatening or promoting terrorism” specifically counts as abusive behavior and violates its terms of use.

The White House on Friday said Twitter’s announcement was “very much welcome.”

“The administration is committed to taking every action possible to confront and interdict terrorist activities wherever they may occur, including in cyberspace, and we welcome constructive steps from our private sector partners,” the White House said.
