Technology experts are skeptical of President Trump’s call for internet companies to work with law enforcement and the Justice Department to develop tools to detect mass shootings before they even happen.

They say the Trump administration has an especially bad track record on addressing violence on social media – and has ignored major opportunities to take action on this front both at home and with other countries. Instead, they lament, Trump’s tech policy focus has been heavily centered on accusing Big Tech of anti-conservative bias – accusations that the companies deny and that have not been backed by substantial evidence.

Trump promised that the “perils of the internet and social media cannot be ignored and they will not be ignored,” after last weekend’s shootings in El Paso, Texas, and Dayton, Ohio. The president spent more time blaming the internet and social media for the shootings than racism, hatred or access to guns, according to an analysis by The Washington Post.

While Trump is now promising to “shine a light” on the dark corners of the internet, experts note that the administration did not even sign on to the Christchurch Call, a key international agreement to curb violent extremism online after the New Zealand shootings. Twitter, Facebook, Google, Amazon and Microsoft all signed on to the May agreement to work closely with each other and the 18 participating governments to stop their platforms from fostering terrorism. The United Kingdom, Canada and France were among those who did sign on.

And violent speech on social media “wasn’t even mentioned” as the White House hosted a high-profile social media summit last month, said Clint Watts, a former FBI special agent now at the Foreign Policy Research Institute. The summit’s circus-like atmosphere seemed designed to amplify conservatives’ accusations of bias against the platforms.

Trump’s call for social media companies to take a greater role in searching for possible predictors of violent acts could also be difficult to square with his charges that Big Tech is already going overboard in its efforts to moderate accounts, Watts points out.


One primary reason social media companies remove or limit the reach of accounts is concern that the speech could lead to violence, he said.

“What the administration is saying is constantly at odds with itself,” Watts said in an interview. “If you’re the main social media platforms, what do you do?”

Case in point: The White House expressed concerns that signing the Christchurch Call could run afoul of the First Amendment, even as Trump pursued his personal crusade to regulate companies for perceived bias against conservatives. Now, he’s opening the door to more drastic moderation from the companies themselves.

It also was unclear from Trump’s brief comments how the potential work with the Justice Department would differ from what technology companies already do to combat violent extremism on their platforms.

Michael Beckerman, chief executive of the Internet Association, said the tech industry is committed to continue working with law enforcement on these issues.

“Violent and terroristic speech violate IA member company policies and have no place either online or in our society. IA members work every day to find dangerous content and remove it from their platforms,” said Beckerman, whose trade group represents some of the large technology companies including Google and Facebook in Washington. “IA members are committed to continuing to work with law enforcement, stakeholders, and policymakers to make their platforms safer, and to prevent people from using their services as a vehicle for disseminating violent, hateful content.”


Facebook, Google’s YouTube and Twitter declined to comment on the record, but the companies all have policies that state they cooperate with law enforcement.

The mainstream technology companies have been stepping up their efforts to combat violence and hate speech on their platforms amid broad global pressure, especially after the public backlash over the New Zealand attacks earlier this year. Still, the spotlight is only growing as the El Paso gunman is believed to have posted a white nationalist manifesto to 8chan. It was the third mass shooting this year to begin with a hateful screed on the website.

Watts says the rise of fringe platforms like 8chan is actually evidence that the major tech companies’ content moderation efforts have been working, forcing extremists to turn to alternative platforms where less moderation occurs.

Watts also was skeptical that the Justice Department has the resources and expertise to conduct research on the rise of domestic terrorism on social media.

Jessica González, co-founder of Change The Terms, a coalition of civil rights groups focused on fighting the spread of hate speech on social media, said the government can’t be trusted to take on this issue, either – and warned new tools they develop could be “dangerous” and risk infringing on free speech. It should be up to the companies to do even more to effectively enforce their policies and invest in better content moderation, González argues.

González also criticized Trump’s own attacks on immigrants on social media as “part of the problem.”

“I’m not interested in seeing predictive policing online,” González said. “I’m interested in seeing him ramp down his rhetoric online.”
