Propaganda accounts controlled by foreign entities aiming to influence U.S. politics are flourishing on X even after they’ve been exposed by other social media platforms or criminal proceedings, a Washington Post analysis shows.

Elon Musk heads to a September Senate hearing on artificial intelligence. The Washington Post

Previously, tech companies including Twitter, Facebook owner Meta and Google’s YouTube worked with each other, outside researchers and federal law enforcement agencies to limit foreign interference campaigns, following revelations that Russian operatives used fake social media accounts to spread misinformation and exacerbate divisions in 2016.

But X has been largely absent from that effort since Elon Musk bought it in 2022, when it was still Twitter, and for months hasn’t sent representatives to biweekly meetings in which the companies share notes on networks of fake accounts they are investigating or planning to take down, according to other participants. “They just kind of disappeared,” one said.

As a result, accounts spreading disinformation that other social media companies have taken down remain active on X, allowing that disinformation to spread from there, including back to the other platforms.

“Anyone trying to run a disinformation campaign is going to do it across multiple mainstream platforms,” said Yael Eisenstat, a senior fellow at the nonprofit Cybersecurity for Democracy. “With foreign influence, we are less protected than we were in 2020.”

The last X representative to attend one of the information-sharing sessions was Ireland-based expert Aaron Rodericks, said the people familiar with the meetings, who spoke on the condition of anonymity to discuss internal matters. Rodericks was suspended from X after liking posts critical of Musk and is suing him and the company. Before that, Twitter’s representative was its safety chief, Yoel Roth, who resigned not long after Musk’s takeover and had to flee his home after Musk wrongly implied that he was soft on pedophiles.


One result: Of the 150 artificial influence accounts on X that Meta identified in a series of public reports last year, 136 were still present on X as of Thursday evening, according to The Post’s review. That includes a Turkey-based account with more than 1 million followers and five other accounts that have X’s blue check mark designating them as verified.

Most troubling to some researchers, out of 123 accounts that Meta called out in May, August and December for participating in deceptive China-based campaigns, all but eight remain on X.

Meta said this week that such China-based campaigns have been multiplying: Of 10 networks taken down by the company since 2017, six were identified in the past year.

“There has been a markedly increased emphasis in [Communist] Party leadership in taking a much more robust approach to influencing foreign audiences through all tools available at their disposal,” said Kieran Green, an analyst for advisory firm Exovera and the lead author of a study being published Friday on Chinese censorship and propaganda for the U.S.-China Economic and Security Review Commission, a body Congress created in 2000 to monitor U.S.-China relations.

“Methods include flooding hashtags with junk, impersonating high-profile experts that are critical of the government and using bot accounts to give the false impression of social consensus,” Green said. “The object is not necessarily to change hearts and minds but to muddy the discourse to the degree that it’s impossible to form an anti-China narrative.”

Meta and YouTube declined to comment. X did not respond to a request for comment.


The retreat by X is just one of the new challenges in the quest to counter determined foreign interference.

The U.S. government stopped warning social networks about disinformation campaigns in July after a court ruling barred some communications between the White House and tech companies over censorship concerns. Tech companies have also slashed thousands of workers, some of whom were responsible for guarding their platforms against misinformation, while reversing policies prohibiting some election-related lies.

“The industry has sort of regressed. We staffed up, and got everything into what was going to be the best trust and safety of its time before the 2020 election. And, all of that has now gone back to pre-2016 preparedness,” said Anika Collier Navaroli, a senior fellow at the Tow Center for Digital Journalism at Columbia University and a former senior Twitter policy official.

“You’re seeing the lack of communication between government and companies, the lack of communication between companies and companies. That’s something that took a very long time to work on,” Navaroli said.

In addition, probes by House Republicans and lawsuits by conservative activists have forced some disinformation researchers to rethink efforts to study or counter the spread of online misinformation as they battle accusations that their work leads to censorship.

Though some Chinese propaganda is focused on deflecting concerns about its human rights record, treatment of Hong Kong and ambitions in the South China Sea, it has increasingly sought to stoke existing U.S. divisions in the same way Russia has, researchers said. Chinese influence operations have rapidly expanded to more platforms and more languages, Microsoft reported in September.


In the past few weeks, one of the X accounts listed by Meta as part of a covert China-based campaign, @boltinMich2800, has posted links to stories about hot-button political issues on obscure media sites. Some of them covered political events such as the Ohio governor’s veto of a bill restricting transgender care for teens or the qualification of candidates for a televised debate.

Other posts and reposts promoted far-right ideas, including “banning” liberal financier George Soros from politics.

Another account in the same network, @JeroenWolf52208, has been posting right-wing takes on race and the Texas border controversy, as well as a story on Israel’s war plans from Russia’s government-controlled RT. The two accounts did not respond to direct messages sent on X.

A separate preliminary analysis by Stanford University researchers of Meta’s November quarterly report on what is known as coordinated inauthentic behavior found that 86 of those accounts were still active on X. Of those, two were connected to Russia, three to Iran and the remaining 81 to China.

The analysis, shared exclusively with The Post, found that the majority of the China-based accounts pose as North Americans, sometimes scraping photos from real Americans’ LinkedIn pages but changing the names. The accounts often post about China, Elon Musk, President Biden and the U.S. election.

“The presence of these accounts reinforces the fact that state actors continue to try to influence U.S. politics by masquerading as media and fellow Americans,” said Renée DiResta, the technical research manager for the Stanford Internet Observatory. “Ahead of the 2022 midterms, researchers and platform integrity teams were collaborating to disrupt foreign influence efforts. That collaboration seems to have ground to a halt; Twitter does not seem to be addressing even networks identified by its peers, and that’s not great.”


Meanwhile, an account accused of being run by the Chinese Ministry of Public Security is still on X 10 months after U.S. prosecutors cited its tweets in a criminal complaint. It posted as recently as Jan. 24.

The Post was able to link the account, @Bag_monk, to the “912 Special Project Working Group” by comparing the text of two of its tweets to those referenced by prosecutors in an April 2023 complaint; the wording matches verbatim.

Prosecutors at the time described the unit as being part of a “broad effort to influence and shape public perceptions of the [Chinese] government, the [Chinese Communist Party] and its leaders in the United States and around the world.”

The account appears tied to dozens of accounts on X and other platforms that post and repost inflammatory messages. On Thursday, the London nonprofit Institute for Strategic Dialogue issued a report connecting some of the posts cited in the complaint to Spamouflage, a seven-year-old covert influence campaign suspected of being driven by the Chinese Communist Party.

The content includes negative depictions of both Biden and former president Donald Trump, as well as material that “appears aimed at creating a sense of dismay over the state of America without any clear partisan bent. It focuses on issues like urban decay, the fentanyl crisis, dirty drinking water, police brutality, gun violence and crumbling infrastructure,” wrote Elise Thomas, an analyst at the institute.

Some images shared by @Bag_monk and similar accounts show hands with six fingers or body parts that blend together, an indicator that they may have been created with artificial intelligence tools.

The apparent AI-generated images also depict Trump in an orange prison jumpsuit. One such graphic, which also depicted Biden and his son Hunter, was posted by 10 separate accounts. Their posts all used the same caption and were published after federal prosecutors last year charged Trump with illegally retaining classified documents. The posts garnered around 16,900 views altogether, according to X’s public tabulation.

Researchers said that China, again like Russia, has also begun planting articles advancing its political views on what appear to be local news sites. One campaign that the University of Toronto’s Citizen Lab recently attributed to a Beijing marketing company included 123 websites in 30 countries.

Will Oremus contributed to this report.
