Workers install lighting on an “X” sign atop the headquarters of the company formerly known as Twitter in San Francisco, on July 28. New research finds X has emerged as a conduit to mainstream exposure for a fresh wave of automated hate memes, generated using cutting-edge AI image tools. Noah Berger/Associated Press, file

It looks like a poster for a new Pixar movie. But the film’s title is “Dancing Israelis.” Billing the film as “a Mossad/CIA production,” the poster depicts a caricatured stereotype of a dancing Jewish man whose boot is knocking down the World Trade Center towers – a reference to antisemitic 9/11 conspiracy theories.

Posted to X on Oct. 27 by a verified user with about 220,000 followers who bills himself as an “America-first patriot,” the image garnered about 190,000 views, including 8,000 likes and 1,500 reshares. Content moderators at X, formerly Twitter, took no action against the post, and the user posted it again on Nov. 16, racking up an additional 194,000 views. Both tweets remained on the site as of Wednesday, even after researchers flagged them as hate posts using the social network’s reporting system.

An antisemitic post on Elon Musk’s X is not exactly news. But new research finds the site has emerged as a conduit to mainstream exposure for a fresh wave of automated hate memes, generated by trolls on the notorious online forum 4chan using cutting-edge AI image tools. The research by the nonprofit Center for Countering Digital Hate, shared with and verified by The Washington Post, finds that a campaign by 4chan members to spread “AI Jew memes” in the wake of the Oct. 7 Hamas attack resulted in 43 different images reaching a combined 2.2 million views on X between Oct. 5 and Nov. 16, according to the site’s publicly displayed metrics.

Examples of widely viewed posts include a depiction of U.S. Army soldiers kneeling before a Jewish man on a throne; Taylor Swift in a Nazi officer’s uniform sliding a Jewish man into an oven; and a Jewish man pulling the strings on a puppet of a Black man. The latter may be a reference to the “Great Replacement” conspiracy theory, which was cited as motivation by the 18-year-old white man who slaughtered 10 Black people at a Buffalo, New York, grocery store in May 2022, and which Musk seemed to endorse in a tweet last month.

More than half of the posts were made by verified accounts, whose owners pay X a monthly fee for special status and whose posts are prioritized in users’ feeds by the site’s algorithms. The verified user who tweeted the image of U.S. Army soldiers bowing to a Jewish ruler, with a tweet claiming that Jews seek to enslave the world, ran for U.S. Senate in Utah as a Republican in 2018 and has 86,000 followers on X.

The proliferation of machine-generated bigotry, which 4chan users created using AI tools such as Microsoft’s Image Creator, calls into question recent claims by Musk and X CEO Linda Yaccarino that the company is cracking down on antisemitic content amid a pullback by major advertisers. In a Nov. 14 blog post, X said it had expanded its automated moderation of antisemitic content and provided its moderators with “a refresher course on antisemitism.”

But the researchers said that of 66 posts they reported as hate speech on Dec. 7, X appeared to have taken action on just three as of Monday. Two of those three had their visibility limited, while one was taken down. The Post independently verified that the 63 others remained publicly available on X as of Wednesday, without any indication that the company had taken action on them. Most appeared to violate X’s hateful conduct policy.

Several of the same AI-generated images also have been posted to other major platforms, including TikTok, Instagram, Reddit, YouTube and Facebook, the researchers noted. But the CCDH said it focused on X because the site’s cutbacks on moderation under Musk have made it a particularly hospitable environment for explicitly hateful content to reach a wider audience. The Post’s own review of the 4chan archives suggested that X has been a favored platform for sharing the antisemitic images, though not the only platform.

X’s business is reeling after some of its largest advertisers pulled their ads last month. The backlash came in response to Musk’s antisemitic tweet and a report from another nonprofit, Media Matters for America, that showed posts pushing Nazi propaganda were running alongside major brands’ ads on the site.

Among the companies to pull its spending was Disney, whose brand features prominently in many of the AI-generated hate memes now circulating on X. Speaking at a conference organized by the New York Times last month, Musk unleashed a profane rant against advertisers who paused their spending on X, accusing them of “blackmail” and saying they’re going to “kill the company.” He mentioned Disney’s CEO by name.

The most widely shared post in the CCDH’s research was a tweet that read “Pixar’s Nazi Germany,” with a montage of four AI-generated scenes from an imaginary animated movie, depicting smiling Nazis running concentration camps and leading Jewish children and adults into gas chambers (Pixar is owned by Disney). It was one of the few posts in the study that had been labeled by X’s content moderators, with a note that read, “Visibility limited: this Post may violate X’s rules against Hateful Conduct.” Even so, as of Wednesday, it had been viewed more than half a million times, according to X’s metrics.

Another verified X account has posted dozens of the AI hate memes, including faux Pixar movie posters that feature Adolf Hitler as a protagonist, without any apparent sanction from the platform.

Musk, the world’s richest person, has sued both Media Matters for America and the Center for Countering Digital Hate over their research of hate speech on X. After the latest wave of criticism over antisemitism, Musk announced strict new policies against certain pro-Palestinian slogans. And he visited Israel to declare his support for the country, broadcasting his friendly meeting with the country’s right-wing prime minister, Benjamin Netanyahu.

Yaccarino, who was appointed CEO by Musk in May, said in a November tweet that X has been “extremely clear about our efforts to combat antisemitism and discrimination.” The company did not respond to an email asking whether the antisemitic AI memes violate its policies.

4chan is an anonymous online messaging board that has long served as a hub for offensive and extremist content. When Musk bought Twitter last fall, 4chan trolls celebrated by flooding the site with racist slurs. Early in October of this year, members of 4chan’s “Politically Incorrect” message board began teaching and encouraging one another to generate racist and antisemitic right-wing memes using AI image tools, as first reported by the tech blog 404 Media.

The 4chan posts described ways to evade measures intended to prevent people from generating offensive content. Those included a “quick method” using Microsoft’s Image Creator, formerly called Bing Image Creator, which is built around OpenAI’s Dall-E 3 software and viewed as having flimsier restrictions on sensitive content.

“If you add words you think will trip the censor, space them out from the part of the prompt you are working on,” one 4chan post advised, describing how to craft text prompts that would yield successful results. “Example: rabbi at the beginning, big nose at the end.”

After the Oct. 7 Hamas attack on Israel, the focus among 4chan users on antisemitic content seemed to sharpen. Numerous “AI Jew memes” threads emerged with various sub-themes, such as the “Second Holocaust edition” and the “Ovens Run All Day edition.”

Microsoft’s director of communications, Caitlin Roulston, said in a statement, “When these reports surface, we take the appropriate steps to address them, as we’ve done in the past. … As with any new technology, some are trying to use it in unintended ways, and any repeated attempts to produce content that goes against our policy guidelines may result in loss of access to the service.” Microsoft did not say how many people have been denied access to its imaging program because they violated its rules.

The ability to generate extremist imagery using digital tools isn’t new. Programs such as Adobe Photoshop have long allowed people to manipulate images, without any moderation of the content users create with them.

But the ability to create complex images from scratch in seconds with only a few lines of text, whether in the form of a Pixar movie poster or a photorealistic war image, is different. And the ability of overt hate accounts to be verified and amplified on X has made spreading such messages easier than ever, said Imran Ahmed, CCDH’s CEO. “Clearly the cost of producing and disseminating extremist material has never been lower.”

Sara Aniano, disinformation analyst at the Anti-Defamation League’s Center on Extremism, said AI seems to be ushering in “the next phase of meme culture.”

The goal of extremists in sharing AI hate memes on mainstream social media platforms is to “redpill” ordinary people, meaning to lead them down a path of radicalization and conspiracism, Aniano added. “You can always expect this rhetoric to be in fringe spaces, but they love it when it escapes those spaces.”

Not all of the AI memes flourishing on X are antisemitic. Ashlea Simon, chair of the United Kingdom’s far-right Britain First party, has taken to posting apparently AI-generated images that target Muslim migrants, suggesting that they want to rape white women and “replace our peoples.”

The Britain First party and some of its leaders, boosted by Donald Trump on Twitter in 2017, had been banned from Twitter for hate speech under the previous ownership. But Musk reinstated them soon after buying the company, then gave the party its gold “official organization” verification label in April.

While Musk has said he’s personally against antisemitism, he has at times defended the presence of antisemitic content on X. “Free speech does at times mean that someone you don’t like is saying something you don’t like,” he said in his conversation with Netanyahu in September. “If you don’t have that, then it’s not free speech.”

Ahmed said the problem is that social media platforms, without careful moderation, tend to amplify extreme and offensive viewpoints because they treat people’s shocked and outraged responses as a signal of engagement.

“If you’re Jewish or if you’re Muslim, and every day you open up X and you see new images at the top of your timeline that depict you as a bloodsucking monster, it makes you feel like maybe these platforms, but also society more broadly, might be against you,” he said.
