Meta Platforms Inc., the parent company of Facebook and Instagram, is changing its policies to allow more AI-generated content to stay up on its sites, even if that content is misleading.

The company will begin to label, rather than remove, misleading content that is AI-generated but doesn't violate any other policies. Previously, Meta's "manipulated media" policy stated that the company would remove videos that had "been edited or synthesized" in ways "that are not apparent to an average person and would likely mislead an average person" into thinking someone in the video had said something they did not.

The new policy extends to "digitally created or altered images, video or audio," the company wrote Friday in a blog post. Misleading posts can still be fact-checked and labeled as such, a spokesperson confirmed.

Meta announced the change after its Oversight Board, an independent group of academics and researchers that reviews the company's content moderation decisions, criticized the "manipulated media" policy in February, calling it "incoherent." The board also recommended using more labels on AI-generated content instead of removing the videos or posts.

“We agree that providing transparency and additional context is now the better way to address this content,” Meta wrote in the blog post. The company will label AI-generated video, audio and images as “Made with AI” under the new policy, which will begin in May.

How Meta deals with AI-generated content has been an important policy topic this year given the looming US elections in November. Meta has previously talked about the need to automatically label AI-generated posts, including those posts created using competitors’ technology.
