Meta Advised to Enhance Oversight of AI-Generated Fake Content
Meta should increase efforts to address the "proliferation" of fake content created using artificial intelligence (AI) tools on its platforms, according to the social media company's own advisory group.
The 21-member Oversight Board expressed concerns while criticizing Meta for allowing an AI-generated video to remain on its platform without a label. The video falsely depicted extensive damage in Haifa, Israel, allegedly caused by Iranian forces.
The board urged Meta to revise its AI policies, cautioning that the rise of fake AI videos related to global military conflicts has "challenged the public's ability to distinguish fabrication from fact ... risking a general distrust of all information." Meta responded by stating it would label the video in question within seven days.
Role and Influence of Meta's Oversight Board
Established in 2020, Meta's Oversight Board operates as a semi-independent entity overseeing content moderation decisions across Meta's platforms, including Facebook, Instagram, and Threads.
While the board frequently disagrees with Meta's decisions, the company has continued to relax its content policing measures, raising questions about the board's actual influence.
The board highlighted that Meta's handling of the Haifa video reflected previously identified issues concerning "inefficiencies in Meta's current approach during armed conflicts."
Current Content Moderation Practices and Board Recommendations
Meta currently depends largely on users to "self-disclose" when content they post is AI-generated. Absent such disclosure, the company waits for complaints to its content moderation team, which may then decide to apply a label.
The Oversight Board recommended that Meta should proactively label fake AI content "much more frequently." It criticized the company's existing methods as "neither robust nor comprehensive enough to contend with the scale and velocity of AI-generated content, particularly during a crisis or conflict where there is heightened engagement on the platform."
Background of the Haifa Video Incident
The board's review was prompted by a video posted in June by a Facebook account based in the Philippines, which described itself as a news source.
This video was among several fake AI-generated videos posted to social media following the onset of the conflict, featuring content supporting either Israel or Iran. According to a BBC analysis at the time, these videos collectively amassed at least 100 million views.
Despite the Facebook video being AI-generated and depicting fabricated content, and despite Meta receiving multiple user complaints, the company neither labeled the video as AI-generated nor removed it.
It was only after a Facebook user appealed directly to the Oversight Board and the board took up the matter that Meta responded to the concerns.
Meta asserted that the video, which had nearly 1 million views, did not require a label or removal because it did not "directly contribute to the risk of imminent physical harm."
Oversight Board's Ruling and Meta's Response
The board stated that this threshold was too high for labeling AI-generated content, especially when the subject involves armed conflict. On Tuesday, it ruled that the video should have been assigned a "high risk AI label."
"Meta must do more to address the proliferation of deceptive AI-generated content on its platforms... so that users can distinguish between what is real and fake," the board declared.
In its statement, Meta indicated it would follow the board's recommendations when encountering "identical" content in the "same context" as the reviewed video in the future.