Officials in Brazil, India, and the U.S. now issue “do not share” bulletins several times a week after AI-generated footage sparks panics over fabricated wildfires, police raids, or celebrity scandals. Even when creators leave Sora watermarks in the corner, viewers crop or re-edit the clips before reposting them to Facebook Reels or WhatsApp groups, stripping away every hint that the footage is synthetic.
OpenAI, Meta, and TikTok all say they apply automatic “AI-generated” disclosures, yet internal metrics shared with regulators show those badges reduce sharing by only 9%. People still pass around explainers claiming the clips are “real but stylized,” a phrase misinformation peddlers use to dodge takedowns. Researchers call the resulting torrent “AI slop,” content designed to engage rather than inform, because it is cheap to manufacture and impossible to debunk at scale.
The distrust has real economic consequences: travel boards and retailers that rely on user-generated videos are watching legitimate posts get buried beneath surreal footage of “tsunami weddings” and “sky jellyfish” rendered in Sora. Ad buyers are pressing platforms for harder filters, while lawmakers in the EU and California are drafting bills that would force companies to watermark every generation at the file level and offer takedown tools for public officials.
OpenAI says upgrades are coming, including invisible provenance markers and a broadcast API that newsrooms can ping to verify whether a viral clip touched Sora’s servers. Until those tools ship, however, “AI slop” will keep outrunning the fact-checkers tasked with cleaning it up.
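OpenAI has not published how that verification API would work, so the following is only a minimal sketch of one plausible design: a newsroom fingerprints a clip and looks the fingerprint up against a provenance database. The endpoint URL, request fields, and response shape (generated_by_sora, created_at) are all hypothetical, and a production service would more likely rely on a perceptual hash or signed C2PA-style metadata rather than a plain byte-level SHA-256.

```python
import hashlib
import json
import urllib.request

# Hypothetical endpoint: OpenAI has not published details of the
# verification API, so the URL and payload schema are placeholders.
VERIFY_ENDPOINT = "https://example.com/v1/provenance/verify"


def fingerprint_clip(path: str) -> str:
    """Return a SHA-256 hex digest of the raw video file.

    A real provenance service would likely use a perceptual hash that
    survives cropping and re-encoding; a byte-level hash only matches
    files that have not been touched since generation.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def check_provenance(path: str, api_key: str) -> dict:
    """POST the clip's fingerprint to the hypothetical verification API."""
    payload = json.dumps({"sha256": fingerprint_clip(path)}).encode("utf-8")
    request = urllib.request.Request(
        VERIFY_ENDPOINT,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)


if __name__ == "__main__":
    result = check_provenance("viral_clip.mp4", api_key="NEWSROOM_KEY")
    # Assumed response shape: {"generated_by_sora": true, "created_at": "..."}
    print("Sora-generated" if result.get("generated_by_sora") else "No match")
```

Even a lookup service this simple would only help if platforms stop re-encoding uploads in ways that break the fingerprint, which is why the invisible provenance markers mentioned above matter more than any single API.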