People familiar with Meta’s internal Q&A said chief AI officer Alexandr Wang and product chief Chris Cox framed Mango as the company’s flagship visual model, designed to generate broadcast-quality video sequences from short prompts. Employees were told to expect Mango to support both still images and loopable clips so it can power Instagram story filters, Quest capture tools, and enterprise creative suites.

The initiative runs in parallel with Avocado, a next-generation language model aimed at reasoning and coding tasks. Meta wants both models ready in early 2026 so developers can mix Mango’s renders with Avocado-driven narration, ecommerce copy, or in-app assistants. Meta’s stock popped 2.3% after The Wall Street Journal reported on the push last week, underscoring investor appetite for a credible Sora rival.

The urgency is obvious: OpenAI, Runway, and Google are already demoing controllable video features, while Luma’s Ray3 Modify is picking up film-studio pilots. Meta sources said Mango is supposed to differentiate itself through longer clips, brand-safe guardrails, and hooks into Meta’s advertising stack so marketers can generate assets directly inside Business Suite.

Meta is keeping Mango inside a “superintelligence” lab that CEO Mark Zuckerberg greenlit this fall, pooling compute budgets across Instagram, WhatsApp, and Reality Labs. Contractors described steady GPU deliveries to Meta’s Iowa data center, where Mango’s training run is scheduled to start in Q1 2026.

A public preview hasn’t been scheduled, but insiders expect Mango demos to surface at Meta’s 2026 developer conference if training stays on track. Until then, the company is funneling creators toward Emu and Imagine, its existing image tools, while engineers race to make Mango the default engine for every Meta app that touches video.