The clearest public signal is that Meta is trying to balance two things at once:
First, Meta wants a flagship model line that can power its consumer products at enormous scale and Meta is willing to keep those models more closed at launch than it did in earlier Llama eras. With Muse Spark, Meta shifted away from immediate open release by offering a limited “private preview” and keeping technical specifics private.
Second, Meta still wants to remain a major “open” player. Reporting states Meta intends to eventually offer versions under an open source licence, but only after evaluating safety risk and by keeping “some pieces proprietary,” Meta can reduce the chance that open releases create new misuse pathways. This fits a broader industry reality highlighted by major safety work in 2026: general purpose AI capabilities are improving fast, but risks (misuse, malfunctions, systemic disruption) are rising too especially for cyber abuse and harmful knowledge. “Go fast, open everything immediately” is harder to justify when models can be used for advanced malware, persuasion scams, and more.

The “open source” catch: what “open” will likely look like in 2026
Let’s keep it simple, : “open source AI model” can mean different things.
In practice, most “open” AI releases are open weights (you can download model weights), not fully open like traditional software where everything is reproducible from source code + dataset. That said, licenses matter a lot.
A key reference point is Llama 4, which was released with weights and broad commercial use rights but still under a community licence with restrictions (for example, the multimodal Llama 4 licence terms described by Microsoft’s model catalogue include EU related restrictions for certain rights grants).
In contrast, Gemma 4 from Google was explicitly released under the Apache 2.0 licence, a widely used permissive open source licence that is designed for broad commercial flexibility.
So when Meta says “open source versions later,” the realistic, business shaped interpretation is:
- The first release of the most capable model(s) may stay closed (or partially closed) while Meta validates safety outcomes and performance in the wild.
- The later “open” releases may be smaller, distilled, or have capability limits, with some components (filters, agent tooling, specialised safety layers) held back.
- The result is an ecosystem where developers can build locally on open weights, but still rely on paid APIs for the “best” frontier capabilities. (This pattern is consistent with how many vendors balance developer adoption and monetisation.)
Avocado and Mango: what’s known, what’s reported, and what’s still unclear
A lot of the public chatter uses “Avocado” and “Mango” like official product names. They are better understood as internal codenames used in reporting and those can change anytime.
Here’s what strong, recent reporting supports:
Muse Spark is described as part of an internal series called “Avocado,” and it is the first public model release from Meta’s rebuilt superintelligence effort.
The new effort is led by Alexandr Wang, who joined Meta after a deal in which Meta agreed to take a 49% stake in Scale AI for about $14.8 billion (reported June 2025).
Separately, reporting has described “Mango” as an image/video focused model in development. Because some primary articles are paywalled, the safest phrasing is: major business press has reported Meta is working on a model code named “Mango” focused on image and video generation, alongside “Avocado” as a text model family.
What is not publicly confirmed in detail (so don’t overclaim in your blog):
- Exact parameter sizes, training data scope, or a firm release calendar for future “Avocado” or “Mango” variants. Meta avoided disclosing core details even for Muse Spark.
- Claims like “natively generates 3D assets” for Mango, this may be plausible long term, but it’s not backed by the strongest public reporting in the sources above.
One important nuance: Meta does have a serious media generation research history. For example, Meta’s Movie Gen research (2024) described generating HD video with synchronised audio and capabilities like editing and personalisation showing Meta has worked on video+audio generation even if “Mango” product details aren’t public.

Why Meta is shifting to hybrid now
This change is not happening in a vacuum. It’s being pushed by competition, safety pressure, and money (a lot of money).
Meta’s open releases brought huge adoption, but keeping pace at the frontier is expensive. Meta has publicly guided 2026 capital expenditures in the range of $115–$135 billion, explicitly tying the growth to infrastructure investment supporting “Meta Superintelligence Labs” and the core business.
At the same time, the “open model” landscape is more crowded. Two examples that matter for Meta’s positioning:
- Gemma 4 is positioned as a family of open models released under Apache 2.0 (commercially permissive), which reduces friction for businesses that want legal clarity.
- Qwen3 publicly stated it released model weights (including MoE variants) under Apache 2.0, putting additional pressure on western labs to offer competitive open options.
Add to this the safety reality: major international safety work in 2026 explicitly documents misuse risks (scams, manipulation, cyberattacks, bio/chem information) and notes that developers have been adding safeguards when models cross new capability thresholds.
Finally, leadership framing matters. Meta CEO Mark Zuckerberg has argued that open-source AI helps prevent power concentrating in a small number of companies and supports a broad ecosystem but he has also acknowledged that very advanced systems may require being careful about what is released openly.
In short: Meta wants to stay “open enough” to keep developer mindshare, while being “closed enough” to protect its frontier edge and manage risk.

What this means for developers and businesses
If you’re building on Meta models (or any open model), 2026 is moving toward a two tier world. Not good or bad, just reality.
Expect a tiered ecosystem
The pattern that appears to be emerging is:
- Open models for: local hosting, cost sensitive workloads, private deployments, custom fine tuning, and fast iteration. (This is why open model ecosystems became so popular in the first place.)
- Proprietary models/APIs for: peak reasoning, premium multimodal workflows, agentic task execution, and features tightly integrated with big consumer platforms. Muse Spark itself launched with private preview access rather than immediate open
“Agentic AI” is not just hype it’s becoming productised
Muse Spark includes modes designed around deeper reasoning and multi agent execution (Meta described “Contemplating Mode” as running multiple agents simultaneously). That’s a direct signal that “AI that acts” not just “AI that chats”, is a near term product direction. This direction also increases governance needs. The International AI Safety Report explicitly categorises risks from malicious use (fraud, manipulation, cyber), malfunctions, and systemic impacts and warns evidence is growing even as capability progress remains jagged.

Practical checklist for teams right now
If you want to write a blog that brings leads, give readers a clear “do this next” list (simple English, can follow one):
- Decide which workloads must stay on‑prem or inside your VPC (privacy, compliance, sensitive business data).
- Build an evaluation harness now (quality, hallucinations, refusal behaviour, data leakage risk). Real world performance can differ from benchmarks.
- Plan procurement: open models reduce vendor lock in, but frontier APIs can still be necessary for top tier outputs.
- Don’t ignore licensing. “Open” licences vary a lot: Apache 2.0 is very permissive, while community licences may include restrictions.

A Dubai based business angle: how to turn this news into ROI
If your goal is more website traffic and more qualified leads, your blog should connect global AI news to local business problems:
- “How do I cut manual work?”
- “How do I automate support and operations without leaking data?”
- “How do I add AI to ERP, CRM, inventory, finance, HR?”
- “How do I keep cost predictable if AI prices change?”
This is exactly where Quintessential Tech positions itself: business technology consulting beyond ERP, with services spanning consultancy, implementation/support, development, digital marketing, and AI solutions and the company is headquartered in Dubai.
A strong lead friendly angle for your site is: “Hybrid AI means you can run ‘open’ locally for day to day work, and use premium models only when needed. We help you design that architecture safely.” This aligns with the direction Meta is signalling (closed first for flagship, open later for broader ecosystem).
Practical CTA wording that fits your homepage style:
Add a mid article CTA box:
Feeling overwhelmed by systems?
“Let’s simplify, automate, and grow. We’ll review your workflows and propose a hybrid AI + ERP plan that fits your budget and compliance needs.” Then use the same button text you already use on the site: “Schedule a Meeting”.

FAQ people are searching right now
Will Meta actually open source its next AI models?
Reporting on 6 April 2026 said Meta plans to eventually offer versions under an open source licence, but expects to keep some pieces proprietary first to manage safety risk
Is Muse Spark open source?
No not at launch. Meta released Muse Spark through its app and website and offered only a private preview to partners, signalling a shift away from immediate open releases for its newest flagship model line.
What is the “Avocado” model?
Muse Spark is described as part of an internal series called “Avocado,” built by Meta’s new superintelligence team.
What is “Mango” in Meta AI news?
Major press reporting has described “Mango” as a code name for an image/video focused model under development. Specific capabilities and timing are not confirmed in public technical detail.
Why is everyone talking about Gemma 4 and Apache 2.0?
Gemma 4 was released under the Apache 2.0 licence, which is commercially permissive and reduces licensing friction for businesses, one reason it has become a major talking point in the open model ecosystem.

Leave a Reply