How to keep frontier open weights viable
By Lucas Ewing
Open-weight model releases have created a strange expectation: if the weights are downloadable, every commercial use should be free by default.
We do not think that expectation is sustainable. If we want more frontier open-weight models, model builders need a way to capture value from commercial usage.
Why open weights matter
The phrase "open source model" gets used loosely. In software, open source has a specific meaning tied to license terms. In AI, many releases are better described as open-weight: the model weights are available to inspect, run, evaluate, or fine-tune, but the license may still place limits on commercial deployment.
Open-weight releases are good for developers because they make models inspectable, portable, benchmarkable, and deployable outside one vendor's API. That matters for trust, debugging, latency, cost, privacy, and independence.
That is the part worth protecting.
The bad outcome is fewer open weights
Training a frontier open-weight model is expensive. So are evaluation, safety work, serving compatibility, documentation, and community support. If every commercial API provider can immediately resell the model without compensating the lab, the rational move for model builders is to keep the best models closed or API-only.
Qwen is a useful warning sign. The Qwen ecosystem has produced important open-weight releases, but recent flagship variants like Qwen3.6-Plus have been released as closed, API-only models. That may make business sense, but it leaves developers with fewer inspectable, portable models to build on.
We want the opposite: more strong models released with open weights, not fewer.
Licensing lets everyone do their job
The healthiest version of the ecosystem has model builders focused on building great models, inference providers focused on serving them quickly and reliably, and customers getting a simple production API with clear rights.
Commercial authorization is not the enemy of open weights. It is one of the ways open weights stay economically rational.
MiniMax M2.7 is a useful example. Its public Hugging Face license permits broad non-commercial use and asks commercial users to obtain authorization from MiniMax. Through Lilac, customers get that production path without having to run the model themselves or reason through license ambiguity alone.
That is a better division of labor. MiniMax builds the model. Lilac handles the inference stack. Developers get an OpenAI-compatible endpoint they can actually ship against.
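One practical consequence of OpenAI compatibility is that switching providers usually means changing only the base URL and model id. A minimal sketch of what that looks like, assuming a hypothetical base URL and model identifier (neither is taken from Lilac's actual documentation):

```python
import json

# Hypothetical values -- Lilac's real base URL and model id may differ.
BASE_URL = "https://api.lilac.example/v1"  # assumed endpoint
MODEL_ID = "minimax-m2.7"                  # assumed model identifier

def build_chat_request(prompt: str, api_key: str) -> tuple[str, dict, str]:
    """Build an OpenAI-compatible /chat/completions request.

    Returns (url, headers, body) ready to send with any HTTP client;
    actually sending it requires a real API key.
    """
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

url, headers, body = build_chat_request("Hello", "sk-example")
```

The same request shape works against any OpenAI-compatible server, which is part of what makes licensed open-weight models portable rather than locked to one host.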
Why Lilac supports this
Lilac exists because model access and GPU economics should be simpler for developers. We route inference to capacity that can serve workloads efficiently, expose it through OpenAI-compatible APIs, and make pricing visible before a team commits.
Respecting open-weight licenses is part of doing that responsibly. We do not want to route around the model builder's terms. We want to make commercial usage clear, keep the API simple for customers, and make it easier for model builders to justify future open-weight releases.
Open-weight models are too important to treat as free inventory. If we want the best models to stay inspectable, portable, and usable outside a single hosted API, the ecosystem needs to reward the teams that release them.