MiniMax M2.7 API
$0.30/M input, $0.055/M cached input, $1.20/M output (FP8) for coding agents and tool-use workloads.
Point your OpenAI SDK at api.getlilac.com/v1 and request minimaxai/minimax-m2.7.
Model pricing
Pay per token. No commitments.
Lilac served MiniMax M2.7 at a sustained 60 tok/s/user at 160-way concurrency, with 100% request success, 98.80% tool-call match, 99.80% schema accuracy, and 0% error-only reasoning in the MiniMax Provider Verifier.
More models are coming soon and will be added as they go live.
Integration
One base URL change.
Keep the OpenAI SDK and point it at Lilac. Your existing code just works.
from openai import OpenAI
client = OpenAI(
base_url="https://api.getlilac.com/v1",
api_key="sk_...",
)
response = client.chat.completions.create(
model="minimaxai/minimax-m2.7",
messages=[{"role": "user", "content": "Hello!"}],
)
# Same code. Same SDK. Fraction of the price.
Standard OpenAI client -- just change the base URL.
Commercially licensed MiniMax access through Lilac.
Built for coding, long-horizon tasks, and tool-heavy agents.
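Because the endpoint speaks the standard OpenAI chat-completions protocol, tool use needs no adapter: you pass a `tools` list to `chat.completions.create` exactly as you would with OpenAI. A minimal sketch of a tool definition (the `get_weather` tool and its schema are illustrative, not part of Lilac's or MiniMax's API):

```python
# An illustrative tool definition in the OpenAI function-calling schema.
# Pass it via tools=[get_weather_tool] to client.chat.completions.create,
# same as with any OpenAI-compatible endpoint.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string", "description": "City name"}},
            "required": ["city"],
        },
    },
}

# The request body the SDK would send to api.getlilac.com/v1:
request_body = {
    "model": "minimaxai/minimax-m2.7",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [get_weather_tool],
}
```

When the model decides to call the tool, the response carries it in `message.tool_calls`, again in the standard OpenAI shape.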
Frequently asked questions
How do I call the API?
Set base_url to https://api.getlilac.com/v1 in the OpenAI SDK and request the model minimaxai/minimax-m2.7.
How much does it cost?
$0.30/M input, $0.055/M cached input, and $1.20/M output.
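At those rates, per-request cost is simple arithmetic. A sketch (the token counts in the example are hypothetical, chosen only to illustrate the formula):

```python
# Per-million-token rates from the pricing above.
INPUT_PER_M = 0.30
CACHED_INPUT_PER_M = 0.055
OUTPUT_PER_M = 1.20

def request_cost(input_tokens: int, cached_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at MiniMax M2.7 rates on Lilac."""
    return (
        input_tokens * INPUT_PER_M / 1_000_000
        + cached_tokens * CACHED_INPUT_PER_M / 1_000_000
        + output_tokens * OUTPUT_PER_M / 1_000_000
    )

# Hypothetical agent turn: 20k fresh input, 80k cached input, 4k output.
cost = request_cost(20_000, 80_000, 4_000)
# 0.006 + 0.0044 + 0.0048 = $0.0152
```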
Can I use MiniMax M2.7 commercially through Lilac?
Yes. Lilac hosts MiniMax M2.7 commercially through our partnership with MiniMax.
Start running inference in minutes.
No contracts, no commitments. Swap your base URL and pay less for the same output quality.