
Inception: Mercury 2

by inception

Mercury 2 is an extremely fast reasoning LLM, and the first diffusion-based reasoning LLM (dLLM). Instead of generating tokens sequentially, Mercury 2 produces and refines multiple tokens in parallel, achieving over 1,000 tokens/sec on standard GPUs. It is more than 5x faster than leading speed-optimized LLMs such as Claude 4.5 Haiku and GPT-5 Mini, at a fraction of the cost. Mercury 2 supports tunable reasoning levels, a 128K context window, native tool use, and schema-aligned JSON output, and is built for coding workflows.

128K Context Window
Category: Economy
API access
Unified context
RAG + Knowledge Base
24/7 Support

🚀 Capabilities

Long context
JSON mode
Function calling
Streaming output
Structured output
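In practice, the JSON mode, function calling, streaming, and structured-output capabilities above are expressed as fields on a chat-completions request. A minimal sketch, assuming an OpenAI-compatible request schema; the model id `inception/mercury-2` and the `run_command` tool are illustrative assumptions, not confirmed by this page:

```python
# Sketch of a request body exercising Mercury 2's listed capabilities.
# Field names follow the common OpenAI-compatible chat-completions schema
# (an assumption); the model id and tool name are hypothetical.
import json

request = {
    "model": "inception/mercury-2",   # assumed model id
    "stream": True,                   # streaming output
    "messages": [
        {"role": "user", "content": "Summarize this repo's build steps."}
    ],
    # Function calling: declare a tool the model may invoke.
    "tools": [{
        "type": "function",
        "function": {
            "name": "run_command",    # hypothetical tool
            "parameters": {
                "type": "object",
                "properties": {"cmd": {"type": "string"}},
                "required": ["cmd"],
            },
        },
    }],
    # Structured output: constrain the reply to a JSON schema.
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "summary",
            "schema": {
                "type": "object",
                "properties": {
                    "steps": {"type": "array", "items": {"type": "string"}}
                },
                "required": ["steps"],
            },
        },
    },
}

body = json.dumps(request)  # serialized payload, ready to POST
```

With a schema in `response_format`, the reply can be parsed with `json.loads` directly instead of being scraped out of free text.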

Specifications

Provider: inception
Context Window: 128,000 tokens
Max Output: 50,000 tokens
Minimum Plan: Balance

Pricing

Input Price: $0.25 / 1M tokens
Output Price: $0.75 / 1M tokens

💡 With a PRO subscription, cost is reduced by 20%.
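The listed rates make per-request cost easy to estimate. A quick sketch using the rates from the table above; applying the 20% PRO discount as a flat multiplier on the total is an assumption about how the discount is computed:

```python
# Estimate the cost of one request at Mercury 2's listed rates.
INPUT_PER_M = 0.25    # $ per 1M input tokens
OUTPUT_PER_M = 0.75   # $ per 1M output tokens
PRO_DISCOUNT = 0.20   # assumed flat 20% reduction with PRO

def cost_usd(input_tokens: int, output_tokens: int, pro: bool = False) -> float:
    """Dollar cost for a single call; `pro` applies the subscription discount."""
    base = (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M
    return base * (1 - PRO_DISCOUNT) if pro else base

# e.g. a 100K-token prompt producing a 10K-token reply:
standard = cost_usd(100_000, 10_000)        # $0.025 in + $0.0075 out = $0.0325
with_pro = cost_usd(100_000, 10_000, True)  # $0.026 after the 20% discount
```

Even a near-full 128K-context request stays well under a cent of input cost at these rates.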

Ready to try Inception: Mercury 2?

Get 1,000 tokens free on signup

Start for free