Mercury 2 is an extremely fast reasoning LLM and the first reasoning diffusion LLM (dLLM). Instead of generating tokens sequentially, Mercury 2 produces and refines multiple tokens in parallel, achieving >1,000 tokens/sec on standard GPUs. It is more than 5x faster than leading speed-optimized LLMs such as Claude Haiku 4.5 and GPT-5 Mini, at a fraction of the cost. Mercury 2 supports tunable reasoning levels, a 128K context window, native tool use, and schema-aligned JSON output, and is built for coding workflows.
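As a sketch of what a call might look like, the snippet below assumes an OpenAI-compatible chat-completions endpoint. The base URL, model identifier, and the `reasoning_effort` knob are illustrative assumptions, not confirmed values; check the provider's documentation for the real ones.

```python
# Minimal sketch: calling Mercury 2 through an OpenAI-compatible client.
# ASSUMPTIONS: base_url, model name, and the reasoning-level parameter
# are illustrative placeholders, not confirmed values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="mercury-2",  # hypothetical model identifier
    messages=[{"role": "user", "content": "Refactor this function to be iterative."}],
    extra_body={"reasoning_effort": "high"},  # hypothetical tunable-reasoning knob
)
print(response.choices[0].message.content)
```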
Context Window: 128K
Category: Economy
Plan includes: API access, Unified context, RAG + Knowledge Base, 24/7 Support
🚀 Capabilities
Long context
JSON mode
Function calling
Streaming output
Structured output
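To make the JSON mode and streaming entries concrete, here is a hedged sketch using the same OpenAI-compatible semantics assumed above. `response_format={"type": "json_object"}` and `stream=True` follow standard chat-completions conventions; the endpoint and model name remain illustrative assumptions.

```python
# Sketch of JSON mode combined with streaming output.
# ASSUMPTIONS: endpoint and model name are the same placeholders as above.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

stream = client.chat.completions.create(
    model="mercury-2",  # hypothetical model identifier
    messages=[
        {"role": "system", "content": 'Reply with a JSON object: {"language": ..., "loc": ...}.'},
        {"role": "user", "content": "Summarize this diff as JSON."},
    ],
    response_format={"type": "json_object"},  # JSON mode
    stream=True,                              # emit tokens as they are produced
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```

Function calling follows the same convention: pass tool schemas via the standard `tools` parameter of the chat-completions request.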
Specifications
| Specification | Value |
| --- | --- |
| Provider | Inception |
| Context Window | 128,000 tokens |
| Max Output | 50,000 tokens |
| Minimum Plan | Balance |
Pricing
| Price | Rate |
| --- | --- |
| Input | $0.25 / 1M tokens |
| Output | $0.75 / 1M tokens |
💡 With a PRO subscription, the cost is reduced by 20%.
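For budgeting, the listed rates translate directly into a per-request cost formula. The helper below uses only the numbers on this page, including the 20% PRO discount; the example token counts are made up.

```python
# Worked cost estimate from the listed prices ($0.25 in / $0.75 out per 1M tokens),
# with the 20% PRO discount applied. Rates come straight from this page.
INPUT_PER_M = 0.25   # USD per 1M input tokens
OUTPUT_PER_M = 0.75  # USD per 1M output tokens
PRO_DISCOUNT = 0.20  # 20% off with a PRO subscription

def cost_usd(input_tokens: int, output_tokens: int, pro: bool = False) -> float:
    total = input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M
    return total * (1 - PRO_DISCOUNT) if pro else total

# Example: 200K input tokens + 50K output tokens.
print(f"{cost_usd(200_000, 50_000):.4f}")            # 0.0875 USD
print(f"{cost_usd(200_000, 50_000, pro=True):.4f}")  # 0.0700 USD
```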