Mercury 2 is an extremely fast reasoning LLM, and the first reasoning diffusion LLM (dLLM). Instead of generating tokens sequentially, Mercury 2 produces and refines multiple tokens in parallel, achieving >1,000 tokens/sec on standard GPUs. Mercury 2 is 5x+ faster than leading speed-optimized LLMs like Claude 4.5 Haiku and GPT 5 Mini, at a fraction of the cost. Mercury 2 supports tunable reasoning levels, 128K context, native tool use, and schema-aligned JSON output. It is built for coding workflows.
128K context window
🚀 Capabilities

- Long context
- JSON mode
- Function calling
- Streaming output
- Structured output
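A minimal sketch of how a request might exercise these capabilities through an OpenAI-compatible chat-completions payload. The model identifier, tool schema, and field layout below are assumptions for illustration, not a confirmed Inception API specification.

```python
# Build a hypothetical chat request combining streaming, JSON mode,
# and function calling; all names here are illustrative assumptions.
import json

def build_request(prompt: str, stream: bool = True) -> dict:
    """Assemble an OpenAI-style chat-completions payload (assumed schema)."""
    return {
        "model": "mercury-2",  # assumed model identifier
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,  # streaming output
        "response_format": {"type": "json_object"},  # JSON mode
        "tools": [{  # function calling: one hypothetical tool
            "type": "function",
            "function": {
                "name": "get_weather",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
        "max_tokens": 50_000,  # matches the listed max output
    }

req = build_request("Return the forecast for Paris as JSON.")
print(json.dumps(req, indent=2))
```

Structured output would replace the generic `json_object` response format with a concrete JSON schema, so the completion is validated against it.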
Specifications

| Spec | Value |
| --- | --- |
| Provider | inception |
| Context window | 128,000 tokens |
| Max output | 50,000 tokens |
| Minimum plan | Balanced |
Pricing

| Item | Price |
| --- | --- |
| Input | $0.25 / 1M tokens |
| Output | $0.75 / 1M tokens |
💡 20% discount with a PRO subscription
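A worked cost example using the listed per-million-token prices and the 20% PRO discount; the request sizes below are made up for illustration.

```python
# Estimate request cost from the listed prices:
# $0.25 / 1M input tokens, $0.75 / 1M output tokens.
INPUT_PER_M = 0.25   # USD per 1M input tokens
OUTPUT_PER_M = 0.75  # USD per 1M output tokens

def cost_usd(input_tokens: int, output_tokens: int,
             pro_discount: bool = False) -> float:
    """Cost in USD for one request; PRO subscribers get 20% off."""
    total = (input_tokens / 1e6) * INPUT_PER_M \
          + (output_tokens / 1e6) * OUTPUT_PER_M
    if pro_discount:
        total *= 0.8  # 20% PRO discount
    return round(total, 6)

# A hypothetical 100K-token prompt with a 10K-token completion:
print(cost_usd(100_000, 10_000))        # 0.0325
print(cost_usd(100_000, 10_000, True))  # 0.026
```

Even a near-full-context request stays in the cents range at these rates, which is the "fraction of the cost" claim in concrete terms.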