Mercury 2 is an extremely fast reasoning LLM, and the first reasoning diffusion LLM (dLLM). Instead of generating tokens sequentially, Mercury 2 produces and refines multiple tokens in parallel, achieving >1,000 tokens/sec on standard GPUs. Mercury 2 is 5x+ faster than leading speed-optimized LLMs such as Claude 4.5 Haiku and GPT 5 Mini, at a fraction of the cost. Mercury 2 supports tunable reasoning levels, a 128K context window, native tool use, and schema-aligned JSON output, and is built for coding workflows.
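As a minimal sketch of what API access might look like, assuming an OpenAI-compatible chat completions endpoint; the base URL and model identifier below are illustrative assumptions, so check the provider's documentation for the real values:

```python
# Minimal sketch of a Mercury 2 call through an OpenAI-compatible API.
# The base URL and model name are assumptions, not confirmed values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.inceptionlabs.ai/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="mercury-2",  # assumed model identifier
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
)
print(response.choices[0].message.content)
```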
128K context window
Category: Budget
API access
Unified context
RAG + Knowledge Base
24/7 support
🚀 Capabilities
Long context
JSON mode
Function calling
Streaming output
Structured output (see the sketch below)
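As a rough illustration of the JSON mode and streaming capabilities above, a sketch reusing the OpenAI-compatible client from the earlier example; the `response_format` parameter follows the common OpenAI-compatible convention and is an assumption for this provider:

```python
# Sketch: stream a JSON-mode completion. Reuses `client` from the
# earlier sketch; model name and JSON-mode support are assumptions.
stream = client.chat.completions.create(
    model="mercury-2",  # assumed model identifier
    messages=[
        {"role": "user", "content": "Return a JSON object with keys 'name' and 'version'."},
    ],
    response_format={"type": "json_object"},  # JSON mode
    stream=True,  # streaming output
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```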
Specifications
| Spec | Value |
| --- | --- |
| Provider | inception |
| Context window | 128,000 tokens |
| Max output | 50,000 tokens |
| Minimum plan | Balance |
Pricing
| Pricing | Rate |
| --- | --- |
| Input price | $0.25 / 1M tokens |
| Output price | $0.75 / 1M tokens |
💡 With the PRO subscription, the cost drops by 20%
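As a worked example, the snippet below simply applies the listed rates to a hypothetical request of 100,000 input tokens and 20,000 output tokens:

```python
# Per-request cost at the listed rates ($0.25 in / $0.75 out per 1M tokens).
input_tokens, output_tokens = 100_000, 20_000
cost = input_tokens / 1e6 * 0.25 + output_tokens / 1e6 * 0.75
print(f"${cost:.4f}")        # $0.0400
print(f"${cost * 0.8:.4f}")  # $0.0320 after the 20% PRO discount
```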