
Arcee AI: Virtuoso Large

by arcee-ai

Virtuoso-Large is Arcee's flagship general-purpose LLM, with 72 billion parameters. It is tuned for superior performance in cross-domain reasoning, creative writing, and enterprise-grade question answering. A key differentiator is its 128k (131,072-token) context window, inherited from Qwen 2.5, which lets it process extensive documents such as books, codebases, or financial filings in their entirety, making it a strong fit for applications requiring deep contextual understanding. Its training regimen includes DeepSeek R1 distillation, multi-epoch supervised fine-tuning, and a final DPO/RLHF alignment stage, yielding strong results on BIG-Bench-Hard, GSM-8K, and long-context needle-in-a-haystack tests. Enterprises use Virtuoso-Large as a reliable "fallback" brain in Conductor pipelines. Despite its size, aggressive KV-cache optimizations keep first-token latency in the low seconds on 8× H100 nodes. The model supports function calling and streaming, and can generate up to 64,000 output tokens per request. Pricing is competitive at $0.75/$1.20 per 1M tokens (input/output).
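Since the model supports function calling and streaming, a request body is straightforward to assemble. The sketch below assumes an OpenAI-compatible chat-completions endpoint; the model slug `arcee-ai/virtuoso-large` and the endpoint URL in the comment are assumptions, not confirmed by this listing.

```python
import json

def build_request(messages: list[dict], max_tokens: int = 4_096,
                  stream: bool = True) -> dict:
    """Assemble a chat-completion request body for Virtuoso-Large.

    Assumes an OpenAI-compatible API; adjust the model slug to match
    the hosting platform's catalog.
    """
    return {
        "model": "arcee-ai/virtuoso-large",  # assumed slug
        "messages": messages,
        "max_tokens": max_tokens,  # spec table caps output at 64,000 tokens
        "stream": stream,          # streaming is listed as supported
    }

body = build_request([{"role": "user", "content": "Summarize this 10-K filing."}])
payload = json.dumps(body)
# POST `payload` to the platform's /v1/chat/completions endpoint
# (hypothetical path) with your API key in the Authorization header.
```

A `tools` field could be added to the same body to exercise the listed function-calling support.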

LLM · Enterprise AI · Reasoning · Text Generation · Large Context
Quality: 75%
Context Window: 131K
Speed: 70%
Category: Standard
API access
Unified context
RAG + Knowledge Base
24/7 Support

Best For

Chat
Cross-domain Reasoning
Creative Writing
Enterprise QA

🚀 Capabilities

Long context
Functions
Streaming

Limitations

No image generation

Specifications

Provider: arcee-ai
Context Window: 131,072 tokens
Max Output: 64,000 tokens
Minimum Plan: Premium
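The context window is shared between the prompt and the generated output (a standard arrangement, assumed here), so long-document workloads should reserve their planned output budget before sizing the prompt. A minimal sketch:

```python
# Sketch: checking whether a prompt fits Virtuoso-Large's context window,
# assuming input and output tokens share the 131,072-token window.
CONTEXT_WINDOW = 131_072
MAX_OUTPUT = 64_000  # per-request output cap from the spec table

def fits_in_context(prompt_tokens: int, max_tokens: int) -> bool:
    """Return True if the prompt plus the requested output budget fit."""
    if max_tokens > MAX_OUTPUT:
        raise ValueError(f"max_tokens may not exceed {MAX_OUTPUT}")
    return prompt_tokens + max_tokens <= CONTEXT_WINDOW

# A 120k-token filing with a 4k-token summary budget fits;
# the same filing with the full 64k output budget does not.
fits_in_context(120_000, 4_000)
fits_in_context(120_000, 64_000)
```

Actual prompt token counts depend on the Qwen 2.5 tokenizer, so treat this as a planning check rather than an exact guarantee.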

Pricing

Input Price: $0.7500 / 1M tokens
Output Price: $1.2000 / 1M tokens

💡 With PRO subscription, cost is reduced by 20%
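The listed rates make per-request cost easy to estimate. The sketch below applies the 20% PRO discount uniformly to both input and output; the listing doesn't specify whether the discount covers both directions, so that is an assumption.

```python
# Sketch: estimating per-request cost from the listed rates.
INPUT_PER_M = 0.75    # USD per 1M input tokens
OUTPUT_PER_M = 1.20   # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int,
                 pro: bool = False) -> float:
    """Estimated USD cost of one request at the listed rates.

    `pro=True` applies the 20% PRO discount to the whole bill
    (assumption: the listing doesn't say what the discount covers).
    """
    cost = (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M
    return cost * 0.8 if pro else cost

# 100k input tokens + 4k output tokens:
request_cost(100_000, 4_000)            # ≈ $0.0798
request_cost(100_000, 4_000, pro=True)  # ≈ $0.0638 with the PRO discount
```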

Ready to try Arcee AI: Virtuoso Large?

Get 1,000 tokens free on signup

Start for free