Virtuoso-Large is Arcee's top-tier general-purpose LLM at 72B parameters, tuned for cross-domain reasoning, creative writing, and enterprise Q&A. Unlike many 70B-class peers, it retains the 128K context window inherited from Qwen 2.5, letting it ingest books, codebases, or financial filings wholesale. Training blended DeepSeek-R1 distillation, multi-epoch supervised fine-tuning, and a final DPO/RLHF alignment stage, yielding strong performance on BIG-Bench-Hard, GSM8K, and long-context needle-in-a-haystack tests. Enterprises use Virtuoso-Large as the "fallback" brain in Conductor pipelines when smaller SLMs flag low confidence. Despite its size, aggressive KV-cache optimizations keep first-token latency in the low-second range on 8x H100 nodes, making it a practical production-grade powerhouse.
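The fallback pattern described above can be sketched as a simple confidence-gated router: a small model answers first, and the large model is invoked only when the small model's confidence falls below a threshold. This is a minimal illustrative sketch, not Conductor's actual API — the function names (`call_slm`, `call_virtuoso_large`), the `ModelReply` type, and the 0.6 threshold are all assumptions.

```python
from dataclasses import dataclass


@dataclass
class ModelReply:
    text: str
    confidence: float  # 0.0-1.0, self-reported or scored externally


def call_slm(prompt: str) -> ModelReply:
    # Placeholder for a small language model call (hypothetical).
    return ModelReply(text=f"[slm] {prompt}", confidence=0.42)


def call_virtuoso_large(prompt: str) -> ModelReply:
    # Placeholder for the 72B fallback model call (hypothetical).
    return ModelReply(text=f"[virtuoso-large] {prompt}", confidence=0.90)


def route(prompt: str, threshold: float = 0.6) -> ModelReply:
    """Answer with the SLM; escalate to Virtuoso-Large below the threshold."""
    reply = call_slm(prompt)
    if reply.confidence < threshold:
        reply = call_virtuoso_large(prompt)
    return reply
```

In practice the confidence signal might come from token log-probabilities or a separate verifier model rather than the model's own self-report; the gating logic stays the same.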
Context window: 131,072 tokens
Max output: 64,000 tokens
Pricing: $0.75 / $1.20 / $15 / $0.19
Arcee flagship; limited availability and testing