Virtuoso Large

Arcee AI

Virtuoso-Large is Arcee's top-tier general-purpose LLM at 72B parameters, tuned for cross-domain reasoning, creative writing, and enterprise QA. Unlike many 70B-class peers, it retains the 128K-token context window inherited from Qwen2.5, letting it ingest books, codebases, or financial filings wholesale. Training blended DeepSeek-R1 distillation, multi-epoch supervised fine-tuning, and a final DPO/RLHF alignment stage, yielding strong performance on BIG-Bench-Hard, GSM8K, and long-context needle-in-a-haystack tests. Enterprises use Virtuoso-Large as the "fallback" brain in Conductor pipelines when smaller language models (SLMs) flag low confidence. Despite its size, aggressive KV-cache optimizations keep first-token latency in the low-second range on 8× H100 nodes, making it a practical production-grade powerhouse.
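The fallback pattern described above can be sketched in a few lines. This is an illustrative router only, with made-up model callables, confidence scores, and a 0.7 threshold; it is not Arcee's actual Conductor API.

```python
# Hypothetical confidence-based fallback router, in the spirit of the
# Conductor pattern described above. Models and threshold are illustrative
# assumptions, not Arcee's actual API.

FALLBACK_THRESHOLD = 0.7  # escalate to the large model below this confidence


def route(prompt, small_model, large_model, threshold=FALLBACK_THRESHOLD):
    """Try the small model first; escalate when it flags low confidence."""
    answer, confidence = small_model(prompt)
    if confidence >= threshold:
        return answer, "slm"
    # Low confidence: fall back to the 72B "fallback brain".
    answer, _ = large_model(prompt)
    return answer, "virtuoso-large"


# Stub models standing in for real inference endpoints.
def small_model(prompt):
    return "draft answer", 0.42  # low confidence -> triggers fallback


def large_model(prompt):
    return "final answer", 0.95


print(route("Summarize this 10-K filing.", small_model, large_model))
# -> ('final answer', 'virtuoso-large')
```

In production the stubs would be real inference calls, and confidence could come from token log-probabilities or a learned verifier.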


Capabilities

Tool Use

Example Use Cases

Complex cross-domain reasoning

Creative writing and enterprise QA

Long-context analysis of large documents

Technical Specifications

Context Window: 131,072 tokens
Max Output: 64,000 tokens
Cache Miss Cost: $0.75 per 1M tokens
Non-Reasoning Cost: $1.20 per 1M tokens
Web Search Cost: $15 per 1K calls
Code Execution Cost: $0.19 per 1K calls
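The per-token rates above translate into a simple back-of-the-envelope cost estimate. This sketch assumes the cache-miss rate applies to input tokens and the non-reasoning rate to output tokens; the token counts in the example are illustrative.

```python
# Back-of-the-envelope cost estimate from the listed rates. Assumes the
# cache-miss rate prices input tokens and the non-reasoning rate prices
# output tokens (an assumption about how the rates apply).

INPUT_RATE = 0.75 / 1_000_000   # $ per input token (cache miss)
OUTPUT_RATE = 1.20 / 1_000_000  # $ per output token (non-reasoning)


def request_cost(input_tokens, output_tokens):
    """Estimated dollar cost of one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE


# e.g. a 128K-token document plus a 2,000-token summary:
print(f"${request_cost(128_000, 2_000):.4f}")  # -> $0.0984
```

Web search and code execution calls are billed per call rather than per token, so they would be added separately.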

⚠️ Legacy

Made legacy on:
Reason: Untested
Recommended Replacement: Qwen3.5 Plus