Coder Large

arcee

Coder-Large is a 32B-parameter derivative of Qwen2.5-Instruct, further trained on permissively licensed GitHub code, CodeSearchNet, and synthetic bug-fix corpora. It supports a 32k-token context window, enabling multi-file refactoring or long diff review in a single call, and understands more than 30 programming languages, with particular strength in TypeScript, Go, and Terraform. Internal benchmarks show 5-8 point gains over CodeLlama-34B-Python on HumanEval and competitive bug-fix scores, thanks to a reinforcement pass that rewards compilable output. By default, the model emits structured explanations alongside code blocks, making it suitable for educational tooling as well as production copilot scenarios. On cost, Together AI prices it well below proprietary incumbents, so teams can scale interactive coding without runaway spend.

Try Now

Example Use Cases

Code generation or refactoring

Multi-language coding task

Educational code explanation
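The use cases above map onto ordinary chat-completion requests. A minimal sketch of building such a request payload for a refactoring task, assuming an OpenAI-compatible chat schema; the model identifier `arcee-ai/coder-large` is an assumption, so check the provider's model list for the exact string:

```python
# Hypothetical request-payload builder for Coder-Large behind an
# OpenAI-compatible chat endpoint. The model string is an assumed
# identifier, not confirmed by this card.

def build_refactor_request(code: str, instruction: str) -> dict:
    """Build a chat-completion payload asking the model to refactor code."""
    return {
        "model": "arcee-ai/coder-large",  # assumed identifier
        "max_tokens": 32768,              # matches the listed max output
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are a coding assistant. Return the refactored code "
                    "in a fenced block, followed by a short explanation."
                ),
            },
            {
                "role": "user",
                "content": f"{instruction}\n\n```\n{code}\n```",
            },
        ],
    }

payload = build_refactor_request(
    code="function add(a, b) { return a + b }",
    instruction="Convert this JavaScript function to TypeScript with explicit types.",
)
```

The system prompt leans on the model's default behavior of pairing code blocks with structured explanations, which is what makes the same request shape work for both copilot and educational scenarios.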

Technical Specifications

Context Window

32,768 tokens

Max Output

32,768 tokens

Cache Miss Cost

$0.50 per 1M tokens

Non-Reasoning Cost

$0.80 per 1M tokens

Web Search Cost

$15 per 1K calls

Code Execution Cost

$0.19 per 1K calls
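The per-token rates above translate directly into a spend estimate. A minimal sketch, assuming the cache-miss rate applies to input tokens and the non-reasoning rate to output tokens (an interpretation of the card's labels); the traffic figures are illustrative, not from the card:

```python
# Rough monthly cost estimate from the listed rates:
#   $0.50 per 1M cache-miss (assumed input) tokens
#   $0.80 per 1M non-reasoning (assumed output) tokens
INPUT_COST_PER_M = 0.50
OUTPUT_COST_PER_M = 0.80

def monthly_cost(requests: int, in_tokens: int, out_tokens: int) -> float:
    """Return estimated USD cost for a month of traffic."""
    total_in = requests * in_tokens      # total input tokens
    total_out = requests * out_tokens    # total output tokens
    return (total_in / 1e6) * INPUT_COST_PER_M + (total_out / 1e6) * OUTPUT_COST_PER_M

# Illustrative workload: 100k requests/month, 2k input + 500 output tokens each.
# 200M input tokens -> $100.00; 50M output tokens -> $40.00; total $140.00.
cost = monthly_cost(100_000, 2_000, 500)
```

Per-call fees for web search ($15 per 1K calls) and code execution ($0.19 per 1K calls) would be added on top if those tools are used.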

⚠️ Legacy

Made legacy on

Reason

Untested

Recommended Replacement

Qwen3.5 Plus