Coder-Large is a 32B-parameter descendant of Qwen 2.5-Instruct, further trained on permissively licensed GitHub code, CodeSearchNet, and synthetic bug-fix corpora. It supports a 32k-token context window, enabling multi-file refactoring or long diff review in a single call, and understands more than 30 programming languages, with special attention to TypeScript, Go, and Terraform. Internal benchmarks show 5–8 point gains over CodeLlama-34B-Python on HumanEval and competitive BugFix scores, thanks to a reinforcement pass that rewards compilable output. The model emits structured explanations alongside code blocks by default, making it suitable for educational tooling as well as production copilot scenarios. Cost-wise, Together AI prices it well below proprietary incumbents, so teams can scale interactive coding without runaway spend.
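Since Together AI exposes an OpenAI-compatible chat-completions endpoint, a refactoring call like the one described above can be sketched as follows. This is a minimal sketch: the model identifier "coder-large" is a placeholder assumption (check the Together AI model catalog for the exact name), and the helper `build_refactor_request` is hypothetical.

```python
import json

# Together AI's OpenAI-compatible chat-completions endpoint.
API_URL = "https://api.together.xyz/v1/chat/completions"

def build_refactor_request(code: str, instruction: str,
                           max_tokens: int = 1024) -> dict:
    """Assemble the JSON body for a single refactoring request.

    The model id below is a placeholder (assumption), not a confirmed
    catalog name.
    """
    return {
        "model": "coder-large",  # placeholder model id (assumption)
        "max_tokens": max_tokens,
        "messages": [
            {"role": "system",
             "content": "You are a careful coding assistant. "
                        "Explain each change you make."},
            {"role": "user",
             "content": f"{instruction}\n\n```\n{code}\n```"},
        ],
    }

payload = build_refactor_request("def add(a,b): return a+b",
                                 "Add type hints.")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to `API_URL` with an `Authorization: Bearer <api-key>` header; with a 32k context, the same structure fits multi-file prompts by concatenating several fenced files into the user message.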
Code generation or refactoring
Multi-language coding task
Educational code explanation
Context length: 32,768 tokens
Max output: 32,768 tokens
Input: $0.50 per 1M tokens
Output: $0.80 per 1M tokens
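The per-token rates above make per-request cost easy to estimate. A small sketch, using the listed $0.50/$0.80 per-million-token prices; the usage numbers are hypothetical.

```python
# Listed rates from the pricing above.
INPUT_PRICE_PER_M = 0.50   # dollars per 1M input tokens
OUTPUT_PRICE_PER_M = 0.80  # dollars per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one call at the listed per-token rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Hypothetical near-full-context call: 30,000 prompt tokens,
# 2,000 completion tokens.
cost = request_cost(30_000, 2_000)
print(f"${cost:.4f}")  # → $0.0166
```

At these rates, even a prompt that fills most of the 32k window stays under two cents per call, which is the basis of the "scale without runaway spend" claim.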