Qwen3-Coder-480B-A35B-Instruct is a Mixture-of-Experts (MoE) code generation model developed by the Qwen team. It is optimized for agentic coding tasks such as function calling, tool use, and long-context reasoning over repositories. The model has 480 billion total parameters, of which 35 billion are active per forward pass (8 of 160 experts). Pricing on Alibaba's endpoints is tiered by context length: requests with more than 128K input tokens are billed at the higher rate.
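The 8-of-160 activation pattern follows standard top-k MoE routing: a learned router scores every expert for each token, keeps only the 8 highest-scoring experts, and renormalizes their gate weights before mixing the expert outputs. A minimal sketch of that selection step (function name, shapes, and the use of NumPy are illustrative, not the model's actual implementation):

```python
import numpy as np

def route_tokens(gate_logits, top_k=8):
    """Pick the top_k experts per token and renormalize their gate weights.

    gate_logits: (n_tokens, n_experts) router scores.
    Returns (indices, weights), each of shape (n_tokens, top_k).
    """
    # Indices of the top_k highest-scoring experts for each token.
    idx = np.argsort(gate_logits, axis=-1)[:, -top_k:]
    top = np.take_along_axis(gate_logits, idx, axis=-1)
    # Softmax over only the selected experts, so the kept weights sum to 1.
    e = np.exp(top - top.max(axis=-1, keepdims=True))
    w = e / e.sum(axis=-1, keepdims=True)
    return idx, w

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 160))   # 4 tokens, 160 experts
idx, w = route_tokens(logits)
print(idx.shape, w.shape)            # (4, 8) (4, 8)
```

Because only the selected experts run, the per-token compute cost scales with the 35B active parameters rather than the full 480B.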
Most capable open-source coding model
Agentic coding with tool use
Repository-level code reasoning
Context length: 262,144 tokens
Max output: 65,536 tokens
Input: $0.22 per 1M tokens
Output: $1 per 1M tokens
Cached input: $0.022 per 1M tokens
$15 per 1K calls
$0.19 per 1K calls
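With tiered, context-dependent billing, the cost of a request depends on whether its input size crosses the 128K threshold. A rough cost estimator using the per-token rates listed above (the higher-tier multiplier is a hypothetical placeholder, since the exact higher-tier rates are not given here; check the provider's pricing page for actual numbers):

```python
# Base-tier rates taken from the listing above.
INPUT_PER_M = 0.22        # $ per 1M input tokens
OUTPUT_PER_M = 1.00       # $ per 1M output tokens
TIER_THRESHOLD = 128_000  # inputs above this size bill at the higher tier
HIGH_TIER_MULT = 2.0      # hypothetical multiplier, NOT a published rate

def estimate_cost(input_tokens, output_tokens):
    """Estimate request cost in dollars under the tiered pricing scheme."""
    mult = HIGH_TIER_MULT if input_tokens > TIER_THRESHOLD else 1.0
    return mult * (input_tokens / 1e6 * INPUT_PER_M
                   + output_tokens / 1e6 * OUTPUT_PER_M)

# A 50K-input, 4K-output request stays in the base tier:
print(round(estimate_cost(50_000, 4_000), 4))  # 0.015
```

The estimator ignores cached-input and per-call charges; it only illustrates how the 128K cutoff changes the effective per-token rate.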