A text-based Mixture-of-Experts (MoE) model with 21B total parameters and 3B activated per token, delivering strong language understanding and generation through a heterogeneous MoE architecture with modality-isolated routing. The model supports a 131K-token context length and achieves efficient inference via multi-expert parallel collaboration and quantization, while post-training techniques including SFT, DPO, and UPO, together with specialized routing and balancing losses, optimize performance across diverse applications.
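To make the routing idea concrete, here is a minimal sketch of a generic top-k token router with an auxiliary load-balancing loss, the kind of mechanism the "routing and balancing losses" above refer to in general terms. All names, shapes, and hyperparameters (TopKRouter, hidden_size=64, num_experts=8, top_k=2) are illustrative assumptions, not this model's actual implementation.

```python
# Illustrative sketch only: a generic top-k token router with an auxiliary
# load-balancing loss, as commonly used in MoE layers. Names, shapes, and
# hyperparameters are hypothetical, not this model's internals.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKRouter(nn.Module):
    def __init__(self, hidden_size: int, num_experts: int, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.num_experts = num_experts
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)

    def forward(self, x: torch.Tensor):
        # x: [num_tokens, hidden_size]
        logits = self.gate(x)                               # [tokens, experts]
        probs = F.softmax(logits, dim=-1)
        topk_probs, topk_idx = probs.topk(self.top_k, dim=-1)
        # Renormalize so the selected experts' weights sum to 1 per token.
        topk_probs = topk_probs / topk_probs.sum(dim=-1, keepdim=True)

        # Auxiliary load-balancing loss: pushes tokens to spread evenly across
        # experts by pairing the fraction of tokens routed to each expert with
        # the mean router probability for that expert.
        token_frac = F.one_hot(topk_idx, self.num_experts).float().sum(dim=1).mean(dim=0)
        prob_frac = probs.mean(dim=0)
        balance_loss = self.num_experts * torch.sum(token_frac * prob_frac)
        return topk_idx, topk_probs, balance_loss

# Example: route 16 tokens of width 64 across 8 experts, activating 2 per token.
router = TopKRouter(hidden_size=64, num_experts=8, top_k=2)
idx, weights, aux = router(torch.randn(16, 64))
```

Activating only the top-k experts per token is what lets a 21B-parameter model run with roughly 3B parameters' worth of compute per token.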
Budget Chinese-English text tasks
Efficient MoE with tool use
Lightweight general text generation
Context length: 120,000 tokens
Maximum output: 8,000 tokens
Input: $0.07 per 1M tokens
Output: $0.28 per 1M tokens
$15 per 1K calls
$0.19 per 1K calls
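As a rough guide to the per-token rates above, here is a short cost-estimate sketch. It assumes the $0.07/1M figure applies to input tokens and $0.28/1M to output tokens; the per-1K-call fees listed above are not attributed to a specific feature in this listing and are not included.

```python
# Rough per-request cost estimate, assuming $0.07 / 1M input tokens and
# $0.28 / 1M output tokens (per-1K-call fees excluded).
INPUT_PRICE_PER_M = 0.07
OUTPUT_PRICE_PER_M = 0.28

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 10,000-token prompt with a 2,000-token completion:
# 10,000/1M * $0.07 + 2,000/1M * $0.28 = $0.0007 + $0.00056 = $0.00126
print(f"${request_cost(10_000, 2_000):.5f}")
```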