The Qwen3.5 Plus models, part of the Qwen3.5 series of native vision-language models, are built on a hybrid architecture that combines linear attention with a sparse mixture-of-experts (MoE) design, improving inference efficiency. Across a wide range of task evaluations, the 3.5 series performs on par with state-of-the-art models, and compared with the 3 series it marks a leap forward in both pure-text and multimodal capabilities.
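The hybrid design mentioned above, linear attention interleaved with sparse mixture-of-experts routing, can be illustrated with a minimal sketch. This is not Qwen3.5's actual implementation; the feature map, gating scheme, and all function names here are illustrative assumptions.

```python
import numpy as np

def linear_attention(q, k, v):
    # Kernelized (linear) attention: phi(q) @ (phi(k)^T @ v) replaces
    # softmax(q k^T) v, so cost grows linearly in sequence length.
    phi = lambda x: np.maximum(x, 0.0) + 1e-6   # simple positive feature map (assumption)
    q, k = phi(q), phi(k)
    kv = k.T @ v                    # fixed-size summary, independent of sequence length
    z = q @ k.sum(axis=0)           # per-query normalizer
    return (q @ kv) / z[:, None]

def sparse_moe(x, experts, gate_w, top_k=2):
    # Sparse MoE: each token is routed to its top-k experts only,
    # so most expert parameters stay inactive per token.
    logits = x @ gate_w                         # (n_tokens, n_experts)
    top = np.argsort(logits, axis=1)[:, -top_k:]
    out = np.zeros_like(x)
    for i, token in enumerate(x):
        chosen = top[i]
        weights = np.exp(logits[i, chosen])
        weights /= weights.sum()                # softmax over the chosen experts
        for w, e in zip(weights, chosen):
            out[i] += w * experts[e](token)
    return out
```

A hybrid block would then apply these in sequence with residual connections, e.g. `x = x + linear_attention(x, x, x)` followed by `x = x + sparse_moe(x, experts, gate_w)`.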
A balanced Alibaba model for multimodal tasks
Text or vision reasoning at moderate cost
Long-context multimodal understanding with Alibaba
Context window: 1,000,000 tokens
Maximum output: 65,536 tokens
Input: $0.40 per 1M tokens
Output: $2.40 per 1M tokens
Cached input: $0.04 per 1M tokens
$0.50 per 1M tokens
$15 per 1K calls
$0.19 per 1K calls
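Per-request cost from the token prices listed above is simple arithmetic; the sketch below assumes $0.40 and $2.40 per 1M tokens are the input and output prices, and the helper name is illustrative.

```python
def request_cost_usd(input_tokens: int, output_tokens: int,
                     input_price: float = 0.40,
                     output_price: float = 2.40) -> float:
    """Estimate one request's cost in USD.

    Prices are USD per 1M tokens, taken from the listing above
    (cached-input and per-call charges are not included here).
    """
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# e.g. a 100K-token multimodal prompt with a 2K-token answer:
# request_cost_usd(100_000, 2_000) -> 0.04 + 0.0048 = about $0.0448
```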