MiMo-V2-Flash is an open-source foundation language model developed by Xiaomi. It is a Mixture-of-Experts model with 309B total parameters and 15B active parameters, built on a hybrid attention architecture. MiMo-V2-Flash supports a hybrid-thinking toggle and a 256K context window, and excels at reasoning, coding, and agent scenarios. On SWE-bench Verified and SWE-bench Multilingual it ranks #1 among open-source models globally, delivering performance comparable to Claude Sonnet 4.5 at roughly 3.5% of the cost. Users can control the reasoning behaviour with the `enabled` boolean of the `reasoning` parameter.
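The hybrid-thinking toggle described above can be sketched as an OpenAI-compatible chat-completions payload. The exact endpoint and the nested shape of the `reasoning` field are assumptions based on the description here, not a confirmed API reference:

```python
import json

def build_request(prompt: str, thinking: bool) -> dict:
    """Build a chat-completions payload with the reasoning toggle set.

    The `reasoning: {"enabled": ...}` shape is an assumption inferred
    from the page above, not a documented schema.
    """
    return {
        "model": "MiMo-V2-Flash",
        "messages": [{"role": "user", "content": prompt}],
        # Hybrid-thinking toggle: True requests step-by-step reasoning,
        # False requests a direct answer.
        "reasoning": {"enabled": thinking},
    }

payload = build_request("Refactor this function to be tail-recursive.", thinking=True)
print(json.dumps(payload, indent=2))
```

Setting `thinking=False` would trade reasoning depth for latency on simple requests, which matches the model's "flash" positioning.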
Top open-source coding agent
SWE-bench-level software engineering
Budget reasoning and coding with tools
Context window: 262,144 tokens
Maximum output: 32,768 tokens
Pricing: $0.09, $0.29, and $0.045 per 1M tokens; $15 and $0.19 per 1K calls
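The per-1M-token rates above make request cost a simple product-and-sum. Which rate applies to input versus output tokens is not stated on this page, so the assignment below (input at $0.09, output at $0.29 per 1M tokens) is a hypothetical illustration, not a confirmed price sheet:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float = 0.09, output_rate: float = 0.29) -> float:
    """Return the USD cost of one request at per-1M-token rates.

    The default rate assignment is assumed for illustration only.
    """
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# A near-full 200K-token prompt with an 8K-token completion:
cost = request_cost(200_000, 8_000)
print(f"${cost:.4f}")  # → $0.0203
```

At these rates, even a request that fills most of the 262,144-token context costs only a few cents.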