Molmo2-8B is an open vision-language model developed by the Allen Institute for AI (Ai2) as part of the Molmo2 family, supporting image, video, and multi-image understanding and grounding. It is based on Qwen3-8B and uses SigLIP 2 as its vision backbone, outperforming other open-weight, open-data models on short videos, counting, and captioning, while remaining competitive on long-video tasks.
Capabilities:
- Image and video understanding
- Visual grounding and captioning
- Multi-image analysis
Context window: 36,864 input tokens / 36,864 output tokens
Token pricing: $0.20 per 1M input tokens / $0.20 per 1M output tokens
Per-call pricing: $15 per 1K calls / $0.19 per 1K calls
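The per-token rates above make request costs easy to estimate. Below is a minimal sketch that computes the USD cost of one request from its token counts, using the listed $0.20-per-1M rates; the function name and constants are illustrative, not part of any official SDK, and per-call pricing is not modeled here.

```python
# Estimate request cost for Molmo2-8B from the per-token rates listed above.
# These constants mirror the pricing table; they are not fetched from an API.
INPUT_PRICE_PER_1M = 0.20   # USD per 1M input tokens
OUTPUT_PRICE_PER_1M = 0.20  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens * INPUT_PRICE_PER_1M
            + output_tokens * OUTPUT_PRICE_PER_1M) / 1_000_000

# Example: a 30,000-token video prompt with a 1,000-token answer
# costs (30_000 + 1_000) * 0.20 / 1e6 dollars.
print(f"${estimate_cost(30_000, 1_000):.4f}")
```

For instance, a prompt that fills the full 36,864-token context costs well under a cent at these rates, which is why per-call minimums can dominate for short requests.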