Llama 3.2 11B Vision Instruct

Meta

Llama 3.2 11B Vision is an 11-billion-parameter multimodal model designed for tasks that combine visual and textual data. It excels at image captioning and visual question answering, bridging language generation and visual reasoning. Pre-trained on a large dataset of image-text pairs, it performs well on complex image-analysis tasks that demand high accuracy. Its integration of visual understanding with language processing makes it well suited to visual-linguistic applications such as content creation, AI-driven customer service, and research. Usage of this model is subject to Meta's Acceptable Use Policy.


Capabilities

Image Input

Example Use Cases

Image captioning or visual question answering (see the sketch after this list)

Lightweight multimodal tasks

Visual reasoning with text
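
As a concrete illustration of the captioning and VQA use cases, here is a minimal sketch of an image-captioning request. It assumes an OpenAI-compatible chat completions endpoint; the `API_URL`, `MODEL_ID`, and `API_KEY` values are placeholders, not confirmed values for any particular provider, so substitute the ones from your provider's documentation.

```python
import base64
import requests

# Placeholder endpoint and model identifier (assumptions, not confirmed values).
API_URL = "https://api.example.com/v1/chat/completions"
MODEL_ID = "meta/llama-3.2-11b-vision-instruct"
API_KEY = "YOUR_API_KEY"

# Encode a local image as a base64 data URL, a common way to pass
# image input to OpenAI-compatible chat endpoints.
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": MODEL_ID,
    "max_tokens": 256,
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Swapping the text prompt for a question about the image (e.g. "How many people are in this photo?") turns the same request into visual question answering.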

Technical Specifications

Context Window: 131,072 tokens

Max Output: 16,384 tokens

Cache Miss Cost: $0.049 per 1M tokens

Non-Reasoning Cost: $0.049 per 1M tokens

Web Search Cost: $15 per 1K calls

Code Execution Cost: $0.19 per 1K calls
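
To make the pricing concrete, here is a small sketch that estimates the dollar cost of a single request from the rates above. It assumes the flat $0.049 per 1M rate applies to both input and output tokens, which the table suggests but does not state explicitly; the token counts in the example are made-up numbers.

```python
# Rates from the specifications table, expressed per single token or call.
COST_PER_TOKEN = 0.049 / 1_000_000   # $0.049 per 1M tokens
WEB_SEARCH_PER_CALL = 15 / 1_000     # $15 per 1K calls
CODE_EXEC_PER_CALL = 0.19 / 1_000    # $0.19 per 1K calls

def estimate_cost(input_tokens: int, output_tokens: int,
                  web_searches: int = 0, code_executions: int = 0) -> float:
    """Estimate the dollar cost of one request at the rates above."""
    return ((input_tokens + output_tokens) * COST_PER_TOKEN
            + web_searches * WEB_SEARCH_PER_CALL
            + code_executions * CODE_EXEC_PER_CALL)

# Example: a 10K-token prompt, a 1K-token reply, and one web search
# comes to roughly $0.0155, dominated by the web-search fee.
print(f"${estimate_cost(10_000, 1_000, web_searches=1):.6f}")
```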

⚠️ Legacy

Made legacy on

Reason: Untested

Recommended Replacement: Qwen3.5 Plus