Mixtral 8x7B

mistral

A sparse Mixture-of-Experts (SMoE) model built from eight 7B-class experts per layer. Only about 12.9B of its 46.7B total parameters are active for any given token.
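To see why only a fraction of the parameters are touched per token, the sketch below routes each token to the top-2 of 8 experts, which is how Mixtral's feed-forward layers work. The shared/expert parameter split used here is an assumed, illustrative approximation, not a figure from this page.

```python
import random

# Assumed split (illustrative only): shared parameters (attention, embeddings,
# norms) plus 8 expert FFN stacks across the model.
SHARED_PARAMS = 1.6e9          # assumed shared (non-expert) parameters
PARAMS_PER_EXPERT = 5.6e9      # assumed parameters in one expert FFN stack
NUM_EXPERTS = 8
TOP_K = 2                      # Mixtral routes each token to 2 experts

def route_token(gate_logits, k=TOP_K):
    """Pick the k experts with the highest router scores for one token."""
    ranked = sorted(range(len(gate_logits)), key=lambda i: gate_logits[i], reverse=True)
    return ranked[:k]

total_params = SHARED_PARAMS + NUM_EXPERTS * PARAMS_PER_EXPERT
active_params = SHARED_PARAMS + TOP_K * PARAMS_PER_EXPERT
print(f"total  ~{total_params / 1e9:.1f}B")   # ~46.4B with the assumed split
print(f"active ~{active_params / 1e9:.1f}B")  # ~12.8B touched per token

# Example routing decision for one token, using random router scores.
logits = [random.gauss(0, 1) for _ in range(NUM_EXPERTS)]
print("experts chosen for this token:", route_token(logits))
```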

Capabilities

Tool Use

Image Input

PDF Input
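If tool use is enabled for this model on your provider, a request typically follows the standard function-calling pattern. The sketch below is a minimal, unofficial example that assumes an OpenAI-compatible chat-completions endpoint; the URL, model id, and API key variable are placeholders, not values from this page.

```python
import os
import json
import requests

# Hypothetical endpoint and model id -- substitute your provider's values.
API_URL = "https://api.example.com/v1/chat/completions"
MODEL_ID = "mixtral-8x7b"

payload = {
    "model": MODEL_ID,
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    # Standard function-calling schema: the model may answer with a
    # tool_calls entry instead of plain text.
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
message = resp.json()["choices"][0]["message"]
print(json.dumps(message, indent=2))  # inspect any tool_calls the model returned
```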

Example Use Cases

General text tasks at moderate cost

Multilingual content generation

Efficient inference at scale

Technical Specifications

Context Window

32,768 tokens

Max Output

32,768 tokens

Cache Miss Cost

$0.70 per 1M tokens

Non-Reasoning Cost

$0.70 per 1M tokens

Web Search Cost

$15 per 1K calls

Code Execution Cost

$0.19 per 1K calls
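To make the pricing above concrete, the short sketch below estimates the cost of a workload from the listed per-million-token and per-thousand-call rates. The workload numbers are assumed examples, not figures from this page.

```python
# Rates from the specifications above.
TOKEN_COST_PER_M = 0.70      # $ per 1M tokens (cache miss / non-reasoning)
WEB_SEARCH_PER_K = 15.00     # $ per 1K web search calls
CODE_EXEC_PER_K = 0.19       # $ per 1K code execution calls

def estimate_cost(tokens, web_searches=0, code_execs=0):
    """Rough cost estimate for a workload, using the listed rates."""
    return (tokens / 1_000_000 * TOKEN_COST_PER_M
            + web_searches / 1_000 * WEB_SEARCH_PER_K
            + code_execs / 1_000 * CODE_EXEC_PER_K)

# Assumed example workload: 2M tokens, 50 web searches, 200 code executions.
print(f"${estimate_cost(2_000_000, web_searches=50, code_execs=200):.2f}")
# 2 * 0.70 + 0.05 * 15 + 0.2 * 0.19 = 1.40 + 0.75 + 0.038 -> $2.19
```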

⚠️ Legacy

Made legacy on

Reason

Untested

Recommended Replacement

Qwen3.5 Plus