Mixtral 8x22B

Mistral

Mixtral 8x22B is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of its 141B total. At release Mistral billed it as the most performant open model available; it has since been marked legacy (see below).
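
The 141B-total / 39B-active split falls out of top-2 routing over eight expert FFNs per layer. The sketch below reproduces both headline figures from the publicly reported Mixtral 8x22B dimensions (hidden size 6144, 56 layers, 48 query / 8 KV heads, 16384-wide SwiGLU experts, 32768-token vocabulary); treat those config values as assumptions rather than an official spec sheet.

```python
# Back-of-the-envelope parameter count for Mixtral 8x22B.
# Config values are the publicly reported ones (assumptions, not
# an official spec sheet).
d_model, n_layers = 6144, 56
n_kv_heads, head_dim = 8, 128
d_ff, n_experts, top_k = 16384, 8, 2
vocab = 32768

# Attention (grouped-query): Wq and Wo are d_model x d_model;
# Wk and Wv are d_model x (n_kv_heads * head_dim).
attn = n_layers * (2 * d_model * d_model + 2 * d_model * n_kv_heads * head_dim)

# Each expert is a SwiGLU FFN: gate, up, and down projections.
experts = n_layers * n_experts * 3 * d_model * d_ff

# Token embedding plus untied LM head.
embed = 2 * vocab * d_model

total = attn + experts + embed
# Only top_k of n_experts experts run per token; attention and
# embeddings are always active.
active = attn + experts * top_k // n_experts + embed

print(f"total  ~ {total / 1e9:.1f}B")   # ~140.6B -> "141B"
print(f"active ~ {active / 1e9:.1f}B")  # ~39.2B  -> "39B"
```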

Capabilities

- Tool Use (see the sketch below)
- Image Input
- PDF Input
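
A minimal sketch of exercising the Tool Use capability, assuming Mistral's `mistralai` v1 Python client and the `open-mixtral-8x22b` model ID on La Plateforme; the `get_weather` tool is a hypothetical placeholder, not part of any API.

```python
import os
from mistralai import Mistral

# Assumes the mistralai v1 Python SDK and MISTRAL_API_KEY in the env.
client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# get_weather is a hypothetical tool, declared with an OpenAI-style
# JSON-schema function definition.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.complete(
    model="open-mixtral-8x22b",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",  # let the model decide whether to call the tool
)

# If the model chose to call the tool, the call (name plus JSON
# arguments) arrives here instead of plain text content.
print(response.choices[0].message.tool_calls)
```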

Technical Specifications

Context Window: 65,536 tokens
Max Output: 65,536 tokens

Pricing

Token Costs (per 1M tokens)

Cache Miss Input: $2.00
Non-Reasoning Output: $6.00

Tool Costs (per 1K calls)

Web Search: $15.00
Code Execution: $0.19
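
Putting the rates above together, a quick per-request cost estimate; the token and call counts in the usage line are made up for illustration.

```python
# Per-unit rates taken from the pricing tables above.
INPUT_PER_M = 2.00        # $ per 1M cache-miss input tokens
OUTPUT_PER_M = 6.00       # $ per 1M non-reasoning output tokens
WEB_SEARCH_PER_K = 15.00  # $ per 1K web search calls
CODE_EXEC_PER_K = 0.19    # $ per 1K code execution calls

def estimate_cost(input_tokens: int, output_tokens: int,
                  web_searches: int = 0, code_execs: int = 0) -> float:
    """Estimated request cost in dollars under the rates above."""
    return (input_tokens / 1e6 * INPUT_PER_M
            + output_tokens / 1e6 * OUTPUT_PER_M
            + web_searches / 1e3 * WEB_SEARCH_PER_K
            + code_execs / 1e3 * CODE_EXEC_PER_K)

# Hypothetical request: 12K tokens in, 1.5K out, one web search.
print(f"${estimate_cost(12_000, 1_500, web_searches=1):.4f}")  # $0.0480
```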

Legacy

Made legacy on

Reason: Old MoE model; superseded by Mistral Large
Recommended Replacement: Mistral Large