R1 Distill Qwen 32B

DeepSeek

DeepSeek R1 Distill Qwen 32B is a distilled large language model based on Qwen 2.5 32B, fine-tuned on outputs from DeepSeek R1. It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models. Benchmark results include AIME 2024 pass@1: 72.6, MATH-500 pass@1: 94.3, and CodeForces rating: 1691. By distilling from DeepSeek R1 outputs, the model reaches performance comparable to larger frontier models.


Capabilities

Thinking

Technical Specifications

Context Window: 32,768 tokens

Max Output: 32,768 tokens

Pricing

Token Costs (per 1M tokens)

Cache Miss Input: $0.29

Non-Reasoning Output: $0.29
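As a quick sanity check on the rates above, the cost of a request is simply each token count multiplied by its per-1M-token price. The helper below is a hypothetical sketch (not an official API) using the prices listed on this card:

```python
# Per-1M-token prices from the card above (USD).
PRICE_PER_M_INPUT = 0.29   # cache-miss input tokens
PRICE_PER_M_OUTPUT = 0.29  # non-reasoning output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return (input_tokens * PRICE_PER_M_INPUT
            + output_tokens * PRICE_PER_M_OUTPUT) / 1_000_000

# Example: 10,000 input tokens + 2,000 output tokens
# = 12,000 tokens at $0.29/1M = $0.00348
print(f"${estimate_cost(10_000, 2_000):.5f}")
```

Since the cache-miss input and output rates happen to be identical here, the cost reduces to total tokens times $0.29 per million.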

Legacy

Made legacy on

Reason: Distilled 32B model; superseded by native DeepSeek V3.2

Recommended Replacement: DeepSeek V3.2