R1 Distill Qwen 32B

DeepSeek

DeepSeek R1 Distill Qwen 32B is a large language model distilled from Qwen 2.5 32B using outputs from DeepSeek R1. It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models. Benchmark results include AIME 2024 pass@1: 72.6, MATH-500 pass@1: 94.3, and a CodeForces rating of 1691. Fine-tuning on DeepSeek R1 outputs gives this compact model performance competitive with larger frontier models.


Capabilities

Extended Thinking

Example Use Cases

Reasoning tasks with a compact model

Math or competitive programming on a budget

Distilled R1 reasoning with a small footprint
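As an illustration of such use cases, a reasoning request to this model through an OpenAI-compatible chat-completions interface might be assembled as below. This is a sketch only: the model identifier, endpoint shape, and parameter names are assumptions, not something this page specifies; check your provider's documentation for the exact values.

```python
# Sketch of a chat-completions request body for a reasoning task.
# MODEL_ID is a hypothetical identifier, not confirmed by this page.
MODEL_ID = "deepseek-r1-distill-qwen-32b"
MAX_OUTPUT = 32_768  # tokens, per the specs below

def build_request(prompt: str, max_output_tokens: int = 4_096) -> dict:
    """Assemble a request body for a math or competitive-programming prompt."""
    if max_output_tokens > MAX_OUTPUT:
        raise ValueError("max_output_tokens exceeds the model's output limit")
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_output_tokens,
    }

body = build_request("Prove that the sum of two even integers is even.")
```

The body would then be POSTed to the provider's chat-completions endpoint; the model's extended-thinking trace typically arrives alongside the final answer.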

Technical Specifications

Context Window

32,768 tokens

Max Output

32,768 tokens

Cache Miss Cost

$0.29 per 1M tokens

Non-Reasoning Cost

$0.29 per 1M tokens

Web Search Cost

$15 per 1K calls

Code Execution Cost

$0.19 per 1K calls
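Using the listed per-unit prices, a workload's cost can be estimated with a back-of-the-envelope helper like the one below (an illustrative sketch, not an official billing formula):

```python
# Rough cost estimate from the listed prices.
TOKEN_PRICE_PER_M = 0.29   # $ per 1M tokens (cache miss / non-reasoning)
WEB_SEARCH_PER_K = 15.00   # $ per 1K web search calls
CODE_EXEC_PER_K = 0.19     # $ per 1K code execution calls

def estimate_cost(tokens: int, web_searches: int = 0, code_execs: int = 0) -> float:
    """Estimate dollar cost for a workload; rounding is for display only."""
    cost = (
        tokens / 1_000_000 * TOKEN_PRICE_PER_M
        + web_searches / 1_000 * WEB_SEARCH_PER_K
        + code_execs / 1_000 * CODE_EXEC_PER_K
    )
    return round(cost, 4)

# e.g. 2M tokens, 100 searches, 500 executions:
# 2 * 0.29 + 0.1 * 15 + 0.5 * 0.19 = 0.58 + 1.50 + 0.095
print(estimate_cost(2_000_000, 100, 500))
```

Note that at these rates token costs are usually dwarfed by web search calls for search-heavy workloads.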

⚠️ Legacy

Made legacy on

Reason

Untested

Recommended Replacement

Qwen3.5 Plus