DeepSeek V4 is the flagship model: a 1.6T-parameter Mixture-of-Experts (49B activated) with a 1M-token context window. It matches or exceeds leading closed-source models on coding and agentic benchmarks (93.5% on LiveCodeBench, 80.6% on SWE-bench Verified, and a 3206 Codeforces rating at max reasoning effort). The model switches between fast non-thinking responses and explicit chain-of-thought reasoning with configurable effort, up to "max" for the hardest problems such as mathematics, competitive programming, and scientific analysis. Tool calls are supported in both modes. Choose this model when you need frontier-level quality on complex tasks.
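The mode switch described above can be sketched as request payloads. This is a minimal illustration assuming an OpenAI-compatible chat completions API; the model id and the `reasoning_effort` field are hypothetical names chosen for the example, not documented DeepSeek parameters.

```python
# Sketch of the two request modes described above. The "reasoning_effort"
# field and model id are illustrative assumptions, not a documented API.

def build_request(messages, thinking=False, effort=None, tools=None):
    """Assemble a chat-completion payload for the hypothetical endpoint."""
    payload = {
        "model": "deepseek-v4",  # illustrative model id
        "messages": messages,
    }
    if thinking:
        # Explicit chain-of-thought mode with configurable effort, up to "max".
        payload["reasoning_effort"] = effort or "medium"
    if tools:
        # Tool calls are available in both thinking and non-thinking modes.
        payload["tools"] = tools
    return payload

# Fast, non-thinking request:
fast = build_request([{"role": "user", "content": "Summarize this diff."}])

# Max-effort reasoning request with a tool attached:
hard = build_request(
    [{"role": "user", "content": "Prove the bound in problem 6."}],
    thinking=True,
    effort="max",
    tools=[{"type": "function", "function": {"name": "run_python"}}],
)
```

The non-thinking payload omits the effort field entirely, so the default fast path is a plain chat request.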