DeepSeek V3.2 Experimental

DeepSeek

An experimental model focused on efficient long-context processing. Its Sparse Attention mechanism handles large contexts (164K tokens) with high speed and low resource use while maintaining output quality. With thinking capabilities and a 65K-token output window, it excels at tasks requiring extensive context understanding: large documents, codebases, or datasets where dense-attention models slow down. Strong value for long-context work.

Try Now

Capabilities

Tool Use

Example Use Cases

Processing very large documents

Generating long outputs (50K+ tokens)

Long-context tasks on a budget

Technical Specifications

Context Window

163,840 tokens

Max Output

65,536 tokens
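Note that the output cap and the context window are not independent: prompt tokens and generated tokens share the 163,840-token window. A minimal sketch of that budgeting arithmetic, using only the two limits from the table above (the function name is illustrative, not part of any API):

```python
# Published limits from the specification table above.
CONTEXT_WINDOW = 163_840   # total tokens shared by prompt + output
MAX_OUTPUT = 65_536        # hard cap on generated tokens

def max_output_budget(prompt_tokens: int) -> int:
    """Largest output budget available for a given prompt length."""
    remaining = CONTEXT_WINDOW - prompt_tokens
    return max(0, min(MAX_OUTPUT, remaining))

print(max_output_budget(50_000))   # short prompt: full 65,536-token cap applies
print(max_output_budget(120_000))  # long prompt: only 43,840 tokens remain
```

In practice this means requests with prompts longer than about 98K tokens cannot reach the full 65K output cap.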

Pricing

Token Costs (per 1M tokens)

Cache Miss Input

$0.27

Non-Reasoning Output

$0.41

Tool Costs (per 1K calls)

Web Search

$15

Code Execution

$0.19
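A quick sketch of how the per-million-token rates above translate into a per-request estimate (cache-hit input and tool-call fees are omitted; the rates are taken directly from the table):

```python
# Rates from the pricing table above, in USD per 1M tokens.
INPUT_PER_M = 0.27    # cache-miss input
OUTPUT_PER_M = 0.41   # non-reasoning output

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated token cost in USD (tool-call fees not included)."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# A near-maximal request: 150K input + 50K output ≈ $0.061
print(round(estimate_cost(150_000, 50_000), 4))
```

Even a request close to the full context window costs only a few cents in tokens, which is the "exceptional value" claim made above in concrete terms.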

Legacy

Made legacy on

Reason

Outdated model

Recommended Replacement

DeepSeek V3.2