Kimi K2

moonshotai

Kimi K2 is a Mixture-of-Experts (MoE) foundation model with 1 trillion total parameters and 32 billion activated parameters. It outperforms leading open-source models on general knowledge, reasoning, programming, mathematics, and agentic tasks. The model offers a 256K context window with automatic context caching, and supports ToolCalls, JSON Mode, Partial Mode, and internet search.
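As a minimal sketch of how the ToolCalls and JSON Mode features above are typically exercised, the snippet below builds an OpenAI-style Chat Completions request payload. The model id `kimi-k2`, the `get_weather` tool, and the exact payload shape are illustrative assumptions; confirm field names against the provider's API documentation before use.

```python
import json

def build_payload(user_msg: str) -> dict:
    """Assemble a chat request that advertises one tool and asks for JSON output."""
    return {
        "model": "kimi-k2",  # assumed model id; check the provider's model list
        "messages": [{"role": "user", "content": user_msg}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool, for illustration only
                "description": "Look up the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
        "response_format": {"type": "json_object"},  # JSON Mode
    }

payload = build_payload("What's the weather in Berlin?")
print(json.dumps(payload, indent=2))
```

The same payload works for plain generation if the `tools` and `response_format` keys are dropped.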

Capabilities

Tool Use

Example Use Cases

Coding and agent tasks

Tasks requiring a strong foundation model

General-purpose MoE generation

Technical Specifications

Context Window

262,144 tokens

Max Output

16,384 tokens

Cache Miss Cost

$0.60 per 1M tokens

Non-Reasoning Cost

$2.50 per 1M tokens

Cache Read Cost

$0.15 per 1M tokens

Web Search Cost

$15 per 1K calls

Code Execution Cost

$0.19 per 1K calls
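To make the per-token prices above concrete, here is a small cost estimator. It assumes the cache-read rate applies to cached input tokens, the cache-miss rate to uncached input tokens, and the non-reasoning rate to output tokens; that mapping is an interpretation of the table, not a statement from the provider.

```python
# Prices in USD per 1M tokens, taken from the specification table above.
CACHE_READ = 0.15   # cached input tokens (assumed mapping)
CACHE_MISS = 0.60   # uncached input tokens (assumed mapping)
OUTPUT = 2.50       # non-reasoning output tokens (assumed mapping)
PER = 1_000_000

def estimate_cost_usd(cached_in: int, uncached_in: int, out_tokens: int) -> float:
    """Estimate a single request's token cost in USD."""
    return (cached_in * CACHE_READ
            + uncached_in * CACHE_MISS
            + out_tokens * OUTPUT) / PER

# Example: 200K cached input + 50K fresh input + 4K output
print(estimate_cost_usd(200_000, 50_000, 4_000))  # → 0.07
```

Web search ($15 per 1K calls) and code execution ($0.19 per 1K calls) are billed per call and would be added on top of the token cost.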

⚠️ Legacy

Made legacy on

Reason

Superseded by Kimi K2.5

Recommended Replacement

Kimi K2.5