Hunyuan A13B Instruct

Tencent

Hunyuan-A13B is a Mixture-of-Experts (MoE) language model developed by Tencent, with 80B total parameters of which 13B are active per token, and support for reasoning via Chain-of-Thought. It offers competitive benchmark performance across mathematics, science, coding, and multi-turn reasoning tasks, while maintaining high inference efficiency through Grouped Query Attention (GQA) and quantization support (FP8, GPTQ, etc.).


Capabilities

Extended Thinking

Example Use Cases

Efficient MoE-based reasoning tasks

Math or science problems with extended thinking

Budget-friendly multi-turn reasoning
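For the use cases above, extended thinking is typically toggled per request. A minimal sketch of building such a request payload, assuming an OpenAI-compatible chat-completions endpoint; the model identifier and the `enable_thinking` flag are illustrative assumptions, not confirmed API parameters:

```python
def build_request(prompt: str, thinking: bool = True) -> dict:
    """Build a chat request payload, toggling extended (Chain-of-Thought)
    reasoning on or off. Field names below are assumptions for illustration."""
    return {
        "model": "hunyuan-a13b-instruct",  # hypothetical model identifier
        "messages": [{"role": "user", "content": prompt}],
        "enable_thinking": thinking,       # assumed reasoning toggle
        "max_tokens": 4096,
    }

payload = build_request("Prove that the sum of two even numbers is even.")
```

Disabling `thinking` for simple multi-turn chats keeps output-token counts (and cost) down, while enabling it suits the math and science cases.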

Technical Specifications

Context Window

131,072 tokens

Max Output

131,072 tokens
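Since the maximum output equals the context window, a quick sketch of budgeting generation length, under the assumption that prompt and output tokens share the same 131,072-token window:

```python
CONTEXT_WINDOW = 131_072  # tokens

def max_output_budget(prompt_tokens: int,
                      context_window: int = CONTEXT_WINDOW) -> int:
    """Tokens left for generation, assuming the prompt and the output
    share one 131,072-token window (an assumption, not a confirmed limit)."""
    if prompt_tokens >= context_window:
        raise ValueError("prompt alone exceeds the context window")
    return context_window - prompt_tokens

print(max_output_budget(100_000))  # → 31072
```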

Cache Miss Cost

$0.14 per 1M tokens

Non-Reasoning Cost

$0.57 per 1M tokens

Web Search Cost

$15 per 1K calls

Code Execution Cost

$0.19 per 1K calls
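The listed prices combine per-token rates (per 1M tokens) with per-call tool rates (per 1K calls). A rough estimator, simplified in that it applies the cache-miss rate to all input tokens and the non-reasoning rate to all output tokens:

```python
# Prices taken from the specification table above.
PRICE_CACHE_MISS = 0.14 / 1_000_000     # $ per input token (cache miss)
PRICE_NON_REASONING = 0.57 / 1_000_000  # $ per non-reasoning output token
PRICE_WEB_SEARCH = 15.0 / 1_000         # $ per web-search call
PRICE_CODE_EXEC = 0.19 / 1_000          # $ per code-execution call

def estimate_cost(input_tokens: int, output_tokens: int,
                  searches: int = 0, executions: int = 0) -> float:
    """Rough dollar cost of a workload; assumes no cache hits and no
    reasoning-rate tokens, so treat the result as an upper-level sketch."""
    return (input_tokens * PRICE_CACHE_MISS
            + output_tokens * PRICE_NON_REASONING
            + searches * PRICE_WEB_SEARCH
            + executions * PRICE_CODE_EXEC)

# 1M input + 1M output + 10 web searches:
print(round(estimate_cost(1_000_000, 1_000_000, searches=10), 2))  # → 0.86
```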

⚠️ Legacy

Made legacy on

Reason

Untested

Recommended Replacement

Qwen3.5 Plus